Image Synthesis Methods and Systems

The present invention provides systems, devices, computer-implemented methods and computer program code products (software) operable to evaluate integrals using quasi-Monte Carlo methodologies, and in particular embodiments, adaptive quasi-Monte Carlo integration and adaptive integro-approximation in conjunction with techniques including a scrambled Halton Sequence, stratification by radical inversion, stratified samples from the Halton Sequence, deterministic scrambling, bias elimination by randomization, adaptive and deterministic anti-aliasing, anti-aliasing by rank-1 lattices, and trajectory splitting by dependent sampling and rank-1 lattices.

Description
CROSS-REFERENCE TO RELATED APPLICATIONS AND INCORPORATION BY REFERENCE

This application for patent is a Continuation of U.S. patent application Ser. No. 11/465,717 filed Aug. 18, 2006 (Atty. Dkt. MENT-104-US). Patent application Ser. No. 11/465,717 claims the priority benefit of U.S. Provisional Patent App. 60/709,173 filed Aug. 18, 2005, and is a continuation-in-part of U.S. patent application Ser. No. 10/299,958 filed Nov. 19, 2002 (which issued as U.S. Pat. No. 7,167,175 on Jan. 23, 2007) (MENT-072). U.S. patent application Ser. No. 10/299,958 is a Continuation-in-Part of U.S. patent application Ser. No. 09/884,861 filed Jun. 19, 2001 (which issued as U.S. Pat. No. 7,227,547 on Jun. 5, 2007) (MENT-061), which claims priority from U.S. Provisional Patent Apps. 60/265,934 filed Feb. 1, 2001 and 60/212,286 filed Jun. 19, 2000 (both expired). Each of the patent applications noted above is incorporated herein by reference as if set forth herein in its entirety.

Also incorporated herein by reference is commonly owned U.S. patent application Ser. No. 08/880,418, filed Jun. 23, 1997, in the names of Rolf Herken and Martin Grabenstein (Attorney Docket MENT-002), now U.S. Pat. No. 6,529,193, entitled “System and Method for Generating Pixel Values for Pixels in an Image Using Strictly Deterministic Methodologies for Generating Sample Points,” hereinafter referred to as “Grabenstein.”

FIELD OF THE INVENTION

The present invention relates generally to methods, systems and computer program code products (software) for image synthesis in and by digital computing systems, such as for motion pictures and other computer graphics applications, and in particular relates to methods, systems, devices, and computer software for the efficient synthesis of realistic images.

The invention also relates to the field of systems, computer-implemented methods and computer program code products for evaluating integrals, and provides systems, computer-implemented methods and computer program code products for evaluating integrals using quasi-Monte Carlo methodologies, and in particular embodiments, adaptive quasi-Monte Carlo integration and adaptive integro-approximation in conjunction with techniques including a scrambled Halton Sequence, stratification by radical inversion, stratified samples from the Halton Sequence, deterministic scrambling, bias elimination by randomization, adaptive and deterministic anti-aliasing, anti-aliasing by rank-1 lattices, and trajectory splitting by dependent sampling and domain stratification induced by rank-1 lattices.

BACKGROUND OF THE INVENTION

The use of synthetic images has become increasingly important and widespread in motion pictures and other commercial and scientific applications. A synthetic image represents a two-dimensional array of digital values, called picture elements or pixels, and thus can be regarded as a two-dimensional function. Image synthesis, then, is the process of creating synthetic images from scenes.

As a general matter, digital images are generated by rasterization (as described in greater detail below and in the references cited in this document, which are incorporated herein by reference as if set forth in their entireties herein), or, in the case of photorealistic images of three-dimensional scenes, by ray tracing (also as described in greater detail below and in the references cited herein). Both approaches aim at determining the appropriate color for each pixel by projecting the original function into the pixel basis. Due to the discrete representation of the original function, the problem of aliasing arises, as described below.

Image synthesis is perhaps the most visible part of computer graphics. On the one hand it is concerned with physically correct image synthesis, which intends to identify light paths that connect light sources and cameras and to sum up their contributions. On the other hand it also comprises non-photorealistic rendering, such as the simulation of pen strokes or watercolor.

The underlying mathematical task of image synthesis is to determine the intensity I(k, l, t, λ), where (k, l) is the location of a pixel on the display medium, t is time, and λ is wavelength. Computing the intensity of a single pixel requires integrating a function over the pixel area. This integral is often highly complex, as discussed below, and cannot be solved analytically, thus requiring numerical methods for solution, which may include Monte Carlo and quasi-Monte Carlo methods. In particular, image synthesis is an integro-approximation problem for which analytical solutions are available only in exceptional cases. Therefore numerical techniques need to be applied. While standard graphics textbooks still recommend elements of classical Monte Carlo integration, the majority of visual effects in the movie industry are produced using quasi-Monte Carlo techniques.

However, typical numerical methods used in such applications have their own limitations and attendant problems. It would therefore be desirable to provide improved methods and systems for image synthesis whereby realistic images can be rendered efficiently.

In computer graphics, a computer is used to generate digital data that represents the projection of surfaces of objects in, for example, a three-dimensional scene, illuminated by one or more light sources, onto a two-dimensional image plane, to simulate the recording of the scene by, for example, a camera. The camera may include a lens for projecting the image of the scene onto the image plane, or it may comprise a pinhole camera, in which case no lens is used. The two-dimensional image is in the form of an array of picture elements, called “pixels” or “pels,” and the digital data generated for each pixel represents the color and luminance of the scene as projected onto the image plane at the point of the respective pixel in the image plane. The surfaces of the objects may have any of a number of characteristics, including shape, color, specularity, texture, and so forth, which are preferably rendered in the image as closely as possible, to provide a realistic-looking image.

Generally, the contributions of the light reflected from the various points in the scene to the pixel value representing the color and intensity of a particular pixel are expressed in the form of one or more integrals of relatively complicated functions. Since the integrals used in computer graphics generally will not have a closed-form solution, numerical methods must be used to evaluate them and thereby generate the pixel value. Typically, a conventional “Monte Carlo” method has been used in computer graphics to numerically evaluate the integrals. Generally, in the Monte Carlo method, to evaluate an integral


$\langle f \rangle = \int_{[0,1)^s} f(x)\,dx$  (1.1)

where f(x) is a real function on the s-dimensional unit cube $[0,1)^s$ (that is, an s-dimensional cube each of whose dimensions includes “zero” and excludes “one”), first a number N of statistically independent, randomly positioned points $x_i$, i = 1, . . . , N, are generated over the integration domain. The random points $x_i$ are used as sample points for which sample values $f(x_i)$ are generated for the function f(x), and an estimate $\bar f$ for the integral is generated as

$\langle f \rangle \approx \bar f = \frac{1}{N} \sum_{i=1}^{N} f(x_i)$  (1.2)

As the number N of sample values $f(x_i)$ increases, the value of the estimate $\bar f$ will converge toward the actual value of the integral $\langle f \rangle$. Generally, the estimate values generated for various values of N, that is, for various numbers of sample points, will be approximately normally distributed around the actual value, with a standard deviation σ which can be estimated by

$\sigma = \sqrt{\frac{1}{N-1}\left(\overline{f^2} - \bar f^{\,2}\right)}$  (1.3)

if the points xi used to generate the sample values ƒ(xi) are statistically independent, that is, if the points xi are truly positioned at random in the integration domain.

Generally, it has been believed that random methodologies like the Monte Carlo method are necessary to ensure that undesirable artifacts, such as Moiré patterns and aliasing and the like, which are not in the scene, will not be generated in the generated image. However, several problems arise from use of the Monte Carlo method in computer graphics. First, since the sample points $x_i$ used in the Monte Carlo method are randomly distributed, they may clump in various regions over the domain over which the integral is to be evaluated. Accordingly, depending on the set of points that are generated, in the Monte Carlo method there may be significant portions of the domain with no sample points $x_i$ for which sample values $f(x_i)$ are generated. In that case, the error can become quite large. In the context of generating a pixel value in computer graphics, the pixel value that is actually generated using the Monte Carlo method may not reflect some elements which might otherwise be reflected if the sample points $x_i$ were guaranteed to be more evenly distributed over the domain. This problem can be alleviated somewhat by dividing the domain into a plurality of sub-domains, but it is generally difficult to determine a priori the number of sub-domains into which the domain should be divided, and, in addition, in a multi-dimensional integration region, which would actually be used in computer graphics rendering operations, the partitioning of the integration domain into sub-domains, which are preferably of equal size, can be quite complicated.

In addition, since the method makes use of random numbers, the error $|\bar f - \langle f \rangle|$ between the estimate value $\bar f$ and the actual value $\langle f \rangle$ (where |x| represents the absolute value of x) is probabilistic, and, since the error values for large values of N are approximately normally distributed around the actual value $\langle f \rangle$, only about sixty-eight percent of the estimate values $\bar f$ that might be generated can be expected to lie within one standard deviation of the actual value $\langle f \rangle$.

Furthermore, as is clear from Equation (1.3), the standard deviation σ decreases with increasing numbers N of sample points, proportional to the reciprocal of the square root of N, that is, as $1/\sqrt{N}$.

Thus, if it is desired to reduce the statistical error by a factor of two, it will be necessary to increase the number of sample points N by a factor of four, which, in turn, increases the computational load that is required to generate the pixel values, for each of the numerous pixels in the image.

Additionally, since the Monte Carlo method requires random numbers to define the coordinates of respective sample points xi in the integration domain, an efficient mechanism for generating random numbers is needed. Generally, digital computers are provided with so-called “random number generators,” which are computer programs which can be processed to generate a set of numbers that are approximately random. Since the random number generators use deterministic techniques, the numbers that are generated are not truly random. However, the property that subsequent random numbers from a random number generator are statistically independent should be maintained by deterministic implementations of pseudo-random numbers on a computer.

Grabenstein describes a computer graphics system and method for generating pixel values for pixels in an image using a strictly deterministic methodology for generating sample points, which avoids the above-described problems with the Monte Carlo method. The strictly deterministic methodology described in Grabenstein provides a low-discrepancy sample point sequence which ensures, a priori, that the sample points are generally more evenly distributed throughout the region over which the respective integrals are being evaluated. In one embodiment, the sample points that are used are based on the Halton sequence.

In a Halton sequence generated for number base b, where base b is a selected prime number, the k-th value of the sequence, represented by $H_b^k$, is generated by use of a “radical inverse” function $\Phi_b$ that is generally defined as

$\Phi_b : \mathbb{N}_0 \to I, \quad i = \sum_{j=0}^{\infty} a_j(i)\, b^j \;\mapsto\; \sum_{j=0}^{\infty} a_j(i)\, b^{-j-1}$  (1.4)

where $(a_j(i))_{j=0}^{\infty}$ is the digit representation of i in integer base b. Generally, a radical inverse of a value k is generated by a technique including the following steps (1)-(3):

(1) writing the value k as a numerical representation of the value in the selected base b, thereby to provide a representation for the value as $D_M D_{M-1} \dots D_2 D_1$, where $D_m$ (m = 1, 2, . . . , M) are the digits of the representation;

(2) putting a radix point, corresponding to a decimal point for numbers written in base ten, at the least significant end of the representation $D_M D_{M-1} \dots D_2 D_1$ written in step (1) above; and

(3) reflecting the digits around the radix point to provide $0.D_1 D_2 \dots D_{M-1} D_M$, which corresponds to $H_b^k$.

It will be appreciated that, regardless of the base b selected for the representation, for any series of values, one, two, . . . , k, written in base b, the least significant digits of the representation will change at a faster rate than the most significant digits. As a result, in the Halton sequence $H_b^1, H_b^2, \dots, H_b^k$, the most significant digits will change at the faster rate, so that the early values in the sequence will be generally widely distributed over the interval from zero to one, and later values in the sequence will fill in interstices among the earlier values in the sequence. Unlike the random or pseudo-random numbers used in the Monte Carlo method as described above, the values of the Halton sequence are not statistically independent; on the contrary, the values of the Halton sequence are strictly deterministic, “maximally avoiding” each other over the interval, and so they will not clump, whereas the random or pseudo-random numbers used in the Monte Carlo method may clump.

It will be appreciated that the Halton sequence as described above provides a sequence of values over the interval from zero to one along a single dimension. A multi-dimensional Halton sequence can be generated in a similar manner, but using a different base for each dimension, where the bases are relatively prime.
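By way of illustration, the following is a minimal C++ sketch of the radical inverse of Equation (1.4), computed digit by digit per steps (1)-(3) above, together with two-dimensional Halton points built from the relatively prime bases 2 and 3. The function names are illustrative only, and this sketch is distinct from the code segments provided later in this description.

```cpp
#include <cstdio>

// Radical inverse Phi_b(i): write i in base b, mirror the digits around
// the radix point, and read the result as a fraction in [0,1).
double radicalInverse(unsigned i, unsigned base) {
    double result = 0.0;
    double digitWeight = 1.0 / base;        // weight of the current digit
    while (i > 0) {
        result += (i % base) * digitWeight; // least significant digit first
        i /= base;
        digitWeight /= base;
    }
    return result;
}

int main() {
    // A two-dimensional Halton point uses one base per dimension,
    // with the bases relatively prime (here 2 and 3).
    for (unsigned i = 0; i < 8; ++i)
        std::printf("x_%u = (%g, %g)\n", i,
                    radicalInverse(i, 2), radicalInverse(i, 3));
    return 0;
}
```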

A generalized Halton sequence, of which the Halton sequence described above is a special case, is generated as follows. For each starting point along the numerical interval from zero to one, a different Halton sequence is generated. Defining the pseudo-sum $x \oplus_p y$ for any x and y over the interval from zero to one, for any integer p having a value greater than or equal to two, the pseudo-sum is formed by adding the digits representing x and y in reverse order, from the most significant digit to the least significant digit, and for each addition also adding in the carry generated from the sum of the next more significant digits. Thus, if x in base b is represented by $0.X_1 X_2 \dots X_{M-1} X_M$, where each $X_m$ is a digit in base b, and if y in base b is represented by $0.Y_1 Y_2 \dots Y_{N-1} Y_N$, where each $Y_n$ is a digit in base b, where M is the number of digits in the representation of x in base b, where N is the number of digits in the representation of y in base b, and where M and N may differ, then the pseudo-sum z is represented by $0.Z_1 Z_2 \dots Z_{L-1} Z_L$, where each $Z_l$ is a digit in base b given by $Z_l = (X_l + Y_l + C_l) \bmod b$, where mod represents the modulo function, and

$C_l = \begin{cases} 1 & \text{for } X_{l-1} + Y_{l-1} + C_{l-1} \ge b \\ 0 & \text{otherwise} \end{cases}$

is the carry value from the (l−1)-st digit position, with $C_1$ being set to zero.

Using the pseudo-sum function as described above, the generalized Halton sequence that is used in the system described in Grabenstein is generated as follows. If b is an integer, and $x_0$ is an arbitrary value on the interval from zero to one, then the b-adic von Neumann-Kakutani transformation $T_b(x)$ is given by

$T_b(x) := x \oplus_b \frac{1}{b}$  (1.5)

and the generalized Halton sequence x0, x1, x2, . . . is defined recursively as


$x_{n+1} = T_b(x_n)$  (1.6)

From Equations (1.5) and (1.6), it is clear that, for any value of b, a different generalized Halton sequence is generated for each starting value of x, that is, for each $x_0$. It will be appreciated that the Halton sequence $H_b^k$ as described above is a special case of the generalized Halton sequence of Equations (1.5) and (1.6) for $x_0 = 0$.
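The recursion of Equations (1.5) and (1.6) can also be evaluated incrementally, without re-deriving the digits of an index. The following is a minimal C++ sketch (the function name is illustrative) of one application of the transformation $T_b$; the carry of the pseudo-sum $x \oplus_b 1/b$ propagates toward the less significant digits, and starting from $x_0 = 0$ the iteration reproduces the ordinary Halton sequence.

```cpp
#include <cstdio>

// One step of the b-adic von Neumann-Kakutani transformation
// T_b(x) = x (+)_b 1/b: increment the leading base-b digit of the
// fraction x, propagating the carry toward less significant digits.
double vonNeumannKakutani(double x, unsigned b) {
    double h = 1.0 / b;
    if (x + h < 1.0)
        return x + h;            // leading digit is below b-1: no carry
    // leading digit is b-1: it wraps to zero and the carry moves one
    // digit position to the right (toward less significant digits)
    return vonNeumannKakutani(b * x - (b - 1.0), b) / b;
}

int main() {
    // Starting from x0 = 0 this reproduces the Halton sequence in base 2:
    // 0, 1/2, 1/4, 3/4, 1/8, 5/8, 3/8, 7/8, ...
    double x = 0.0;
    for (int n = 0; n < 8; ++n) {
        std::printf("x_%d = %g\n", n, x);
        x = vonNeumannKakutani(x, 2);
    }
    return 0;
}
```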

The use of a strictly deterministic low-discrepancy sequence such as the Halton sequence or the generalized Halton sequence can provide a number of advantages over the random or pseudo-random numbers that are used in connection with the Monte Carlo technique. Unlike the random numbers used in connection with the Monte Carlo technique, the low-discrepancy sequences ensure that the sample points are more evenly distributed over a respective region or time interval, thereby reducing error in the image which can result from the clumping of sample points that can occur in the Monte Carlo technique. That can facilitate the generation of images of improved quality when using the same number of sample points, at the same computational cost, as in the Monte Carlo technique.

It would also be desirable to provide methods and systems that provide image synthesis by adaptive quasi-Monte Carlo integration and adaptive integro-approximation in conjunction with techniques including a scrambled Halton Sequence, stratification by radical inversion, stratified samples from the Halton Sequence, deterministic scrambling, bias elimination by randomization, adaptive and deterministic anti-aliasing, anti-aliasing by rank-1 lattices, and trajectory splitting by dependent sampling and domain stratification induced by rank-1 lattices.

SUMMARY OF THE INVENTION

One aspect of the present invention relates to the generation and synthesis of images, such as for display in a motion picture or other dynamic display. The invention provides improved methods and systems for image synthesis, including efficient methods for determining intensity whereby realistic images can be rendered efficiently within the limits of available computational platforms.

More particularly, the invention provides a new and improved system and computer-implemented method for evaluating integrals using quasi-Monte Carlo methodologies, and in particular embodiments, adaptive quasi-Monte Carlo integration and adaptive integro-approximation in conjunction with techniques including a scrambled Halton Sequence, stratification by radical inversion, stratified samples from the Halton Sequence, deterministic scrambling, bias elimination by randomization, adaptive and deterministic anti-aliasing, anti-aliasing by rank-1 lattices, and trajectory splitting by dependent sampling and domain stratification induced by rank-1 lattices.

In brief summary, the invention provides a computer graphics system for generating a pixel value for a pixel in an image, the pixel value being representative of a point in a scene as recorded on an image plane of a simulated camera, the computer graphics system comprising a sample point generator and a function evaluator. The sample point generator is configured to generate a set of sample points, at least one sample point being generated using at least one dependent sample, the at least one dependent sample comprising at least one element of a low-discrepancy sequence offset by at least one element of another low-discrepancy sequence. The function evaluator is configured to generate at least one value representing an evaluation of a selected function at one of the sample points generated by the sample point generator, the value generated by the function evaluator corresponding to the pixel value.

Another aspect of the invention comprises a computer program product for use in a computer graphics system, for enabling the computer graphics system to generate a pixel value for a pixel in an image displayable via a display device, the pixel value being representative of a point in a scene, the computer program product comprising a computer-readable medium having encoded thereon:

A. computer-readable program instructions executable to enable the computer graphics system to generate a set of sample points, at least one sample point being generated using at least one sample, the at least one sample comprising at least one element of a low-discrepancy sequence, and wherein the set of sample points comprises quasi-Monte Carlo points of low discrepancy; and

B. computer-readable program instructions executable to enable the computer graphics system to evaluate a selected function at one of the sample points to generate a value, the generated value corresponding to the pixel value, the pixel value being usable to generate a display-controlling electronic output.

BRIEF DESCRIPTION OF THE DRAWINGS

This invention is pointed out with particularity in the appended claims. The above and further advantages of this invention may be better understood by referring to the following description taken in conjunction with the accompanying drawings, in which:

FIG. 1 shows a diagram of a computer graphics system suitable for use in implementations of various aspects of the invention described herein.

FIG. 2 shows a diagram illustrating components of the computer graphics system and processor module shown in FIG. 1.

FIG. 2A shows a diagram illustrating further components of a computing system according to aspects of the present invention.

FIG. 3 shows a diagram of a network configuration suitable for use in implementations of various aspects of the invention described herein.

FIG. 4 shows a diagram illustrating components of the network configuration shown in FIG. 3.

FIGS. 5 and 6 show code fragments for generating imaging data in accordance with aspects of the invention.

FIGS. 7A and 7B show plots of the first two components of the Halton sequence.

FIGS. 8A and 8B show a pair of low-dimensional projections, including the Halton sequence and the scrambled Halton sequence for designated points.

FIG. 9 shows a plot of a sample pattern that is tiled over the image plane.

FIG. 10 shows a plot illustrating an interleaved adaptive supersampling technique according to a further aspect of the invention.

FIGS. 11A-H show a series of plots illustrating classical quasi-Monte Carlo points, along with their mutual minimum distance.

FIGS. 12A-C show a series of drawings illustrating selection of lattices by maximum minimum distance.

FIG. 13 shows a computer-generated image of an infinite plane with a checkerboard texture.

FIGS. 14A-C show a series of alternative sampling patterns used in computer graphics.

FIGS. 15A-E show sampling patterns using quasi-Monte Carlo points.

FIG. 16 shows an illustration of how samples in a pixel are determined by tiled instances of a Hammersley point set.

FIGS. 17A and 17B are plots illustrating how samples from the Halton sequence in the unit square are scaled to fit the pixel raster.

FIGS. 18A-C are a series of plots illustrating replications by rank-1 lattices.

FIGS. 19-22 show a series of flowcharts of general methods according to aspects of the invention.

DETAILED DESCRIPTION OF THE INVENTION

The present invention relates to the generation and synthesis of images, such as for display in a motion picture or other dynamic display. The techniques described herein are practiced as part of a computer graphics system, in which a pixel value is generated for each pixel in an image. The pixel value is representative of a point in a scene as recorded on an image plane of a simulated camera. The computer graphics system is configured to generate the pixel value for an image using a selected methodology.

The following discussion describes methods, structures, systems, and display technology in accordance with these techniques. It will be understood by those skilled in the art that the described methods and systems can be implemented in software, hardware, or a combination of software and hardware, using conventional computer apparatus such as a personal computer (PC) or equivalent device operating in accordance with (or emulating) a conventional operating system such as Microsoft Windows, Linux, or Unix, either in a standalone configuration or across a network. The various processing means and computational means described below and recited in the claims may therefore be implemented in the software and/or hardware elements of a properly configured digital processing device or network of devices. Processing may be performed sequentially or in parallel, and may be implemented using special purpose or reconfigurable hardware.

FIG. 1 attached hereto depicts an illustrative computer system 10 that makes use of such a strictly deterministic methodology. With reference to FIG. 1, the computer system 10 in one embodiment includes a processor module 11 and operator interface elements comprising operator input components such as a keyboard 12A and/or a mouse 12B (generally identified as operator input element(s) 12) and an operator output element such as a video display device 13. The illustrative computer system 10 is of the conventional stored-program computer architecture. The processor module 11 includes, for example, one or more processors, memory and mass storage devices, such as disk and/or tape storage elements (not separately shown), which perform processing and storage operations in connection with digital data provided thereto. The operator input element(s) 12 are provided to permit an operator to input information for processing. The video display device 13 is provided to display output information generated by the processor module 11 on a screen 14 to the operator, including data that the operator may input for processing, information that the operator may input to control processing, as well as information generated during processing. The processor module 11 generates information for display by the video display device 13 using a so-called “graphical user interface” (“GUI”), in which information for various applications programs is displayed using various “windows.” Although the computer system 10 is shown as comprising particular components, such as the keyboard 12A and mouse 12B for receiving input information from an operator, and a video display device 13 for displaying output information to the operator, it will be appreciated that the computer system 10 may include a variety of components in addition to or instead of those depicted in FIG. 1.

In addition, the processor module 11 includes one or more network ports, generally identified by reference numeral 14, which are connected to communication links which connect the computer system 10 in a computer network. The network ports enable the computer system 10 to transmit information to, and receive information from, other computer systems and other devices in the network. In a typical network organized according to, for example, the client-server paradigm, certain computer systems in the network are designated as servers, which store data and programs (generally, “information”) for processing by the other, client computer systems, thereby to enable the client computer systems to conveniently share the information. A client computer system which needs access to information maintained by a particular server will enable the server to download the information to it over the network. After processing the data, the client computer system may also return the processed data to the server for storage. In addition to computer systems (including the above-described servers and clients), a network may also include, for example, printers and facsimile devices, digital audio or video storage and distribution devices, and the like, which may be shared among the various computer systems connected in the network. The communication links interconnecting the computer systems in the network may, as is conventional, comprise any convenient information-carrying medium, including wires, optical fibers or other media for carrying signals among the computer systems. Computer systems transfer information over the network by means of messages transferred over the communication links, with each message including information and an identifier identifying the device to receive the message.

FIG. 2 shows a diagram illustrating the sample point generator 20, function evaluator 22, simulated rays 24 and simulated lens 26 processing aspects of a computer graphics system 10 and processor module 11 in accordance with the invention.

FIG. 2A shows a diagram illustrating additional components of the computing system 10 according to further aspects of the invention, described below. As shown in FIG. 2A, the computing system 10 further includes a sample point generator 30 for generating a set of sample points, at least one sample point being generated using at least one sample, the at least one sample comprising at least one element of a sequence, and wherein the set of sample points comprises quasi-Monte Carlo points. The computing system 10 also includes a function evaluator 32 in communication with the sample point generator 30 for evaluating a selected function at one of the sample points to generate a value, the generated value corresponding to the pixel value. The pixel value is usable to generate an electronic output that controls the display 13.

In addition to the computer system 10 shown in FIGS. 1 and 2, methods, devices or software products in accordance with the invention can operate on any of a wide range of conventional computing devices and systems, such as those depicted by way of example in FIG. 3 (e.g. network system 100), whether standalone, networked, portable or fixed, including conventional PCs 102, laptops 104, handheld or mobile computers 106, or across the Internet or other networks 108, which may in turn include servers 110 and storage 112.

In line with conventional computer software and hardware practice, a software application configured in accordance with the invention can operate within, e.g., a PC 102 like that shown in FIG. 4, in which program instructions can be read from CD ROM 116, magnetic disk or other storage 120 and loaded into RAM 114 for execution by CPU 118. Data can be input into the system via any known device or means, including a conventional keyboard, scanner, mouse or other elements 103.

Those skilled in the art will understand that the method aspects of the invention described below can be executed in hardware elements, such as an Application-Specific Integrated Circuit (ASIC) constructed specifically to carry out the processes described herein, using ASIC construction techniques known to ASIC manufacturers. Various forms of ASICs are available from many manufacturers, although currently available ASICs do not provide the functions described in this patent application. Such manufacturers include Intel Corporation and NVIDIA Corporation, both of Santa Clara, Calif. The actual semiconductor elements of such ASICs and equivalent integrated circuits are not part of the present invention, and will not be discussed in detail herein.

Those skilled in the art will also understand that method aspects of the present invention can be carried out within commercially available digital processing systems, such as workstations and personal computers (PCs), operating under the collective command of the workstation or PC's operating system and a computer program product configured in accordance with the present invention. The term “computer program product” can encompass any set of computer-readable program instructions encoded on a computer-readable medium. A computer-readable medium can encompass any form of computer-readable element, including, but not limited to, a computer hard disk, computer floppy disk, computer-readable flash drive, computer-readable RAM or ROM element or any other known means of encoding, storing or providing digital information, whether local to or remote from the workstation, PC or other digital processing device or system. Various forms of computer-readable elements and media are well known in the computing arts, and their selection is left to the implementer. In each case, the invention is operable to enable a computer system to calculate a pixel value, and the pixel value can be used by hardware elements in the computer system, which can be conventional elements such as graphics cards or display controllers, to generate a display-controlling electronic output. Conventional graphics cards and display controllers are well known in the computing arts, are not necessarily part of the present invention, and their selection can be left to the implementer.

In particular, the systems illustrated in FIGS. 1-4 may be used, in accordance with the following described aspects of the invention, to implement a computer graphics system that evaluates integrals using a quasi-Monte Carlo methodology, which can include adaptive quasi-Monte Carlo integration and adaptive integro-approximation in conjunction with techniques including a scrambled Halton Sequence, stratification by radical inversion, stratified samples from the Halton Sequence, deterministic scrambling, bias elimination by randomization, adaptive and deterministic anti-aliasing, anti-aliasing by rank-1 lattices, and trajectory splitting by dependent sampling and domain stratification induced by rank-1 lattices.

Various aspects, examples, features, embodiments and practices in accordance with the present invention will be set forth in detail in the present Detailed Description of the Invention, which is organized into the following sections:

I. Introduction, Overview and Description of Quasi-Monte Carlo Methodologies in Which Sample Points Represent Dependent Samples Generated Using a Low-Discrepancy Sequence
II. Image Synthesis by Adaptive Quasi-Monte Carlo Integration
III. Additional Examples and Points Regarding Quasi-Monte Carlo Integration
IV. General Methods

The present application is a continuation-in-part of pending, commonly owned U.S. patent application Ser. No. 10/299,958 filed Nov. 19, 2002 (Attorney Docket MENT-072, inventor: Alexander Keller), entitled “System and Computer-Implemented Method for Evaluating Integrals Using a Quasi-Monte Carlo Methodology in Which Sample Points Represent Dependent Samples Generated Using a Low-Discrepancy Sequence,” and the detailed description of the present invention begins by setting forth salient points from that application.

I. Introduction, Overview and Description of Quasi-Monte Carlo Methodologies in which Sample Points Represent Dependent Samples Generated Using a Low-Discrepancy Sequence

Aspects of the present invention provide a computer graphic system and method for generating pixel values for pixels in an image of a scene, which makes use of a strictly deterministic quasi-Monte Carlo methodology in conjunction with various sub-techniques, which can include, for example, trajectory splitting by dependent sampling for generating sample points for use in generating sample values for evaluating the integral or integrals whose function(s) represent the contributions of the light reflected from the various points in the scene to the respective pixel value, rather than the random or pseudo-random Monte Carlo methodology which has been used in the past. The strictly deterministic methodology ensures a priori that the sample points will be generally more evenly distributed over the interval or region over which the integral(s) is (are) to be evaluated in a low-discrepancy manner.

It will be helpful to initially provide some background on operations performed by the computer graphics system in generating an image. Generally, the computer graphics system generates an image that attempts to simulate an image of a scene that would be generated by a camera. The camera includes a shutter that will be open for a predetermined time T starting at a time $t_0$ to allow light from the scene to be directed to an image plane. The camera may also include a lens or lens model (generally, “lens”) that serves to focus light from the scene onto the image plane. The average radiance flux $L_{m,n}$ through a pixel at position (m, n) on an image plane P, which represents the plane of the camera's recording medium, is determined by

$L_{m,n} = \frac{1}{A_P \cdot T \cdot A_L} \int_{A_P} \int_{t_0}^{t_0+T} \int_{A_L} L\big(h(x,t,y), -\omega(x,t,y)\big)\, f_{m,n}(x,y,t)\; dy\, dt\, dx$  (1.7)

where $A_P$ refers to the area of the pixel, $A_L$ refers to the area of the portion of the lens through which rays of light pass from the scene to the pixel, and $f_{m,n}$ represents a filtering kernel associated with the pixel. An examination of the integral in Equation (1.7) will reveal that, for the variables of integration, x, y and t, the variable y refers to integration over the lens area $A_L$, the variable t refers to integration over time (the time interval from $t_0$ to $t_0+T$) and the variable x refers to integration over the pixel area $A_P$.

The value of the integral in Equation (1.7) is approximated in accordance with a quasi-Monte Carlo methodology by identifying $N_P$ sample points $x_i$ in the pixel area, and, for each sample point, shooting $N_T$ rays at times in the time interval $t_0$ to $t_0+T$ through the focus into the scene, with each ray spanning $N_L$ sample points $y_{i,j,k}$ on the lens area $A_L$. The manner in which subpixel jitter positions $x_i$, points in time $t_{i,j}$ and positions on the lens $y_{i,j,k}$ are determined will be described below. These three parameters determine the primary ray hitting the scene geometry in $h(x_i, t_{i,j}, y_{i,j,k})$ with the ray direction $\omega(x_i, t_{i,j}, y_{i,j,k})$. In this manner, the value of the integral in Equation (1.7) can be approximated as follows:

$L_{m,n} \approx \frac{1}{N} \sum_{i=0}^{N_P-1} \frac{1}{N_T} \sum_{j=0}^{N_T-1} \frac{1}{N_L} \sum_{k=0}^{N_L-1} L\big(h(x_i, t_{i,j}, y_{i,j,k}), -\omega(x_i, t_{i,j}, y_{i,j,k})\big)\, f_{m,n}(x_i, t_{i,j}, y_{i,j,k})$  (1.8)

where N is the total number of rays directed at the pixel.

It will be appreciated that rays directed from the scene toward the image plane can comprise rays directly from one or more light sources in the scene, as well as rays reflected off surfaces of objects in the scene. In addition, it will be appreciated that a ray that is reflected off a surface may have been directed to the surface directly from a light source, or may be a ray that was reflected off another surface. For a surface that reflects light rays, a reflection operator $T_{f_r}$ is defined that includes a diffuse portion $T_{f_d}$, a glossy portion $T_{f_g}$ and a specular portion $T_{f_s}$, or


$T_{f_r} = T_{f_d} + T_{f_g} + T_{f_s}$  (1.9)

In that case, the Fredholm integral equation $L = L_e + T_{f_r} L$ governing light transport can be represented as

$L = L_e + T_{f_r - f_s} L_e + T_{f_g}(L - L_e) + T_{f_s} L + T_{f_d} T_{f_g + f_s} L + T_{f_d} T_{f_d} L$  (1.10)

where transparency has been ignored for the sake of simplicity; transparency is treated in an analogous manner. The individual terms in Equation (1.10) are as follows:

    • (i) $L_e$ represents flux due to a light source;
    • (ii) $T_{f_r - f_s} L_e$ (where $T_{f_r - f_s} = T_{f_r} - T_{f_s}$) represents direct illumination, that is, flux reflected off a surface that was provided thereto directly by a light source; the specular component, associated with the specular portion $T_{f_s}$ of the reflection operator, is treated separately since it is modeled using a δ-distribution;
    • (iii) $T_{f_g}(L - L_e)$ represents glossy illumination, which is handled by recursive distribution ray tracing, where, in the recursion, the source illumination has already been accounted for by the direct illumination (item (ii) above);
    • (iv) $T_{f_s} L$ represents a specular component, which is handled by recursively using L for the reflected ray;
    • (v) $T_{f_d} T_{f_g + f_s} L$ (where $T_{f_g + f_s} = T_{f_g} + T_{f_s}$) represents a caustic component, which is a ray that has been reflected off a glossy or specular surface (the $T_{f_g + f_s}$ operator) before hitting a diffuse surface (the $T_{f_d}$ operator); this contribution can be approximated by a high-resolution caustic photon map; and
    • (vi) $T_{f_d} T_{f_d} L$ represents ambient light, which is very smooth and is therefore approximated using a low-resolution global photon map.

As noted above, the value of the integral in Equation (1.7) is approximated by solving Equation (1.8) making use of sample points $x_i$, $t_{i,j}$, and $y_{i,j,k}$, where $x_i$ refers to sample points within the area $A_P$ of the respective pixel at location (m, n) in the image plane, $t_{i,j}$ refers to sample points within the time interval $t_0$ to $t_0+T$ during which the shutter is open, and $y_{i,j,k}$ refers to sample points on the lens area $A_L$. In accordance with one aspect of the invention, the sample points $x_i$ comprise two-dimensional Hammersley points, which are defined as

$\left( \frac{i}{N}, \Phi_2(i) \right)$

where $0 \le i < N$, and $\Phi_2(i)$ refers to the radical inverse of i in base two. Generally, the s-dimensional Hammersley point set is defined as follows:

$U^{\text{Hammersley}}_{N,s} : \{0, \dots, N-1\} \to I^s, \quad i \mapsto x_i := \left( \frac{i}{N}, \Phi_{b_1}(i), \dots, \Phi_{b_{s-1}}(i) \right)$  (1.11)

where $I^s$ is the s-dimensional unit cube $[0,1)^s$ (that is, an s-dimensional cube each of whose dimensions includes zero and excludes one), the number of points N in the set is fixed, and $b_1, \dots, b_{s-1}$ are bases. The bases do not need to be prime numbers, but they are preferably pairwise relatively prime to provide a uniform distribution. The radical inverse function $\Phi_b$, in turn, is generally defined as

$\Phi_b : \mathbb{N}_0 \to I, \quad i = \sum_{j=0}^{\infty} a_j(i)\, b^j \;\mapsto\; \sum_{j=0}^{\infty} a_j(i)\, b^{-j-1}$  (1.12)

where $(a_j(i))_{j=0}^{\infty}$ is the digit representation of i in integer base b. For $N = (2^n)^2$, the two-dimensional Hammersley points are a (0, 2n, 2)-net in base two, which is stratified on a $2^n \times 2^n$ grid and is a Latin hypercube sample at the same time. Considering the grid as subpixels, the complete subpixel grid underlying the image plane can be filled by simply abutting copies of the grid to each other.

Given integer subpixel coordinates $(s_x, s_y)$, the instance i and coordinates (x, y) for the sample point $x_i$ in the image plane can be determined as follows.

Preliminarily, examining the two-dimensional Hammersley points

$\left( \frac{i}{N}, \Phi_2(i) \right)_{i=0}^{N-1}$

one observes the following:

    • (a) each line in the stratified pattern is a shifted copy of another, and
    • (b) the pattern is symmetric to the line y=x, that is, each column is a shifted copy of another column.

Accordingly, given the integer permutation $\sigma(k) := 2^n \Phi_2(k)$ for $0 \le k < 2^n$, subpixel coordinates $(s_x, s_y)$ are mapped onto strata coordinates $(j, k) := (s_x \bmod 2^n, s_y \bmod 2^n)$, an instance number i is computed as


$i = j \cdot 2^n + \sigma(k)$  (1.13)

and the positions of the jittered subpixel sample points are determined according to

$x_i = \left( s_x + \Phi_2(k),\; s_y + \Phi_2(j) \right) = \left( s_x + \frac{\sigma(k)}{2^n},\; s_y + \frac{\sigma(j)}{2^n} \right)$  (1.14)

An efficient algorithm for generating the positions of the jittered subpixel sample points $x_i$ will be provided below in connection with Code Segment 1. A pattern of sample points whose positions are determined as described above in connection with Equations (1.13) and (1.14) has the advantage of much reduced discrepancy over a pattern determined using a Halton sequence or windowed Halton sequence, as described in Grabenstein, and therefore the approximation described above in connection with Equation (1.8) gives in general a better estimate of the value of the integral described above in connection with Equation (1.7). In addition, if N is sufficiently large, sample points in adjacent pixels will have different patterns, reducing the likelihood that undesirable artifacts will be generated in the image.
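The following C++ sketch illustrates one possible reading of Equations (1.13) and (1.14): in base two the permutation $\sigma(k) = 2^n \Phi_2(k)$ reduces to a reversal of the n low-order bits. The naming is illustrative, and this sketch is not the Code Segment 1 referenced above.

```cpp
#include <cstdio>

// sigma(k) = 2^n * Phi_2(k): reverse the n low-order bits of k,
// yielding an integer in [0, 2^n).
unsigned bitReverse(unsigned k, unsigned n) {
    unsigned r = 0;
    for (unsigned bit = 0; bit < n; ++bit) {
        r = (r << 1) | (k & 1u);
        k >>= 1;
    }
    return r;
}

// Given integer subpixel coordinates (sx, sy), compute the instance number
// i of Equation (1.13) and the jittered sample position of Equation (1.14)
// for a 2^n x 2^n Hammersley pattern tiled over the image plane.
void subpixelSample(unsigned sx, unsigned sy, unsigned n,
                    unsigned *i, double *x, double *y) {
    unsigned mask = (1u << n) - 1u;
    unsigned j = sx & mask;                        // strata coordinates
    unsigned k = sy & mask;                        // (sx mod 2^n, sy mod 2^n)
    *i = (j << n) + bitReverse(k, n);              // i = j*2^n + sigma(k)
    *x = sx + bitReverse(k, n) / double(1u << n);  // sx + sigma(k)/2^n
    *y = sy + bitReverse(j, n) / double(1u << n);  // sy + sigma(j)/2^n
}

int main() {
    for (unsigned sy = 0; sy < 4; ++sy)
        for (unsigned sx = 0; sx < 4; ++sx) {
            unsigned i; double x, y;
            subpixelSample(sx, sy, 2, &i, &x, &y);
            std::printf("(%u,%u): i=%2u  sample=(%g, %g)\n", sx, sy, i, x, y);
        }
    return 0;
}
```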

A “ray tree” is a collection of paths of light rays that are traced from a point on the simulated camera's image plane into the scene. The computer graphics system 10 generates a ray tree by recursively following transmission, subsequent reflection and shadow rays using trajectory splitting. In accordance with another aspect of the invention, a path is determined by the components of one vector of a global generalized scrambled Hammersley point set. Generally, a scrambled Hammersley point set reduces or eliminates a problem that can arise in connection with higher-dimensioned low-discrepancy sequences, since the radical inverse function $\Phi_b$ typically has subsequences of b−1 equidistant values spaced by 1/b. Although these correlation patterns are barely noticeable in the full s-dimensional space, they are undesirable since they are prone to aliasing. The computer graphics system 10 attenuates this effect by scrambling, which corresponds to application of a permutation to the digits of the b-ary representation used in the radical inversion. For a permutation σ from the symmetric group $S_b$ over the integers 0, . . . , b−1, the scrambled radical inverse is defined as

$\Phi_b : \mathbb{N}_0 \times S_b \to I, \quad (i, \sigma) \mapsto \sum_{j=0}^{\infty} \sigma\big(a_j(i)\big)\, b^{-j-1}, \quad i = \sum_{j=0}^{\infty} a_j(i)\, b^j$  (1.15)

If the permutation σ is the identity, the scrambled radical inverse corresponds to the unscrambled radical inverse. In one embodiment, the computer graphics system 10 generates the permutation $\sigma_b$ recursively as follows. Starting from the permutation $\sigma_2 = (0, 1)$ for base b = 2, the sequence of permutations is defined as follows:

(i) if the base b is even, the permutation $\sigma_b$ is generated by first taking the values of $2\sigma_{b/2}$ and then appending the values of $2\sigma_{b/2} + 1$; and

(ii) if the base b is odd, the permutation $\sigma_b$ is generated by taking the values of $\sigma_{b-1}$, incrementing each value that is greater than or equal to $\frac{b-1}{2}$ by one, and inserting the value $\frac{b-1}{2}$ in the middle.

This recursive procedure results in

$\sigma_2 = (0, 1)$

$\sigma_3 = (0, 1, 2)$

$\sigma_4 = (0, 2, 1, 3)$

$\sigma_5 = (0, 3, 2, 1, 4)$

$\sigma_6 = (0, 2, 4, 1, 3, 5)$

$\sigma_7 = (0, 2, 5, 3, 1, 4, 6)$

$\sigma_8 = (0, 4, 2, 6, 1, 5, 3, 7) \ldots$
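A direct C++ transcription of this recursion (a sketch with illustrative naming) reproduces the permutations listed above:

```cpp
#include <cstdio>
#include <vector>

// Build sigma_b by the recursion given in the text: start from
// sigma_2 = (0, 1); for even b take 2*sigma_{b/2} followed by
// 2*sigma_{b/2}+1; for odd b take sigma_{b-1}, increment every value
// greater than or equal to (b-1)/2, and insert (b-1)/2 in the middle.
std::vector<unsigned> sigma(unsigned b) {
    if (b == 2) return {0u, 1u};
    std::vector<unsigned> s;
    if (b % 2 == 0) {                       // even base
        std::vector<unsigned> half = sigma(b / 2);
        for (unsigned v : half) s.push_back(2 * v);
        for (unsigned v : half) s.push_back(2 * v + 1);
    } else {                                // odd base
        s = sigma(b - 1);
        unsigned mid = (b - 1) / 2;
        for (unsigned &v : s)
            if (v >= mid) ++v;
        s.insert(s.begin() + mid, mid);
    }
    return s;
}

int main() {
    for (unsigned b = 2; b <= 8; ++b) {
        std::printf("sigma_%u = (", b);
        for (unsigned v : sigma(b)) std::printf(" %u", v);
        std::printf(" )\n");
    }
    return 0;
}
```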

The computer graphics system 10 can generate a generalized low-discrepancy point set as follows. It is often possible to obtain a low-discrepancy sequence by taking any rational s-dimensional point x as a starting point and determining a successor by applying the corresponding incremental radical inverse function to the components of x. The result is referred to as the generalized low-discrepancy point set. This can be applied to both the Halton sequence and the Hammersley sequence. In the case of the generalized Halton sequence, this can be formalized as


$x_i = \big( \Phi_{b_1}(i + i_1), \Phi_{b_2}(i + i_2), \dots, \Phi_{b_s}(i + i_s) \big)$  (1.16)

where the integer vector (i1, i2, . . . , is) represents the offsets per component and is fixed in advance for a generalized sequence. The integer vector can be determined by applying the inverse of the radical inversion to the starting point x. A generalized Hammersley sequence can be generated in an analogous manner.
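As an illustrative sketch, a generalized Halton point per Equation (1.16) can be computed by offsetting the instance number independently in each component. The particular bases and offsets below are arbitrary demonstration values, not values prescribed by the text.

```cpp
#include <cstdio>

// Radical inverse Phi_b(i): digits of i in base b mirrored into a fraction.
double radicalInverse(unsigned i, unsigned base) {
    double r = 0.0, v = 1.0 / base;
    for (; i > 0; i /= base, v /= base)
        r += (i % base) * v;
    return r;
}

// Generalized Halton point of Equation (1.16): each component uses its own
// base and a fixed per-component instance offset.
void generalizedHalton(unsigned i, const unsigned *bases,
                       const unsigned *offsets, unsigned s, double *out) {
    for (unsigned c = 0; c < s; ++c)
        out[c] = radicalInverse(i + offsets[c], bases[c]);
}

int main() {
    const unsigned bases[3]   = {2, 3, 5};   // pairwise relatively prime
    const unsigned offsets[3] = {0, 5, 11};  // arbitrary fixed offsets
    double p[3];
    for (unsigned i = 0; i < 4; ++i) {
        generalizedHalton(i, bases, offsets, 3, p);
        std::printf("x_%u = (%g, %g, %g)\n", i, p[0], p[1], p[2]);
    }
    return 0;
}
```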

Returning to trajectory splitting: generally, trajectory splitting is the evaluation of a local integral, which is of small dimension and which makes the actual integrand smoother, thus improving overall convergence. Applying replication, positions of low-discrepancy sample points are determined that can be used in evaluating the local integral. The low-discrepancy sample points are shifted by the corresponding elements of the global scrambled Hammersley point set. Since trajectory splitting can occur multiple times on the same level in the ray tree, branches of the ray tree are decorrelated in order to avoid artifacts, the decorrelation being accomplished by generalizing the global scrambled Hammersley point set.

An efficient algorithm for generating a ray tree will be provided below in connection with Code Segment 2. Generally, in that algorithm, the instance number i of the low-discrepancy vector, as determined above in connection with Equation (1.13), and the number d of used components, which corresponds to the current integral dimension, are added to the data structure that is maintained for the respective ray in the ray tree. The ray tree of a subpixel sample is completely specified by the instance number i. After the dimension has been set to “two,” which determines the component of the global Hammersley point set that is to be used next, the primary ray is cast into the scene to span its ray tree. In determining the deterministic splitting by the components of low discrepancy sample points, the computer graphics system 10 initially allocates the required number of dimensions Δd. For example, in simulating glossy scattering, the required number of dimensions will correspond to “two.” Thereafter, the computer graphics system 10 generates scattering directions from the offset given by the scrambled radical inverses


$\Phi_{b_d}(i, \sigma_{b_d}), \dots, \Phi_{b_{d+\Delta d-1}}(i, \sigma_{b_{d+\Delta d-1}})$

yielding the instances

$(y_{i,j})_{j=0}^{M-1} = \left( \Phi_{b_d}(i, \sigma_{b_d}) \oplus \frac{j}{M},\; \dots,\; \Phi_{b_{d+\Delta d-1}}(i, \sigma_{b_{d+\Delta d-1}}) \oplus \Phi_{b_{d+\Delta d-2}}(j, \sigma_{b_{d+\Delta d-2}}) \right)$  (1.17)

where “⊕” refers to addition modulo one. Each direction of the M replicated rays is determined by $y_{i,j}$ and enters the next level of the ray tree with $d' := d + \Delta d$ as the new integral dimension, in order to use the next elements of the low-discrepancy vector, and $i' = i + j$ in order to decorrelate subsequent trajectories. Using an infinite sequence of low-discrepancy sample points, the replication heuristic is turned into an adaptive consistent sampling arrangement. That is, the computer graphics system 10 can fix the sampling rate ΔM, compare current and previous estimates every ΔM samples, and, if the estimates differ by less than a predetermined threshold value, terminate sampling. The computer graphics system 10 can, in turn, determine the threshold value by importance information, that is, by how much the local integral contributes to the global integral.

As noted above, the integral described above in connection with Equation (1.7) is over a finite time period T from $t_0$ to $t_0+T$, during which time the shutter of the simulated camera is open. During the time period, if an object in the scene moves, the moving object may preferably be depicted in the image as blurred, with the extent of blurring being a function of the object's motion and the length T of the time interval during which the shutter is open. Generally, motion during the time an image is recorded is linearly approximated by motion vectors, in which case the integrand in Equation (1.7) is relatively smooth over the time the shutter is open and is suited for correlated sampling. For a ray instance i, started at the subpixel position $x_i$, the offset $\Phi_3(i)$ into the time interval is generated and the $N_T - 1$ subsequent samples

$\Phi_3(i) + \frac{j}{N_T} \;(\mathrm{mod}\ 1)$

are generated for $0 < j < N_T$, that is,

$t_{i,j} := t_0 + \left( \Phi_3(i) \oplus \frac{j}{N_T} \right) \cdot T$  (1.18)

It will be appreciated that the value of $N_T$ may be chosen to be “one,” in which case there will be no subsequent samples for ray instance i. Determining sample points in this manner fills the sampling space, resulting in a more rapid convergence to the value of the integral in Equation (1.7). For subsequent trajectory splitting, rays are decorrelated by setting the instance $i' = i + j$.
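A small C++ sketch of Equation (1.18) (names illustrative): the equidistant offsets $j/N_T$ are shifted by $\Phi_3(i)$ modulo one and scaled into the shutter interval.

```cpp
#include <cmath>
#include <cstdio>

double radicalInverse(unsigned i, unsigned base) {
    double r = 0.0, v = 1.0 / base;
    for (; i > 0; i /= base, v /= base)
        r += (i % base) * v;
    return r;
}

// Equation (1.18): for ray instance i, distribute N_T correlated samples
// over the shutter interval [t0, t0+T) by shifting the equidistant offsets
// j/N_T by Phi_3(i) modulo one (addition on the one-dimensional torus).
double timeSample(unsigned i, unsigned j, unsigned NT, double t0, double T) {
    double u = std::fmod(radicalInverse(i, 3) + double(j) / NT, 1.0);
    return t0 + u * T;
}

int main() {
    for (unsigned j = 0; j < 4; ++j)
        std::printf("t_{7,%u} = %g\n", j, timeSample(7, j, 4, 0.0, 1.0));
    return 0;
}
```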

In addition to determining the position of the jittered subpixel sample point xi, and adjusting the camera and scene according to the sample point ti,j for the time, the computer graphics system also simulates depth of field. In simulating depth of field, the camera to be simulated is assumed to be provided with a lens having known optical characteristics and, using geometrical optics, the subpixel sample point xi is mapped through the lens to yield a mapped point xi′. The lens is sampled by mapping the dependent samples

$y_{i,j,k} = \left( \Phi_5(i+j, \sigma_5) \oplus \frac{k}{N_L},\; \Phi_7(i+j, \sigma_7) \oplus \Phi_2(k) \right)$  (1.19)

onto the lens area AL using a suitable one of a plurality of known transformations. As with NT, the value of NL may be chosen to be “one.” Thereafter, a ray is shot from the sample point on the lens specified by yi,j,k through the point xi′ into the scene. The offset


5(i+j,σ5),Φ7(i+j,σ7))

in Equation (1.19) comprises the next components taken from the generalized scrambled Hammersley point set, which, for trajectory splitting, is displaced by the elements

$\left( \frac{k}{N_L},\ \Phi_2(k) \right)$

of the two-dimensional Hammersley point set. The instance of the ray originating from sample point $y_{i,j,k}$ is set to $i + j + k$ in order to decorrelate further splitting down the ray tree. In Equation (1.19), the scrambled samples $(\Phi_5(i+j, \sigma_5), \Phi_7(i+j, \sigma_7))$ are used instead of the unscrambled samples $(\Phi_5(i+j), \Phi_7(i+j))$, since in bases five and seven up to five unscrambled samples will lie on a straight line, which will not be the case for the scrambled samples.

In connection with determination of a value for the direct illumination ($T_{f_r - f_s} L_e$ above), direct illumination is represented as an integral over the surface of the scene $\partial V$, which integral is decomposed into a sum of integrals, each over one of the L single area light sources in the scene. The individual integrals in turn are evaluated by dependent sampling, that is

$(T_{f_r - f_s} L_e)(y, z) = \int_{\partial V} L_e(x, y)\, f_r(x, y, z)\, G(x, y)\, dx = \sum_{k=1}^{L} \int_{\operatorname{supp} L_{e,k}} L_e(x, y)\, f_r(x, y, z)\, G(x, y)\, dx \approx \sum_{k=1}^{L} \frac{1}{M_k} \sum_{j=0}^{M_k - 1} L_e(x_j, y)\, f_r(x_j, y, z)\, G(x_j, y)$  (1.20)

where $\operatorname{supp} L_{e,k}$ refers to the surface of the respective k-th light source. In evaluating the estimate of the integral for the k-th light source, for the $M_k$-th query ray, shadow rays determine the fraction of visibility of the area light source, since the point visibility varies much more than the smooth shadow effect. For each light source, the emission $L_e$ is attenuated by a geometric term G, which includes the visibility, and the surface properties are given by a bidirectional distribution function $f_r - f_s$. These integrals are local integrals in the ray tree yielding the value of one node in the ray tree, and can be efficiently evaluated using dependent sampling. In dependent sampling, the query ray comes with the next free integral dimension d and the instance i, from which the dependent samples are determined in accordance with

$x_j = \left( \Phi_{b_d}(i, \sigma_{b_d}) \oplus \frac{j}{M_k},\; \Phi_{b_{d+1}}(i, \sigma_{b_{d+1}}) \oplus \Phi_2(j) \right)$  (1.21)

The offset


bd(i,σbd),Φbd+1(i,σbd+1))

again is taken from the corresponding generalized scrambled Hammersley point set, which shifts the two-dimensional Hammersley point set

$\left( \frac{j}{M_k},\ \Phi_2(j) \right)$

on the light source. Selecting the sample rate $M_k = 2^n$ as a power of two, the local minima are obtained for the discrepancy of the Hammersley point set, which perfectly stratifies the light source. As an alternative, the light source can be sampled using an arbitrarily-chosen number $M_k$ of sample points using the following replication rule:

$\left( \frac{j}{M_k},\ \Phi_{M_k}(j, \sigma_{M_k}) \right)_{j=0}^{M_k - 1}$

Due to the implicit stratification of the positions of the sample points as described above, the local convergence will be very smooth.
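The replication rule above combines the permutation recursion given earlier with the scrambled radical inverse of Equation (1.15). The following C++ sketch (illustrative naming; $M_k = 6$ is an assumed demonstration value) generates such a point set on the unit square:

```cpp
#include <cstdio>
#include <vector>

// Permutation sigma_b from the recursion given earlier in the text.
std::vector<unsigned> sigma(unsigned b) {
    if (b == 2) return {0u, 1u};
    std::vector<unsigned> s;
    if (b % 2 == 0) {
        std::vector<unsigned> half = sigma(b / 2);
        for (unsigned v : half) s.push_back(2 * v);
        for (unsigned v : half) s.push_back(2 * v + 1);
    } else {
        s = sigma(b - 1);
        unsigned mid = (b - 1) / 2;
        for (unsigned &v : s) if (v >= mid) ++v;
        s.insert(s.begin() + mid, mid);
    }
    return s;
}

// Scrambled radical inverse of Equation (1.15): apply the permutation to
// each base-b digit of i before mirroring the digits into a fraction.
// (The permutations above fix zero, so the trailing zero digits of i
// contribute nothing, as required.)
double scrambledRadicalInverse(unsigned i, unsigned b,
                               const std::vector<unsigned> &perm) {
    double r = 0.0, v = 1.0 / b;
    for (; i > 0; i /= b, v /= b)
        r += perm[i % b] * v;
    return r;
}

int main() {
    // Replication rule (j/M_k, Phi_{M_k}(j, sigma_{M_k})) for M_k = 6.
    const unsigned Mk = 6;
    std::vector<unsigned> perm = sigma(Mk);
    for (unsigned j = 0; j < Mk; ++j)
        std::printf("(%g, %g)\n", double(j) / Mk,
                    scrambledRadicalInverse(j, Mk, perm));
    return 0;
}
```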

The glossy contribution $T_{f_g}(L - L_e)$ is determined in a manner similar to that described above in connection with area light sources (Equations (1.20) and (1.21)), except that a model $f_g$ used to simulate glossy scattering is used instead of the bidirectional distribution function $f_r$. In determining the glossy contribution, two-dimensional Hammersley points are generated for a fixed splitting rate M and shifted modulo “one” by the offset


bd(i,σbd),Φbdg(i,σbdg))

taken from the current ray tree depth given by the dimension field d of the incoming ray. The ray trees spanned into the scattering direction are decorrelated by assigning the instance fields i′=i+j in a manner similar to that done for simulation of motion blur and depth of field, as described above. The estimates generated for all rays are averaged by the splitting rate M and propagated up the ray tree.

Volumetric effects are typically provided by performing a line integration along respective rays from their origins to the nearest surface point in the scene. In providing for a volumetric effect, the computer graphics system 10 generates from the ray data a corresponding offset $\Phi_{b_d}(i)$, which it then uses to shift the M equidistant samples on the unit interval seen as a one-dimensional torus. In doing so, the rendering time is reduced in comparison to use of an uncorrelated jittering methodology. In addition, such equidistant shifted points typically obtain the best possible discrepancy in one dimension.

Global illumination includes a class of optical effects, such as indirect illumination, diffuse and glossy inter-reflections, caustics and color bleeding, that the computer graphics system 10 simulates in generating an image of objects in a scene. Simulation of global illumination typically involves the evaluation of a rendering equation. For the general form of an illustrative rendering equation useful in global illumination simulation, namely:


$$L(\vec{x}, \vec{w}) = L_e(\vec{x}, \vec{w}) + \int_{S'} f(\vec{x}, \vec{w}' \to \vec{w})\, G(\vec{x}, \vec{x}')\, V(\vec{x}, \vec{x}')\, L(\vec{x}', \vec{w}')\, dA' \qquad (1.22)$$

it is recognized that the light radiated at a particular point $\vec{x}$ in a scene is generally the sum of two components, namely, the amount of light, if any, that is emitted from the point and the amount of light, if any, that originates from all other points and which is reflected or otherwise scattered from the point $\vec{x}$. In Equation (1.22), $L(\vec{x}, \vec{w})$ represents the radiance at the point $\vec{x}$ in the direction $\vec{w} = (\theta, \phi)$ (where $\theta$ represents the angle of direction $\vec{w}$ relative to a direction orthogonal to the surface of the object in the scene containing the point $\vec{x}$, and $\phi$ represents the angle of the component of direction $\vec{w}$ in a plane tangential to the point $\vec{x}$). Similarly, $L(\vec{x}', \vec{w}')$ in the integral represents the radiance at the point $\vec{x}'$ in the direction $\vec{w}' = (\theta', \phi')$ (where $\theta'$ represents the angle of direction $\vec{w}'$ relative to a direction orthogonal to the surface of the object in the scene containing the point $\vec{x}'$, and $\phi'$ represents the angle of the component of direction $\vec{w}'$ in a plane tangential to the point $\vec{x}'$), and represents the light, if any, that is emitted from point $\vec{x}'$ and may be reflected or otherwise scattered from point $\vec{x}$.

Continuing with Equation (1.22), $L_e(\vec{x}, \vec{w})$ represents the first component of the sum, namely, the radiance due to emission from the point $\vec{x}$ in the direction $\vec{w}$, and the integral over the sphere $S'$ represents the second component, namely, the radiance due to scattering of light at point $\vec{x}$. $f(\vec{x}, \vec{w}' \to \vec{w})$ is a bidirectional scattering distribution function which describes how much of the light coming from direction $\vec{w}'$ is reflected, refracted or otherwise scattered in the direction $\vec{w}$, and is generally the sum of a diffuse component, a glossy component and a specular component. In Equation (1.22), the function $G(\vec{x}, \vec{x}')$ is a geometric term

$$G(\vec{x}, \vec{x}') = \frac{\cos\theta \cos\theta'}{\|\vec{x} - \vec{x}'\|^2} \qquad (1.23)$$

where $\theta$ and $\theta'$ are the angles relative to the normals of the respective surfaces at points $\vec{x}$ and $\vec{x}'$, respectively. Further in Equation (1.22), $V(\vec{x}, \vec{x}')$ is a visibility function which equals one if the point $\vec{x}'$ is visible from the point $\vec{x}$ and zero if the point $\vec{x}'$ is not visible from the point $\vec{x}$.

The computer graphics system 10 makes use of global illumination specifically in connection with determination of the diffuse component


$T_{f_d} T_{f_d} L$

and the caustic component


$T_{f_d} T_{f_g + f_s} L$

using a photon map technique. Generally, a photon map is constructed by simulating the emission of photons by light source(s) in the scene and tracing the path of each of the photons. For each simulated photon that strikes a surface of an object in the scene, information concerning the simulated photon is stored in a data structure referred to as a photon map, including, for example, the simulated photon's color, position, and direction angle. Thereafter a Russian roulette operation is performed to determine the photon's state, i.e., whether the simulated photon will be deemed to have been absorbed or reflected by the surface, if a simulated photon is deemed to have been reflected by the surface, the simulated photon's direction is determined using, for example, a bidirectional reflectance distribution function (“BRDF”). If the reflected simulated photon strikes another surface, these operations will be repeated (see Grabenstein). The data structure in which information for the various simulated photons is stored may have any convenient form; typically k-dimensional trees, for k an integer, are used. After the photon map has been generated, it can be used in rendering the respective components of the image.
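By way of a non-limiting illustration, the following simplified C++ sketch traces one photon as just described; the helper routines are hypothetical placeholders for the system's actual intersection, storage (k-dimensional tree), and BRDF facilities.

    // A simplified sketch of photon map construction: trace the photon, store
    // every surface interaction, and decide absorption versus reflection by
    // Russian roulette. All helper names are hypothetical.
    struct Photon {
        float color[3];        // the simulated photon's color (carries its energy)
        float position[3];     // surface hit point
        float direction[3];    // direction of incidence
    };

    bool  traceToSurface(Photon& p);          // advance p to the next surface hit
    void  storeInPhotonMap(const Photon& p);  // record the interaction for density estimation
    float reflectance(const Photon& p);       // surface reflectivity in [0, 1]
    void  scatterByBRDF(Photon& p);           // choose a new direction from the BRDF

    void tracePhoton(Photon p, double (*nextSample)())
    {
        while (traceToSurface(p)) {
            storeInPhotonMap(p);
            // Russian roulette: the photon is deemed absorbed with probability
            // 1 - reflectance, and otherwise deemed reflected.
            if (nextSample() >= reflectance(p))
                break;
            scatterByBRDF(p);
        }
    }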

In generating a photon map, the computer graphics system 10 simulates photon trajectories, thus avoiding the necessity of discretizing the kernel of the underlying integral equation. The interactions of the photons with the scene, as described above, are stored and used for density estimation. The computer graphics system 10 makes use of a scrambled low-discrepancy strictly-deterministic sequence, such as a scrambled Halton sequence, which has better discrepancy properties in higher dimensions than does an unscrambled sequence. The scrambled sequence also has the benefit, over a random sequence, that the approximation error decreases more smoothly, which will allow for use of an adaptive termination scheme during generation of the estimate of the integral. In addition, since the scrambled sequence is strictly deterministic, generation of estimates can readily be parallelized by assigning certain segments of the low-discrepancy sequence to ones of a plurality of processors, which can operate on portions of the computation independently and in parallel. Since the space of directions in which photons will be shot is usually much larger than the area of the light sources from which the photons are initially shot, it is advantageous to make use of components of smaller discrepancy, for example, Φ_2 or Φ_3 (where, as above, Φ_b refers to the radical inverse function for base b), in connection with the angles at which photons are shot, and components of higher discrepancy, for example, scrambled Φ_5 or Φ_7, in connection with sampling of the area of the respective light source, which will facilitate filling the space more uniformly.

The computer graphics system 10 estimates the radiance from the photons in accordance with

$$L_r(x, \omega) \approx \frac{1}{A} \sum_{i \in B_k(x)} f_r(\omega_i, x, \omega)\, \Phi_i \qquad (1.24)$$

where, in Equation (1.24), Φi represents the energy of the respective i-th photon, ωi is the direction of incidence of the i-th photon, Bk(x) represents the set of the k nearest photons around the point x, and A represents an area around point x that includes the photons in the set Bk(x). Since the energy of a photon is a function of its wavelength, the Φi in Equation (1.24) also represents the color of the respective i-th photon. The computer graphics system 10 makes use of an unbiased but consistent estimator for the area A for use in Equation (1.24), which is determined as follows. Given a query ball, that is, a sphere that is centered at point x and whose radius r (Bk(x)) is the minimal radius necessary for the sphere to include the entire set Bk(x), a tangential disk D of radius r (Bk(x)) centered on the point x is divided into M equal-sized subdomains Di, that is

$$\bigcup_{i=0}^{M-1} D_i = D \quad \text{and} \quad D_i \cap D_j = \emptyset \ \text{for} \ i \neq j, \quad \text{where} \quad |D_i| = \frac{|D|}{M} = \frac{\pi r^2(B_k(x))}{M} \qquad (1.25)$$

The set


$$P = \left\{ D_i \mid D_i \cap \{ x_{i|D} \mid i \in B_k(x) \} \neq \emptyset \right\} \qquad (1.26)$$

contains all the subdomains $D_i$ that contain a point $x_{i|D}$ on the disk, which is the position of the i-th photon projected onto the plane defined by the disk D along its direction of incidence $\omega_i$. Preferably, the number M of subdomains will be on the order of $\sqrt{k}$, and the angular subdivision will be finer than the radial subdivision in order to capture geometry borders. The actual area A is then determined by

$$A = \pi r^2(B_k(x)) \frac{|P|}{M} \qquad (1.27)$$

Determining the actual coverage of the disk D by photons significantly improves the radiance estimate in Equation (1.24) in corners and on borders, where the area obtained by the standard estimate πr2(Bk(x)) would be too small, which would be the case at corners, or too large, which would be the case at borders. In order to avoid blurring of sharp contours of caustics and shadows, the computer graphics system 10 sets the radiance estimate L to black if all domains Di that touch point x do not contain any photons.
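The following C++ fragment is a minimal sketch of the area estimate of Equations (1.25)-(1.27); the interface (polar coordinates of the projected photons, the split into radial and angular subdivisions) is an illustrative assumption rather than the system's actual code.

    #include <algorithm>
    #include <utility>
    #include <vector>

    const double kPi = 3.14159265358979323846;

    // photonsOnDisk holds the polar coordinates (radius, angle) of the photons
    // projected onto the tangential disk of radius r along their directions of
    // incidence; radialDivs * angularDivs = M, with the angular subdivision
    // chosen finer than the radial one, per the text above.
    double estimateArea(double r,
                        const std::vector<std::pair<double, double>>& photonsOnDisk,
                        int radialDivs, int angularDivs)
    {
        const int M = radialDivs * angularDivs;
        std::vector<bool> occupied(M, false);
        for (const auto& p : photonsOnDisk) {
            int ri = std::min(radialDivs - 1, int(p.first / r * double(radialDivs)));
            int ai = std::min(angularDivs - 1,
                              int(p.second / (2.0 * kPi) * double(angularDivs)));
            occupied[ri * angularDivs + ai] = true;   // subdomain D_i contains a photon
        }
        int occupiedCount = 0;                        // |P|
        for (bool o : occupied)
            if (o) ++occupiedCount;
        return kPi * r * r * double(occupiedCount) / double(M);   // Equation (1.27)
    }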

It will be appreciated that, in regions of high photon density, the k nearest photons may lead to a radius r(B_k(x)) that is nearly zero, which can cause an over-modulation of the estimate. Over-modulation can be avoided by selecting a minimum radius r_min, which will be used if r(B_k(x)) is less than r_min. In that case, instead of Equation (1.24), the estimate is generated in accordance with the following equation:

$$\bar{L}_r(x, \omega) = \frac{N}{k} \sum_{i \in B_k(x)} \Phi_i\, f_r(\omega_i, x, \omega) \qquad (1.28)$$

assuming each photon is started with 1/N of the total flux Φ. The estimator in Equation (1.28) provides an estimate for the mean flux of the k photons if r(Bk(x))<rmin.

The global photon map is generally rather coarse and, as a result, subpixel samples can result in identical photon map queries. As a result, the direct visualization of the global photon map is blurry, and it is advantageous to perform a smoothing operation in connection therewith. In performing such an operation, the computer graphics system 10 performs a local pass integration that removes artifacts of the direct visualization. Accordingly, the computer graphics system 10 generates an approximation for the diffuse illumination term $T_{f_d} T_{f_d} L$ as

$$T_{f_d} T_{f_d} L \approx (T_{f_d} \bar{L}_r)(x) = \int_{S^2(x)} f_d(x)\, \bar{L}_r(h(x, \omega)) \cos\theta \, d\omega \approx \frac{f_d(x)}{M} \sum_{i=0}^{M-1} \bar{L}_r\!\left( h\!\left( x, \omega\!\left( \arcsin\sqrt{u_{i,1}},\, 2\pi u_{i,2} \right) \right) \right) \qquad (1.29)$$

with the integral over the hemisphere $S^2(x)$ of incident directions aligned by the surface normal in x being evaluated using importance sampling. The computer graphics system 10 stratifies the sample points on a two-dimensional grid by applying dependent trajectory splitting with the Hammersley sequence and thereafter applies irradiance interpolation. Instead of storing the incident flux $\Phi_i$ of the respective photons, the computer graphics system 10 stores their reflected diffuse power $f_d(x_i)\Phi_i$ with the respective photons in the photon map, which allows for a more exact approximation than can be obtained by only sampling the diffuse BRDF in the hit points of the final gather rays. In addition, the BRDF evaluation is needed only once per photon, saving the evaluations during the final gathering. Instead of sampling the full grid, the computer graphics system 10 uses adaptive sampling, in which refinement is triggered by contrast, by the distance traveled by the final gather rays (in order to more evenly sample the projected solid angle), and by the number of photons that are incident from the portion of the projected hemisphere. The computer graphics system 10 fills in positions in the grid that are not sampled by interpolation. The resulting image matrix of the projected hemisphere is median filtered in order to remove weak singularities, after which the approximation is generated. The computer graphics system 10 performs the same operation in connection with, for example, hemispherical sky illumination, spherical high dynamic-range environmental maps, or any other environmental light source.

The computer graphics system 10 processes final gather rays that strike objects that do not cause caustics, such as plane glass windows, by recursive ray tracing. If the hit point of a final gather ray is closer to its origin than a predetermined threshold, the computer graphics system 10 also performs recursive ray tracing. This reduces the likelihood that blurry artifacts will appear in corners, which might otherwise occur since, for close hit points, the same photons would be collected, which can indirectly expose the blurry structure of the global photon map.

Generally, photon maps have been taken as a snapshot at one point in time, and thus were unsuitable in connection with rendering of motion blur. Following the observation that averaging the result of a plurality of photon maps is generally similar to querying one photon map with the total number of photons from all of the plurality of photon maps, the computer graphics system 10 generates NT photon maps, where NT is determined as described above, at points in time

$$t_b = t_0 + \frac{b + \tfrac{1}{2}}{N_T}\, T \qquad (1.30)$$

for $0 \le b < N_T$. As noted above, $N_T$ can equal one, in which case N photon maps are used, with N being chosen as described above. In that case,


$$t_i = t_0 + \Phi_3(i)\, T \qquad (1.31)$$

and thus $t_{i,j} = t_{i,0}$, that is, $t_i$, for $N_T = 1$. In the general case (Equation (1.30)), during rendering, the computer graphics system 10 uses the photon map with the smallest time difference $|t_{i,j} - t_b|$ in connection with rendering for the time sample point $t_{i,j}$.

The invention provides a number of advantages. In particular, the invention provides a computer graphics system that makes use of strictly deterministic distributed ray tracing based on low-discrepancy sampling and dependent trajectory splitting in connection with rendering of an image of a scene. Generally, strictly deterministic distributed ray tracing based on deterministic low-discrepancy sampling and dependent trajectory splitting is simpler to implement than an implementation based on random or pseudo-random numbers. Due to the properties of the radical inverse function, stratification of sample points is intrinsic and does not need to be considered independently of the generation of the positions of the sample points. In addition, since the methodology is strictly deterministic, it can be readily parallelized by dividing the image into a plurality of tasks, which can be executed by a plurality of processors in parallel. There is no need to take a step of ensuring that positions of sample points are not correlated, which is generally necessary if a methodology based on random or pseudo-random numbers is to be implemented for processing in parallel.

Moreover, the methodology can be readily implemented in hardware, such as a graphics accelerator, particularly if Hammersley point sets are used, since all points with a fixed index i yield a regular grid. A graphics accelerator can render a plurality of partial images corresponding to these regular grids in a number of parallel tasks, and interleave the partial images in an accumulation buffer to provide the final image. Operating in this manner provides very good load balancing among the parallel tasks, since all of the tasks render almost the same image.

In addition, the methodology can readily be extended to facilitate rendering of animations. Generally, an animation consists of a series of frames, each frame comprising an image. In order to decorrelate the various frames, instead of initializing the field of integers used as identifiers for ray instances for each frame by i, i+iƒ can be used, where iƒ is a frame number. This operates as an offsetting of i by iƒ, which is simply a generalization of the Hammersley points. A user can select to initialize the field of integers for each frame by i, in which case the frames will not be correlated. In that case, undersampling artifacts caused by smooth motion will remain local and are only smoothly varying. Alternatively, the user can select to initialize the field of integers for each frame by i+iƒ, in which case the artifacts will not remain local, and will instead appear as noise or film grain flicker in the final animation. The latter is sometimes a desired feature of the resulting animation, whether for artistic reasons or to match actual film grain. Another variation is to add iƒ directly to k and clip the result by 2n (reference Code Segment 1, below). In that case, the pixel sampling pattern will change from frame to frame and the frame number iƒ will need to be known in the post-production process in order to reconstruct the pixel sampling pattern for compositing purposes.

Generally, a computer graphics system that makes use of deterministic low-discrepancy sampling in determination of sample points will perform better than a computer graphics system that makes use of random or pseudo-random sampling, but the performance may degrade to that of a system that makes use of random or pseudo-random sampling in higher dimensions. By providing that the computer graphics system performs dependent splitting by replication, the superior convergence of low-dimensional low-discrepancy sampling can be exploited with the effect that the overall integrand becomes smoother, resulting in better convergence than with stratified random or pseudo-random sampling. Since the computer graphics system also makes use of dependent trajectory sampling by means of infinite low discrepancy sequences, consistent adaptive sampling of, for example, light sources, can also be performed.

In addition, it will be appreciated that, although the computer graphics system has been described as making use of sample points generated using generalized scrambled and/or unscrambled Hammersley and Halton sequences, generally any (t, m, s)-net or (t, s)-sequence can be used.

At a more general level, the invention provides an improved quasi-Monte Carlo methodology for evaluating an integral of a function ƒ on the s-dimensional unit cube $[0, 1)^s$. This methodology will be referred to as trajectory splitting by dependent sampling. In prior methodologies, the sample points in the integration domain at which sample values for the function were generated were determined by providing the same number of coordinate samples along each dimension. However, for some dimensions of an integrand, it is often the case that the function ƒ will exhibit a higher variance than for other dimensions. The invention exploits this by making use of trajectory splitting by dependent samples in critical regions.

The partial integral

$$g(x) = \int_{I^{s_2}} f(x, y)\, dy \approx \frac{1}{N_2} \sum_{j=0}^{N_2 - 1} f(x, y_j) \qquad (1.32)$$

(x and y comprising disjoint sets of the s dimensions, and x ∪ y comprising the set of all of the dimensions), where $N_2$ identifies the number of samples selected for the set y of dimensions, can be defined over the portion of the integration domain that is defined by the unit cube $[0, 1)^{s_2}$, which, in turn, corresponds to the portion of the integration domain that is associated with the $s_2$ dimensions of set y. Evaluating g(x) using Equation (1.32) will effect a smoothing of the function ƒ in the $s_2$ dimensions that are associated with set y.

The result generated by applying Equation (1.32) can then be used to evaluate the full integral

$$\int_{I^{s_1}} \int_{I^{s_2}} f(x, y)\, dy\, dx = \int_{I^{s_1}} g(x)\, dx \approx \frac{1}{N_1} \sum_{i=0}^{N_1 - 1} \frac{1}{N_2} \sum_{j=0}^{N_2 - 1} f(x_i, y_j) \qquad (1.33)$$

where $N_1$ identifies the number of samples selected for the set x of dimensions, that is, over the remaining dimensions of the integration domain. If the dimension splitting x, y is selected such that the function ƒ exhibits relatively high variance over the set y of the integration domain, and relatively low variance over the set x, it will not be necessary to generate sample values for the function $N_1$-times-$N_2$ times. In that case, it will suffice to generate sample values only $N_2$ times over the integration domain. If the correlation coefficient of ƒ(ξ, η) and ƒ(ξ, η′), which indicates the degree of correlation between values of the function evaluated, for the former, at $(x_i, y_i) = (\xi, \eta)$ and, for the latter, at $(x_i, y_i) = (\xi, \eta')$, is relatively high, the time complexity required to evaluate the function


$f|_{[0,1)^s}(x_0, \ldots, x_{s-1})$

will be decreased.

The smoothness of an integrand can be exploited using a methodology that will be referred to as correlated sampling. Generally, that is, if correlated sampling is not used, in evaluating an integral each dimension will be associated with its own sequence. In correlated sampling, however, the same sequence can be used for all of the dimensions over the integration domain, that is

$$\frac{1}{M} \sum_{j=1}^{M} \frac{1}{N_j} \sum_{i=0}^{N_j - 1} f_j(x_{i,j}) \approx \frac{1}{M} \sum_{j=1}^{M} \int_{I^s} f_j(x)\, dx = \int_{I^s} \frac{1}{M} \sum_{j=1}^{M} f_j(x)\, dx \approx \frac{1}{N} \sum_{i=0}^{N-1} \frac{1}{M} \sum_{j=1}^{M} f_j(x_i) \qquad (1.34)$$

The methodology of trajectory splitting by dependent sampling makes use of a combination of the trajectory splitting technique described above in connection with Equations (1.32) and (1.33) with the correlated sampling methodology described in connection with Equation (1.34).

Since integrals are invariant under toroidal shifting for zj∈Is2, that is,

$$S_j : I^{s_2} \to I^{s_2}, \quad y \mapsto (y + z_j) \bmod 1, \qquad \int_{I^{s_2}} g(y)\, dy = \int_{I^{s_2}} g(S_j(y))\, dy \qquad (1.35)$$

the values of the integrals also do not change. Thus, if, in Equation (1.33), the inner integral is replicated M times,

$$\int_{I^{s_1}} \int_{I^{s_2}} f(x, y)\, dy\, dx = \int_{I^{s_1}} \int_{I^{s_2}} \frac{1}{M} \sum_{j=0}^{M-1} f(x, S_j(y))\, dy\, dx \approx \frac{1}{N} \sum_{i=0}^{N-1} \frac{1}{M} \sum_{j=0}^{M-1} f(x_i, S_j(y_i)) = \frac{1}{N} \sum_{i=0}^{N-1} \frac{1}{M} \sum_{j=0}^{M-1} f(x_i, (y_i + z_j) \bmod 1) \qquad (1.36)$$

For fixed index j, the functions $f(x_i, S_j(y_i))$ are correlated, enabling the smoothness of the integrand in those dimensions that are represented by y to be exploited, as illustrated above in connection with Equation (1.19) (lens sampling), Equations (1.20) and (1.21) (area light sources), and Equation (1.29) (approximation for the diffuse illumination term). It will be appreciated that the evaluation using the replication is the repeated application of the local quadrature rule


$$U_{M, s_2} := (z_j)_{j=0}^{M-1}$$

shifted by random offset values yi. The use of dependent variables in this manner pays off particularly if there is some smoothness in the integrand along one or more dimensions. Splitting can be applied recursively, which yields a history tree, in which each path through the respective history tree represents a trajectory of a particle such as a photon.

The quasi-Monte Carlo methodology of trajectory splitting by dependent sampling makes use of sets of deterministic, low-discrepancy sample points both for the global quadrature rule


$$U_{N, s_1 + s_2} := (x_i, y_i)_{i=0}^{N-1}$$

that is, integration over all of the dimensions s1+s2 comprising the entire integration domain, as well as for the local quadrature rule


$$U_{M, s_2} := (z_j)_{j=0}^{M-1}$$

that is, integration over the dimensions s2 of the integration domain. The methodology unites splitting and dependent sampling, exploiting the stratification properties of low-discrepancy sampling. Accordingly, it will be possible to concentrate more samples along those dimensions in which the integrand exhibits high levels of variation, and fewer samples along those dimensions in which the integrand exhibits low levels of variation, which reduces the number of sample points at which the function will need to be evaluated. If the methodology is to be applied recursively a plurality of times, it will generally be worthwhile to calculate a series of values zj that are to comprise the set UM,s2. In addition, the methodology may be used along with importance sampling and, if U is an infinite sequence, adaptive sampling. In connection with adaptive sampling, the adaptations will be applied in the replication independently of the sampling rate, so that the algorithm will remain consistent. The low-discrepancy sample points sets UN,s1+s2 and UM,s2 can be chosen arbitrarily; for example, the sample point set UM,s2 can be a projection of sample point set UN,s1+s2. When trajectory splitting is recursively applied to build trajectory trees, generalizing the point set UN,s1+s2 for the subsequent branches can be used to decorrelate the separate parts of the respective tree.
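By way of a non-limiting illustration, the following C++ fragment sketches the replicated estimator of Equation (1.36); for clarity the sets x and y are taken one-dimensional ($s_1 = s_2 = 1$), and the point sets and integrand are supplied by the caller.

    #include <cmath>
    #include <functional>
    #include <utility>
    #include <vector>

    // globalPoints holds the global quadrature points (x_i, y_i) of U_{N,s1+s2};
    // z holds the local quadrature offsets z_j of U_{M,s2}. Each y_i is replicated
    // M times by toroidal shifting, per Equation (1.36).
    double splitEstimate(const std::vector<std::pair<double, double>>& globalPoints,
                         const std::vector<double>& z,
                         const std::function<double(double, double)>& f)
    {
        double outer = 0.0;
        for (const auto& xy : globalPoints) {
            double inner = 0.0;
            for (double zj : z)   // replicate the inner integral by toroidal shifts
                inner += f(xy.first, std::fmod(xy.second + zj, 1.0));
            outer += inner / double(z.size());
        }
        return outer / double(globalPoints.size());
    }

Note how the inner loop touches only the y-part of each sample, so the additional samples are concentrated in the dimensions of high variance, while the x-part is evaluated only N times.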

FIG. 5 shows a code fragment 140, referred to herein as “Code Segment 1,” in the C++ programming language for generating the positions of the jittered subpixel sample points xi. FIG. 6 shows a code fragment 142, referred to herein as “Code Segment 2,” in the C++ programming language for generating a ray tree class Ray.

It will be appreciated that a system in accordance with the invention can be constructed in whole or in part from special purpose hardware or a general purpose computer system, or any combination thereof, any portion of which may be controlled by a suitable program. Any program may in whole or in part comprise part of or be stored on the system in a conventional manner, or it may in whole or in part be provided to the system over a network or other mechanism for transferring information in a conventional manner. In addition, it will be appreciated that the system may be operated and/or otherwise controlled by means of information provided by an operator using operator input elements (not shown) which may be connected directly to the system or which may transfer the information to the system over a network or other mechanism for transferring information in a conventional manner.

With these points in mind, we next turn to image synthesis by adaptive quasi-Monte Carlo integration.

II. Image Synthesis by Adaptive Quasi-Monte Carlo Integration

Analyzing the implicit stratification properties of the deterministically scrambled Halton sequence leads to an adaptive interleaved sampling scheme that improves many rendering algorithms. Compared to uncorrelated adaptive random sampling schemes, the correlated and highly uniform sample points from the incremental Halton sequence result in a faster convergence and much more robust adaptation. Since the scheme is deterministic, parallelization and reproducibility become trivial, while interleaving maximally avoids aliasing. The sampling scheme described herein is useful in a number of applications, including, for example, industrial path tracing, distribution ray tracing, and high resolution compositing.

As discussed above, the process of image synthesis includes computing the color of each pixel in the image. The pixel color itself is determined by an integral. Due to high dimensionality and unknown discontinuities of the integrand, this integral typically must be approximated using a numerical technique, such as the Monte Carlo method. The efficiency of the image synthesis process can be significantly improved by using adaptive schemes that take into account variations in the complexity of the integrands.

Analytical integration methods developed for computer graphics perform very well for small problems, i.e., low integrand dimension or untextured scenes. Approaches like discontinuity meshing or approximate analytic integration, however, utterly fail for, e.g., higher order shadow effects. Consequently, high-end rendering algorithms rely on sampling.

Starting from early computer graphics, many adaptive sampling schemes have been developed in order to control rendering effort. A number of these schemes rely on partitioning the integration domain along each axis. As long as only low-dimensional integration (e.g., pixel anti-aliasing) was of interest, the inherent curse of dimension of axis-aligned recursive refinement was not perceptible. (The term “curse of dimension” refers to the known issue that computational cost typically increases exponentially with the dimension of a problem.) These schemes are still applied today. However, due to the curse of dimension, for example in distribution ray tracing and global illumination simulation, these schemes are applied only to the lowest dimensions of the integrands, e.g., to the image plane.

Many adaptive schemes rely on comparing single samples in order to control refinement. For example, edges are detected by comparing contrast against a threshold. Such schemes fail in two respects. If the contrast does not indicate refinement, important contributions to the image can nevertheless be missed. On the other hand, refinement can be taken too far. This happens, for example, when sampling an infinite black-and-white checkerboard in perspective: at the horizon, refinement is driven to full depth, although the correct gray pixel color may already be obtained by one black and one white sample.

In fact, the paradigm of sample-based refinement performs function approximation of the integrand itself although only averages of the integrand are required. This is considered by pixel-selective Monte Carlo schemes that consider the estimated variance of the estimate of the functional to be computed. However, Monte Carlo error estimation requires independent random samples, which limits the amount of uniformity of the samples and thus convergence speed.

Thinking of adaptive sampling as image processing, it is easy to identify noise or edges in an image by computing derivatives between pixels in order to trigger refinement.

By considering image synthesis as computing families of functionals instead of independent pixel values only, a powerful adaptive sampling scheme has been developed. To this end, stratified sequences of sample points are extracted from the scrambled Halton sequence. Although these points are deterministic, aliasing is maximally avoided. Incorporating tone mapping directly into the pixel integral, in addition to the high uniformity of the subsequences, yields a superior and smooth convergence. Consequently, adaptation can be controlled robustly by applying simple image processing operators to the final pixel values rather than to single samples. Since everything is deterministic, exact reproducibility, as required in production, is trivial. The superior performance of the new technique is described below with respect to various applications.

The scrambled Halton sequence is now described. For the purposes of the present discussion, filtering, tone mapping, and actual radiance computations are hidden in the integrand ƒ defined on the s-dimensional unit cube. The color of a pixel is then determined by an integral

$$\int_{[0,1)^s} f(x)\, dx \approx \frac{1}{N} \sum_{j=0}^{N-1} f(x_j)$$

which is numerically approximated by averaging N function samples at the positions $x_j \in [0, 1)^s$.

It has been demonstrated that non-adaptive quasi-Monte Carlo integration is highly efficient in computer graphics. The so-called quasi-Monte Carlo points used are of low discrepancy, meaning that, due to high correlation, they are much more uniformly distributed than random samples can be. Due to their deterministic nature, however, unbiased error estimation from the samples themselves is not possible, in contrast to the Monte Carlo method.

In order to take advantage of the much faster and smoother convergence of quasi-Monte Carlo integration and to obtain a reliable adaptation control, important properties of the scrambled Halton sequence are described in the following discussion. This deterministic low-discrepancy point sequence is easily constructed, as can be seen in the sample code discussed below.

“Stratification by radical inversion” is now described. The radical inverse

$$\Phi_b : \mathbb{N}_0 \to [0, 1), \quad i = \sum_{l=0}^{\infty} a_l(i)\, b^l \mapsto \sum_{l=0}^{\infty} a_l(i)\, b^{-l-1} \qquad (2.1)$$

mirrors the representation of the index i by the digits

$a_l(i) \in \{0, \ldots, b-1\}$

in the integer base $b \in \mathbb{N}$ at the decimal point. In base b = 2, this means that

$$\Phi_2(i) \in \begin{cases} [0, 1/2) & \text{for } i \text{ even}, \\ [1/2, 1) & \text{else.} \end{cases} \qquad (2.2)$$

This observation has been generalized, and given the name periodicity. In fact, however, this property relates much more to stratification as formalized by the definition of (0, 1)-sequences. Therefore, a different derivation is presented herein that stresses the actual stratification property. The index i is chosen as


$$i = j \cdot b^n + k \quad \text{for } k \in \{0, \ldots, b^n - 1\}$$

and inserted into Equation (2.1) yielding

$$\Phi_b(i) = \Phi_b(j \cdot b^n + k) = \sum_{l=0}^{\infty} a_l(j \cdot b^n + k)\, b^{-l-1} = \sum_{l=0}^{\infty} a_l(j \cdot b^n)\, b^{-l-1} + \sum_{l=0}^{\infty} a_l(k)\, b^{-l-1} = \sum_{l=0}^{\infty} a_l(j)\, b^{-n-l-1} + \Phi_b(k) = \underbrace{b^{-n} \Phi_b(j)}_{\in\, [0,\, b^{-n})} + \Phi_b(k) \qquad (2.3)$$

by exploiting the additivity of the radical inverses that belong to the class of (0, 1)-sequences. The first term of the result depends on j and obviously is bounded by $b^{-n}$, while the second term is a constant offset given by the radical inverse of k. Since


$k \in \{0, \ldots, b^n - 1\}$

it follows that


$\Phi_b(k) \in \{0,\, b^{-n},\, 2b^{-n},\, \ldots,\, (b^n - 1)\, b^{-n}\}$

Fixing the first n digits by k, then for $j \in \mathbb{N}_0$, yields radical inverses only in the interval


$[\Phi_b(k),\, \Phi_b(k) + b^{-n})$

There are now described stratified samples from the Halton sequence. For quasi-Monte Carlo integration, multidimensional uniform samples are required. However, from Equation (2.2) it may be seen that the radical inverse is not completely uniformly distributed, i.e., cannot just replace the random number generator. Therefore, multidimensional uniform deterministic samples may be constructed, for example, by the Halton sequence


$$x_i = (\Phi_{b_1}(i), \ldots, \Phi_{b_s}(i)) \in [0, 1)^s$$

where for the c-th component bc is the c-th prime number.
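By way of a non-limiting illustration, the following C++ fragment sketches the (unscrambled) radical inverse of Equation (2.1) and a Halton point built from it; the function names are illustrative and separate from the sample code discussed below.

    #include <vector>

    // Radical inverse Phi_b: mirrors the digits of i in base b at the decimal point.
    double radicalInverse(unsigned int i, unsigned int b)
    {
        double result = 0.0, digitValue = 1.0 / double(b);
        for (; i > 0; i /= b, digitValue /= double(b))
            result += double(i % b) * digitValue;
        return result;
    }

    // The i-th point of the s-dimensional Halton sequence, where primes[c]
    // is the (c+1)-th prime number (2, 3, 5, ...).
    std::vector<double> haltonPoint(unsigned int i,
                                    const std::vector<unsigned int>& primes)
    {
        std::vector<double> x(primes.size());
        for (std::size_t c = 0; c < primes.size(); ++c)
            x[c] = radicalInverse(i, primes[c]);
        return x;
    }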

The one-dimensional observations above generalize to higher dimensions. Stratified sampling is implicitly embodied. This is seen by choosing the index

$$i = j \cdot \prod_{d=1}^{s} b_d^{n_d} + k \quad \text{with} \quad 0 \le k < \prod_{d=1}^{s} b_d^{n_d} \qquad (2.4)$$

yielding

$$\Phi_{b_c}(i) = \Phi_{b_c}\!\left( j \cdot \prod_{d=1}^{s} b_d^{n_d} + k \right) = b_c^{-n_c}\, \Phi_{b_c}\!\left( j \cdot \prod_{\substack{d=1 \\ d \neq c}}^{s} b_d^{n_d} \right) + \Phi_{b_c}(k)$$

analogous to Equation (2.3), and consequently

$$x_i \in \prod_{c=1}^{s} \left[ \Phi_{b_c}(k),\, \Phi_{b_c}(k) + b_c^{-n_c} \right)$$

for fixed k and $j \in \mathbb{N}_0$ with the choice of the index i according to Equation (2.4). It may be seen that the


$\prod_{d=1}^{s} b_d^{n_d}$

strata selected by k are disjoint and form a partition of the unit cube $[0, 1)^s$. However, the scheme suffers from the curse of dimension, since stratifying all s dimensions results in an exponential growth of the number of strata. This may be the reason why, in prior approaches, only the first four dimensions have been stratified.

The stratification property is illustrated in FIGS. 7A and 7B, which show plots 200 and 210, respectively, of the first two components $x_i = (\Phi_2(i), \Phi_3(i))$ of the Halton sequence for $0 \le i = j \cdot 6 + k < 2^3 \cdot 3^3 = 216$. The stratum with the emphasized points contains all indices i with k = 1. Scaling the strata to be square, i.e.,


$$x_i \mapsto x_i' = \left( 2^1 \cdot \Phi_2(i),\, 3^1 \cdot \Phi_3(i) \right)$$

does not affect discrepancy, since it is a separable mapping.

Deterministic scrambling is now described. The Halton sequence exhibits superior uniformity properties, i.e., low discrepancy and the minimum distance property. However, low-dimensional projections exhibit correlation patterns. FIGS. 8A and 8B show a pair of low-dimensional projections 220 and 230. FIG. 8A shows the Halton sequence for the points


$(\Phi_{17}(i), \Phi_{19}(i))_{i=0}^{255}$

FIG. 8B shows the scrambled Halton sequence for the points


$(\Phi'_{17}(i), \Phi'_{19}(i))_{i=0}^{255}$

As illustrated by FIGS. 8A and 8B, scrambling can significantly improve uniformity.

While usually not perceptible, this low-dimensional correlation often interferes with the integrands in computer graphics that have low-dimensional structure. For example, correlation can slow down convergence in a sequence of two-dimensional scattering events as used in path tracing.

One remedy is to scramble the Halton sequence. The radical inverse is replaced by the scrambled radical inverse

$$\Phi'_b : \mathbb{N}_0 \to [0, 1), \quad i = \sum_{l=0}^{\infty} a_l(i)\, b^l \mapsto \sum_{l=0}^{\infty} \pi_b(a_l(i))\, b^{-l-1} \qquad (2.5)$$

yielding a scrambled Halton sequence


$$x'_i := (\Phi'_{b_1}(i), \ldots, \Phi'_{b_s}(i))$$

The scrambling permutations $\pi_b$ applied to the digits $a_l(i)$ are determined by a recursion starting with the identity $\pi_2 = (0, 1)$. If b is odd, $\pi_b$ is constructed from $\pi_{b-1}$ by incrementing each value greater than or equal to $\frac{b-1}{2}$ and inserting $\frac{b-1}{2}$ in the middle. Otherwise, $\pi_b$ is constructed from $\pi_{b/2}$ by concatenating $2\pi_{b/2}$ and $2\pi_{b/2} + 1$.

This algorithm yields


π2=(0,1)


π3=(0,1,2)


π4=(0,2,1,3)


π5=(0,3,2,1,4)


π6=(0,2,4,1,3,5)


π7=(0,2,5,3,1,4,6)


π8=(0,4,2,6,1,5,3,7)

The scrambling improves the uniformity. This is especially visible for low-dimensional projections as illustrated in FIGS. 8A and 8B. In addition, the minimum distance of samples is increased, which indicates an increased uniformity. An implementation is described below.
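By way of a non-limiting illustration (a minimal sketch, separate from the implementation referenced below), the recursion for the scrambling permutations can be coded in C++ as follows:

    #include <vector>

    // Returns the scrambling permutation pi_b for base b >= 2,
    // following the recursion described above.
    std::vector<int> scramblingPermutation(int b)
    {
        if (b == 2)
            return {0, 1};                       // recursion starts with the identity
        if (b % 2 == 0) {
            // even b: concatenate 2*pi_{b/2} and 2*pi_{b/2} + 1
            std::vector<int> half = scramblingPermutation(b / 2);
            std::vector<int> p;
            for (int v : half) p.push_back(2 * v);
            for (int v : half) p.push_back(2 * v + 1);
            return p;
        }
        // odd b: increment each value >= (b-1)/2 and insert (b-1)/2 in the middle
        std::vector<int> prev = scramblingPermutation(b - 1);
        int c = (b - 1) / 2;
        std::vector<int> p;
        for (int v : prev) p.push_back(v >= c ? v + 1 : v);
        p.insert(p.begin() + p.size() / 2, c);
        return p;
    }

For example, scramblingPermutation(7) yields (0, 2, 5, 3, 1, 4, 6), matching the table above.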

The observations from Equation (2.3) and from the above discussion transfer to the scrambled Halton sequence in a straightforward way. This can be seen for two-dimensional stratification, since π2 and π3 are identities and consequently


$\Phi'_2 \equiv \Phi_2$ and $\Phi'_3 \equiv \Phi_3$

There is now described a technique for bias elimination by randomization. By construction, radical inversion generates only rational numbers in $\mathbb{Q} \cap [0, 1)$, as set forth in Equations (2.1) and (2.5). Nevertheless, it can be shown that quasi-Monte Carlo integration is biased but consistent for Riemann-integrable functions.

If required, the bias can be removed by randomly shifting the deterministic points


$$x'_i \mapsto (x'_i + \xi) \bmod 1 = \left( (\Phi'_{b_1}(i) + \xi^{(1)}) \bmod 1,\, \ldots,\, (\Phi'_{b_s}(i) + \xi^{(s)}) \bmod 1 \right)$$

of the scrambled Halton sequence modulo one, where


$\xi = (\xi^{(1)}, \ldots, \xi^{(s)}) \in [0, 1)^s$

is a vector of s independent realizations of uniform random numbers on the unit interval. The resulting minimally randomized estimator has been analyzed, and a variance reduction of

$$\sigma^2\!\left( \frac{1}{N} \sum_{i=0}^{N-1} f\big( (x_i + \xi) \bmod 1 \big) \right) = \mathcal{O}\!\left( \frac{\ln^{2s} N}{N^2} \right)$$

has been proven for square-integrable functions. Multiple realizations of this and other randomization techniques allow the estimation of the variance in order to control the integration error. Since the error is controlled in a different way, one instance of such a randomization is sufficient to cancel the bias. In fact, however, the bias of the scrambled Halton sequence is negligible compared to the randomly shifted points, and consequently the randomization becomes unnecessary.
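A minimal C++ sketch of this random shifting (often called a Cranley-Patterson rotation in the quasi-Monte Carlo literature; the function name here is hypothetical) is:

    #include <cmath>
    #include <random>
    #include <vector>

    // Shifts all points of a deterministic sequence modulo one by a single
    // random vector xi, using one independent uniform number per dimension.
    void randomShift(std::vector<std::vector<double>>& points /* x'_i in [0,1)^s */)
    {
        if (points.empty())
            return;
        std::mt19937 gen(std::random_device{}());
        std::uniform_real_distribution<double> uniform(0.0, 1.0);
        std::vector<double> xi(points[0].size());
        for (double& c : xi)
            c = uniform(gen);                        // one realization of xi
        for (auto& x : points)
            for (std::size_t d = 0; d < x.size(); ++d)
                x[d] = std::fmod(x[d] + xi[d], 1.0); // shift modulo one
    }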

Considering random scrambling leads to a second important observation. Often the uniformity of the samples is improved; however, some realizations can also decrease the uniformity of a point set by, e.g., lowering the mutual minimum distance. Our experiments show that random scrambling only marginally changes the uniformity of the scrambled Halton sequence, indicating that the deterministic permutations $\pi_b$, which are themselves a subset of the permutations available for random scrambling, already are a very good choice. In addition, implementing the scrambled Halton sequence, as described below, is simpler than random scrambling.

From the above observations, it can be concluded that randomization is not necessary for the presently described applications. The structure of the deterministic scrambled Halton sequence is described below with respect to specific implementations.

A new technique for image synthesis is now described. Image synthesis includes computing a matrix of pixel colors

$$I_{m,n} := \int_{[0,1)^s} f_{m,n}(x)\, dx \approx \sum_{j=0}^{N_{m,n} - 1} w_{j,m,n}\, R_\alpha\big( L(x_{j,m,n}) \big) \qquad (2.6)$$

The function $f_{m,n}$ to be integrated for the pixel at position (m, n) usually contains discontinuities and can be of high dimension. Efficient general analytical solutions are not available, and consequently the integrals have to be numerically approximated. In addition, the complexity of the integrands varies, so that adaptive integration pays off.

Deterministic anti-aliasing is now described. The first component $x_i^{(1)}$ is scaled by $b_1^{n_1}$ and the second $x_i^{(2)}$ by $b_2^{n_2}$, as illustrated in FIGS. 7A and 7B. Thus, a $b_1^{n_1} \times b_2^{n_2}$ stratified sample pattern is obtained that is periodically tiled over the image plane. FIG. 9 shows a plot 240 of the tiled sample pattern.

Identifying each stratum with a pixel, the identification k is determined, for example by a table lookup, from the pixel coordinates, and a Halton sequence restricted to that pixel is obtained from


$$i = j \cdot b_1^{n_1} \cdot b_2^{n_2} + k \quad \text{for } j \in \mathbb{N}_0$$

This restriction amounts to fixing the first $n_1$ and $n_2$ digits of the first and second component, respectively, and does not change the superior uniformity properties of the Halton points. Consequently, the improved and smooth convergence of deterministic low-discrepancy sampling is preserved. Convergence is improved substantially further by applying tone-mapping techniques that in fact bound the integrands. Then, adaptation triggered by image processing operators becomes very reliable. The number of strata $b_1^{n_1} \times b_2^{n_2}$ is determined by $n_1$ and $n_2$, which are chosen large enough that the strata covered by adjacent pixel reconstruction filters do not contain repeated patterns.
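By way of a non-limiting illustration, the index restriction is a one-line computation (the helper name is hypothetical):

    // strata = b1^n1 * b2^n2; k is the stratum identifier of the pixel, which
    // may be obtained, for example, by a table lookup from the pixel coordinates.
    // Successive j then enumerate the Halton subsequence restricted to that pixel.
    unsigned int sampleIndexForPixel(unsigned int j, unsigned int k, unsigned int strata)
    {
        return j * strata + k;   // i = j * b1^n1 * b2^n2 + k
    }

The corresponding sample position within the pixel raster is then obtained, for example, by scaling the first two Halton components by $b_1^{n_1}$ and $b_2^{n_2}$, as described above.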

While for Monte Carlo integration it is easy to control adaptation by estimating the integration error from the samples, this is not possible for the correlated samples of quasi-Monte Carlo integration.

A pixel is refined whenever a refinement criterion is met. As an example, a simple criterion indicates refinement by checking the image gradient


$$\|\nabla I_{m,n}\| < T \cdot N_{m,n}^{-\alpha}$$

against a threshold T. The exponent $\alpha \in [0.5, 1.5]$ can be used to adapt to the speed of convergence, and $N_{m,n}$ is the number of samples used for pixel (m, n).

As known from music equipment, clipping signals causes distortion. Therefore, instead of clipping, compression is used, meaning that an upper bound on the signal is achieved by using a continuously differentiable function. In addition, single signals are compressed before being mixed.

According to an aspect of the invention, the same is done for image synthesis. In Equation (2.6), the luminance L is compressed by

$$R_\alpha : \mathbb{R}_0^+ \to [0, 1), \quad L \mapsto \begin{cases} L & L < \alpha \\[4pt] \alpha + (1 - \alpha)\, \dfrac{(L - \alpha)/(1 - \alpha)}{1 + (L - \alpha)/(1 - \alpha)} & \text{else} \end{cases}$$

before averaging. By α∈[0, 1], it is possible to blend between linear transfer that is clipped at α=1 and compression for α=0. The derivative of

$$\frac{x}{1 + x}$$

in x = 0 is 1, and thus the mapping $R_\alpha$ is continuously differentiable. Depending on the output media, many other response curve mappings $R_\alpha$ are possible, such as, for example, the sRGB compression for video displays or film material characteristics.

Performing the tone mapping, i.e., the compression, directly in the quadrature is a remedy for the overmodulation problems caused by, e.g., reflections or light source samples, and consequently convergence is improved because the integrands are now bounded. Thus, the noise level is reduced, contrary to the gradient domain method, and advanced filter kernel construction becomes unnecessary.
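The following C++ fragment sketches the compression $R_\alpha$; note that the exact form of the compressed branch is reconstructed from the stated properties (upper bound of one, derivative of one at L = α), so it should be regarded as an assumption rather than a verbatim reproduction of the system's code.

    // Compresses a non-negative luminance L into [0, 1): linear transfer below
    // alpha, smooth compression above. The compressed branch is a reconstruction
    // (assumption) chosen so that R_alpha is continuously differentiable at L = alpha.
    double compressLuminance(double L, double alpha)
    {
        if (L < alpha)
            return L;                       // linear transfer below alpha
        if (alpha >= 1.0)
            return 1.0;                     // alpha = 1 degenerates to clipping
        double x = (L - alpha) / (1.0 - alpha);
        return alpha + (1.0 - alpha) * x / (1.0 + x);   // bounded above by 1
    }

With alpha = 0 this reduces to the pure compression L/(1 + L), and with alpha = 1 to linear transfer with clipping, matching the blending behavior described above.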

In applying the above-described techniques to synthesize an image, more samples are used to resolve certain details in the final image. Adaptive samples are generated by refining the Halton sequence in the first two dimensions by some contrast criterion in screen space.

The above-described technique has a number of different applications, including high-resolution compositing, frameless rendering, and parallelization.

Based on the stratification properties of radical inversion, a new adaptive integration algorithm for image synthesis has been presented. Using the stratification for interleaving, aliases are efficiently suppressed, while the smooth convergence of low-discrepancy sequences allows one to use a very efficient and robust termination criterion for adaptation. Since all sample positions are deterministic, storing the sampled function values allows high-resolution compositing. In a similar way, the convergence can be improved by using the microstructure of a (t, s)-sequence, such as, for example, the Sobol sequence. While the described technique benefits from stratification in only two dimensions, it is not suited for general high-dimensional stratification due to the curse of dimension.

FIG. 10 shows a plot 250 illustrating an interleaved adaptive supersampling technique, according to a further aspect of the invention, which is tiled over the whole screen. The plotted points are represented by:


$(\Phi_2(i), \Phi_3(i)) = P(i)$

The image is scaled by (1, 1.5), and a 2×3 stratification is used. As shown in FIG. 10, the strata are addressed by an offset, generating a point sequence in a subpixel. The points are enumerated by the offset O of the desired stratum plus the number of strata multiplied by the point number. In the illustrated section of the plot, the point numbers are:


$$i = 6j + 3, \quad i \in \{0, \ldots, 1295\}$$

In the equation i = 6j + 3, j is multiplied by 6 because of the 2×3 stratification. Also, $i \in \{0, \ldots, 1295\}$, since $1296 = 2^4 \cdot 3^4$.

Using the illustrated interleaving technique, adjacent pixels are sampled differently, but strictly deterministically.

III. Additional Examples and Points Regarding Quasi-Monte Carlo Integration

Computer graphics textbooks teach that sampling images using deterministic patterns or lattices can result in aliasing, and that aliasing can only be avoided by random, i.e., independent, sampling of images. Thus, textbooks typically recommend random samples with blue noise characteristic. However, these types of samples are highly correlated due to their maximized minimum mutual distance. Contrary to the textbook approach, the systems and techniques described herein are based on parametric integration by quasi-Monte Carlo methods, and are strictly deterministic.

Image synthesis is the most visible part of computer graphics. One aspect of image synthesis is concerned with the synthesis of physically correct images. Thus, one image synthesis technique includes identifying light paths that connect light sources and cameras and summing up their respective contributions. Another aspect of image synthesis is concerned with non-photorealistic rendering, including, for example, the simulation of pen strokes or watercolor.

Image synthesis poses an integro-approximation problem for which analytical solutions are available in exceptional cases only. Therefore numerical techniques have to be applied. Prior art approaches typically use elements of classical Monte Carlo integration, in which random points are used to numerically approximate the solution to an image integral. However, as described herein, it is significantly more efficient to use quasi-Monte Carlo integration, in which sequences of quasirandom numbers are used to compute the solution to an image integral. The presently described systems and techniques are useful, for example, in a motion picture, which typically requires an extremely large number of high-quality images.

The underlying mathematical task is to determine the intensity I(k, l, t, λ), where (k, l) is the location of a pixel on the display medium. For the sake of clarity, the dependency on the time t and the wavelength λ of a color component of a pixel is omitted in the sequel.

Determining the intensity of a single pixel I(k, l), i.e., measuring the light flux through a pixel, requires computing a functional of the solution of the radiance transport integral equation

$$L(x, \omega) = L_e(x, \omega) + \underbrace{\int_{S^2} L(h(x, \omega_i), -\omega_i)\, f(\omega_i, x, \omega) \cos\theta_i \, d\omega_i}_{=:\, (T_f L)(x, \omega)}$$

As a Fredholm integral equation of the second kind, the radiance L in the point x into the direction ω is the sum of the source radiance Le and the reflected and transmitted radiance TƒL, which is an integral over the unit sphere S2. The cosine of the angle θi between the surface normal in x and the direction of incidence ωi accounts for the perpendicular incident radiance only, which is colored by the surface interface properties given by ƒ. Finally h determines the closest point of intersection of a ray from x into the direction ωi. The extension to participating media, which we omit here for lack of space, exposes the same structure.

Simultaneously computing all pixels


$$I(k, l) := \int_{\partial V} \int_{S^2} R_\alpha\big( L(x, \omega), k, l, x, \omega \big)\, d\omega\, dx$$

of an image is an integro-approximation problem. The mapping Rα represents the mathematical description of the camera and its response to the radiance L. Rα often is non-linear in order to be able to compensate for the limited dynamic range of most display media.

In a physically correct setting the norm ∥Tƒ∥ must be bounded by 1 in order to guarantee energy conservation. Then the Neumann-series converges and the computation of the radiance

$$L = S L_e := \sum_{i=0}^{\infty} T_f^i L_e$$

can be reduced to an infinite sum of integrals of increasing dimension. The single integrals $T_f^i L_e$ have a repetitive low-dimensional structure inherited from stacking transport operators. Obviously, lower powers of the transport operator are likely to be more important. Real world light sources are bounded, and consequently the radiance L can be uniformly bounded by some b > 0. In addition, real world radiance

$$L(y, \omega, t, \lambda) \in \mathcal{L}_b^2$$

is a signal of finite energy and thus must be square integrable.

However, often singular surface properties, as for example specular reflection, are modeled by


$$(T_{\delta_{\omega'}} L)(x, \omega) := L(h(x, \omega'), -\omega')$$

using Dirac's δ distribution, where ω′≡ω′(ω) is the direction of specular reflection. Then the operator norm of the solution operator can even reach 1 and the Neumann series can diverge. The additional problem of insufficient techniques is caused by


$\delta \notin \mathcal{L}_b^2$

because some transport paths cannot be efficiently sampled, forcing the need for biased approximations such as, e.g., the photon mapping algorithm for rendering caustics.

Both the radiance L and the intensity I are non-negative and piecewise continuous, where the discontinuities cannot be efficiently predicted. The actual basis of the function class used to represent and approximate the intensity I(k, l, t, λ) in fact is determined by the display medium or image storage format, e.g., an interleaved box basis for the color components of TFT displays, cosines for JPEG compressed images, etc.

Due to the lack of efficient analytical solutions, rendering algorithms reduce image synthesis to numerical integro-approximation. Simulating a camera with anti-aliasing, motion blur, and depth of field already contributes 5 dimensions to the integration domain of the intensity I. Area light sources and each level of reflection contribute another 2 dimensions. Consequently, the mathematical problem is high-dimensional, discontinuous, and in $\mathcal{L}_b^2$. Since tensor product techniques will fail due to dimensionality and a lack of continuity, Monte Carlo and quasi-Monte Carlo methods are the obvious choice.

Monte Carlo methods use random sampling to estimate integrals by sample means. Quasi-Monte Carlo methods look like Monte Carlo methods; however, they use deterministic points for sampling an integrand. In contrast to random samples, the specifically designed deterministic point sets are highly correlated, which allows for a much higher uniformity and results in a faster convergence.

Real random numbers on the unit interval are characterized by independence, unpredictability, and uniformity. For Monte Carlo integration the independence is required to prove error bounds and the uniformity is required to prove the order of convergence. Since real random numbers are expensive to generate, usually efficient deterministic algorithms are used to simulate pseudorandom numbers, which then of course are perfectly predictable but seemingly independent. However, the independence cannot be observed any longer after averaging the samples.

Quasi-Monte Carlo integration is based on these observations. By neglecting independence and unpredictability, it is possible to construct deterministic points which are much more uniform than random number samples can be. There exist many constructions for such deterministic point sets $P_n = \{x_0, \ldots, x_{n-1}\} \subset [0, 1)^s$, which are based on (1) radical inversion based point sets and (2) rank-1 lattice points:

(1) Radical inversion based point sets determine samples by

$$x_i = \left( \frac{i}{n},\, \Phi_{b_1}(i),\, \ldots,\, \Phi_{b_{s-1}}(i) \right)$$

where

$$\Phi_b : \mathbb{N}_0 \to [0, 1), \quad i = \sum_{l=0}^{\infty} a_l(i)\, b^l \mapsto \sum_{l=0}^{\infty} a_l(i)\, b^{-l-1} \qquad (3.1)$$

is the radical inverse in an integer base b. The digit aj(i) is the j-th digit of the index i represented in base b. The Hammersley point set is obtained by choosing bc as the c-th prime number. The uniformity of these points has been improved by applying permutations to the aj(i) before computing the inverse. The permutation


$$\pi_b(a_j(i)) = \big( a_j(i) + j \big) \bmod b$$

has been used, and other permutations have been developed, generalizing and improving on these results. Choosing all bc=b along with an appropriate set of mappings applied to the digits aj(i) yields the construction and theory of (t, m, s)-nets. There has been a lot of research as to how to efficiently compute radical inverses. One method is to tabulate the sum of the least significant T digits and to reuse them while generating the points

$$\sum_{j=0}^{\infty} \pi_b(a_j(i))\, b^{-j-1} = \underbrace{\sum_{j=T}^{\infty} \pi_b(a_j(i))\, b^{-j-1}}_{\text{changes only every } b^T\text{-th time}} + \underbrace{\sum_{j=0}^{T-1} \pi_b(a_j(i))\, b^{-j-1}}_{\text{table of size } b^T}$$

This method has been developed in the context of scrambled radical inversion. Rather than using Gray-codes, this method generates the points in their natural order at comparable speed.

(2) Rank-1 Lattice Points

$$x_i = \frac{i}{n} (1, g_1, \ldots, g_{s-1}) \bmod 1$$

are faster to generate than radical inversion based points. Their quality depends on the integer generator vector


$(1, g_1, \ldots, g_{s-1}) \in \mathbb{Z}^s$

However, the construction of good generator vectors is not obvious. In order to reduce the search space, the generator vectors have been determined by only one parameter a with $g_i = a^i$ (the Korobov form). Higher rank lattices can be constructed by linear combinations of rank-1 lattices.
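A minimal C++ sketch of rank-1 lattice point generation (the function name is hypothetical) follows; integer arithmetic is used for the fractional parts so that the points are exact.

    #include <cstddef>
    #include <vector>

    // Generates the n points of a rank-1 lattice; g is the full generator
    // vector (1, g_1, ..., g_{s-1}). For the Korobov form, g = (1, a, a^2, ...).
    std::vector<std::vector<double>> rank1Lattice(unsigned int n,
                                                  const std::vector<unsigned long long>& g)
    {
        std::vector<std::vector<double>> points(n, std::vector<double>(g.size()));
        for (unsigned int i = 0; i < n; ++i)
            for (std::size_t d = 0; d < g.size(); ++d)
                points[i][d] = double((1ULL * i * g[d]) % n) / double(n);  // (i/n) g_d mod 1
        return points;
    }

Note that these points are faster to generate than radical inversion based points, since each coordinate requires only one multiplication and one modular reduction.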

Both principles can be generalized to yield sequences of points, which allow for adaptive sampling without discarding previously taken samples, however at the price of a slight loss of uniformity: the Halton sequence and its variations corresponding to the Hammersley points, (t, s)-sequences containing (t, m, s)-nets, and extensible lattice rules containing lattices.

The above constructions yield rational numbers in the unit interval. It is especially interesting to use the base b = 2 and n = 2^m points, because then the points can be represented exactly in the actual machine numbers as defined by the ANSI/IEEE 754-1985 standard for binary floating point arithmetic.

The different constructions of the previous section in fact have one common feature: They induce uniform partitions of the unit cube. This kind of uniformity has been characterized as follows:

Definition 1. Let (X, B, μ) be an arbitrary probability space and let M be a nonempty subset of B. A point set $P_n$ of n elements of X is called (M, μ)-uniform if

$$\sum_{i=0}^{n-1} \chi_M(x_i) = \mu(M) \cdot n \quad \text{for all } M \in \mathcal{M},$$

where $\chi_M(x_i) = 1$ if $x_i \in M$, and zero otherwise.

Examples of (M, μ)-uniform point sets are samples from the Cartesian product midpoint rule and radical inversion based points. In addition, rank-1 lattices are also (M, μ)-uniform. The Voronoi diagram of a lattice partitions the unit cube into n sets of identical shape and volume $\frac{1}{n}$. This underlines that for (M, μ)-uniformity all μ(M) must have the same denominator n.

The function classes of computer graphics imply the use of the probability space

$([0, 1)^s, \mathcal{B}, \lambda_s)$

with the Borel sets $\mathcal{B}$ and the s-dimensional Lebesgue measure $\lambda_s$.

A sequence of point sets is uniformly distributed if and only if its discrepancy vanishes in the limit. The deterministic constructions sketched in the previous section can obtain so-called low discrepancy, which vanishes, roughly speaking, like $\frac{1}{n}$, while independent random points only obtain roughly $\frac{1}{\sqrt{n}}$, and points from the Cartesian product midpoint rule only $\frac{1}{\sqrt[s]{n}}$.

There are some facts about discrepancy that make it problematic. Discrepancy is an anisotropic measure, because its concept is based on axis-aligned boxes. Consequently, discrepancy is influenced by rotating point sets. While samples from the Cartesian product midpoint rule result in bad discrepancy, lattice points from the Fibonacci lattices have low discrepancy, although some of them are just rotated rectangular grids. Discrepancy is not even shift-invariant since shifting a point set on the unit torus also changes discrepancy.

Definition 1, above, supports partitions which are not axis-aligned, as for example the Voronoi diagram of a rank-1 lattice. Maximum uniformity in this sense can be obtained by selecting the points such that the regions of the Voronoi diagram approximate spheres as closely as possible, i.e., by maximizing the mutual minimum distance

$$d_{\min}(P_n) := \min_{0 \le i < n} \ \min_{i < j < n} \| x_j - x_i \|_T$$

among all sample points in $P_n$, where $\|\cdot\|_T$ denotes the Euclidean distance on the unit torus. The minimum distance measure is isotropic and shift-invariant, thus overcoming these disadvantages of discrepancy.
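By way of a non-limiting illustration, the minimum distance on the unit torus can be computed as follows in C++; per coordinate, the toroidal distance is the shorter of the direct and wrap-around differences.

    #include <algorithm>
    #include <cmath>
    #include <limits>
    #include <vector>

    // Euclidean distance between two points on the unit torus.
    double toroidalDistance(const std::vector<double>& a, const std::vector<double>& b)
    {
        double sum = 0.0;
        for (std::size_t d = 0; d < a.size(); ++d) {
            double diff = std::fabs(a[d] - b[d]);
            diff = std::min(diff, 1.0 - diff);   // wrap around the torus
            sum += diff * diff;
        }
        return std::sqrt(sum);
    }

    // Mutual minimum distance d_min(P_n) over all pairs of sample points.
    double minimumDistance(const std::vector<std::vector<double>>& P)
    {
        double dmin = std::numeric_limits<double>::infinity();
        for (std::size_t i = 0; i < P.size(); ++i)
            for (std::size_t j = i + 1; j < P.size(); ++j)
                dmin = std::min(dmin, toroidalDistance(P[i], P[j]));
        return dmin;
    }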

FIGS. 11A-H show a series of plots 260-330 illustrating classical quasi-Monte Carlo points for n = 16 (top row) and n = 64 (bottom row), along with their mutual minimum distance $d_{\min}$. The rank-1 lattice has been selected such that its minimum distance is maximal. It is interesting to observe that the constructions with better discrepancy have larger minimum distance, too, as can be seen for the Hammersley points, the Sobol sequence, and the Larcher-Pillichshammer points. It also can be observed that the minimum distance of the Halton sequence with permutations is maximized as compared to the original Halton sequence.

The rank-1 lattices in FIGS. 11D and 11H are Korobov lattices, where the parameter a has been chosen to maximize the minimum distance. The rank-1 lattice at n = 16 points in FIG. 11D in fact can be generated as a (t, 4, 2)-net in base b = 2. This is not possible for the quality parameter t = 0, because at least two points lie on one line parallel to the x-axis. In this case the best quality parameter prevents the points from reaching maximum minimum distance.

Similarly, postulating gcd(gi, n)=1 restricts the search space for generator vectors of lattices in such a way that the lattices with maximum minimum distance cannot be found. Forcing Latin hypercube properties by t=0 or gcd(gi, n)=1, respectively, may be useful in some situations; however, it prevents the sample points from optimally covering the unit torus in the minimum distance sense.

Maximizing the minimum distance for generating highly uniform point sets is a principle of nature. For example, the distribution of the receptors in the retina is grown that way. Algorithmically, this principle is known as “Lloyd's relaxation scheme,” according to which samples with identical charges are placed on the unit torus and then the points are allowed to repel each other until they reach some equilibrium. From a mathematical point of view, the points are moved to the center of gravity of their Voronoi cells during relaxation. The convergence of this scheme has been improved quadratically. Note that rank-1 lattices are invariant under this kind of relaxation scheme.
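A minimal sketch of one Lloyd relaxation step on the unit torus is given below; it approximates the centroid of each Voronoi cell by binning a dense set of random probe points to their nearest sample. All names and the probe-based centroid approximation are illustrative assumptions, not the quadratically convergent scheme referred to above:

import numpy as np

def lloyd_step(samples, probes_per_step=65536, rng=None):
    """Move each sample towards the centroid of its toroidal Voronoi cell."""
    rng = np.random.default_rng() if rng is None else rng
    probes = rng.random((probes_per_step, samples.shape[1]))
    delta = np.abs(probes[:, None, :] - samples[None, :, :])
    delta = np.minimum(delta, 1.0 - delta)          # toroidal distances
    nearest = np.argmin((delta ** 2).sum(axis=2), axis=1)
    relaxed = samples.copy()
    for i in range(len(samples)):
        cell = probes[nearest == i]
        if len(cell):
            # center the cell around the sample before averaging on the torus
            local = (cell - samples[i] + 0.5) % 1.0
            relaxed[i] = (local.mean(axis=0) - 0.5 + samples[i]) % 1.0
    return relaxed

Applied to the points of a rank-1 lattice, such a step leaves the point set essentially unchanged, reflecting the invariance noted above.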

Point sets selected by maximizing the mutual Euclidean distance on the unit torus induced by ∥·∥T have the advantage that they can be tiled seamlessly in order to fill s-dimensional space. This property is intrinsic to lattices. However, it is not true for radical inversion based points in general. While the points by Larcher and Pillichshammer tile seamlessly, the Hammersley points do not. The norm ∥·∥T in fact should be a weighted norm, which includes the size of the integration domain. FIGS. 12A-C show a series of drawings 340-360, illustrating selection of lattices by maximum minimum distance in the unit cube (FIG. 12A); in the unit cube scaled to the integration domain (FIG. 12B); and in the integration domain (FIG. 12C). Considering the scale of the integration domain yields more uniform points. In particular, it will be seen that maximizing the minimum distance on the unit cube does not imply nice uniformity when scaling the domain of integration.

It is common knowledge that quasi-Monte Carlo integration outperforms Monte Carlo integration in many applications. However, the classical error bound theorems often do not fit the function classes of the application.

The Koksma-Hlawka inequality deterministically bounds the integration error by the product of the discrepancy of the sample points used to determine the sample mean and the variation of the integrand in the sense of Hardy and Krause. The variation can be considered as the remainder of trying to obtain discrepancy as a factor in the error bound.

The class of functions of bounded variation is impractical in certain situations. For example, discontinuities that are not aligned with the coordinate axes already yield infinite variation. The error bound thus becomes useless in even simple settings of computer graphics, such as an edge

$$f(x,y) = \begin{cases} 1 & y > x \\ 0 & \text{else} \end{cases} \qquad (3.2)$$

in the unit square, for which the variation is unbounded. Using other notions of discrepancy, such as for example isotropic discrepancy, it is possible to find an error bound that works for this case. However, it becomes far too pessimistic in higher dimensions.

Similar to the Koksma-Hlawka inequality, there exist deterministic error bounds for integration by lattices. The function class used requires periodic functions and imposes certain constraints on the Fourier coefficients of the integrand, which do not apply to discontinuous functions as used in computer graphics.

In a vast number of publications, experiments give numerical evidence that quasi-Monte Carlo methods outperform Monte Carlo methods in practice; however, in the majority of cases the classical theorems cannot explain the observed results. The main reason is that general discontinuities cannot be accounted for by the classical error bounds.

As stated above, image synthesis is an integro-approximation problem in Lb2, and quasi-Monte Carlo integro-approximation in fact has been used successfully in computer graphics. Therefore, the following theorem may be generalized in the sense of parametric integration:

Theorem 1. Let (X, B, μ) be an arbitrary probability space and let M = {M1, . . . , Mk} be a partition of X with Mj ∈ B for 1 ≤ j ≤ k. Then for any (M, μ)-uniform point set P = {x0, . . . , xn−1} and any bounded function f which, restricted to X, is μ-integrable, we have

$$\left\| \frac{1}{n} \sum_{i=0}^{n-1} f(x_i, y) - \int_X f(x, y)\, d\mu(x) \right\| \;\le\; \sum_{j=1}^{k} \mu(M_j) \left\| \sup_{x \in M_j} f(x, y) - \inf_{x \in M_j} f(x, y) \right\|$$

for any suitable norm ∥·∥.

This theorem may be proved as follows: For all y∈Y, consider an arbitrary stratum Mj∈M. Then,

$$\sum_{i=0}^{n-1} \chi_{M_j}(x_i) \inf_{x \in M_j} f(x, y) \;\le\; \sum_{\substack{i=0 \\ x_i \in M_j}}^{n-1} f(x_i, y) \;\le\; \sum_{i=0}^{n-1} \chi_{M_j}(x_i) \sup_{x \in M_j} f(x, y)$$

which implies

$$\mu(M_j) \inf_{x \in M_j} f(x, y) \;\le\; \frac{1}{n} \sum_{\substack{i=0 \\ x_i \in M_j}}^{n-1} f(x_i, y) \;\le\; \mu(M_j) \sup_{x \in M_j} f(x, y)$$

because P is an (M, μ)-uniform point set. Similarly,

$$\mu(M_j) \inf_{x \in M_j} f(x, y) \;\le\; \int_{M_j} f(x, y)\, d\mu(x) \;\le\; \mu(M_j) \sup_{x \in M_j} f(x, y)$$

From the latter two sets of inequalities it follows that

$$\left| \frac{1}{n} \sum_{\substack{i=0 \\ x_i \in M_j}}^{n-1} f(x_i, y) - \int_{M_j} f(x, y)\, d\mu(x) \right| \;\le\; \mu(M_j) \left( \sup_{x \in M_j} f(x, y) - \inf_{x \in M_j} f(x, y) \right)$$

Since M is a partition of X,

$$\frac{1}{n} \sum_{i=0}^{n-1} f(x_i, y) - \int_X f(x, y)\, d\mu(x) = \frac{1}{n} \sum_{j=1}^{k} \sum_{\substack{i=0 \\ x_i \in M_j}}^{n-1} f(x_i, y) - \sum_{j=1}^{k} \int_{M_j} f(x, y)\, d\mu(x) = \sum_{j=1}^{k} \left( \frac{1}{n} \sum_{\substack{i=0 \\ x_i \in M_j}}^{n-1} f(x_i, y) - \int_{M_j} f(x, y)\, d\mu(x) \right)$$

Using the previous inequality and applying the norm ∥•∥ to both sides of the resulting inequality yields the desired bound.

By omitting y, the norm reduces to the absolute value, and the original theorem and its proof remain, as stated in the following Theorem 2:

Theorem 2. Let (X, B, μ) be an arbitrary probability space and let M = {M1, . . . , Mk} be a partition of X with Mj ∈ B for 1 ≤ j ≤ k. Then for any (M, μ)-uniform point set P = {x0, . . . , xn−1} and any bounded μ-integrable function f on X we have

$$\left| \int_X f(x)\, d\mu(x) - \frac{1}{n} \sum_{i=0}^{n-1} f(x_i) \right| \;\le\; \sum_{j=1}^{k} \mu(M_j) \left( \sup_{x \in M_j} f(x) - \inf_{x \in M_j} f(x) \right)$$

By using the concept of (M, μ)-uniform point sets instead of discrepancy, proofs become simpler and results are more general, compared with earlier approaches. With (X, B, μ) = ([0, 1)^s, B, λ^s), both theorems are applicable in the setting of computer graphics.

For the example of Equation (3.2), a deterministic error bound of O(n−1/2) can be obtained by selecting an (M, μ)-uniform point set with k=n. The difference of the supremum and the infimum can only be one in the O(√n) sets of the partition which are crossed by the discontinuity; otherwise it must be zero. With μ(Mj)=1/n, the bound follows. Note that this argument does not use probabilistic arguments for a deterministic algorithm.

Quasi-Monte Carlo methods are biased, because they are deterministic, but consistent, because they asymptotically converge to the right solution. Randomizing these algorithms allows for unbiased estimators and for unbiased error estimators.

It is useful to consider the method of dependent tests

$$\int_{[0,1)^s} f(x, y)\, dx = \int_{[0,1)^s} \sum_{i=0}^{n-1} w_i(x, y) f(R_i(x), y)\, dx \approx \sum_{i=0}^{n-1} w_i(\omega, y) f(R_i(\omega), y) \qquad (3.3)$$

by first applying an equivalence transformation, which does not change the result, and then using one random sample ω in order to obtain an unbiased estimate.

This formulation is able to cover a wide range of techniques that increase the efficiency of the method of dependent tests. For example, random scrambling of arbitrary points, random translation on the unit torus, random padding of arbitrary point sets, stratification and stratification induced by rank-1 lattices, trajectory splitting, and many more techniques easily can be formulated by a tuple of replications Ri with associated weights wi.

Omitting y, the special case with equal weights

$$w_i(x) = \frac{1}{n},$$

where for fixed ω the set of points

$$(R_i(\omega))_{i=0}^{n-1}$$

is of low discrepancy, is defined to be randomized quasi-Monte Carlo integration.

Repeating independent realizations allows one to estimate the error of the approximation in an unbiased way. However, some convergence is sacrificed, since independent random samples cannot be as uniformly distributed as correlated samples can be. This is especially noticeable in the setting of computer graphics, where small numbers of samples are used.
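The following sketch illustrates this trade-off for the simplest randomization, a random shift modulo one; the function names are illustrative assumptions, and the integrand f is assumed to accept a point of [0,1)^s:

import numpy as np

def randomized_qmc(f, points, m=16, rng=None):
    """m independent random shifts of one low discrepancy point set,
    returning the unbiased estimate and an unbiased error estimate."""
    rng = np.random.default_rng() if rng is None else rng
    estimates = np.empty(m)
    for r in range(m):
        shift = rng.random(points.shape[1])
        shifted = (points + shift) % 1.0          # replications R_i(omega)
        estimates[r] = np.mean([f(x) for x in shifted])
    return estimates.mean(), estimates.std(ddof=1) / np.sqrt(m)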

Anti-aliasing is a central problem in computer graphics. FIG. 13 is a computer-generated image 370 of an infinite plane with a checkerboard texture that illustrates various difficulties. While in the front of the checkerboard the fields clearly can be distinguished, extraneous patterns appear towards the horizon. While it is simple to compute the color of a pixel as an average as long as the checkerboard cells are clearly distinguishable, this is no longer possible at the horizon, where through one pixel infinitely many cells can be seen. By common sense, it would be expected that the pixels at the horizon would be gray, i.e., the average of black and white. However, surprisingly, zooming into a pixel reveals that the areas of black and white tiles are not equal in general. This means that no matter what quadrature is used, the horizon will not appear gray, but somehow patterned.

The lens of the human eye is not perfectly transparent and thus slightly blurs the light before it reaches the retina. The amount of blur perfectly fits the resolution of the receptors in the retina. A similar trick in computer graphics is to blur the texture before integration, where the strength of the blur depends on the distance from the eye. However, this blurring technique does not help if the mathematical problems are not caused by textures. Another compromise is to filter the resulting image. However, such a filtering technique causes not only aliases but also previously sharp details to become blurred.

Aliasing can only be hidden by random sampling. Taking one independent random sample inside each pixel, the horizon will appear as uncorrelated black and white pixels. The structured aliases thus are mapped to noise, which is less disturbing to the eye. However, taking more random samples per pixel, the quadrature will eventually converge and aliases will appear instead of the noise.

Assuming that situations like the ones mentioned before can be managed by suitable filtering, there are now discussed alternative sampling patterns of computer graphics. These sampling patterns are illustrated in FIGS. 14A-C. FIG. 14A shows stratified sampling 380; FIG. 14B shows Latin hypercube sampling 390; and FIG. 14C shows blue noise sampling 400. FIGS. 15A-E show a series of drawings 410-450, illustrating sampling using quasi-Monte Carlo points.

Stratified sampling, shown in FIG. 14A, is at least as good as random sampling. However, stratified sampling suffers from the curse of dimension, since each axis of the integration domain has to be divided at least once.

Latin hypercube sampling, shown in FIG. 14B, can never be much worse than random sampling and is available for any dimension and number of samples. The average observed performance is good, although it cannot be guaranteed.

Poisson disk sampling, shown in FIG. 14C, simulates the distribution of receptors in the human eye. The mutual minimum distance of the points is maximized, which results in a reduced noise level in the rendered image. Although restricted by a guaranteed minimum distance, i.e., an empty disk around each sample, the sample points are randomly placed. Thus representable details remain sharp in the final image and aliases are efficiently mapped to noise. However, Poisson disk sampling patterns are typically expensive to generate.

The properties of these sampling patterns seem disjoint; however, there exist quasi-Monte Carlo points which can be generated efficiently and which unite the above properties.

There is now discussed an anti-aliasing technique, using (0, 2m, 2)-nets in base b=2.

The use of Hammersley points in the accumulation buffer algorithm has been investigated. Despite considerable improvements, each pixel had to use the same samples, which required higher sampling rates in order to avoid aliases. One solution to that problem is to exploit the structure of (0, 2m, 2)-nets in base b=2.

FIGS. 15A-E illustrate that (0, 2m, 2)-nets in base b=2 unite the properties of classical sampling patterns in computer graphics. The first two dimensions

$$x_i = \left( \frac{i}{(2^m)^2},\; \Phi_2(i) \right)$$

of the Hammersley points are an example of such a point set, which is stratified, a Latin hypercube sample, and has a guaranteed minimum distance of at least

$$\frac{1}{(2^m)^2}.$$

The method of dependent tests is realized by tiling the image plane with copies of the (0, 2m, 2)-net. FIG. 16 shows a plot 460, illustrating how the samples in a pixel are determined by tiled instances of a Hammersley point set. The solid lines indicate the unit square including one set of 16 Hammersley points and the dashed lines indicate screen pixel boundaries. Because of a lack of space, the illustration uses only 4 samples per pixel.

In order to reduce aliasing artifacts, neighboring pixels should have different samples. This is achieved by letting one Hammersley point set cover multiple pixels. The improvement in convergence is visible in the same figure, where we compare stratified random sampling and anti-aliasing by using the Hammersley point set. The improvements directly transfer to other rendering algorithms, such as for example the REYES architecture as used in PIXAR's RenderMan software.
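A sketch of this tiling is given below: one Hammersley point set is scaled over a block of tile × tile pixels, and a pixel receives exactly those points that fall inside it. The helper names are illustrative assumptions:

def radical_inverse_base2(i):
    """Van der Corput radical inverse Phi_2(i) by mirroring the binary digits."""
    x, f = 0.0, 0.5
    while i:
        if i & 1:
            x += f
        i >>= 1
        f *= 0.5
    return x

def samples_for_pixel(px, py, m, tile):
    """Points of the (0, 2m, 2)-net covering a tile x tile pixel block
    that land in pixel (px, py), in pixel-local coordinates."""
    n = 1 << (2 * m)                              # n = 2^(2m) Hammersley points
    samples = []
    for i in range(n):
        x = i / n * tile                          # first component i/n, scaled
        y = radical_inverse_base2(i) * tile
        if int(x) == px % tile and int(y) == py % tile:
            samples.append((x - int(x), y - int(y)))
    return samples

With m = 2 and tile = 2 this reproduces the four samples per pixel used in the illustration of FIG. 16.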

Selecting rank-1 lattices by maximum minimum distance takes the principle of Lloyd relaxation and Poisson disk sampling to the limit. Since rank-1 lattices are available for any number n of points, the sampling rate can be chosen freely. A factorization of n as required for axis-aligned stratified sampling is not needed. In order to attenuate aliasing in each pixel a different random shift is added to the lattice points resulting in an unbiased estimate. The images obtained that way display minimal noise and aliasing. The samples of a randomly shifted lattice can be generated fast and the observed convergence is even faster as compared to the radical inversion based methods from the previous section. The method can be derandomized by extracting the shift from, e.g., a Hammersley point set using the stratification properties pointed out in the previous section.
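A brute-force selection of such a lattice can be sketched as follows; the exhaustive search over the Korobov parameter a is only meant to illustrate the criterion, and all identifiers are assumptions:

import numpy as np

def korobov_points(n, a):
    """Two-dimensional rank-1 lattice in Korobov form with generator (1, a)."""
    i = np.arange(n)
    return np.stack([i / n, (i * a % n) / n], axis=1)

def max_min_distance_korobov(n):
    """Return the parameter a whose lattice maximizes the toroidal minimum distance."""
    def d_min(p):
        d = np.abs(p[:, None, :] - p[None, :, :])
        d = np.minimum(d, 1.0 - d)
        dist = np.sqrt((d ** 2).sum(-1)) + np.eye(len(p))  # mask zero self-distances
        return dist.min()
    return max(range(1, n), key=lambda a: d_min(korobov_points(n, a)))

In a renderer, each pixel would then add its own random shift modulo one to korobov_points(n, a), as described above.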

The principles of the previous sections can be extended to approximate the full integro-approximation problem. Using the Neumann series, the computation of L is reduced to a sum of integrals, as sketched in the introduction. Extensive investigations have been carried out on path tracing algorithms in connection with quasi-Monte Carlo methods and randomized versions of the latter methods. All the experiments resulted in improvements when using quasi-Monte Carlo methods as compared to classical algorithms of computer graphics.

As mentioned above, randomizing quasi-Monte Carlo methods allows for unbiased estimators and unbiased error estimates. The latter does not appear to be particularly interesting in computer graphics, as better adaptive methods already exist, as discussed below. Furthermore, the resulting images can be generated in the same time and quality no matter whether a quasi-Monte Carlo method or its randomized counterpart is used.

However, it is interesting to look at the effect of randomization schemes on the minimum distance. While randomizing point sets by random shifts does not change their maximum minimum distance on the torus, random scrambling does.

Applying random scrambling to the classical low discrepancy point sets, it can be observed that the uniformity often is improved and rarely decreased. This is true for both discrepancy and minimum distance. Experiments indicate that random scrambling only marginally changes the uniformity of the scrambled Halton sequence, indicating that deterministic permutations already are a very good choice.

Permutations for the Halton sequence are realizations of random scrambling and can be considered deterministic scrambling, although they were introduced long before random scrambling. Although the permutations have been introduced to improve on discrepancy, they also increase the maximum minimum distance of the Halton points. Implementing the scrambled Halton sequence is much simpler than random scrambling and avoids the risk that uniformity is influenced negatively.
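A deterministic digit scrambling can be sketched as follows; the concrete permutations (e.g., those attributed to Faure) are not reproduced here, so the example permutation is an arbitrary assumption, chosen to fix the digit 0 so that the infinite trailing zeros need no special handling:

def scrambled_radical_inverse(i, b, perm):
    """Radical inverse in base b with a permutation applied to every digit."""
    x, f = 0.0, 1.0 / b
    while i:
        x += perm[i % b] * f
        i //= b
        f /= b
    return x

# illustrative permutation for base 3 with perm[0] == 0
print(scrambled_radical_inverse(7, 3, (0, 2, 1)))   # digits 1,2 -> 2,1: 2/3 + 1/9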

In computer graphics, the difficulty of the integro-approximation problem differs from pixel to pixel and adaptive methods pay off. A possible criterion for refinement is to compare the image gradient


$$\|\nabla I(k,l)\|_2 > T \cdot n_{k,l}^{-\alpha} \qquad (3.4)$$

against a threshold T. By the exponent α∈[0.5, 1.5] we can adapt to the speed of convergence, where nk,l is the number of samples used for pixel (k, l). Alternatively, refinement can be indicated by the routines that evaluate the radiance L, since these routines can access more information about the scene to be rendered.
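A sketch of the refinement test of Equation (3.4) is given below; the central-difference gradient and the scalar (luminance) image are simplifying assumptions for illustration:

import numpy as np

def needs_refinement(image, n_samples, k, l, T=0.05, alpha=1.0):
    """Refine pixel (k, l) while ||grad I(k,l)||_2 > T * n_{k,l}^(-alpha)."""
    gx = (image[k + 1, l] - image[k - 1, l]) / 2.0   # interior pixels assumed
    gy = (image[k, l + 1] - image[k, l - 1]) / 2.0
    return float(np.hypot(gx, gy)) > T * n_samples[k, l] ** (-alpha)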

In the following discussion, there is described a technique for adaptive anti-aliasing by elements of the Halton sequence. The method is superior to random sampling, since the points of the Halton sequence are more uniform and, visually speaking, fall exactly into the largest unsampled gaps of the sampling domain, resulting in a smooth convergence.

The radical inverse in Equation (3.1) mirrors the representation of the index i by the digits al(i)∈{0, . . . , b−1} in the integer base b∈ℕ at the decimal point. In base b=2 this means that

$$\Phi_2(i) \in \begin{cases} [0, 1/2) & \text{for } i \text{ even,} \\ [1/2, 1) & \text{else.} \end{cases} \qquad (3.5)$$
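The radical inverse can be implemented in a few lines; the following Python sketch (illustrative, with the digit loop terminating once all digits of the finite expansion are consumed) also checks the stratification of Equation (3.5):

def radical_inverse(i, b):
    """Phi_b(i): mirror the base-b digits of i at the decimal point."""
    x, f = 0.0, 1.0 / b
    while i:
        x += (i % b) * f        # digit a_l(i) weighted by b^(-l-1)
        i //= b
        f /= b
    return x

# Equation (3.5): even indices land in [0, 1/2), odd indices in [1/2, 1)
assert radical_inverse(4, 2) < 0.5 <= radical_inverse(5, 2)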

This observation has been generalized and named periodicity. In fact, however, this property relates much more to stratification as formalized by the definition of (0, 1)-sequences. Therefore, a different derivation is presented that stresses the actual stratification property. The index i is chosen as


$$i = j \cdot b^n + k \quad \text{for } k \in \{0, \ldots, b^n - 1\}$$

and inserted into Equation (3.1), yielding

$$\Phi_b(i) = \Phi_b(j \cdot b^n + k) = \sum_{l=0}^{\infty} a_l(j \cdot b^n + k)\, b^{-l-1} = \sum_{l=0}^{\infty} a_l(j \cdot b^n)\, b^{-l-1} + \sum_{l=0}^{\infty} a_l(k)\, b^{-l-1} = \sum_{l=0}^{\infty} a_l(j)\, b^{-n-l-1} + \Phi_b(k) = \underbrace{b^{-n}\, \Phi_b(j)}_{\in\, [0,\, b^{-n})} + \Phi_b(k) \qquad (3.6)$$

by exploiting the additivity of the radical inverses that belong to the class of (0, 1)-sequences. The first term of the result depends on j and obviously is bounded by b−n, while the second term is a constant offset by the radical inverse of k. Since


$$k \in \{0, \ldots, b^n - 1\}$$

it follows that


$$\Phi_b(k) \in \{0,\; b^{-n},\; 2b^{-n},\; \ldots,\; (b^n - 1)\, b^{-n}\}$$

Fixing the first n digits by k, then for j ∈ ℕ0, yields radical inverses only in the interval


$$[\Phi_b(k),\; \Phi_b(k) + b^{-n})$$
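Reusing the radical_inverse sketch above, the decomposition of Equation (3.6) and the resulting interval can be checked numerically (the chosen b, n and k are arbitrary):

b, n, k = 2, 3, 5
for j in range(32):
    phi = radical_inverse(j * b**n + k, b)
    # additivity: Phi_b(j * b^n + k) = b^(-n) * Phi_b(j) + Phi_b(k)
    assert abs(phi - (b**-n * radical_inverse(j, b) + radical_inverse(k, b))) < 1e-12
    # all such radical inverses stay inside [Phi_b(k), Phi_b(k) + b^(-n))
    assert radical_inverse(k, b) <= phi < radical_inverse(k, b) + b**-n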

The one-dimensional observations from the previous section generalize to higher dimensions. For the Halton sequence

$$x_i = (\Phi_{b_1}(i), \ldots, \Phi_{b_s}(i)) \in [0,1)^s,$$

where for the c-th component b_c is the c-th prime number, this is seen by choosing the index

$$i = j \cdot \prod_{d=1}^{s} b_d^{n_d} + k \quad \text{with } 0 \le k < \prod_{d=1}^{s} b_d^{n_d} \qquad (3.7)$$

yielding

$$\Phi_{b_c}(i) = \Phi_{b_c}\!\left( j \cdot \prod_{d=1}^{s} b_d^{n_d} + k \right) = b_c^{-n_c}\, \Phi_{b_c}\!\left( j \cdot \prod_{\substack{d=1 \\ d \ne c}}^{s} b_d^{n_d} \right) + \Phi_{b_c}(k)$$

which is analogous to Equation (3.6) and consequently

$$x_i \in \prod_{c=1}^{s} \left[ \Phi_{b_c}(k),\; \Phi_{b_c}(k) + b_c^{-n_c} \right)$$

for fixed k and j ∈ ℕ0 with the choice of the index i according to Equation (3.7). The

$$\prod_{d=1}^{s} b_d^{n_d}$$

strata selected by k are disjoint and form a partition of the unit cube [0, 1)^s. However, the scheme suffers from the curse of dimension, since stratifying all s dimensions results in an exponential growth of the number of strata. The increment


$$\prod_{d=1}^{s} b_d^{n_d}$$

grows even faster than 2^s since, except for the first, all b_d > 2.

A tensor product approach has been compared with an approach using the Halton sequence. An illustration has been constructed of the stratification of path space for photons that are emitted on the light source and traced for 2 reflections. The problem requires samples from the 8-dimensional unit cube. The sampling domain was stratified into 128^8 strata of identical measure. Then one stratum was selected and 8 random samples were drawn from it to determine the photon trajectories. Stratification by the Halton sequence was then used to determine the 8 paths. In spite of an enormously large increment that hardly fits into the integer representation of a computer, the trajectories start to diverge after the 2nd reflection.

For a stratification as fine as the tensor product approach, large increments

$$\prod_{d=1}^{s} b_d^{n_d}$$

are required that hardly fit the integer representation of a computer. In addition, the assumption that close points in the unit cube result in close paths in path space is not valid for more complex scenes. Depending on what part of the geometry is hit, the photons can be scattered into completely different parts of the scene, although their generating points had been close in the unit cube. The longer the trajectories, the more they diverge. The above observations easily can be transferred to (t, s)-sequences in base b.

While stratification by the Halton sequence is not useful in high dimensions, it can be very useful in small dimensions as, for example, in pixel anti-aliasing. The properties in two dimensions are illustrated in FIGS. 17A and 17B, which are plots 470 and 480 illustrating how the samples from the Halton sequence in the unit square are scaled to fit the pixel raster. The plots show the first two components xi = (Φ2(i), Φ3(i)) of the Halton sequence for 0 ≤ i < 2^3·3^3 = 216. The solid points have the indices i = i_k(j) = 2^1·3^1·j + k selected by k = 1; the stratum with these emphasized points thus contains all such indices. In order to match the square pixels the coordinates are scaled, i.e.,


$$x_i \mapsto x_i' = \left( 2^1 \cdot \Phi_2(i),\; 3^1 \cdot \Phi_3(i) \right)$$

In general, the first component xi(1) is scaled by b_1^{n_1} and the second xi(2) by b_2^{n_2}. Thus a b_1^{n_1} × b_2^{n_2} stratified sample pattern is obtained that can be periodically tiled over the image plane, as described above. Identifying each stratum with a pixel, the identification k easily is determined (for example by a table lookup) from the pixel coordinates, and a Halton sequence restricted to that pixel is obtained from

$$i = i_k(j) = j \cdot b_1^{n_1} \cdot b_2^{n_2} + k \quad \text{for } j \in \mathbb{N}_0$$
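A sketch of this per-pixel restriction is given below, reusing the radical_inverse sketch above; the table lookup and the choice b_1 = 2, b_2 = 3 follow the two-dimensional example, while the function names are assumptions:

def pixel_offset_table(n1, n2):
    """Map each pixel of a 2^n1 x 3^n2 raster to its stratum offset k."""
    p1, p2 = 2 ** n1, 3 ** n2
    table = {}
    for k in range(p1 * p2):
        px = int(p1 * radical_inverse(k, 2))
        py = int(p2 * radical_inverse(k, 3))
        table[(px, py)] = k
    return table

def pixel_samples(px, py, n1, n2, count, table):
    """First `count` Halton samples falling into pixel (px, py)."""
    p1, p2 = 2 ** n1, 3 ** n2
    k = table[(px % p1, py % p2)]
    for j in range(count):
        i = j * p1 * p2 + k                      # indices i_k(j)
        # the scaled components land in the pixel; keep the fractional part
        yield (p1 * radical_inverse(i, 2)) % 1.0, (p2 * radical_inverse(i, 3)) % 1.0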

Exemplary images have been computed using a path tracer, as described above, with the scrambled Halton sequence. Refinement was triggered by the gradient criterion of Equation (3.4). Note that the scrambling does not change Φ2 and Φ3. Consequently, the above algorithm can be applied directly and benefits from the improved uniformity of the scrambled sequence.

Trajectory splitting can increase efficiency in rendering algorithms. A typical example is volume rendering. While tracing one ray through a pixel, it is useful to send multiple rays to the light sources along that ray. It has been shown that taking equidistant samples on the ray that all have been randomly shifted by the same amount is much more efficient than the original method that used jittered sampling.
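The splitting pattern along the ray can be sketched in one line of arithmetic: the m positions are equidistant and share a single random shift per ray (all names are illustrative assumptions):

import numpy as np

def ray_light_sample_positions(t_near, t_far, m, rng=None):
    """Equidistant parameters on [t_near, t_far), all shifted by one random offset."""
    rng = np.random.default_rng() if rng is None else rng
    shift = rng.random()                          # one shift shared by all m samples
    return t_near + (np.arange(m) + shift) / m * (t_far - t_near)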

A general approach to trajectory splitting is to restrict the replications in Equation (3.3) to some dimensions of the integrand. For quasi-Monte Carlo methods, this approach has been used in an implementation according to which a strictly deterministic version of distribution ray tracing was developed. A systematic approach has been taken, according to which randomization techniques from the field of randomized quasi-Monte Carlo methods have been parameterized. Instead of using random parameters, deterministic quasi-Monte Carlo points have been applied. Seen from a practical point of view, trajectory splitting can be considered as low-pass filtering of the integrand with respect to the splitting dimensions.

The most powerful method is to split trajectories using domain stratification induced by rank-1 lattices. For the interesting s dimensions of the problem domain, a rank-1 lattice is selected. The matrix B contains the vectors spanning the unit cell as identified by the Voronoi-diagram of the rank-1 lattice. Then

$$R_i : [0,1)^s \to A_i, \quad x \mapsto \left( \frac{i}{n} \cdot (1, g_1, \ldots, g_{s-1}) + Bx \right) \bmod [0,1)^s$$

maps points from the unit cube to the i-th stratum A_i of the rank-1 lattice, as depicted in FIG. 18B, discussed below. This scheme can be applied recursively, yielding recursive Korobov filters.
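A sketch of the replication R_i follows; the basis matrix B of the lattice unit cell is assumed to be given (for the Fibonacci lattice at n = 5 one hand-checked choice is shown), and the generator vector is taken in the Korobov-like form (1, g_1, . . . , g_{s−1}) used above:

import numpy as np

def replicate(x, i, n, g, B):
    """Map x in [0,1)^s into the stratum A_i anchored at the i-th lattice point."""
    anchor = (i / n) * np.concatenate(([1.0], g))   # i/n * (1, g_1, ..., g_{s-1})
    return (anchor + B @ x) % 1.0

# Fibonacci lattice at n = 5 with generator (1, 2); the columns of B span its
# (square) Voronoi unit cell of area 1/5 -- an assumed, hand-checked basis.
B = np.array([[0.2, 0.4],
              [0.4, -0.2]])
y = replicate(np.array([0.5, 0.5]), i=3, n=5, g=np.array([2.0]), B=B)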

For the special case of the Fibonacci lattice at n=5 points, the recursive procedure has been used for adaptive sampling in computer graphics. Starting with the lattice ℤ², the next refinement level is found by rotating ℤ² by arctan(1/2) and scaling it by 1/√5, as indicated in FIG. 18C, discussed below. The resulting lattice again is a rectangular lattice and the procedure can be continued recursively. At first sight, this construction appears completely unrelated to rank-1 lattices.

FIGS. 18A-C show a series of plots 490-510 illustrating replications by rank-1 lattices. The Voronoi-diagram of a rank-1 lattice induces a stratification, shown in FIG. 18A. All cells A_i are of identical measure, and in fact rank-1 lattices are (M, μ)-uniform. A cell A_i is anchored at the i-th lattice point x_i and is spanned by the basis vectors (b_1, b_2). This can be used for recursive Korobov filters, shown in FIG. 18B, where the points inside a lattice cell are determined by another set of lattice points transformed into that lattice cell. In computer graphics one special case of this principle, shown in FIG. 18C, has been named √5-sampling, because the length of the dashed lines is 1/√5. It is in fact a recursive Korobov filter with points from the Fibonacci lattice at n=5 points.

Good results have been obtained from a distribution ray tracer that used randomly shifted rank-1 lattices with maximized minimum distance. Trajectory splitting was realized using rank-1 lattices with maximized minimum distance, too.

One random vector was used per pixel to shift the lattice points in order to obtain an unbiased estimator and to decorrelate neighboring pixels. The resulting images exhibited minimal noise, while aliasing artifacts are pushed to noise. Compared to previous sampling methods, the convergence was superior, due to the high uniformity of the lattice points.

As lattice points are maximally correlated, this is a good example of correlated sampling in computer graphics. In this context, quasi-Monte Carlo integro-approximation by lattice points can be considered as Korobov filtering.

While applications of quasi-Monte Carlo integration in finance instantly attracted a lot of attention, developments in computer graphics were not that spectacular. Today, however, about half of the rendered images in the movie industry are synthesized using strictly deterministic quasi-Monte Carlo integro-approximation. In 2003, these techniques were awarded a Technical Achievement Award (Oscar) by the American Academy of Motion Picture Arts and Sciences. In contrast to academia, the graphics hardware and software industry recognized the benefits of quasi-Monte Carlo methods early.

Deterministic quasi-Monte Carlo methods have the advantage that they can be parallelized without having to consider correlation as encountered when using pseudo-random number generators. By their deterministic nature the results are exactly reproducible even in a parallel computing environment.

Compared to classical algorithms of computer graphics, the algorithms are smaller and more efficient, since high uniformity is intrinsic to the sample points. A good example is trajectory splitting by rank-1 lattices that have maximized minimum distance.

In computer graphics it is known that maximizing the minimum distance of point sets increases convergence speed. However, algorithms to create such points, such as Lloyd's relaxation method, are expensive. With quasi-Monte Carlo points selected by maximized minimum distance, efficient algorithms are available, and savings of up to 30% of the computation time for images of the same quality as compared to random sampling methods can be observed.

In the setting of computer graphics, quasi-Monte Carlo methods benefit from the piecewise continuity of the integrands in Lb2. Around the lines of discontinuity the methods are observed to perform no worse than random sampling, while in the regions of continuity the better uniformity guarantees faster convergence. The observed convergence rate is between O(n−1) and O(n−1/2). It depends on the ratio of the number of sets in the partition induced by (M, μ)-uniform points to the number of these sets containing discontinuities. Since with an increasing number of dimensions the integrands tend to contain more discontinuities, the largest improvements are observed for smaller dimensions.

Since photorealistic image generation comprises the simulation of light transport by computing functionals of the solution of a Fredholm integral equation of the second kind, the quasi-Monte Carlo methods developed for computer graphics apply to other problems of transport theory as well.

The concept of maximized minimum distance as used in computer graphics nicely fits the concept of (M, μ)-uniformity as used in quasi-Monte Carlo theory. Rank-1 lattices selected by maximized minimum distance ideally fit both requirements and yield superior results in computer graphics.

IV. General Methods

FIGS. 19-22 are a series of flowcharts illustrating general methods according to further aspects of the present invention.

FIG. 19 is a flowchart of a computer-implemented method 600 for generating a pixel value for a pixel in an image displayable via a display device, the pixel value being representative of a point in a scene. The method includes the following steps:

Step 601: Generating a set of sample points, at least one sample point being generated using at least one sample, the at least one sample comprising at least one element of a sequence, and wherein the set of sample points comprises quasi-Monte Carlo points.

Step 602: Evaluating a selected function at one of the sample points to generate a value, the generated value corresponding to the pixel value, the pixel value being usable to generate a display-controlling electronic output.

FIG. 20 shows a flowchart of a computer-implemented method 620 for generating a pixel value for a pixel in an image displayable via a display device, the pixel value being representative of a point in a scene. The method comprises the following steps:

Step 621: Generating a set of sample points, at least one sample point being generated using at least one sample, the at least one sample comprising at least one element of a low-discrepancy sequence, and wherein the generating includes using an adaptive, interleaved sampling scheme based on a deterministically scrambled Halton sequence to yield a deterministic, low-discrepancy set of sample points.

Step 622: Evaluating a selected function at one of the sample points to generate a value, the generated value corresponding to the pixel value, the pixel value being usable to generate a display-controlling electronic output.

FIG. 21 shows a flowchart of a computer-implemented method 640 for generating a pixel value for a pixel in an image displayable via a display device, the pixel value being representative of a point in a scene. The method comprises the following steps:

Step 641: Generating a set of sample points, at least one sample point being generated using at least one sample, the at least one sample comprising at least one element of a sequence, wherein the set of sample points comprises quasi-Monte Carlo points, and wherein the generating includes adaptively sampling by using radical inversion-based points.

Step 642: Evaluating a selected function at one of the sample points to generate a value, the generated value corresponding to the pixel value, the pixel value being usable to generate a display-controlling electronic output.

FIG. 22 shows a flowchart of a computer-implemented method 660 for generating a pixel value for a pixel in an image displayable via a display device, the pixel value being representative of a point in a scene. The method comprises the following steps:

Step 661: Generating a set of sample points, at least one sample point being generated using at least one sample, the at least one sample comprising at least one element of a sequence, wherein the generating includes sampling by using rank-1 lattice points.

Step 662: Evaluating a selected function at one of the sample points to generate a value, the generated value corresponding to the pixel value, the pixel value being usable to generate a display-controlling electronic output.

The foregoing description provides detail of various embodiments, practices and examples of the invention. It will be understood that various additions, variations and modifications may be made to the invention, within the spirit and scope of the present invention, the scope of which is limited solely by the appended claims.

Claims

1. A computer-implemented method of generating a pixel value for a pixel in an image displayable via a display device, the pixel value being representative of a point in a scene, the method comprising:

A. generating a set of sample points, at least one sample point being generated using at least one sample, the at least one sample comprising at least one element of a sequence, and wherein the set of sample points comprises quasi-Monte Carlo points; and
B. evaluating a selected function at one of the sample points to generate a value, the generated value corresponding to the pixel value, the pixel value being usable to generate a display-controlling electronic output.

2. The method of claim 1 wherein the sequence comprises a scrambled Halton sequence.

3. The method of claim 1 wherein, in the sequence, a radical inverse function is replaced with a scrambled radical inverse function to yield a scrambled Halton sequence.

4. The method of claim 3 wherein the replacement of the radical inverse with the scrambled radical inverse is described by the following equation:

$$\Phi_b' : \mathbb{N}_0 \to \mathbb{Q} \cap [0,1), \quad i = \sum_{l=0}^{\infty} a_l(i)\, b^l \;\mapsto\; \sum_{l=0}^{\infty} \pi_b(a_l(i))\, b^{-l-1},$$

wherein πb is a scrambling permutation applied to the digits al(i), and wherein the scrambling permutation is determined by a permutation of the set of integers {0, . . . , b−1}.

5. A computer-implemented method of generating a pixel value for a pixel in an image displayable via a display device, the pixel value being representative of a point in a scene, the method comprising:

A. generating a set of sample points, at least one sample point being generated using at least one sample, the at least one sample comprising at least one element of a low-discrepancy sequence, and wherein the generating includes using an adaptive, interleaved sampling scheme based on a deterministically scrambled Halton sequence to yield a deterministic, low-discrepancy set of sample points; and
B. evaluating a selected function at one of the sample points to generate a value, the generated value corresponding to the pixel value, the pixel value being usable to generate a display-controlling electronic output.

6. A computer-implemented method of generating a pixel value for a pixel in an image displayable via a display device, the pixel value being representative of a point in a scene, the method comprising:

A. generating a set of sample points, at least one sample point being generated using at least one sample, the at least one sample comprising at least one element of a sequence, wherein the set of sample points comprises quasi-Monte Carlo points, and wherein the generating includes adaptively sampling by using radical inversion-based points; and
B. evaluating a selected function at one of the sample points to generate a value, the generated value corresponding to the pixel value, the pixel value being usable to generate a display-controlling electronic output.

7. The method of claim 6 wherein the sequence is a Halton sequence.

8. The method of claim 6 wherein the sequence is a (t, s) sequence.

9. The method of claim 6 wherein the generating includes constructing multi-dimensional, substantially uniform deterministic samples using a Halton sequence.

10. The method of claim 6 wherein the generating includes using sets of points having maximized minimum distance.

11. The method of claim 6 wherein the generating includes using point sets having maximized minimum distance with respect to low dimensional projections.

12. The method of claim 6 wherein the sequence is a scrambled Halton sequence, and wherein the generating further comprises extracting stratified sequences of sample points from the scrambled Halton sequence.

13. The method of claim 6 wherein the sequence is a (t, s) sequence, and wherein the generating further comprises extracting stratified sequences of sample points from the (t, s) sequence.

14. The method of claim 6 wherein the evaluating comprises evaluating a pixel integral, and further comprising applying a tone mapping function within the pixel integral, so as to improve convergence.

15. The method of claim 14 wherein applying a tone mapping function comprises applying a tone-mapping function that bounds the integrands, so as to improve convergence.

16. The method of claim 14 further comprising controlling adaptation by applying image processing operators.

17. The method of claim 14 further comprising controlling adaptation by applying image processing operators to final pixel values rather than to single samples.

18. The method of claim 12 further comprising providing bias elimination by randomization, wherein the randomization comprises any of scrambling, or randomly shifting deterministic points of the scrambled Halton sequence modulo one.

19. The method of claim 7 further comprising providing deterministic anti-aliasing by scaling a first component of a sampling function by a first scaling coefficient, and a second component by a second scaling coefficient, to obtain a stratified sample pattern that can be periodically tiled over an image plane, the stratified sample pattern having a plurality of strata;

identifying each stratum with a given pixel; and
after identifying each stratum with a given pixel, obtaining a per-stratum identification value from the pixel coordinates and generating a Halton sequence specific to a corresponding sample based on the corresponding identification value.

20. The method of claim 7 further comprising providing adaptive anti-aliasing by stratification by the Halton sequence.

21. The method of claim 19 further comprising determining the number of strata by selecting exponents, for the first and second scaling coefficients, large enough so that strata covered by adjacent pixel reconstruction filters do not contain repeated patterns.

22. The method of claim 6 wherein a pixel is deemed refined whenever a refinement criterion is met, and wherein the refinement criterion can include comparing the image gradient against a predefined threshold T.

23. The method of claim 22 further comprising selecting a value, for an exponent for a coefficient to be multiplicatively applied to the threshold T, to enable adaptation to the speed of convergence, wherein the coefficient is the sampling rate.

24. The method of claim 14 wherein the tone mapping comprises compression of a luminance value L prior to averaging, and wherein the luminance value L is compressed in accordance with the following compression equation:

$$R_\alpha : \mathbb{R}_0^+ \to [0,1], \quad L \mapsto \begin{cases} L & L < \alpha \\[4pt] \alpha + (1-\alpha)\,\dfrac{L-\alpha}{1+L-\alpha} & \text{else} \end{cases}$$

wherein α is a coefficient selected to be between 0 and 1, and Rα is a response curve mapping that can be selected by selection of the coefficient α.

25. A computer-implemented method of generating a pixel value for a pixel in an image displayable via a display device, the pixel value being representative of a point in a scene, the method comprising:

A. generating a set of sample points, at least one sample point being generated using at least one sample, the at least one sample comprising at least one element of a sequence, wherein the generating includes sampling by using rank-1 lattice points; and
B. evaluating a selected function at one of the sample points to generate a value, the generated value corresponding to the pixel value, the pixel value being usable to generate a display-controlling electronic output.

26. The method of claim 25 further comprising selecting a rank-1 lattice such that its mutual minimum distance among sample points is maximal.

27. The method of claim 25 wherein the generating includes using lattice sequences.

28. The method of claim 25 wherein the lattices are lattices in Korobov form.

29. The method of claim 25 wherein the lattices are rank-1 lattices or higher rank lattices in Korobov or general form.

30. The method of claim 25 further comprising using lattices with respect to low-dimensional projections of the points.

31. The method of claim 25 further comprising applying anti-aliasing by lattices, including adding to the lattice points a different random shift to generate a randomly shifted lattice, thereby to attenuate aliasing over the pixels.

32. The method of claim 31 further comprising de-randomizing the random shifts per pixel by determining a shift per pixel by elements of a low discrepancy point set or a deterministic point set with maximized minimum distance, with stratification that matches the pixels.

33. The method of claim 32 wherein the stratification is induced by a rank-1 lattice.

34. The method of claim 32 wherein the stratification is induced by the Voronoi diagram of a rank-1 lattice.

35. The method of claim 32 further comprising using recursive Korobov filters wherein the points inside a given lattice cell are determined by another set of lattice points transformed into the given lattice cell.

36. The method of claim 25 further comprising applying trajectory splitting using domain stratification induced by a rank-1 lattice with maximized minimum distance.

37. The method of claim 25 further comprising providing quasi-Monte Carlo integro-approximation by lattice points, and wherein point sets and sequences are selected by maximum minimum distance.

38. In a computer graphics system including a processor, a display device, user input elements, and one or more memory elements, the computer graphics system being operable to generate images displayable via a display device, the images representing a scene and comprising a plurality of pixels, a computer-implemented system for generating a pixel value for a pixel in an image displayable via the display device, the pixel value being representative of a point in a scene, the system comprising:

A. means for generating a set of sample points, at least one sample point being generated using at least one sample, the at least one sample comprising at least one element of a sequence, and wherein the set of sample points comprises quasi-Monte Carlo points; and
B. means, in communication with the means for generating a set of sample points, for evaluating a selected function at one of the sample points to generate a value, the generated value corresponding to the pixel value, the pixel value being usable to generate a display-controlling electronic output.

39. A computer program product for use in a computer graphics system, for enabling the computer graphics system to generate a pixel value for a pixel in an image displayable via a display device, the pixel value being representative of a point in a scene, the computer program product comprising a computer-readable medium having encoded thereon:

A. computer-readable program instructions executable to enable the computer graphics system to generate a set of sample points, at least one sample point being generated using at least one sample, the at least one sample comprising at least one element of a sequence, and wherein the set of sample points comprises quasi-Monte Carlo points; and
B. computer-readable program instructions executable to enable the computer graphics system to evaluate a selected function at one of the sample points to generate a value, the generated value corresponding to the pixel value, the pixel value being usable to generate a display-controlling electronic output.
Patent History
Publication number: 20090153576
Type: Application
Filed: Oct 6, 2008
Publication Date: Jun 18, 2009
Inventor: Alexander Keller (Ulm)
Application Number: 12/246,273
Classifications
Current U.S. Class: Attributes (surface Detail Or Characteristic, Display Attributes) (345/581)
International Classification: G09G 5/00 (20060101);