Systems and Methods for Shape Measurement Using Dual Frequency Fringe Patterns

A method obtains the shape of a target by projecting and recording images of dual frequency fringe patterns. Locations in each projector image plane are encoded into the patterns and projected onto the target while images are recorded. The resulting images show the patterns superimposed onto the target. The images are decoded to recover relative phase values for the patterns' primary and dual frequencies. The relative phases are unwrapped into absolute phases and converted back to projector image plane locations. The relation between camera pixels and decoded projector locations is saved as a correspondence image representing the measured shape of the target. Correspondence images, together with a geometric triangulation method, create a 3D model of the target. Dual frequency fringe patterns have a low frequency embedded into a high frequency sinusoid; both frequencies are recovered in closed form by the decoding method, thus enabling direct phase unwrapping.

Description
CROSS-REFERENCE TO RELATED APPLICATION

This application claims priority to U.S. Provisional Patent Application No. 62/055,835, filed Sep. 26, 2014, which is incorporated herein by reference.

FIELD OF THE TECHNOLOGY

The subject technology relates generally to measuring 3D shapes using structured light patterns, and more particularly, to computing depth values using dual frequency sinusoidal fringe patterns.

BACKGROUND OF THE TECHNOLOGY

Structured light methods are widely used as non-contact 3D scanners. Common applications of this technology are industrial inspection, medical imaging, and cultural heritage preservation. These scanners use one or more cameras to image the scene while it is illuminated by a sequence of known patterns. A typical setup uses one projector and a single camera: the projector projects a fixed pattern sequence while the camera records one image for each projected pattern. The pattern sequence helps to establish correspondences between projector and camera coordinates. Such correspondences, in conjunction with a triangulation method, allow recovery of the scene shape. The pattern set determines many properties of a structured light 3D scanner, such as precision and scanning time.

A general purpose 3D scanner must produce high quality results for a variety of materials and shapes to be of practical use. In particular, it must be robust to global illumination effects and source illumination defocus; otherwise, measurement errors would render it unsuitable for scanning non-Lambertian surfaces. Global illumination is defined as all light contributions measured at a surface point that are not directly received from the primary light source. Common examples are interreflections and subsurface scattering. Illumination defocus is caused by the light source's finite depth of field. It is known that high frequency structured light patterns are robust to such issues.

However, most existing structured light based scanners are not robust to global illumination and defocus effects, and a few that are robust either use pattern sequences of hundreds of images, or fail to provide a closed form decoding algorithm. In both cases, the existing structured light based scanners cannot measure scene shapes as fast as required by many applications.

SUMMARY

In view of the above, a new shape measurement system and method, based on structured light patterns robust to global illumination effects and source illumination defocus, including fast encoding and decoding algorithms, is required.

The subject technology provides a 3D measurement method and system for real world scenes comprising a variety of materials.

The subject technology has a measurement speed which is significantly faster than existing structured light 3D shape measurement techniques without loss of precision.

The subject technology provides a closed form decoding method for the projected structured light patterns which improves on state of the art methods.

The subject technology also allows for simultaneous measurements of multiple objects.

One embodiment of the subject technology is directed to a method for measuring shapes using structured light. The method includes encoding light source coordinates of a scene using dual frequency sinusoidal fringe patterns, modulating the light source with the dual frequency sinusoidal fringe patterns, and recording images of the scene while the scene is being illuminated by the modulated light source. The method further includes extracting the coded coordinates from the recorded images by using a closed form decoding algorithm.

Another embodiment of the subject technology is directed to a system for three-dimensional shape measurement including a first module for encoding light source coordinates using dual frequency sinusoidal fringe patterns, a light source for projecting the fringe patterns onto a scene, a camera for recording images of the scene under this illumination, a second module for extracting the encoded coordinates from the recorded images, and a third module for computing the scene shape using a geometric triangulation method.

Yet another embodiment of the subject technology is directed to a system for three-dimensional shape measurement including at least one projector, at least one camera, and at least one system processor. The system is configured to generate dual frequency sinusoidal fringe pattern sequences, project the generated fringe patterns onto a scene, capture images of the scene illuminated by the dual frequency fringe patterns, and decode the images to provide for three-dimensional shape measurement of the scene. In a preferred embodiment, the system processor includes one or more GPUs (Graphics Processing Units).

Still another embodiment of the subject technology is directed to a system for three-dimensional shape measurement of a single object including at least one projector, at least one camera, at least one system processor, and a turntable. The system is configured to generate dual frequency sinusoidal fringe pattern sequences, project the generated fringe patterns onto the object, and capture images of the object sitting on top of the turntable while it is illuminated by the fringe patterns. The system projects fringe patterns and records images of the object under this illumination at different rotations of the turntable while keeping the object fixed on top of the turntable. The system also decodes all recorded images, computes the object shape from all captured turntable rotations, and generates a 3D model of the object.

Additional aspects and/or advantages will be set forth in part in the description and claims which follow and, in part, will be apparent from the description and claims, or may be learned by practice of the invention. No single embodiment need exhibit each and every object, feature, or advantage, as it is contemplated that different embodiments may have different objects, features, and advantages.

BRIEF DESCRIPTION OF THE DRAWINGS

For a more complete understanding of the invention, reference is made to the following description and accompanying drawings.

FIG. 1A is a schematic diagram illustrating one embodiment of the shape measurement system.

FIG. 1B is a schematic diagram illustrating the system processor of one embodiment of the shape measurement system.

FIG. 2 is a flow chart of a preferred embodiment of the subject technology.

FIG. 3 is a flow chart of the acquisition step of FIG. 2.

FIG. 4 is a plot of acquisition timings in accordance with the subject technology.

FIG. 5 is a flow chart of the decoding step of FIG. 2.

FIG. 6 is a flow chart of the decoding step in an alternative embodiment.

FIG. 7 is an example of index assignment to a square pixel array in accordance with the subject technology.

FIG. 8 is an example of index assignment to a diamond pixel array in accordance with the subject technology.

FIG. 9 is an example of a dual frequency pattern sequence in accordance with the subject technology.

FIG. 10 is an example of an image captured while the scene is illuminated by a dual frequency fringe pattern in accordance with the subject technology.

FIG. 11 shows the relation between a correspondence value and a projector index to illustrate the concept of triangulation in accordance with the subject technology.

FIG. 12 is a flow chart of single pixel decoding in accordance with the subject technology.

FIG. 13 is an example of a correspondence image generated by the subject technology.

FIG. 14 is an example of a 3D model generated by the subject technology.

DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENTS

The subject technology overcomes many of the prior art problems associated with generating 3D models. The advantages, and other features of the technology disclosed herein, will become more readily apparent to those having ordinary skill in the art from the following detailed description of certain preferred embodiments taken in conjunction with the drawings which set forth representative embodiments of the present invention and wherein like reference numerals identify similar structural elements.

In brief overview, the subject technology includes a system that obtains the shape of a target object or scene by projecting and recording images of dual frequency fringe patterns. The system encodes locations in the projector image plane into patterns which are projected onto the target while images are being recorded. The resulting images show the patterns superimposed onto the target. The images are decoded to recover relative phase values for the patterns' primary and dual frequencies. The relative phases are unwrapped into absolute phases and converted back to projector image plane locations. The relation between camera pixels and decoded projector locations is saved as a correspondence image representing the measured shape of the target. Correspondence images together with a geometric triangulation method create a 3D model of the target. Dual frequency fringe patterns have a low frequency embedded into a high frequency sinusoid. Both frequencies are recovered in closed form by the decoding method, thus enabling direct phase unwrapping. Only high frequency fringes are visible in the pattern images, making the result more robust, for example, with respect to source illumination defocus and global illumination effects. Thus, the subject technology is applicable to shape measurement of targets of a variety of materials.

Referring now to FIG. 1A, a schematic diagram illustrates a shape measurement system 100. The shape measurement system 100 includes a system processor 102 connected to a light source 104, such as a projector, and a camera 106. The light source 104 is controlled by the system processor 102 to project fringe patterns onto one or more objects, herein referred to as a scene 108. The camera 106 is controlled by the system processor 102 to record scene images. Both camera 106 and light source 104 are oriented towards the target scene 108 for which the 3D shape is being measured. Preferably, the camera 106, the light source 104, and the scene 108 remain static while the shape measurement is being performed.

Referring now to FIG. 1B, a schematic diagram illustrating the system processor 102 is shown. As illustration, the system processor 102 typically includes a central processing unit 110 including one or more microprocessors in communication with memory 112 such as random access memory (RAM) and a magnetic hard disk drive. An operating system is stored on the memory 112 for execution on the central processing unit 110. A hard disk drive is typically used for storing data, applications, and the like. Although not shown for simplicity, the system processor 102 includes mechanisms and structures for performing I/O operations and other typical functions. It is envisioned that the system processor 102 can utilize multiple servers in cooperation to facilitate greater performance and stability by distributing memory and processing as is well known.

The memory 112 includes several modules for performing the operations of the subject technology. An encoding module 114, an acquisition module 116, and a decoding module 118 all interact with data stored in a dual frequency patterns database 120 and an images database 122.

The flow charts herein illustrate the structure or the logic of the present technology, possibly as embodied in computer program software for execution on a particular device such as the system processor 102 or a modified computer, digital processor or microprocessor. Those skilled in the art will appreciate that the flow charts illustrate the structures of the computer program code elements, including logic circuits on an integrated circuit as the case may be, that function according to the present technology. As such, the present technology may be practiced by a machine component that renders the program code elements in a form that instructs equipment to perform a sequence of function steps corresponding to those shown in the flow charts.

Referring now to FIG. 2, there is illustrated a flowchart 200 depicting a process for measuring the shape of a scene 108. Generally, once the process begins, a set of sinusoidal fringe patterns is first generated by the encoding module 114 of the system processor 102 in an encoding step 202. Second, during an acquisition step 204, the light source 104 is used to project the generated patterns, one by one, onto the scene 108. The camera 106 records an image of the scene 108 for each projected fringe pattern, which is acquired by the acquisition module 116 for storage. Preferably, the pattern projection and image capture times are synchronized in such a way that a single pattern is projected while each image is being captured. No more than one image is captured during each pattern projection time. Finally, all the captured images are processed by the decoding module 118 in a decoding step 206 to generate a digital representation of the scene 108 related to the measured shape.
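
As a minimal sketch of this flow, the three steps can be orchestrated as below. The helper names `encode_patterns` and `decode_images` and the hardware callbacks `project` and `capture` are hypothetical stand-ins, not the patent's reference implementation; they are elaborated by the code sketches later in this description:

```python
def measure_shape(project, capture, T, Nshift, width):
    """Flowchart 200 sketch: encode (202), acquire (204), decode (206)."""
    patterns = encode_patterns(T, Nshift, width)   # encoding step 202
    images = []
    for pattern in patterns:                       # acquisition step 204
        project(pattern)                           # project one pattern...
        images.append(capture())                   # ...and record one image
    return decode_images(images, T, Nshift)        # decoding step 206
```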

Encoding Step 202

The encoding step 202 encodes light source coordinates into dual frequency fringe patterns. In a preferred embodiment, the light source 104 projects 2-dimensional grayscale images. In this case, the light source coordinates are integer index values which identify a line in the light source 104 image plane. There exist different pixel array organizations among commercial projectors. FIG. 7 and FIG. 8 are examples of square and diamond pixel arrays respectively. In the square pixel array case, a projector index 120 identifies a pixel column (shown in bold in FIG. 7). In the diamond pixel array case, a projector index 120 identifies a diagonal line from the top-left corner to the bottom-right corner of the array (shown in bold in FIG. 8). The encoding step 202 generates a sequence of 2-dimensional grayscale images where the value at each pixel encodes the corresponding projector index 120 in the array.

Each projector index 120 is encoded using Equation (1) below, where r is an output vector, o and a are constant offset and amplitude values, S and A are matrices, T and s are vectors, and p is the projector index 120. The length of vector r is equal to the number of images to be generated in the sequence. The first component of r is the encoded pixel value of index p in the first image in the sequence, the second component corresponds to the encoded pixel value in the second image in the sequence, and so forth. Computing a vector r for each projector index 120 in the projector pixel array, and filling the image sequence pixel values using the components of r as described, concludes the encoding step 202. The output of encoding step 202 is the sequence of images generated. The values required to evaluate Equation (1) are explained in detail in the following paragraphs.

r = o\,\mathbf{1} + a \cos\!\left( 2\pi \left( S A T p + s \right) \right)    (1)

where the cosine is applied componentwise and \mathbf{1} denotes a vector of N ones.

Referring now to FIG. 9, a dual frequency pattern sequence 900 is shown. The dual frequency pattern sequence 900 is made of sinusoidal fringe patterns of F primary frequencies and phase shifts of the primary frequencies. A primary pattern frequency is the spatial frequency of the fringes in the pattern image. Each pattern also has an embedded dual frequency which is not visible in the image but is extracted by the pattern decode step 602 (see FIG. 6 and operation 1210 in FIG. 12). The value F and the number of shifts are chosen by the designer; however, F must be greater than one. The designer must also choose F real values {T_1, T_2, . . . , T_F} which are used to compute vector T using Equation (2) below. All T_i values must be greater than one. The length of vector T is equal to the number of primary frequencies.

T = \left[ \frac{1}{T_1}, \frac{1}{T_1 T_2}, \ldots, \frac{1}{T_1 T_2 \cdots T_F} \right]^T \in \mathbb{R}^F    (2)

Matrix A is the mixing matrix given in Equation (3) below. Matrix A has F columns and F rows, with ones in its first column and along its diagonal and zeros elsewhere, so that the product A T yields the pattern frequencies f_i of Equation (16).

A = \begin{bmatrix} 1 & & & \\ 1 & 1 & & \\ \vdots & & \ddots & \\ 1 & & & 1 \end{bmatrix} \in \mathbb{R}^{F \times F}    (3)

The designer must also choose a set {N_1, N_2, . . . , N_F} of frequency shifts. Each integer N_i in the set must be equal to or greater than 2, and the set must satisfy Equation (4) below. A typical selection is to make N_1=3 and N_i=2 for i>1.

N = \sum_{i=1}^{F} N_i \geq 2F + 1    (4)

Vector s is built by stacking together the shifts of each frequency as follows: N_1 shifts of the first primary frequency, N_2 shifts of the second, and so forth. The length of vector s is N. Let s_i be a vector of length N_i containing the shifts of the i-th primary frequency; then, vector s and each s_i are defined as shown in Equation (5) below.

s = \begin{bmatrix} s_1 \\ s_2 \\ \vdots \\ s_F \end{bmatrix} \in \mathbb{R}^N, \qquad s_i = \left[ \frac{0}{\max(N_i, 3)}, \frac{1}{\max(N_i, 3)}, \ldots, \frac{N_i - 1}{\max(N_i, 3)} \right]^T \in \mathbb{R}^{N_i}    (5)

S is a block diagonal matrix matching the shift vector s. Matrix S has F columns and N rows and is given in Equation (6) below.

S = \begin{bmatrix} S_1 & & & \\ & S_2 & & \\ & & \ddots & \\ & & & S_F \end{bmatrix}, \qquad S_i = \left[ 1, 1, \ldots, 1 \right]^T \in \mathbb{R}^{N_i}    (6)

Finally, the offset value o and the amplitude value a are constants proportional to the light source 104 dynamic range. For instance, values o=127 and a=127 would generate pattern images in the range [0, 254].

An example sequence generated using this method is shown in FIG. 9. In the example, the following parameters were used:


F = 3,\ T_1 = 64,\ T_2 = 5,\ T_3 = 5,\ N_1 = 3,\ N_2 = 2,\ N_3 = 2    (7)

Still referring to FIG. 9, patterns 902a-c are the 3 shifts of the first primary frequency, patterns 902d, 902e are the 2 shifts of the second primary frequency, and patterns 902f, 902g are the 2 shifts of the third primary frequency. The 7 patterns 902a-g comprise the whole sequence.
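
The encoding step can be sketched in a few lines of numpy. This is a hedged illustration under the reconstructed Equation (1) above, not the patent's implementation; the projector width and variable names are assumptions:

```python
import numpy as np

# Example parameters from Equation (7): F = 3 primary frequencies.
T = [64.0, 5.0, 5.0]        # T_i values, all greater than one
Nshift = [3, 2, 2]          # N_i shifts per frequency; sum N = 7 >= 2F+1
F, N = len(T), sum(Nshift)
o, a = 127.0, 127.0         # offset and amplitude for an 8-bit light source
width = 1024                # assumed number of projector indices

# Vector T of Equation (2): cumulative inverse products of the T_i.
Tvec = 1.0 / np.cumprod(T)                       # shape (F,)

# Mixing matrix A of Equation (3): ones on the diagonal and in the first
# column, so (A @ Tvec)[i] equals the pattern frequency f_i of Equation (16).
A = np.eye(F)
A[1:, 0] = 1.0

# Shift vector s of Equation (5), stacked frequency by frequency.
s = np.concatenate([np.arange(n) / max(n, 3) for n in Nshift])   # shape (N,)

# Block matrix S of Equation (6): N rows, F columns.
S = np.zeros((N, F))
row = 0
for i, n in enumerate(Nshift):
    S[row:row + n, i] = 1.0
    row += n

# Equation (1), evaluated for every projector index p at once; each row of
# 'patterns' is one pattern's 1-D profile, to be tiled down the projector
# rows to form the 2-D grayscale image.
p = np.arange(width)
phase = np.outer(S @ A @ Tvec, p) + s[:, None]   # shape (N, width)
patterns = o + a * np.cos(2.0 * np.pi * phase)   # values in [0, 254]
```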

Acquisition Step 204

Referring again to FIG. 2, the acquisition step 204 is described in more detail below. Referring additionally to FIG. 3, a flowchart 300 implementing the acquisition step 204 is shown. Once started, at step 302 the light source 104 projects a dual frequency pattern onto the scene 108. The dual frequency patterns are stored in the dual frequency patterns database 120 in the memory 112. At step 304, the system processor 102 commands the camera 106 to capture one image and saves the image in the images database 122 in the memory 112 for later processing. At step 306, the system processor 102 determines whether the projected pattern is the last in the sequence, in which case the acquisition step 204 ends. If not, the system processor 102 jumps to step 302 and advances to the next pattern in the sequence. In other words, the first time step 302 is executed, the first pattern in the sequence is projected; each additional time, the next fringe pattern in the sequence is projected.

Steps 302 and 304 must be executed with precise timings, as shown graphically in FIG. 4, which illustrates a projector and camera timing plot 400. The timing plot 400 includes the projector timing 402 synchronized with the camera timing 404. The moment that the projection of the first pattern in the sequence begins is designated t0. Each pattern is projected for a period of time tp (i.e., projection of the first pattern stops at time t0+tp). Capture by the camera must begin after a small delay td and continue for a time tc (i.e., the first image capture begins at t0+td and finishes at t0+td+tc). Preferably, there is a delay td also between the projection end of a pattern and the projection beginning of the next one (i.e., projection of the second pattern begins at t1=t0+tp+td). The projection start of pattern i ∈ [1, N] is computed as ti=t0+(i−1)*(tp+td); image i capture begins at ti+td and ends at ti+td+tc; projection of pattern i finishes at ti+tp.
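
These timings can be collected in a small helper; a sketch only, with illustrative numeric values:

```python
# Illustrative acquisition timings (seconds); a valid schedule must satisfy
# td + tc <= tp so capture of image i ends before projection of pattern i.
t0, tp, td, tc = 0.0, 0.100, 0.005, 0.080

def schedule(i):
    """Timings for pattern i in [1, N]:
    (projection start, capture start, capture end, projection end)."""
    ti = t0 + (i - 1) * (tp + td)
    return ti, ti + td, ti + td + tc, ti + tp
```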

Referring now to FIG. 10, an exemplary image 1000, captured by acquisition step 204 while the scene 108 was illuminated by dual frequency fringe pattern 902a, is shown. Such images, one for each of the dual frequency fringe patterns 902a-g, are saved in the images database 122.

Decoding Step 206

Referring now to FIG. 5, a flowchart 500 illustrating the details of the decoding step 206 is shown. During the decoding step 206, the system processor 102 processes a set of captured images at step 502 (e.g., pattern decoding), creates a correspondence image at step 504, and creates a 3D model at step 508. In a preferred embodiment, the decoding step 206 includes a pattern decode step 502 and a triangulation step 506.

Referring now to FIG. 6, another embodiment implements the decoding step 206 by performing only a pattern decode step 602 and a correspondence image step 604.

Referring again to FIG. 5, the correspondence image created at step 504 is a matrix with the same number of columns and rows as the input images. Referring additionally to FIG. 11, an example 1100 of triangulation in accordance with the subject technology is shown. In FIG. 11, the concept of triangulation is shown by the relationship between a correspondence value and a projector index.

Each location in the matrix, called a pixel, contains a correspondence value 1102. As shown in FIG. 11, the example correspondence value 1102 is five. The decoding step 206 creates a correspondence image at step 504 by setting each correspondence value equal to the projector index 120 (see FIGS. 7, 8 and 11), which is the index assigned to the projector pixel 1106 which illuminated the point 1104 in the scene 108 imaged by the camera pixel 1102 in the same location as the correspondence value.

In other words, with respect to the example in FIG. 11, a scene point 1104 is illuminated by a projector pixel 1106 with a projector index 120 equal to 5. The same scene point 1104 is being imaged by a camera pixel 1102 in column i and row j. Therefore, the correspondence value 1102 at column i and row j in a correspondence image 1108 of the camera 106 will be set to a value equal to 5. Some pixels in the correspondence image 1108 cannot be assigned a valid projector index 120, either because the decoding step 206 cannot reliably identify a corresponding projector pixel, or because the point imaged at that location is not illuminated by the light source 104. In both of these cases, the correspondence value is set to ‘unknown’.
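
As a concrete illustration, a correspondence image can be held as a floating point array in which ‘unknown’ is represented by NaN. The array size and index values here are hypothetical:

```python
import numpy as np

rows, cols = 480, 640                   # assumed camera resolution
corr = np.full((rows, cols), np.nan)    # NaN stands in for 'unknown'

# The scene point imaged by camera pixel (row j, column i) was illuminated
# by the projector line with index 5, as in the FIG. 11 example.
j, i = 100, 200
corr[j, i] = 5.0
```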

The pattern decode step 502 computes a correspondence value for each pixel in the correspondence image 1108 independently. Referring additionally to FIG. 12, a flowchart 1200 detailing the pattern decode step 502 is illustrated.

Referring to FIG. 12, step 1202 computes a relative phase value of the primary frequencies, called the raw phase. Subsequently, step 1210 computes a relative phase value of the dual frequencies, called the dual phase. Finally, step 1212 calculates an absolute phase value. The absolute phase value maps to a projector index.

Referring in more detail to FIG. 12, exemplary logic for generation of each correspondence is shown. The logic of FIG. 12 is executed in parallel by the pattern decode step 502 for each location in the correspondence image of step 504.

At step 1202 the raw phase value for the primary frequencies is computed by solving the linear system in Equation (8) below, where U and R are vectors and M is a fixed matrix.

U = \operatorname*{argmin}_{U} \left\| R - M U \right\|    (8)

Vector R is called the radiance vector, a vector built from the pixel values of the captured image set. The length of R is N, the number of images in the set. The first component of the radiance vector has the pixel value at the pixel location being decoded in the first image of the sequence. The second component has the pixel value at the same location in the second image of the sequence, and so forth. The decoding matrix M is shown in Equations (9) and (10) below. Values F and N_i correspond to those used at the encoding step 202. Matrix M has 2F+1 columns and N rows.

M = \begin{bmatrix} \mathbf{1} & M_1 & & \\ \mathbf{1} & & M_2 & \\ \vdots & & & \ddots \\ \mathbf{1} & & & M_F \end{bmatrix}    (9)

M_i = \begin{bmatrix} \cos\left( 2\pi \frac{0}{\max(N_i,3)} \right) & -\sin\left( 2\pi \frac{0}{\max(N_i,3)} \right) \\ \cos\left( 2\pi \frac{1}{\max(N_i,3)} \right) & -\sin\left( 2\pi \frac{1}{\max(N_i,3)} \right) \\ \vdots & \vdots \\ \cos\left( 2\pi \frac{N_i-1}{\max(N_i,3)} \right) & -\sin\left( 2\pi \frac{N_i-1}{\max(N_i,3)} \right) \end{bmatrix}    (10)

At step 1202, the raw phases ω_i corresponding to the primary frequencies are computed from vector U as in Equation (11) below. The notation U(n) means the n-th component of U.

\omega_i = \arctan \frac{U(2i+2)}{U(2i+1)}, \qquad i: 1 \ldots F    (11)

At step 1204, the system processor 102 computes an amplitude value a_i for each primary frequency from vector U using Equation (12) below. At step 1206, each amplitude value a_i is compared with a threshold value T_Amp. If any of the amplitude values a_i is below T_Amp, the process proceeds to step 1208. At step 1208, the decoded value is deemed unreliable, the correspondence value is set to ‘unknown’ for the current pixel, and the decoding process for this location finishes. The threshold value T_Amp is set by the designer.


a_i = \sqrt{U(2i+1)^2 + U(2i+2)^2}, \qquad i: 1 \ldots F    (12)
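
Steps 1202 through 1208 admit a compact least-squares sketch. This follows the reconstructed Equations (8) through (12) above; the function name, the use of arctan2 to resolve the full quadrant, and the default threshold are assumptions:

```python
import numpy as np

def decode_pixel(R, Nshift, t_amp=5.0):
    """Per-pixel closed form decode: Equations (8)-(12).

    R is the radiance vector at one camera pixel (one entry per captured
    image); t_amp plays the role of the threshold T_Amp. Returns the raw
    phases and amplitudes, or None for an 'unknown' pixel.
    """
    F = len(Nshift)
    # Decoding matrix M of Equations (9)-(10): a column of ones, then one
    # (cos, -sin) column pair per primary frequency, block by block.
    M = np.zeros((sum(Nshift), 2 * F + 1))
    r = 0
    for i, n in enumerate(Nshift):
        for k in range(n):
            ang = 2.0 * np.pi * k / max(n, 3)
            M[r, 0] = 1.0
            M[r, 2 * i + 1] = np.cos(ang)
            M[r, 2 * i + 2] = -np.sin(ang)
            r += 1
    U = np.linalg.lstsq(M, np.asarray(R, float), rcond=None)[0]   # Eq. (8)
    idx = np.arange(F)
    cos_c, sin_c = U[2 * idx + 1], U[2 * idx + 2]     # U(2i+1), U(2i+2)
    amp = np.hypot(cos_c, sin_c)                      # Equation (12)
    if np.any(amp < t_amp):
        return None                                   # step 1208: 'unknown'
    omega = np.arctan2(sin_c, cos_c) % (2.0 * np.pi)  # Equation (11)
    return omega, amp
```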

From step 1206, for the pixel locations where all amplitude values a_i are above T_Amp, decoding continues to step 1210 by computing relative phase values ω̃_i of the embedded frequencies using Equation (13) below.


\tilde{\omega}_i = \left( \omega_i - \omega_1 \right) \bmod 2\pi, \qquad i: 1 \ldots F    (13)

Since f_i = f_1 + F_i by Equations (15) and (16), subtracting the raw phase of the first primary frequency from the raw phase of the i-th primary frequency leaves the relative phase of the embedded frequency F_i, which is how the dual frequencies are recovered in closed form.

The process of calculating an absolute phase value for each relative phase value is called ‘phase unwrapping’. A relative phase value is the phase value ‘relative’ to the beginning of the current sine period, that is, a value in [0, 2π). An absolute phase value is the phase value measured from the origin of the signal. At this point, the set of relative phase values {ω_i} ∪ {ω̃_i} may be unwrapped using a generic phase unwrapping algorithm.

At step 1212, computation of the absolute phase unwraps both the raw phases and the dual phases as follows: the raw phases ω_i and the dual phases ω̃_i are put together in a single set and sorted by frequency, renaming the phase corresponding to the lowest frequency ν_0 and continuing in increasing order up to ν_{2F−1}, the phase corresponding to the highest frequency. Equation (14) is applied to obtain the absolute phase values p_i which correspond to the projector indices.

k_0 = 0, \qquad k_i = \left\lfloor \frac{t_i\, p_{i-1} - \nu_i}{2\pi} \right\rfloor \ \text{if } i > 0, \qquad p_i = \frac{2 k_i \pi + \nu_i}{t_i}    (14)

In Equation (14), the operator ⌊·⌋ takes the integer part of the argument, and the values t_i correspond to the frequencies of the sorted phases. The values of the embedded frequencies F_i and the values of the primary frequencies f_i are given in Equations (15) and (16) respectively, where f_i corresponds to ω_i and F_i corresponds to ω̃_i. The value t_i corresponds to the frequency of the relative phase that was renamed to ν_i.

F_i = \begin{cases} 0 & \text{if } i = 1 \\ \frac{1}{T_1 T_2 \cdots T_i} & \text{if } i > 1 \end{cases}    (15) \qquad f_i = \frac{1}{T_1} + F_i    (16)

The pattern decode step 502 ends by assigning a single projector index 120 (see FIGS. 7 and 8) to the correspondence value 1102 (see FIG. 11) at the current location in the correspondence image of step 504. The values p_i unwrapped above are already projector indices; if another unwrapping algorithm is used, the absolute phase values must be converted to projector indices by dividing them by the corresponding frequency value t_i. Two possible ways of getting a single correspondence value from the multiple projector indices are: use the mean \bar{p} of the indices corresponding to the primary frequencies as in Equation (17), or set the correspondence to the index of the highest frequency as in Equation (18). In FIG. 12, step 1214 sets the correspondence to the index of the highest frequency.

\bar{p} = \frac{1}{F} \sum_{i=F}^{2F-1} p_i    (17) \qquad p = p_{2F-1}    (18)
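
Steps 1210 through 1214 can be sketched as follows, under the reconstruction of Equations (13) through (16) above. The radians-per-index convention for t_i and the handling of the zero embedded frequency F_1 = 0 are assumptions, and the single period of the lowest nonzero frequency is assumed to span the whole projector index range:

```python
import numpy as np

def unwrap_pixel(omega, T):
    """Direct phase unwrapping: Equations (13)-(16) and (14).

    omega: raw phases of the F primary frequencies at one pixel, in [0, 2*pi).
    T:     the designer-chosen values {T_1, ..., T_F}.
    Returns the decoded projector index, per Equation (18).
    """
    omega = np.asarray(omega, float)
    omega_dual = (omega - omega[0]) % (2.0 * np.pi)   # Equation (13)
    # Embedded and primary frequencies, Equations (15) and (16).
    F_emb = np.concatenate(([0.0], 1.0 / np.cumprod(T)[1:]))
    f = 1.0 / T[0] + F_emb
    # Pool the 2F phases, sorted from lowest to highest frequency; here the
    # t_i are expressed in radians per projector index.
    nu = np.concatenate((omega_dual, omega))
    t = 2.0 * np.pi * np.concatenate((F_emb, f))
    order = np.argsort(t)
    nu, t = nu[order], t[order]
    # F_1 = 0 carries no position information; start the recursion at the
    # lowest nonzero frequency with k = 0.
    keep = t > 0.0
    nu, t = nu[keep], t[keep]
    p = nu[0] / t[0]
    for i in range(1, len(nu)):                       # Equation (14)
        k = np.floor((t[i] * p - nu[i]) / (2.0 * np.pi))
        p = (2.0 * np.pi * k + nu[i]) / t[i]
    return p                                          # Equation (18)
```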

The pattern decode step 502 of FIG. 5 stops once all pixels in the camera images have been decoded either as projector index values or as ‘unknown’. Note that even when projector indices are integer values, correspondence values are usually not integers because the scene point imaged by a camera pixel could lie anywhere between the discretized projector lines encoded by the indices.

Referring back to FIG. 2, the decoding step 206 continues by using the correspondence image to triangulate points at step 506 of FIG. 5 and generates a 3D Model at step 508 of FIG. 5. Step 506 calculates the intersection of a camera ray and a projector light plane. The camera ray begins at the origin of the camera coordinate system (as shown in FIG. 11) and passes through the current camera pixel center. The projector light plane is the plane that contains the projector line encoded by the projector index 120 and the origin of the projector coordinate system (as shown in FIG. 11).

The camera ray extends in the direction of the scene 108 and intersects the indicated plane exactly on the scene point being imaged by the current camera pixel location, as can be seen in FIG. 11. The sought camera ray coincides with the dashed line of the camera light path but it has opposite direction. The projector light plane contains the dashed line representing the projector light path. The intersection 1104 between the projector and camera light paths is on the scene 108. Once the intersection 1104 is calculated, the result is a 3D point which becomes part of the 3D Model of step 508 in FIG. 5. Step 506 performs this intersection for each correspondence index value 1102 with a value different from ‘unknown’. After processing all pixel locations the 3D Model of step 508 is complete and the decoding step 206 ends.
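
A sketch of the ray-plane intersection of step 506; calibrated camera and projector geometry (the camera center, the per-pixel ray direction, and the plane of each projector index) are assumed to be available in a common world coordinate system:

```python
import numpy as np

def triangulate(q, v, n, d):
    """Step 506 sketch: intersect the camera ray x = q + t*v with the
    projector light plane {x : dot(n, x) + d = 0}.

    q: camera center; v: ray direction through the current camera pixel;
    (n, d): light plane of the decoded projector index. Returns the 3D
    scene point, or None when the ray is (nearly) parallel to the plane.
    """
    denom = float(np.dot(n, v))
    if abs(denom) < 1e-12:
        return None
    t = -(float(np.dot(n, q)) + d) / denom
    return np.asarray(q, float) + t * np.asarray(v, float)
```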

Another embodiment of the subject technology assigns two sets of projector indices, one for rows and one for columns. Each set is encoded, acquired, and decoded as in the preferred embodiment but independently of each other, generating two correspondence images at step 504. At this time, 3D points are generated by computing the ‘approximate intersection’ of the camera ray and the ray defined as the intersection of the two light planes defined by the two correspondences assigned to each camera pixel location. The approximate intersection is defined as the point which minimizes the sum of the square distances to both rays.
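
The approximate intersection of two rays has a closed form. A sketch: projecting onto the plane orthogonal to a ray's direction gives that ray's point-to-line distance, so the minimizer of the summed squared distances solves a small linear system:

```python
import numpy as np

def approximate_intersection(q1, v1, q2, v2):
    """Point minimizing the sum of squared distances to two rays (q, v).

    With P = I - v v^T (v normalized), the squared distance from x to the
    line through q with direction v is |P (x - q)|^2, so the minimizer
    solves (P1 + P2) x = P1 q1 + P2 q2.
    """
    def ortho(v):
        v = np.asarray(v, float)
        v = v / np.linalg.norm(v)
        return np.eye(3) - np.outer(v, v)
    P1, P2 = ortho(v1), ortho(v2)
    b = P1 @ np.asarray(q1, float) + P2 @ np.asarray(q2, float)
    # P1 + P2 is singular only for parallel rays; callers may guard for it.
    return np.linalg.solve(P1 + P2, b)
```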

FIG. 13 is a sample correspondence image 1300 and FIG. 14 shows a sample 3D Model 1400, both generated at the decoding step 206 of FIG. 2.

It will be appreciated by those of ordinary skill in the pertinent art that the functions of several elements may, in alternative embodiments, be carried out by fewer elements, or a single element. Similarly, in some embodiments, any functional element may perform fewer, or different, operations than those described with respect to the illustrated embodiment. Also, functional elements (e.g., modules, databases, interfaces, hardware, computers, servers and the like) shown as distinct for purposes of illustration may be incorporated within other functional elements in a particular implementation.

All patents, patent applications and other references disclosed herein are hereby expressly incorporated in their entireties by reference. While the subject technology has been described with respect to preferred embodiments, those skilled in the art will readily appreciate that various changes and/or modifications can be made to the subject technology without departing from the spirit or scope of the invention as defined by the appended claims.

Claims

1. A method for measuring shapes comprising the steps of:

encoding light source coordinates of a scene using dual frequency sinusoidal fringe patterns;
modulating the light source with the dual frequency sinusoidal fringe patterns; and
recording images of the scene while the scene is being illuminated by the modulated light source.

2. A method as recited in claim 1, further comprising the step of extracting coded coordinates from the recorded images by using a closed form decoding algorithm.

3. A system for three-dimensional shape measurement comprising:

a light source for projecting fringe patterns onto a scene;
a camera for recording images of the scene under illumination; and
a system for processing the recorded images including: a first module for encoding light source coordinates using dual frequency sinusoidal fringe patterns; a second module for extracting the encoded coordinates from the recorded images; and a third module for computing the scene shape using a geometric triangulation method.

4. A system as recited in claim 3, further comprising a turntable, wherein the scene is an object on top of the turntable, the light source projects fringe patterns and the camera records images of the object at different rotations of the turntable while keeping the object fixed on top of the turntable,

the second module decodes all recorded images, and
the third module computes a shape of the object from all captured turntable rotations and generates a 3D model of the object.

5. A method for obtaining a shape of a target object comprising the steps of:

projecting dual frequency fringe patterns on the target object;
recording images of the illuminated target object;
encoding locations in each projector image plane into the projected dual frequency fringe patterns while images are recorded;
decoding the images to recover relative phase values for the projected dual frequency fringe patterns' primary and dual frequencies;
unwrapping the relative phase values into absolute phases;
converting the absolute phases back to projector image plane locations; and
creating a correspondence image based on a relation between camera pixels and decoded projector locations, wherein the correspondence image represents a measured shape of the target object.

6. A method as recited in claim 5, further comprising the step of creating a 3D model of the target object based on the correspondence images together with a geometric triangulation of the target object.

7. A method as recited in claim 5, further comprising the step of using direct phase unwrapping.

Patent History
Publication number: 20160094830
Type: Application
Filed: Sep 25, 2015
Publication Date: Mar 31, 2016
Inventors: Gabriel Taubin (Providence, RI), Daniel Moreno (Norwich, CT)
Application Number: 14/866,245
Classifications
International Classification: H04N 13/00 (20060101); G06T 7/00 (20060101); G06T 17/00 (20060101); H04N 13/02 (20060101);