Coded visual markers for tracking and camera calibration in mobile computing systems

A method for determining a pose of a user is provided, including the steps of: capturing a video image sequence of an environment including at least one coded marker; detecting whether the coded marker is present in the video images; if the marker is present, extracting feature correspondences of the coded marker; determining a code of the coded marker using the feature correspondences; and comparing the determined code with a database of predetermined codes to determine the pose of the user. According to one embodiment, the coded marker includes four color blocks arranged in a square formation, and determining the code of the marker includes determining a color of each of the four blocks. According to another embodiment, the marker includes a coding matrix, and the code of the marker is determined by which numbered squares of the coding matrix are covered by a circle.

Description
PRIORITY

[0001] This application claims priority to an application entitled “DESIGN CODED VISUAL MARKERS FOR TRACKING AND CAMERA CALIBRATION IN MOBILE COMPUTING SYSTEMS” filed in the United States Patent and Trademark Office on Oct. 4, 2001 and assigned Serial No. 60/326,960, the contents of which are hereby incorporated by reference.

BACKGROUND OF THE INVENTION

[0002] 1. Field of the Invention

[0003] The present invention relates generally to computer vision systems, and more particularly, to a system and method for tracking and camera calibration in a mobile computing system using coded visual markers.

[0004] 2. Description of the Related Art

[0005] In certain real-time mobile computing applications, it is crucial to precisely track the motion and obtain the pose (i.e., position and orientation) of a user in real time, a task also known as localization. Several methods are currently available to carry out the localization. For example, in augmented reality (AR) applications, magnetic and/or inertia trackers have been employed. However, the performance of magnetic and inertia trackers is often limited by their own characteristics: magnetic trackers are affected by interference from nearby metal structures, and currently available inertia trackers can only be used to obtain orientation information and are usually not very accurate in tracking very slow rotations. Additionally, infrared trackers have been employed, but these devices usually require the whole working area or environment to be densely covered with infrared sources or reflectors, making them unsuitable for very large working environments.

[0006] Vision-based tracking methods have been used with limited success in many applications for motion tracking and camera calibration. Ideally, one should be able to track motion or locate an object of interest based only on the natural features of captured (i.e., viewed) scenes of the environment. Despite the dramatic progress of computer hardware in the last decade and a large effort to develop adequate tracking methods, no versatile vision-based tracking method is yet available. Therefore, in controlled environments, such as large industrial sites, marker-based tracking is the method of choice.

[0007] Current developments of computer vision-based applications are making use of the latest advances in computer hardware and information technology (IT). One such development is to combine mobile computing and augmented reality technology to develop systems for localization and navigation guidance, data navigation, maintenance assistance, and system reconstruction in an industrial site. In these applications, a user is equipped with a mobile computer. In order to guide the user to navigate through the complex industrial site, a camera is attached to the mobile computer to track and locate the user in real-time via a marker-based tracking system. The localization information then can be used for database access and to produce immersive AR views.

[0008] To be used for real-time motion tracking and camera calibration in the applications described above, the markers of a marker-based tracking system need to have the following characteristics: (1) sufficient number of codes available for identification of distinct markers; (2) methods available for marker detection and decoding in real-time; and (3) robust detection and decoding under varying illumination conditions, which ensures the applicability of the marker in various environments.

SUMMARY OF THE INVENTION

[0009] According to one aspect of the present invention, a method for determining a pose of a user is provided including the steps of capturing a video image sequence of an environment including at least one coded marker; detecting if the at least one coded marker is present in the video images; if the at least one marker is present, extracting feature correspondences of the at least one coded marker; determining a code of the at least one coded marker using the feature correspondences; and comparing the determined code with a database of predetermined codes to determine the pose of the user.

[0010] According to another aspect of the present invention, the at least one coded marker includes four color blocks arranged in a square formation and the determining a code of the at least one marker further includes determining a color of each of the four blocks.

[0011] According to a further aspect of the present invention, the detecting step further includes applying a watershed transformation to the at least one coded marker to extract a plurality of closed-edge strings that form a contour of the at least one marker.

[0012] According to another aspect of the present invention, the at least one marker includes a coding matrix including a plurality of columns and rows with a numbered square at intersections of the columns and rows, the coding matrix being surrounded by a rectangular frame and a code of the at least one marker being determined by the numbered squares being covered by a circle. The coding matrix includes m columns and n rows, where m and n are whole numbers, resulting in $3 \times 2^{m \times n - 4}$ codes.

[0013] According to a further aspect of the present invention, a system is provided including a plurality of coded markers located throughout an environment, each of the plurality of coded markers relating to a location in the environment, codes of the plurality of coded markers being stored in a database; a camera for capturing a video image sequence of the environment, the camera coupled to a processor; and the processor adapted for detecting if at least one coded marker is present in the video images, if the at least one marker is present, extracting feature correspondences of the at least one coded marker, determining a code of the at least one coded marker using the feature correspondences, and comparing the determined code with the database to determine the pose of the user. In one embodiment, the at least one coded marker includes four color blocks arranged in a square formation and a code of the at least one marker being determined by a color sequence of the blocks. In another embodiment, the at least one marker includes a coding matrix including a plurality of columns and rows with a numbered square at intersections of the columns and rows, the coding matrix being surrounded by a rectangular frame and a code of the at least one marker being determined by the numbered squares being covered by a circle.

[0014] In a further aspect, the camera and processor are mobile devices.

[0015] In another aspect, the system further includes a display device, wherein the display device will provide to the user information relative to the location of the at least one marker. Additionally, based on a first location of the at least one marker, the display device will provide to the user information to direct the user to a second location.

[0016] In yet another aspect, the system further includes an external database of information relative to a plurality of items located throughout the environment, wherein when the user is in close proximity to at least one of the plurality of items, the processor provides the user with access to the external database. Furthermore, the system includes a display device for displaying information of the external database to the user and for displaying virtual objects overlaid on the at least one item.

[0017] In a further aspect, the system includes a head-mounted display for overlaying information of the at least one item in a view of the user.

BRIEF DESCRIPTION OF THE DRAWINGS

[0018] The above and other objects, features, and advantages of the present invention will become more apparent in light of the following detailed description when taken in conjunction with the accompanying drawings in which:

[0019] FIG. 1 is a block diagram of a system for tracking a user according to an embodiment of the present invention;

[0020] FIGS. 2(A) through 2(C) are several views of color coded visual markers used for tracking a user in an environment according to an embodiment of the present invention;

[0021] FIG. 3 is a flowchart illustrating a method for detecting and decoding the color coded visual markers of FIG. 2;

[0022] FIG. 4 is an image of a marker showing feature correspondences and lines projected onto the image to determine edges of the four blocks of the color coded visual marker;

[0023] FIGS. 5(A) through 5(C) are several views of black/white matrix coded visual markers used for tracking a user in an environment according to another embodiment of the present invention;

[0024] FIG. 6 is a flowchart illustrating a method for detecting and decoding the black/white matrix coded visual markers of FIG. 5;

[0025] FIG. 7 is an image of a marker depicting the method used to extract a corner point of the matrix coded visual marker according to the method illustrated in FIG. 6; and

[0026] FIG. 8 is a diagram illustrating the interpolation of marker points of a black/white matrix coded visual marker in accordance with the present invention.

DETAILED DESCRIPTION OF PREFERRED EMBODIMENTS

[0027] Preferred embodiments of the present invention will be described herein below with reference to the accompanying drawings. In the following description, well-known functions or constructions are not described in detail to avoid obscuring the invention in unnecessary detail.

[0028] The present invention is directed to coded visual markers for tracking and camera calibration in mobile computing systems, systems employing the coded visual markers and methods for detecting and decoding the markers when in use. According to one embodiment of the present invention, color coded visual markers are employed in systems for tracking a user and assisting the user in navigating a site or interacting with a piece of equipment. In another embodiment, black and white matrix coded visual markers are utilized.

[0029] Generally, the marker-based tracking system of the present invention includes a plurality of markers placed throughout a workspace or environment of a user. Each marker is associated with a code or label, and the code is associated with either a location of the marker or an item the marker is attached to. The user directs a camera, coupled to a processor, at one or more of the markers. The camera captures an image of the marker or markers, and the processor determines the codes of the markers. It then uses the codes to extract information about the location of the markers or about items in close proximity to the markers.

[0030] It is to be understood that the present invention may be implemented in various forms of hardware, software, firmware, special purpose processors, or a combination thereof. In one embodiment, the present invention may be implemented in software as an application program tangibly embodied on a program storage device. The application program may be uploaded to, and executed by, a machine comprising any suitable architecture such as that shown in FIG. 1. Preferably, the machine 100 is implemented on a computer platform having hardware such as one or more central processing units (CPU) 102, a random access memory (RAM) 104, a read only memory (ROM) 106, input/output (I/O) interface(s) such as keyboard 108, cursor control device (e.g., a mouse) 110, display device 112 and camera 116 for capturing video images. The computer platform also includes an operating system and micro instruction code. The various processes and functions described herein may either be part of the micro instruction code or part of the application program (or a combination thereof) which is executed via the operating system. In addition, various other peripheral devices may be connected to the computer platform such as an additional data storage device 114 and a printing device. Preferably, the machine 100 is embodied in a mobile device such as a laptop computer, notebook computer, personal digital assistant (PDA), etc.

[0031] It is to be further understood that, because some of the constituent system components and method steps depicted in the accompanying figures may be implemented in software, the actual connections between the system components (or the process steps) may differ depending upon the manner in which the present invention is programmed. Given the teachings of the present invention provided herein, one of ordinary skill in the related art will be able to contemplate these and similar implementations or configurations of the present invention.

[0032] FIGS. 2(A) through 2(C) are several views of color coded visual markers used for tracking a user in an environment according to an embodiment of the present invention. The color based markers work well for relatively simple cases under friendly illumination conditions. Each of these markers 202, 204, 206 includes four square blocks, each either black or colored. To simplify marker detection and color classification, the color of the color blocks is limited to one of the three primary colors (i.e., red, green, and blue).

[0033] Referring to FIG. 2(A), the four blocks 208, 210, 212, 214 are centered at the four corner points of an invisible square 216, shown as a dashed line in FIG. 2(A). To determine the orientation of a marker, at least one and at most three of the four blocks of a marker carry a white patch 218. If there are two white-patched blocks in one marker, the two blocks are preferably next to each other (not diagonal) to ensure that there is no confusion in determining the orientation.

[0034] The marker 202 is coded by the colors of the four blocks 208, 210, 212, 214 and by the number of white-patched blocks. For marker coding, the color coded visual markers use ‘r’ for red, ‘g’ for green, ‘b’ for blue, and ‘d’ for black. The code is read clockwise from the first white-patched block 208, which is the block at the upper-left, with one letter for the color of each block. (Note that the lower-left block is preferably not white patched, and at most a marker will include three white-patched blocks.) The number at the end of the code is the number of white-patched blocks of the corresponding marker. For example, the marker shown in FIG. 2(A) is coded as drdr1 (block 208 is black, block 210 is red, block 212 is black, and block 214 is red), the marker shown in FIG. 2(B) is coded as rgbd2 (block 220 is red, block 222 is green, block 224 is blue, and block 226 is black), and the marker shown in FIG. 2(C) is coded as dddd3 (blocks 228, 230, 232, and 234 are all black). Therefore, a color coded marker system according to an embodiment of the present invention can have $3 \times 4^4 = 768$ different color markers.
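As a minimal sketch of this coding convention (assuming the block colors have already been classified; the function name is illustrative, not from the patent):

```python
# Illustrative sketch of the color-marker coding convention described above.
# Inputs are assumed already classified: the colors of the four blocks in
# clockwise order from the first white-patched (upper-left) block, plus the
# count of white-patched blocks.
COLOR_LETTER = {"red": "r", "green": "g", "blue": "b", "black": "d"}

def color_marker_code(clockwise_colors, num_white_patches):
    """clockwise_colors: 4 color names, clockwise from the first white-patched block."""
    if not 1 <= num_white_patches <= 3:
        raise ValueError("a marker carries one to three white patches")
    letters = "".join(COLOR_LETTER[c] for c in clockwise_colors)
    return letters + str(num_white_patches)

# The marker of FIG. 2(A): black, red, black, red with one white patch -> 'drdr1'
assert color_marker_code(["black", "red", "black", "red"], 1) == "drdr1"
```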

[0035] With reference to FIG. 3, a method for detecting and decoding a color coded visual marker of an embodiment of the present invention will be described.

[0036] Initially, a user equipped with a mobile computer having a camera coupled to the computer will enter a workspace or environment that has the color coded markers placed throughout. A video sequence of the environment including at least one marker is captured (step 302) to acquire an image of the marker. A watershed transformation is applied to the image to extract closed-edge strings that form the contours of the marker (step 304). Since the markers use only the three primary colors for coding, the watershed transformation need only be applied to the two RGB color components with the lower intensities to extract the color blocks.

[0037] In step 306, strings shorter than a predetermined minimum length for representing a square block in a marker are eliminated. Then, the closed-edge strings are grouped based on the similarity of their lengths. The four strings that have the least maximum mutual distance are put in one group (step 308). The maximum mutual distance among a group of N closed-edge strings is defined as follows:

$$d_{\max} := \max\left(S(d_{i,j})\right) \qquad (1)$$

[0038] where $1 \le i \le N$, $1 \le j \le N$, and $i \ne j$; $d_{i,j}$ is the distance between the weight center of string $i$ and the weight center of string $j$; and $S$ represents the set of $d_{i,j}$ for all eligible $i$ and $j$. The four weight centers of the strings in each group are used as correspondences of the centers of the four blocks of a marker to compute a first estimation of a homography from the marker model plane to the image plane (step 310). The homography is used to project eight straight lines that form the four blocks of the marker, as shown in FIG. 4 (step 312). These back-projected lines are then used as an initialization to fit straight lines on the image plane. The cross points of these straight lines are taken as the first estimation of the correspondences of the corner points of the marker.
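As a sketch of the grouping of equation (1) (helper names are assumptions, not code from the patent), the group of four strings with the least maximum mutual distance can be found by brute force, which is adequate for the small candidate counts involved:

```python
# Sketch: pick the four closed-edge strings whose weight centers have the
# least maximum mutual distance, per equation (1).
from itertools import combinations
import numpy as np

def weight_center(string_pts):
    """Centroid ("weight center") of an (N, 2) array of edge points."""
    return np.asarray(string_pts, dtype=float).mean(axis=0)

def best_group_of_four(strings):
    centers = [weight_center(s) for s in strings]
    best, best_dmax = None, np.inf
    for group in combinations(range(len(strings)), 4):
        dmax = max(np.linalg.norm(centers[i] - centers[j])
                   for i, j in combinations(group, 2))
        if dmax < best_dmax:
            best, best_dmax = group, dmax
    return best, best_dmax
```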

[0039] Along the first estimated edges, a 1-D Canny edge detection method, as is known in the art, is used (in the direction perpendicular to the first estimated edges) to accurately locate the edge points of the square blocks (step 314). Then, the eight straight lines fitted from these accurate edge points are used to extract the feature correspondences, i.e., corner points, of the marker with sub-pixel accuracy. Once the corner points of the marker are extracted along with the edge points of the square blocks, the blocks of the marker can be defined and each block can be analyzed for its color.
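The patent does not spell out the 1-D edge localization step; the following sketch shows one plausible reading, sampling an intensity profile along the normal of an estimated edge and refining the strongest gradient response to sub-pixel accuracy with a parabola fit. All names and parameters are illustrative assumptions, and Gaussian pre-smoothing (part of a full Canny detector) is omitted for brevity:

```python
import numpy as np

def bilinear(img, x, y):
    """Bilinearly interpolated intensity at sub-pixel location (x, y)."""
    x0, y0 = int(np.floor(x)), int(np.floor(y))
    fx, fy = x - x0, y - y0
    return ((1 - fx) * (1 - fy) * img[y0, x0] + fx * (1 - fy) * img[y0, x0 + 1]
            + (1 - fx) * fy * img[y0 + 1, x0] + fx * fy * img[y0 + 1, x0 + 1])

def locate_edge_1d(img, point, normal, half_len=5.0, step=0.5):
    """Sub-pixel edge position along `normal` through `point` (x, y)."""
    point = np.asarray(point, float)
    normal = np.asarray(normal, float) / np.linalg.norm(normal)
    ts = np.arange(-half_len, half_len + step, step)
    profile = np.array([bilinear(img, *(point + t * normal)) for t in ts])
    g = np.abs(np.gradient(profile))            # 1-D edge response
    k = int(np.argmax(g))
    t = ts[k]
    if 0 < k < len(g) - 1:                      # parabolic sub-sample refinement
        denom = g[k - 1] - 2 * g[k] + g[k + 1]
        if denom != 0:
            t += step * 0.5 * (g[k - 1] - g[k + 1]) / denom
    return point + t * normal
```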

[0040] To determine the color of the blocks of the marker (step 316), the average values of the red, green, and blue components (denoted R, G, and B) of all the pixels inside the block (the white patch area excluded) are measured. Then, the intensity I, hue H, and saturation S of the averaged block color are computed as follows:

$$I = (R + G + B)/3$$

$$S = 1.0 - \frac{3.0 \min(R, G, B)}{R + G + B}$$

$$H = \cos^{-1}\left\{\frac{0.5\left[(R - G) + (R - B)\right]}{\sqrt{(R - G)^2 + (R - B)(G - B)}}\right\} \qquad (2)$$

[0041] The color of the corresponding square block is then determined by the values of I, H, and S as follows: if $I \le I_{thr}$, the color is black; else, if $S \le S_{thr}$, the color is still black; else, if $0 \le H < 2\pi/3$, the color is red; if $2\pi/3 \le H < 4\pi/3$, the color is green; and if $4\pi/3 \le H < 2\pi$, the color is blue. Here, $I_{thr}$ and $S_{thr}$ are user-adjustable thresholds.
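Putting equation (2) and these threshold rules together, a compact sketch of the block-color decision might look as follows (the threshold values are illustrative assumptions; the patent leaves $I_{thr}$ and $S_{thr}$ user-adjustable):

```python
# Sketch of the block-color decision of equation (2) and the threshold rules
# above. R, G, B are the averaged components over a block's pixels (white
# patch excluded). Since arccos only covers [0, pi], the hue is mirrored
# when B > G, as in the standard RGB-to-HSI conversion.
import numpy as np

def classify_block(R, G, B, I_thr=40.0, S_thr=0.2):   # illustrative thresholds
    I = (R + G + B) / 3.0
    if I <= I_thr:
        return "black"
    S = 1.0 - 3.0 * min(R, G, B) / (R + G + B)
    if S <= S_thr:
        return "black"
    num = 0.5 * ((R - G) + (R - B))
    den = np.sqrt((R - G) ** 2 + (R - B) * (G - B))
    H = np.arccos(np.clip(num / den, -1.0, 1.0))
    if B > G:
        H = 2 * np.pi - H
    if H < 2 * np.pi / 3:
        return "red"
    return "green" if H < 4 * np.pi / 3 else "blue"
```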

[0042] Once the color of each block of a marker is determined, the code for the marker is derived as described above (step 318), for example, drdr1. Once the code has been determined, the code can be matched against a database of codes, where the database will have information related to the code (step 320) and the pose of the marker can be determined. For example, the information may include a location of the marker, a type of a piece of equipment the marker is attached to, etc.

[0043] By applying these color coded visual markers to real-time tracking and pose estimation, fast marker detection and extraction of correspondences can be achieved. The color coded visual markers provide up to 16 accurate correspondences for calibration. Additionally, by taking the cross points of the color blocks, the correspondences of the four center points of the blocks can be located with higher accuracy; these four points provide the minimum number of correspondences for computing the homography, resulting in faster processing.

[0044] FIGS. 5(A) through 5(C) are several views of matrix coded visual markers used for tracking a user in an environment according to another embodiment of the present invention. Using the black/white matrix coded markers can avoid the problems caused by instability of color classification under unfriendly lighting conditions.

[0045] Referring to FIG. 5(A), a black/white matrix coded marker 502 is formed by a thick rectangular frame 504 and a coding matrix 506 formed by a pattern of small black circles 508 distributed inside the inner rectangle of the marker. For example, the markers shown in FIGS. 5(A)-(C) are coded with a 4×4 coding matrix.

[0046] The marker 502 with a 4×4 coding matrix is coded using a 12-bit binary number, with each bit corresponding to a numbered position in the coding matrix as shown in FIG. 5(A). The four corner positions labeled ‘a’, ‘b’, ‘c’, and ‘d’ in the coding matrix are reserved for determination of marker orientation. If the corresponding numbered position is covered by a small black circle, the corresponding bit of the 12-bit binary number is 1; otherwise it is 0. The marker is thus labeled by the decimal value of the 12-bit binary number.

[0047] To uniquely indicate the orientation of marker 502, the position labeled a is always white, i.e., a=0, while the position labeled d is always covered by a black circle, i.e., d=1. In addition, if b is black, then c must also be black. A letter is added to the end of the marker label to indicate one of the three permitted combinations: a for (a=0, b=1, c=1, d=1), b for (a=0, b=0, c=1, d=1), and c for (a=0, b=0, c=0, d=1). Therefore, for a 4×4 coding matrix, there can be up to $3 \times 2^{12} = 12{,}288$ distinct markers. Using a 5×5 coding matrix, there can be up to $3 \times 2^{21} = 6{,}291{,}456$ distinct markers. Generally, using an m×n coding matrix, a black/white matrix coded visual marker system of an embodiment of the present invention can have $3 \times 2^{m \times n - 4}$ markers. For applications that need only a much smaller number of markers than the coding capacity, the redundant positions in the coding matrix can be used to implement automatic error-bit correction and improve the robustness of marker decoding. Following the coding convention stated above, the marker shown in FIG. 5(B) is coded as 4095b (its 12-bit number is 111111111111) and the marker shown in FIG. 5(C) is coded as 1365a (its 12-bit number is 010101010101).
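The labeling convention can be made concrete with a short sketch (the bit ordering is taken to follow the numbered positions of FIG. 5(A), most significant bit first; the helper name is an assumption, not from the patent):

```python
# Sketch of the matrix-marker labeling convention: 12 coding bits plus the
# orientation corners. Corners a=0 and d=1 are fixed; the b/c combination
# selects the letter suffix.
def matrix_marker_label(bits, b, c):
    """bits: 12 booleans/ints, most significant first."""
    if len(bits) != 12:
        raise ValueError("a 4x4 coding matrix carries 12 coding bits")
    if b and not c:
        raise ValueError("if b is black, c must be black too")
    value = 0
    for bit in bits:
        value = (value << 1) | int(bit)
    suffix = {(1, 1): "a", (0, 1): "b", (0, 0): "c"}[(int(b), int(c))]
    return f"{value}{suffix}"

# FIG. 5(B): all twelve positions covered, (b=0, c=1) -> '4095b'
assert matrix_marker_label([1] * 12, b=0, c=1) == "4095b"
# FIG. 5(C): pattern 010101010101, (b=1, c=1) -> '1365a'
assert matrix_marker_label([0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 0, 1], b=1, c=1) == "1365a"
```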

[0048] With reference to FIG. 6, a method for detecting and decoding a matrix coded visual marker of an embodiment of the present invention will be described.

[0049] Initially, a user equipped with a mobile computer having a camera coupled to the computer will enter a workspace or environment that has the black/white matrix coded markers placed throughout. A video sequence of the environment including at least one marker is captured (step 602) to acquire an image of the marker. A watershed transformation is applied to the image to extract isolated low-intensity areas and store their edges as closed-edge strings (step 604). The two closed-edge strings with very close weight centers are then found to form a contour of the marker, i.e.,

$$d_{i,j} \le d_{thr},$$

[0050] where $d_{i,j}$ is the distance between the weight centers of the closed-edge strings $i$ and $j$, and $d_{thr}$ is an adjustable threshold (step 606). An additional condition for the two closed-edge strings to be a candidate marker contour is

if $l_i < l_j$, then $c_{lower}\, l_j \le l_i \le c_{upper}\, l_j$;

else $c_{lower}\, l_i \le l_j \le c_{upper}\, l_i$,

[0051] where $l_i$ and $l_j$ are the lengths (in number of edge points) of the edge strings, and $c_{lower}$ and $c_{upper}$ are the coefficients for the lower and upper limits of the string length. For example, when the width of the inner square is 0.65 times the width of the outer square, $c_{lower} = 0.5$ and $c_{upper} = 0.8$ can be chosen. In addition, another condition can be checked: whether the bounding box of the shorter edge string is totally inside the bounding box of the longer edge string. FIG. 7 shows an example of such candidate edge strings.
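A minimal sketch of this candidate test, assuming each edge string is an array of (x, y) points (the distance threshold is an illustrative value; $c_{lower}$ and $c_{upper}$ follow the example above):

```python
# Sketch of the marker-contour candidate test: close weight centers,
# compatible string lengths, and nesting of bounding boxes.
import numpy as np

def is_contour_candidate(si, sj, d_thr=3.0, c_lower=0.5, c_upper=0.8):
    si, sj = np.asarray(si, float), np.asarray(sj, float)
    # weight centers must nearly coincide
    if np.linalg.norm(si.mean(axis=0) - sj.mean(axis=0)) > d_thr:
        return False
    # inner/outer string lengths must be in the expected ratio
    inner, outer = (si, sj) if len(si) < len(sj) else (sj, si)
    if not c_lower * len(outer) <= len(inner) <= c_upper * len(outer):
        return False
    # bounding box of the shorter string must lie inside that of the longer
    return (inner.min(axis=0) >= outer.min(axis=0)).all() and \
           (inner.max(axis=0) <= outer.max(axis=0)).all()
```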

[0052] In most conditions, there is no extreme projective distortion in the images of the markers. Therefore, the method can extract image points of the outer corners of a marker from the candidate edge strings (step 608). First, the points in the longer edge string are sorted into an order in which all the edge points are sequentially connected. Then, a predetermined number, e.g., twenty, of edge points are selected that evenly divide the sorted edge string into segments. With no extreme projective distortion, there should be 4 to 6 selected points on each side of the marker. For the case shown in FIG. 7, the cross point of the straight lines fitted using points 1 to 4 and points 5 to 8 will be the first estimation of the image correspondence of a corner point of the marker. The other corner points can be found similarly (step 610).
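As a sketch of this first corner estimation (helper names are assumptions), each run of selected points can be fitted with a total-least-squares line, and adjacent lines intersected in homogeneous coordinates:

```python
# Sketch: fit a straight line through each run of selected edge points and
# intersect adjacent lines. Lines are kept in homogeneous form ax + by + c = 0.
import numpy as np

def fit_line(pts):
    pts = np.asarray(pts, float)
    centroid = pts.mean(axis=0)
    # dominant singular vector = line direction; the other is the unit normal
    _, _, vt = np.linalg.svd(pts - centroid)
    a, b = vt[1]
    return np.array([a, b, -(a * centroid[0] + b * centroid[1])])

def cross_point(l1, l2):
    p = np.cross(l1, l2)             # homogeneous intersection (non-parallel lines)
    return p[:2] / p[2]

# e.g., the corner from points 1-4 and points 5-8 of FIG. 7:
# corner = cross_point(fit_line(pts_1_to_4), fit_line(pts_5_to_8))
```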

[0053] Based on the corner points obtained from the previous step, the estimation of the image correspondences of the marker corners can be improved by using all the edge points of the edge string to fit the lines and find the cross points (step 612). The 1-D Canny edge detection method is then applied to find the edge of the marker (step 614), and the final correspondences of the marker corners are computed. Once the marker has been detected, the image correspondences of the circles in the coding matrix need to be identified to determine the code of the marker.

[0054] There are two ways to extract the image correspondences of the circles of the matrix for decoding (step 616): (1) project the marker to the image with the first estimation of a homography obtained from the correspondences of corner points c1, c2, c3, and c4 (to get an accurate back projection, a non-linear optimization is needed in the estimation of the homography); or (2) to avoid the non-linear optimization, approximate the feature points using linear interpolation. For this purpose, the interpolation functions of the 4-node 2-dimensional linear serendipity element from the finite element method, as is known in the art, can be used. As shown in FIG. 8, the approximate image correspondence (u, v) of point (X, Y) can be obtained from:

$$u(X, Y) = \sum_{i=1}^{4} N_i(X, Y)\, u_i, \qquad v(X, Y) = \sum_{i=1}^{4} N_i(X, Y)\, v_i \qquad (3)$$

[0055] where the interpolation function $N_i(X, Y)$ is expressed as

$$N_i(X, Y) = \frac{1}{4}\,(1 + X X_i)(1 + Y Y_i), \qquad (4)$$

[0056] for i=1, 2, 3, and 4.
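Equations (3) and (4) translate directly into a few lines of Python; the node ordering below is an assumption (the patent fixes it only through FIG. 8):

```python
# Direct transcription of equations (3) and (4): bilinear interpolation with
# the 4-node serendipity shape functions. (X, Y) is the query point in the
# element's local coordinates, (X_i, Y_i) = (+/-1, +/-1) are the node
# coordinates, and uv_nodes holds the known image points (u_i, v_i).
import numpy as np

NODES = np.array([(-1, -1), (1, -1), (1, 1), (-1, 1)], dtype=float)

def interpolate_correspondence(X, Y, uv_nodes):
    """uv_nodes: (4, 2) image points matching the node order in NODES."""
    N = 0.25 * (1 + X * NODES[:, 0]) * (1 + Y * NODES[:, 1])   # equation (4)
    return N @ np.asarray(uv_nodes, float)                     # equation (3)
```

At a node, e.g. (X, Y) = (−1, −1), the shape functions reduce to (1, 0, 0, 0), so the interpolation reproduces the node's image point exactly.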

[0057] Then, the 1-D Canny edge detection is also applied to accurately locate the correspondences of the corners of the inner square.

[0058] Once the circles of the matrix of a marker are determined, the code for the marker is derived as described above (step 618), for example, 4095b as shown in FIG. 5(B). Once the code has been determined, it can be matched against a database of codes, where the database will have information related to the code (step 620), and the pose of the marker can be determined. Additionally, the centers of the black circles can be used as additional correspondences for camera calibration. For a marker using a 4×4 coding matrix, there can be up to 23 correspondences (i.e., for the marker coded 4095a).

[0059] By using the black/white matrix coded markers as described above, marker detection and decoding are based on image intensity only. Therefore, the detection and decoding are not affected by the color classification problem, and stable decoding results can be obtained in various environments. For the purposes of detecting markers and finding correspondences, only an 8-bit gray level image is needed, resulting in a smaller amount of data to process and better system performance. Additionally, the black/white matrix coded markers provide a larger number of distinctly coded markers, resulting in increased coding flexibility.

[0060] In some applications, it is not necessary to have a large number (e.g., tens of thousands) of distinctly coded markers, and marker decoding robustness is more important. To increase the decoding robustness, error-correcting coding can be applied to the decoding of markers. For example, when using the 4×4 coding matrix, up to 12 bits are available for marker coding. Without automatic error correction, up to 12,288 different markers are available. According to the Hamming bound theorem, as is known in the art, a 12-bit binary signal can have $2^5 = 32$ codes with a least Hamming distance of 5 (to which a 2-bit automatic error correction can be applied). If only 1-bit automatic error correction is needed (a least Hamming distance of 3), up to $2^8 = 256$ codes are available with 12-bit coding.

[0061] For example, assume the codes ‘000000001001’ and ‘000000000111’ are eligible codes from a set of codes that have at least a Hamming distance of 3 between any two eligible codes. Suppose marker detection and decoding then produce a code r=‘000000000011’ that is not in the set of eligible codes; r therefore contains at least one bit error. Comparing r with all the eligible codes, the Hamming distance between r and the second code, ‘000000000111’, is 1, and the Hamming distance between r and the first code, ‘000000001001’, is 2; the Hamming distances between r and all other eligible codes are at least 3. Therefore, by choosing the eligible code that has the least Hamming distance to r, the 1-bit error can be automatically corrected, and the final decoding result is set to ‘000000000111’, the second code.
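The nearest-eligible-code rule of this example is a few lines of Python (a sketch; the code strings are those of the example above):

```python
# Sketch of nearest-code correction: decode to the eligible code with the
# least Hamming distance from the observed bits.
def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

def correct(observed, eligible_codes):
    return min(eligible_codes, key=lambda code: hamming(observed, code))

codes = ["000000001001", "000000000111"]                  # eligible codes
assert correct("000000000011", codes) == "000000000111"   # 1-bit error fixed
```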

[0062] The marker systems of the present invention can obtain accurate (sub-pixel) correspondences of more than 4 co-planar points using one marker or a set of markers in the same plane. Since the metric information of the feature points on the markers is known, there are two cases in which the information can be used to carry out camera calibration: (i) to obtain both intrinsic and extrinsic camera parameters; and (ii) pose estimation, i.e., when the intrinsic camera parameters are known, to obtain the extrinsic parameters. In the first case, a homography-based calibration algorithm can be applied. For the second case, either the homography-based algorithm or a conventional 3-point algorithm can be applied. In many cases, the camera's intrinsic parameters can be obtained using Tsai's algorithm, as is known in the art, or the homography-based algorithm.
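For case (ii), a hedged sketch using OpenCV's planar PnP in place of the patent's own homography-based algorithm (which is not reproduced here); the marker geometry, image points, and intrinsics below are made-up example values:

```python
# Sketch of pose estimation with known intrinsics from >= 4 co-planar
# marker correspondences, using OpenCV as a stand-in solver.
import cv2
import numpy as np

# 3-D positions of marker feature points in the marker plane (z = 0), metres
object_pts = np.array([[0, 0, 0], [0.1, 0, 0], [0.1, 0.1, 0], [0, 0.1, 0]],
                      dtype=np.float32)
# their sub-pixel image correspondences (example values)
image_pts = np.array([[320, 240], [400, 238], [402, 318], [322, 320]],
                     dtype=np.float32)
# example intrinsic matrix (focal length 800 px, principal point at centre)
K = np.array([[800, 0, 320], [0, 800, 240], [0, 0, 1]], dtype=np.float32)

ok, rvec, tvec = cv2.solvePnP(object_pts, image_pts, K, None)
R, _ = cv2.Rodrigues(rvec)   # rotation + tvec = camera pose relative to marker
```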

[0063] The coded visual markers of the present invention can be used in many applications, for example, for localization and data navigation. In this application, a user is equipped with a mobile computer that has a (wireless) network connection to a main system, e.g., a server, so the user can access a related database. A camera is attached to the mobile computer, for example, a SONY VAIO™ with a built-in USB camera and a built-in microphone, or a Xybernaut™ mobile computer with a plug-in USB camera and microphone. The system can help users locate their coordinates in large industrial environments and present to them information obtained from the database and the real-time systems. The user can interact with the system using a keyboard, touch pad, or even voice. In this application, the markers' coordinates and orientations in the global system are predetermined; the camera captures the marker, and the system computes the pose of the camera relative to the captured marker, thus obtaining the position and orientation of the camera in the global system. Such localization information is then used for accessing related external databases, for example, to obtain the closest view of an on-site image with a 3-D reconstructed virtual structure overlay, or to present the internal design parameters of a piece of equipment of interest. Additionally, the localization information can be used to navigate the user through the site.

[0064] Furthermore, the coded visual markers of the present invention can be employed in Augmented Reality (AR) systems. A head-mounted display (HMD) is a key component for creating an immersive AR environment for the user, i.e., an environment where virtual objects are combined with real objects. There are usually two kinds of HMDs: optical see-through and video see-through. The optical see-through HMD directly uses a scene of the real world, with virtual objects superimposed via a projector attached to eyeglasses. Since the real world is captured directly by the eye, this usually requires calibration of the HMD with the user's eyes to obtain good registration between the virtual objects and the real world. It also requires better motion-tracking performance to reduce discrepancies between real and virtual objects. The video see-through HMD uses a pair of cameras to capture scenes of the real world, which are projected to the user; the superimposition of virtual objects is performed on the captured images. Therefore, only the camera needs to be calibrated for such AR processes. With the real-time detection and decoding features of the present invention, the coded markers described above are suitable for motion tracking and calibration in HMD applications in both industrial and medical settings.

[0065] While the invention has been shown and described with reference to certain preferred embodiments thereof, it will be understood by those skilled in the art that various changes in form and detail may be made therein without departing from the spirit and scope of the invention as defined by the appended claims.

Claims

1. A method for determining a pose of a user comprising the steps of:

capturing a video image sequence of an environment including at least one coded marker;
detecting if the at least one coded marker is present in the video images;
if the at least one marker is present, extracting feature correspondences of the at least one coded marker;
determining a code of the at least one coded marker using the feature correspondences; and
comparing the determined code with a database of predetermined codes to determine the pose of the user.

2. The method as in claim 1, wherein the at least one coded marker comprises four color blocks arranged in a square formation.

3. The method as in claim 2, wherein the detecting step further comprises applying a watershed transformation to the at least one coded marker to extract a plurality of closed-edge strings that form a contour of the at least one marker.

4. The method as in claim 3, wherein the detecting step further comprises grouping at least four closed-edge strings with a least maximum mutual distance.

5. The method as in claim 4, wherein the extracting step further comprises:

locating a weight center for each of the at least four closed-edge strings; and
using the weight centers as a correspondence of each of the four blocks to compute a homography from the at least one coded marker to an image of the marker.

6. The method as in claim 5, wherein the extracting step further comprises projecting eight lines onto the marker image using the homography to locate the four blocks of the at least one marker.

7. The method as in claim 6, wherein the extracting step further comprises applying a 1-D Canny edge detection to locate the edge points of the four blocks.

8. The method as in claim 2, wherein the determining a code of the at least one marker further comprises determining a color of each of the four blocks.

9. The method as in claim 8, wherein at least one of the four blocks of the at least one marker includes a white patch.

10. The method as in claim 1, wherein the at least one marker comprises a coding matrix including a plurality of columns and rows with a numbered square at intersections of the columns and rows, the coding matrix being surrounded by a rectangular frame and a code of the at least one marker being determined by the numbered squares being covered by a circle.

11. The method as in claim 10, wherein the coding matrix includes m columns and n rows, where m and n are whole numbers, resulting in $3 \times 2^{m \times n - 4}$ codes.

12. The method as in claim 10, wherein the detecting step further comprises applying a watershed transformation to the at least one coded marker to extract a plurality of closed-edge strings that form a contour of the at least one marker.

13. The method as in claim 12, wherein the detecting step further comprises locating at least two closed-edge strings that have close weight centers.

14. The method as in claim 13, wherein the detecting step further comprises locating a corner of the rectangular frame of the at least one marker by determining a cross-point of the at least two closed-edge strings.

15. The method as in claim 12, wherein the detecting step comprises locating corners of the rectangular frame of the at least one marker by locating cross-points of the plurality of closed-edge strings.

16. The method as in claim 15, wherein the extracting step further comprises applying a 1-D Canny edge detection to locate the edge points of the rectangular frame.

17. The method as in claim 16, wherein the extracting step further comprises

computing a homography from the corners and edge points;
extracting image feature correspondences of the at least one marker; and
determining locations of the circles in the coding matrix by the image correspondences.

18. The method as in claim 17, wherein the extracting image feature correspondences is performed by linear interpolation.

19. The method as in claim 17, further comprising the step of calibrating a camera used to capture the video image sequence with the image correspondences of the at least one marker.

20. The method as in claim 19, further comprising the step of determining a position and orientation of the camera relative to the at least one marker.

21. A program storage device readable by machine, tangibly embodying a program of instructions executable by the machine to perform method steps for determining a pose of a user, the method steps comprising:

capturing a video image sequence of an environment including at least one coded marker;
detecting if the at least one coded marker is present in the video images;
if the at least one marker is present, extracting feature correspondences of the at least one coded marker;
determining a code of the at least one coded marker using the feature correspondences; and
comparing the determined code with a database of predetermined codes to determine the pose of the user.

22. The program storage device as in claim 21, further comprising the step of determining a location of the user based on the pose of the user and a position of the at least one marker.

23. A system comprising:

a plurality of coded markers located throughout an environment, each of the plurality of coded markers relating to a location in the environment, codes of the plurality of coded markers being stored in a database;
a camera for capturing a video image sequence of the environment, the camera coupled to a processor; and
the processor adapted for detecting if at least one coded marker is present in the video images, if the at least one marker is present, extracting feature correspondences of the at least one coded marker, determining a code of the at least one coded marker using the feature correspondences, and comparing the determined code with the database to determine the pose of the user.

24. The system as in claim 23, wherein the at least one coded marker comprises four color blocks arranged in a square formation and a code of the at least one marker being determined by a color sequence of the blocks.

25. The system as in claim 23, wherein the at least one marker comprises a coding matrix including a plurality of columns and rows with a numbered square at intersections of the columns and rows, the coding matrix being surrounded by a rectangular frame and a code of the at least one marker being determined by the numbered squares being covered by a circle.

26. The system as in claim 23, wherein the camera and processor are mobile devices.

27. The system as in claim 23, wherein the camera and processor are housed in an integral mobile device.

28. The system as in claim 23, wherein based on a first location of the at least one marker, the processor being adapted to direct the user to a second location.

29. The system as in claim 23, further comprising a display device, wherein the display device will provide to the user information relative to the location of the at least one marker.

30. The system as in claim 23, further comprising a display device, wherein based on a first location of the at least one marker, the display device will provide to the user information to direct the user to a second location.

31. The system as in claim 23, further comprising an external database of information relative to a plurality of items located throughout the environment, wherein when the user is in close proximity to at least one of the plurality of items, the processor provides the user with access to the external database.

32. The system as in claim 31, further comprising a display device for displaying information of the external database to the user.

33. The system as in claim 31, further comprising a display device for displaying virtual objects overlaid on the at least one item.

34. The system as in claim 31, further comprising a head-mounted display for overlaying information of the at least one item in a view of the user.

Patent History
Publication number: 20030076980
Type: Application
Filed: Oct 2, 2002
Publication Date: Apr 24, 2003
Applicant: Siemens Corporate Research, Inc.
Inventors: Xiang Zhang (Lawrenceville, NJ), Nassir Navab (Plainsboro, NJ)
Application Number: 10262693
Classifications
Current U.S. Class: Target Tracking Or Detecting (382/103); Pattern Boundary And Edge Measurements (382/199)
International Classification: G06K009/00;