METHOD AND APPARATUS FOR TRACKING LOCATIONS USING WEBCAMS

Disclosed herein are a location tracking apparatus and method. The location tracking apparatus includes a head tracking unit and a pointer tracking unit. The head tracking unit extracts the location, 3 degrees of freedom (3DOF), and orientation of the head of a user by capturing the user from a screen on which a game is displayed using a first webcam, and tracks the user's gaze based on the results of the extraction. The pointer tracking unit tracks a point indicated by a laser pointer projected onto the screen by capturing the screen using a second webcam, and designates the tracked point as cross hairs corresponding to a target to be shot at. The head tracking unit and the pointer tracking unit track the user's gaze and the point simultaneously.

Description
CROSS REFERENCE TO RELATED APPLICATIONS

This application claims the benefit of Korean Patent Application Nos. 10-2010-0132875 and 10-2011-0037422, filed on Dec. 22, 2010 and Apr. 21, 2011, respectively, which are hereby incorporated by reference in their entirety into this application.

BACKGROUND OF THE INVENTION

1. Technical Field

The present invention relates generally to a method and an apparatus for tracking locations using webcams and, more particularly, to a method and an apparatus for tracking the locations of a user and a laser pointer using webcams.

2. Description of the Related Art

Human-Computer Interaction (HCI) relates to technology for ascertaining the events that occur when a human uses a computer to perform operations, and for designing methods that enable humans to use computers more conveniently and safely.

Object tracking in HCI means predicting the trajectory of an object of interest from consecutive images.

Object tracking includes three types of auxiliary techniques: a modeling technique that enables a computer to recognize the characteristics of an object; an object detection technique that detects the object in an input image; and a location prediction technique that estimates the location of the object in a subsequent frame from its location in the current frame, taking into account the correlation between the image frames received from a camera.

In general, an object model on an image which is used in the modeling technique includes points, a basic geometric shape, a silhouette/contour, and an articulated shape model with joints. Segmentation and background subtraction methods are used to determine whether the corresponding object exists in an image using the model (i.e., feature points).
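
For instance, a background subtraction method maintains a statistical model of the static scene and flags pixels that deviate from it. A minimal sketch using OpenCV's MOG2 subtractor follows; the library choice and camera index are assumptions, purely illustrative of the technique named above:

```python
# Minimal background-subtraction sketch (OpenCV MOG2); illustrative only.
import cv2

cap = cv2.VideoCapture(0)  # assumed camera index
subtractor = cv2.createBackgroundSubtractorMOG2()

while True:
    ok, frame = cap.read()
    if not ok:
        break
    # Foreground mask: nonzero where the frame differs from the learned
    # background model, i.e., where a moving object may exist.
    fg_mask = subtractor.apply(frame)
```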

If the corresponding object exists, the object is tracked by training the appearance of the object model in advance using a machine learning algorithm based on a statistical method. When new data is input to the trained model, the location of the object is tracked across a plurality of frames. In this case, a location prediction technique that estimates the location of the object in a subsequent frame using information about a previous frame is used. The location prediction technique may use representative tracking and training algorithms such as mean-shift, Kalman filtering, particle filtering, and Haar-like feature training.
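
As an illustration of one of the algorithms named above, the following is a minimal mean-shift tracking sketch built on OpenCV; the initial window and the hue-histogram appearance model are assumptions, not details taken from the patent:

```python
# Mean-shift tracking sketch: shift a window toward the mode of a
# per-pixel likelihood map built from the object's hue histogram.
import cv2

cap = cv2.VideoCapture(0)
ok, frame = cap.read()

# Hypothetical initial window around the object of interest (x, y, w, h).
track_window = (200, 150, 80, 80)
x, y, w, h = track_window
roi = frame[y:y + h, x:x + w]

# Build a hue histogram of the object as its appearance model.
hsv_roi = cv2.cvtColor(roi, cv2.COLOR_BGR2HSV)
roi_hist = cv2.calcHist([hsv_roi], [0], None, [180], [0, 180])
cv2.normalize(roi_hist, roi_hist, 0, 255, cv2.NORM_MINMAX)

criteria = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 10, 1.0)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    # Back-project the histogram to a likelihood map, then shift the
    # window toward the densest region of that map.
    prob = cv2.calcBackProject([hsv], [0], roi_hist, [0, 180], 1)
    _, track_window = cv2.meanShift(prob, track_window, criteria)
```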

The location prediction technique is problematic in that a change in the user's viewpoint and a change in the target, both of which occur frequently in a first-person viewpoint game such as a shooting game, cannot be predicted at the same time.

SUMMARY OF THE INVENTION

Accordingly, the present invention has been made keeping in mind the above problems occurring in the prior art, and an object of the present invention is to provide a method and an apparatus for tracking the locations of a user and a laser pointer, which simultaneously predict a change in the user's viewpoint and a change in the target using webcams.

In order to accomplish the above object, the present invention provides a method in which an apparatus operating in conjunction with a First-Person Shooter (FPS) game tracks a user playing the game and a location of a laser pointer held by the user, the method including extracting a location, 3 Degrees of Freedom (3DOF), and an orientation of a head of the user by capturing the user from a screen on which the game is displayed using a first webcam, and tracking the user's gaze based on results of the extraction; and capturing the screen using a second webcam, and tracking a point, indicated by the laser pointer and projected onto the screen, based on the results of the capturing.

The above location tracking method is characterized in that the user's gaze and the point of the laser pointer can be tracked simultaneously.

The tracking the user's gaze may include extracting at least one feature point based on the results of the capturing of the head of the user, performed by the first webcam disposed above the screen; constructing a statistical model by applying the feature point to a machine learning algorithm; and extracting the location, 3DOF, and orientation of the head of the user based on the statistical model.

The feature point may include at least one of a face color, an eye, a nose, and a mouth of the user.

The tracking the point indicated by the laser pointer may include performing calibration on a frame including the point; measuring a pixel intensity of the calibrated frame; and comparing the pixel intensity and a critical value, and making a determination of whether to track the point depending on results of the comparison.

If the pixel intensity is equal to or higher than the critical value, the method may further include grouping pixels whose intensities are equal to or higher than the critical value; performing Kalman filtering on the grouped pixels; and smoothing results of the Kalman filtering and tracking the point.

If the pixel intensity is smaller than the critical value, the method may further include measuring the pixel intensity of the calibrated frame again.

In the tracking the point indicated by the laser pointer, the tracked point may be designated as cross hairs corresponding to a target in the game to be shot at.

In order to accomplish the above object, the present invention provides a location tracking apparatus, including a head tracking unit for extracting a location, 3DOF and orientation of a head of a user by capturing the user from a screen on which a game is displayed using a first webcam and tracking the user's gaze based on results of the extraction; and a pointer tracking unit for tracking a point indicated by a laser pointer projected onto the screen by capturing the screen using a second webcam, and designating the tracked point as cross hairs corresponding to a target to be shot at; wherein the head tracking unit and the pointer tracking unit track the user's gaze and the point simultaneously.

The head tracking unit may construct a statistical model based on results of the capturing performed by the first webcam disposed above the screen, and extract the location, 3DOF and orientation of the head of the user by recognizing the results of capturing the head of the user based on the statistical model.

The pointer tracking unit may perform calibration on a frame including the point, measure the pixel intensity of the calibrated frame, compare the pixel intensity with a critical value, and determine whether to track the point depending on the results of the comparison.

If the pixel intensity is equal to or higher than the critical value, the pointer tracking unit may group pixels whose intensities are equal to or greater than the critical value, perform Kalman filtering on the grouped pixels, smooth the Kalman filtered result, and track the point.

If the pixel intensity is smaller than the critical value, the pointer tracking unit may measure the pixel intensity of the calibrated frame again.

BRIEF DESCRIPTION OF THE DRAWINGS

The above and other objects, features and advantages of the present invention will be more clearly understood from the following detailed description taken in conjunction with the accompanying drawings, in which:

FIG. 1 is a diagram showing a service environment to which an apparatus for tracking locations using webcams is applied according to an embodiment of the present invention;

FIG. 2 shows the configuration of a location tracking apparatus 200 according to an embodiment of the present invention;

FIG. 3 is a flowchart illustrating a method of tracking locations using webcams according to an embodiment of the present invention;

FIG. 4 is a flowchart illustrating a method of tracking a user's gaze according to an embodiment of the present invention; and

FIG. 5 is a flowchart illustrating a method of tracking a laser pointer according to an embodiment of the present invention.

DESCRIPTION OF THE PREFERRED EMBODIMENTS

Reference now should be made to the drawings, throughout which the same reference numerals are used to designate the same or similar components.

The present invention will be described in detail below with reference to the accompanying drawings. Repetitive descriptions and descriptions of known functions and constructions which have been deemed to make the gist of the present invention unnecessarily vague will be omitted below. The embodiments of the present invention are provided in order to fully describe the present invention to a person having ordinary skill in the art. Accordingly, the shapes, sizes, etc. of elements in the drawings may be exaggerated to make the description clear.

A method and an apparatus for tracking locations using webcams according to the embodiments of the present invention will be described in detail below with reference to the accompanying drawings.

First, the method and apparatus for tracking locations using webcams according to the embodiments of the present invention may belong to a technical field for intuitively sensing the intention of a user and transferring it to an application, such as a game, without using a conventional HCI interface (i.e., a user interface using a mouse and a keyboard), but are not limited thereto.

FIG. 1 is a diagram showing a service environment to which an apparatus for tracking locations using webcams is applied according to an embodiment of the present invention.

First, the apparatus for tracking locations using webcams (hereinafter referred to as the “location tracking apparatus”) according to the embodiment of the present invention corresponds to an apparatus, such as an interface which operates in conjunction with a First-Person Shooter (FPS) game, but is not limited thereto.

Referring to FIG. 1, the service environment according to the embodiment of the present invention includes two webcams 10 and 20, a screen 30 where the two webcams 10 and 20 are placed, a projector 40, and a laser pointer 50.

The projector 40 may be coupled to, for example, a computer for executing a game, and projects a game screen onto the screen 30.

The webcam 10 disposed above the screen 30 consecutively captures a game participant (i.e., the face of a user) in front of the screen 30.

The webcam 20 in front of the projector 40 consecutively captures the screen 30 on which the game screen is being displayed. That is, the webcam 20 consecutively captures the point at which the beam of the laser pointer 50 held by the user lands on the screen 30.

In the service environment according to the embodiment of the present invention, the location tracking apparatus 200 can incorporate the intention of the user into the game in real time by tracking the head of the user and the laser pointer 50 at the same time using the results captured by the two webcams 10 and 20.
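
A minimal sketch of how the two capture streams might be read in lockstep so both trackers see time-aligned frames follows; the OpenCV device indices 0 and 1 are assumptions about the local setup, as the patent does not prescribe an implementation:

```python
# Read both webcams in one loop so head tracking and pointer tracking
# operate on frame pairs captured at (approximately) the same instant.
import cv2

cam_user = cv2.VideoCapture(0)    # webcam 10: faces the user
cam_screen = cv2.VideoCapture(1)  # webcam 20: faces the screen

while True:
    ok1, face_frame = cam_user.read()
    ok2, screen_frame = cam_screen.read()
    if not (ok1 and ok2):
        break
    # Hypothetical hooks: head tracking consumes face_frame while
    # pointer tracking consumes screen_frame, on every frame pair.
    # head_tracking(face_frame); pointer_tracking(screen_frame)
```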

The location tracking apparatus 200 for tracking the locations of a user and the laser pointer using the webcams will now be described in detail with reference to FIG. 2.

FIG. 2 shows the configuration of the location tracking apparatus 200 according to an embodiment of the present invention.

Referring to FIG. 2, the location tracking apparatus 200 includes a head tracking unit 210 and a pointer tracking unit 220.

The head tracking unit 210 tracks a user's gaze in an FPS game using the webcam 10 disposed above the screen 30.

The pointer tracking unit 220 tracks a point indicated by the laser pointer 50 using the webcam 20 in front of the projector 40.

As described above, the location tracking apparatus 200 can incorporate a change in the user's viewpoint into the game by tracking that change through head tracking, without requiring a mouse or a keyboard to change the viewpoint in a first-person viewpoint game such as a shooting game. Furthermore, the location tracking apparatus 200 tracks a point indicated by the laser pointer 50 simultaneously with the head tracking, and designates the point as cross hairs corresponding to a target to be shot at.

A method of tracking locations using webcams will now be described in detail with reference to FIGS. 3 to 5.

FIG. 3 is a flowchart illustrating the method of tracking locations using webcams according to an embodiment of the present invention, FIG. 4 is a flowchart illustrating a method of tracking a user's gaze according to an embodiment of the present invention, and FIG. 5 is a flowchart illustrating a method of tracking a laser pointer according to an embodiment of the present invention.

First, an environment to which the method of tracking locations using the webcams according to the embodiment of the present invention is applied includes the two webcams 10 and 20, the screen 30 where the two webcams 10 and 20 are placed, the projector 40, and the laser pointer 50. Here, the location tracking apparatus 200 tracks the head of a user and the location of the laser pointer using the two webcams 10 and 20.

For example, the projector 40 may project a game screen onto the screen 30.

Referring to FIG. 3, the location tracking apparatus 200 tracks a user's gaze in an FPS game using the webcam 10 disposed above the screen 30 at step S100. Here, the webcam 10 disposed above the screen 30 consecutively captures the head of the user in front of the screen.

Referring to FIG. 4, the location tracking apparatus 200 extracts one or more feature points of the head based on the results of the capturing of the head of the user at step S110. The feature points of the head include points corresponding to characteristics such as the face color, eyes, nose, and mouth.
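
As one plausible realization of step S110 (the patent does not name a detector), the face and eye feature points could be located with OpenCV's stock Haar cascades:

```python
# Sketch of feature-point extraction with OpenCV Haar cascades; the
# cascade files and parameters are assumptions, not the patent's method.
import cv2

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
eye_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_eye.xml")

def extract_feature_points(frame):
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    points = []
    for (x, y, w, h) in face_cascade.detectMultiScale(gray, 1.3, 5):
        # Face center as a coarse location feature.
        points.append((x + w // 2, y + h // 2))
        roi = gray[y:y + h, x:x + w]
        for (ex, ey, ew, eh) in eye_cascade.detectMultiScale(roi):
            # Eye centers, converted back to full-frame coordinates.
            points.append((x + ex + ew // 2, y + ey + eh // 2))
    return points
```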

The location tracking apparatus 200 constructs a statistical model by applying the extracted feature points of the head to a machine learning algorithm at step S120. The location tracking apparatus 200 is provided with the training data of users before constructing the statistical model. Thereafter, the location tracking apparatus 200 constructs the statistical model by training on the training data using the machine learning algorithm.
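
The patent does not identify the machine learning algorithm. As one hedged illustration, a support-vector classifier could be trained offline on pre-collected feature-point vectors; the file names and the label scheme below are hypothetical:

```python
# Illustrative offline training of a statistical model over feature-point
# vectors; the algorithm choice (SVM) and data files are assumptions.
import numpy as np
from sklearn.svm import SVC

# Hypothetical pre-collected training data: each row is a flattened
# feature-point vector, each label a coarse head-pose class.
X_train = np.load("feature_vectors.npy")  # shape (n_samples, n_features)
y_train = np.load("pose_labels.npy")      # shape (n_samples,)

model = SVC(kernel="rbf", probability=True)
model.fit(X_train, y_train)
```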

If the results captured by the webcam 10 disposed above the screen 30 are received, the location tracking apparatus 200 extracts the location, the 3 Degrees of Freedom (hereinafter referred to as the "3DOF"), and the orientation of the head of the user by recognizing the results at step S130.

More particularly, if a face image of the user is received from the webcam 10, the location tracking apparatus 200 checks the location of the face by recognizing the face image, recognizes the locations and orientations of an eye, a nose, and a mouth, and then extracts the orientation of the face.

That is, the location tracking apparatus 200 may track the user's gaze based on the location, the 3DOF, and the orientation of the head of the user extracted using the webcam 10. Here, the 3DOF includes X (i.e., a horizontal direction), Y (i.e., a vertical direction), and Z (i.e., depth).
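
One common way to recover such a head pose from 2D facial landmarks, offered here only as an illustrative sketch, is perspective-n-point estimation with cv2.solvePnP against a generic 3D face model; the model coordinates and the pinhole camera approximation below are assumptions:

```python
# Head-pose sketch: orientation (rotation vector) plus X/Y/Z translation
# from five 2D landmarks; the generic 3D face model is an assumption.
import cv2
import numpy as np

# Approximate 3D positions (in mm) on a generic face model.
MODEL_POINTS = np.array([
    [0.0, 0.0, 0.0],          # nose tip
    [-225.0, 170.0, -135.0],  # left eye corner
    [225.0, 170.0, -135.0],   # right eye corner
    [-150.0, -150.0, -125.0], # left mouth corner
    [150.0, -150.0, -125.0],  # right mouth corner
], dtype=np.float64)

def head_pose(image_points, frame_size):
    """image_points: (5, 2) array of the matching 2D landmarks."""
    h, w = frame_size
    # Pinhole camera approximation: focal length ~ frame width.
    cam = np.array([[w, 0, w / 2],
                    [0, w, h / 2],
                    [0, 0, 1]], dtype=np.float64)
    dist = np.zeros(4)  # assume negligible lens distortion
    ok, rvec, tvec = cv2.solvePnP(MODEL_POINTS, image_points, cam, dist)
    # rvec: head orientation; tvec: X (horizontal), Y (vertical), Z (depth).
    return rvec, tvec
```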

Thereafter, the location tracking apparatus 200 tracks a point indicated by the laser pointer 50 using the webcam 20 in front of the projector 40 at step S200. Here, the webcam 20 in front of the projector 40 can capture 30 or more frames per second.

Referring to FIG. 5, the location tracking apparatus 200 performs calibration on a frame (640×480) including the point indicated by the laser pointer 50 and projected onto the screen at step S210.
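
The patent does not detail the calibration. One standard approach, sketched here under the assumption that the four screen corners were located once during setup, is a perspective transform that maps the camera's view of the screen onto a canonical 640×480 frame:

```python
# Calibration sketch: warp the raw view of webcam 20 so that screen
# coordinates line up with game coordinates in a 640x480 frame.
import cv2
import numpy as np

# Hypothetical pixel positions of the screen corners as seen by webcam 20,
# found once during setup (top-left, top-right, bottom-right, bottom-left).
screen_corners = np.float32([[72, 41], [598, 38], [611, 452], [60, 448]])
target_corners = np.float32([[0, 0], [640, 0], [640, 480], [0, 480]])

H = cv2.getPerspectiveTransform(screen_corners, target_corners)

def calibrate(frame):
    # Per-frame calibration: apply the precomputed perspective transform.
    return cv2.warpPerspective(frame, H, (640, 480))
```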

The location tracking apparatus 200 measures the pixel intensity for the frame at step S220, and determines whether the measured pixel intensity is equal to or greater than a critical value at step S230.

If the pixel intensity is smaller than the critical value, the location tracking apparatus 200 measures the pixel intensity for the frame again. If the pixel intensity is equal to or greater than the critical value, the location tracking apparatus 200 groups pixels whose intensities are equal to or greater than the critical value at step S240.
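
A minimal sketch of the threshold-and-group logic of steps S220 to S240 follows; the critical value of 240 and the connected-component grouping are assumptions (a laser dot typically saturates the sensor), not values given in the patent:

```python
# Threshold the calibrated frame at an assumed critical intensity and
# group the surviving pixels, returning the centroid of the largest blob.
import cv2
import numpy as np

CRITICAL_VALUE = 240  # assumed threshold for the bright laser dot

def group_bright_pixels(calibrated_frame):
    gray = cv2.cvtColor(calibrated_frame, cv2.COLOR_BGR2GRAY)
    if gray.max() < CRITICAL_VALUE:
        return None  # below the critical value: re-measure the next frame
    _, mask = cv2.threshold(gray, CRITICAL_VALUE, 255, cv2.THRESH_BINARY)
    # Connected-component grouping of the above-threshold pixels.
    n, labels, stats, centroids = cv2.connectedComponentsWithStats(mask)
    if n < 2:  # label 0 is the background; no foreground blob found
        return None
    largest = 1 + np.argmax(stats[1:, cv2.CC_STAT_AREA])
    return tuple(centroids[largest])
```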

The location tracking apparatus 200 performs Kalman filtering, corresponding to a prediction-correction algorithm, on the grouped pixels at step S250.
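
Step S250 maps naturally onto OpenCV's Kalman filter with a constant-velocity state model, sketched below; the noise covariances are assumed values that would need tuning for the actual camera:

```python
# Prediction-correction filtering of the (x, y) centroid of the grouped
# pixels with a constant-velocity Kalman filter.
import cv2
import numpy as np

kf = cv2.KalmanFilter(4, 2)  # state: x, y, vx, vy; measurement: x, y
kf.transitionMatrix = np.array([[1, 0, 1, 0],
                                [0, 1, 0, 1],
                                [0, 0, 1, 0],
                                [0, 0, 0, 1]], dtype=np.float32)
kf.measurementMatrix = np.array([[1, 0, 0, 0],
                                 [0, 1, 0, 0]], dtype=np.float32)
kf.processNoiseCov = np.eye(4, dtype=np.float32) * 1e-3   # assumed
kf.measurementNoiseCov = np.eye(2, dtype=np.float32) * 1e-1  # assumed

def filter_point(measured_xy):
    kf.predict()                                  # prediction step
    m = np.array(measured_xy, dtype=np.float32).reshape(2, 1)
    est = kf.correct(m)                           # correction step
    return float(est[0]), float(est[1])
```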

The location tracking apparatus 200 tracks a point indicated by the laser pointer 50 by smoothing the results of Kalman filtering at step S260. Furthermore, the location tracking apparatus 200 designates the tracked point as cross hairs corresponding to a target to be shot at.
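
The patent does not specify the smoothing; an exponential moving average over the Kalman output is one simple possibility, with an assumed smoothing factor:

```python
# Exponential smoothing of the filtered point; ALPHA is an assumption.
ALPHA = 0.4
_prev = None

def smooth(point):
    global _prev
    if _prev is None:
        _prev = point
    # Blend the new estimate with the previous smoothed position.
    _prev = (ALPHA * point[0] + (1 - ALPHA) * _prev[0],
             ALPHA * point[1] + (1 - ALPHA) * _prev[1])
    return _prev
```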

Accordingly, the location tracking apparatus 200 according to the embodiment of the present invention can predict a change in the viewpoint of a user and a change in the target of the laser pointer using the webcam 10 above the screen 30 and the webcam 20 in front of the projector 40 at the same time.

As described above, according to the embodiments of the present invention, the location tracking apparatus can predict a change in the viewpoint of a user and a change in the target of the laser pointer using webcams at the same time.

Although the preferred embodiments of the present invention have been disclosed for illustrative purposes, those skilled in the art will appreciate that various modifications, additions and substitutions are possible, without departing from the scope and spirit of the invention as disclosed in the accompanying claims.

Claims

1. A method in which an apparatus operating in conjunction with a First-Person Shooter (FPS) game tracks a user playing the game and a location of a laser pointer held by the user, the method comprising:

extracting a location, 3-Degrees Of Freedom (3DOF) and orientation of a head of the user by capturing the user from a screen on which the game is displayed using a first webcam, and tracking the user's gaze based on results of the extraction; and
capturing the screen using a second webcam, and tracking a point, indicated by the laser pointer and projected onto the screen, based on the results of the capturing.

2. The method as set forth in claim 1, wherein the user's gaze and the point of the laser pointer are tracked simultaneously.

3. The method as set forth in claim 1, wherein the tracking the user's gaze comprises:

extracting at least one feature point based on the results of the capturing of the head of the user, performed by the first webcam disposed above the screen;
constructing a statistical model by applying the feature point to a machine learning algorithm; and
extracting the location, 3DOF, and orientation of the head of the user based on the statistical model.

4. The method as set forth in claim 3, wherein the feature point comprises at least one of a face color, an eye, a nose, and a mouth of the user.

5. The method as set forth in claim 1, wherein the tracking the point indicated by the laser pointer comprises:

performing calibration on a frame including the point;
measuring a pixel intensity of the calibrated frame; and
comparing the pixel intensity and a critical value, and making a determination of whether to track the point depending on results of the comparison.

6. The method as set forth in claim 5, further comprising, if the pixel intensity is equal to or higher than the critical value:

grouping pixels whose intensities are equal to or higher than the critical value;
performing Kalman filtering on the grouped pixels; and
smoothing results of the Kalman filtering and tracking the point.

7. The method as set forth in claim 5, further comprising, if the pixel intensity is smaller than the critical value, measuring the pixel intensity of the calibrated frame again.

8. The method as set forth in claim 1, wherein in the tracking the point indicated by the laser pointer, the tracked point is designated as cross hairs corresponding to a target in the game to be shot at.

9. A location tracking apparatus, comprising:

a head tracking unit for extracting a location, 3DOF and orientation of a head of a user by capturing the user from a screen on which a game is displayed using a first webcam and tracking the user's gaze based on results of the extraction; and
a pointer tracking unit for tracking a point indicated by a laser pointer projected onto the screen by capturing the screen using a second webcam, and designating the tracked point as cross hairs corresponding to a target to be shot at;
wherein the head tracking unit and the pointer tracking unit track the user's gaze and the point simultaneously.

10. The location tracking apparatus as set forth in claim 9, wherein the head tracking unit constructs a statistical model based on the results of the capturing performed by the first webcam disposed above the screen, and extracts the location, 3DOF, and orientation of the head of the user by recognizing the results of capturing the head of the user based on the statistical model.

11. The location tracking apparatus as set forth in claim 9, wherein the pointer tracking unit performs calibration on a frame including the point, measures a pixel intensity of the calibrated frame, compares the pixel intensity with a critical value, and determines whether to track the point depending on results of the comparison.

12. The location tracking apparatus as set forth in claim 11, wherein if the pixel intensity is equal to or higher than the critical value, the pointer tracking unit groups pixels whose intensities are equal to or greater than the critical value, performs Kalman filtering on the grouped pixels, smoothes the Kalman filtered result, and tracks the point.

13. The location tracking apparatus as set forth in claim 11, wherein if the pixel intensity is smaller than the critical value, the pointer tracking unit measures the pixel intensity of the calibrated frame again.

Patent History
Publication number: 20120165084
Type: Application
Filed: Dec 21, 2011
Publication Date: Jun 28, 2012
Applicant: Electronics and Telecommunications Research Institute (Daejeon-city)
Inventor: Man-Kyu SUNG (Daejeon)
Application Number: 13/332,638