HORSEBACK RIDING SIMULATOR AND METHOD FOR HORSEBACK RIDING SIMULATION

Disclosed is a horseback riding simulator configured to operate in an adaptive manner with respect to a user and a method using the same, the horseback riding simulator including a user identification unit to identify a user through user identification information based on user facial information extracted from an input image, a posture recognition unit to calculate user posture information by extracting information related to each of designated body parts of the user from the input image, a coaching unit to provide the user with instruction in horseback riding posture, based on the user identification information and the user posture information, and a sensory realization unit to control a horseback riding mechanism based on a user-intended posture calculated through analysis of the user posture information. Accordingly, user-customized horseback riding instruction is provided through identification of a user and recognition of a user's horseback riding posture.

Description
CLAIM FOR PRIORITY

This application claims priority to Korean Patent Application No. 10-2012-0108476 filed on Sep. 28, 2012 in the Korean Intellectual Property Office (KIPO), the entire contents of which are hereby incorporated by reference.

BACKGROUND

1. Technical Field

Example embodiments of the present invention relate in general to the field of horseback riding simulation, and more specifically, to a horseback riding simulator for operating in an adaptive manner with respect to a user and a simulation method using the same.

2. Related Art

With the recent trend of an aging society, health is of great interest, and therefore a movement to improve motor skills has become more active. In particular, studies have been conducted on a new IT convergence solution technology to combine a physical fitness promotion technology for personal healthcare and sensory and motor skill improvement with image information processing and robot technology.

The IT convergence solution, which combines healthcare with image processing and analysis technology, uses a camera to acquire gesture or motion information from a user in motion, so that the user's posture may be analyzed and corrected. In addition, the IT convergence solution is used for personal motor learning and physical fitness promotion by providing user-customized coaching functions for different sports.

A technology for detecting a certain object region in an image acquired at a site, and recognizing and tracking that region, has been applied to sports such as golf, skating, and equestrianism (horseback riding) for correcting a user's posture, coaching, and the like, so that a user can exercise with proper posture.

Existing user posture correction and coaching has been applied to golf: image information capturing a golfer's swing is acquired by use of a camera, and the swing is analyzed using the acquired image information. In addition, studies have been conducted on acquiring motion information at a detailed level by attaching sensors to the body, and on analyzing and correcting posture information based on the acquired motion information.

In particular, as the focus of the sports industry shifts from golf to horseback riding, a few developers have suggested methods of coaching a user's horseback riding posture using a horseback riding simulator. However, a systematic method of teaching horseback riding posture using such a simulator is still needed.

SUMMARY

Accordingly, example embodiments of the present invention are provided to substantially obviate one or more problems due to limitations and disadvantages of the related art.

Example embodiments of the present invention provide a horseback riding simulator for providing user-customized teaching as well as a realistic horseback riding simulation through identification of a user and recognition of a user's horseback riding posture.

Example embodiments of the present invention also provide a horseback riding simulation method for providing user-customized teaching and a realistic horseback riding simulation through identification of a user and recognition of a user's horseback riding posture.

In some example embodiments, a horseback riding simulator includes a user identification unit, a posture recognition unit and a coaching unit. The user identification unit may be configured to identify a user through user identification information based on user facial information that is extracted from an input image. The posture recognition unit may be configured to calculate user posture information by extracting information related to each of designated body parts of the user from the input image. The coaching unit may be configured to provide the user with instruction in horseback riding posture, based on the user identification information and the user posture information.

The horseback riding simulator may further include a sensory realization unit configured to control a horseback riding mechanism based on a user-intended posture that is calculated through analysis of the user posture information.

The user identification unit may include a face detection unit and a facial feature extraction unit. The face detection unit may be configured to extract a facial region of a rectangular shape by recognizing a front of a user's face from the input image. The facial feature extraction unit may be configured to extract a facial feature point based on a contour of the user's face located in the facial region, and to calculate the user identification information using a facial feature vector that is based on directivity characteristics and histogram distribution characteristics of pixels distributed in a local area that is set in advance on the basis of the facial feature point.

The posture recognition unit may include a user region extraction unit and a posture information calculation unit. The user region extraction unit may be configured to extract a user region from the input image. The posture information calculation unit may be configured to divide the user region and calculate the user posture information, which includes information about a position, an angle and a movement of each of the designated body parts of the user, by use of profile information relative to the front of the user's face.

The coaching unit may include a movement data storage unit, a model matching unit, and a horseback riding instruction unit. The movement data storage unit may be configured to store a user movement model including the user identification information and the user posture information. The model matching unit may be configured to calculate a posture error by matching the user movement model to a standard movement model. The horseback riding instruction unit may be configured to provide the user with instruction in horseback riding posture through speech or images, based on the posture error.

The standard movement model may include information related to a standard horseback riding posture according to a gait mode of a horseback riding mechanism.

The horseback riding instruction unit may be configured to provide the user with a warning message if the posture error is equal to or greater than a predetermined value.

The sensory realization unit may include an intended posture calculation unit and a mechanism control unit. The intended posture calculation unit may be configured to calculate the user-intended posture by analyzing a gait pattern based on the user posture information. The mechanism control unit may be configured to control the horseback riding mechanism according to a gait mode including a walk, a canter and a trot while corresponding to the user-intended posture.

In other example embodiments, a method for horseback riding simulation using a horseback riding simulator includes identifying a user through user identification information based on user facial information extracted from an input image; calculating user posture information by extracting information related to each of designated body parts of the user from the input image; and providing the user with instruction in horseback riding posture, based on the user identification information and the user posture information.

As is apparent from the above description, in a case in which a horseback riding simulator and a method for horseback riding simulation are used, user-customized horseback riding instruction can be provided through identification of a user and recognition of a user's horseback riding posture.

In addition, the mode of a horseback riding mechanism is converted so as to correspond to a user-intended posture, thereby enabling the user and the horseback riding mechanism to respond to each other.

BRIEF DESCRIPTION OF DRAWINGS

Example embodiments of the present invention will become more apparent by describing in detail example embodiments of the present invention with reference to the accompanying drawings, in which:

FIG. 1 is a conceptual diagram describing operation of a horseback riding simulator according to an example embodiment of the present invention;

FIG. 2 is a block diagram illustrating a configuration of the horseback riding simulator according to an example embodiment of the present invention;

FIG. 3 is a flowchart describing identification of a user and calculation of user posture information according to an example embodiment of the present invention;

FIG. 4 is a conceptual diagram describing calculation of user posture information according to an example embodiment of the present invention; and

FIG. 5 is a flowchart describing a method for horseback riding simulation according to an example embodiment of the present invention.

DESCRIPTION OF EXAMPLE EMBODIMENTS

Example embodiments of the present invention are disclosed herein. However, specific structural and functional details disclosed herein are merely representative for purposes of describing example embodiments of the present invention. Example embodiments of the present invention may be embodied in many alternate forms and should not be construed as limited to the example embodiments set forth herein.

Accordingly, while the invention is susceptible to various modifications and alternative forms, specific embodiments thereof are shown by way of example in the drawings and will be described in detail herein. It should be understood, however, that there is no intent to limit the invention to the particular forms disclosed, but on the contrary, the invention is to cover all modifications, equivalents, and alternatives falling within the spirit and scope of the invention. Like reference numerals refer to like elements throughout the description of the figures.

It will be understood that, although the terms first, second, etc. may be used herein to describe various elements, these elements should not be limited by these terms. These terms are only used to distinguish one element from another. For example, a first element could be termed a second element, and, similarly, a second element could be termed a first element, without departing from the scope of the present invention. As used herein, the term “and/or” includes any and all combinations of one or more of the associated listed items.

It will be understood that when an element is referred to as being “connected” or “coupled” to another element, it can be directly connected or coupled to the other element or intervening elements may be present. In contrast, when an element is referred to as being “directly connected” or “directly coupled” to another element, there are no intervening elements present. Other words used to describe the relationship between elements should be interpreted in a like fashion (i.e., “between” versus “directly between”, “adjacent” versus “directly adjacent”, etc.).

The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the invention. As used herein, the singular forms “a”, “an” and “the” are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will be further understood that the terms “comprises”, “comprising”, “includes” and/or “including”, when used herein, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.

Unless otherwise defined, all terms (including technical and scientific terms) used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs. It will be further understood that terms, such as those defined in commonly used dictionaries, should be interpreted as having a meaning that is consistent with their meaning in the context of the relevant art and will not be interpreted in an idealized or overly formal sense unless expressly so defined herein.

It should also be noted that in some alternative implementations, the functions/acts noted in the blocks may occur out of the order noted in the flowcharts. For example, two blocks shown in succession may in fact be executed substantially concurrently or the blocks may sometimes be executed in the reverse order, depending upon the functionality/acts involved.

Hereinafter, embodiments of the present invention will be described in detail with reference to the accompanying drawings. For clarity, elements that appear in more than one drawing or in more than one place in the description are consistently denoted by the same respective reference numerals, and such elements are only described in detail once, to avoid redundancy.

FIG. 1 is a conceptual diagram describing operation of a horseback riding simulator according to an example embodiment of the present invention.

Referring to FIG. 1, when a user uses a horseback riding mechanism 20, a horseback riding simulator according to an example embodiment of the present invention may identify the user and calculate information related to the user's posture by use of images acquired through multiple visual sensors 10.

The user, by using the horseback riding mechanism 20, may experience the sensation of riding a real horse. That is, the horseback riding mechanism 20 may support operation modes according to various gait patterns to provide a user with various exercise effects that may be obtained through actual horseback riding.

The visual sensor 10 may be located at a front side or a lateral side of the horseback riding mechanism 20 to acquire images of a user of the horseback riding mechanism 20, taken from various directions. In addition, the visual sensor 10 may be located at different distances from the horseback riding mechanism 20, allowing images of a user of the horseback riding mechanism 20 to be acquired from various distances. A plurality of visual sensors 10 according to an example embodiment of the present invention may be installed, and there is no particular limitation on the number of the visual sensors 10.

The horseback riding simulator according to an example embodiment of the present invention may provide horseback riding posture instruction suitable for a user by interoperating with the horseback riding mechanism 20 and the visual sensors 10.

For example, the horseback riding simulator may identify the user by receiving various images acquired from the visual sensors 10. In addition, the horseback riding simulator may extract information related to a posture of the user by receiving the various images acquired from the visual sensors 10, and by use of the extracted information, may provide the user with customized horseback riding instruction.

In addition, the horseback riding simulator may calculate a posture intended by the user, by analyzing the information related to the posture of the user, and may control the horseback riding mechanism 20 according to the posture intended by the user.

Accordingly, the horseback riding simulator according to an example embodiment of the present invention may provide a horseback riding simulation that is specified by the individual user, and by analyzing the horseback riding posture of the user, may provide user-customized horseback riding instruction. In addition, the horseback riding simulator may automatically convert an operation mode of the horseback riding mechanism 20 by predicting a movement of the user by use of the information related to the posture of the user.

FIG. 2 is a block diagram illustrating a configuration of the horseback riding simulator according to an example embodiment of the present invention.

Referring to FIG. 2, the horseback riding simulator according to an example embodiment of the present invention includes a user identification unit 100, a posture recognition unit 200, a coaching unit 300, and a sensory realization unit 400.

The user identification unit 100 may identify the user by use of user identification information according to facial information about the user extracted from an input image. The input image may represent at least one image acquired by the multiple visual sensors 10. In particular, an input image acquired by one of the visual sensors 10 located at a front side of the horseback riding mechanism 20 may include the facial information about the user.

The user identification unit 100 may include a face detection unit 110 and a facial feature extraction unit 120. The face detection unit 110 may extract a facial region of a rectangular shape by recognizing the front of the user's face from the input image. In addition, the facial feature extraction unit 120 may extract a facial feature point based on a contour of the user's face located in the facial region, and calculate the user identification information by using a facial feature vector that is based on directivity and histogram distribution characteristics of pixels distributed in a local area that is set in advance on the basis of the facial feature point.

For example, the face detection unit 110 may extract a facial region from an input image by use of a facial detection algorithm. The facial region may be extracted as a rectangular shape. A plurality of rectangular regions represented around the facial region may be extracted, and the facial region may be extracted based on a position of an average of the plurality of rectangular regions.

The facial feature extraction unit 120 may extract a facial feature point at a facial contour, such as the eyes, nose and mouth, in the facial region, and may calculate a facial feature vector according to characteristics related to the directivity of pixels distributed in a predetermined local area around each facial feature point, as well as histogram characteristics related to the distribution of those pixels. That is, the facial feature extraction unit 120 may identify a user by use of the facial feature vector.

Accordingly, the user identification unit 100 may store user identification information based on a facial feature vector for each user in advance, and may identify a user by comparing the stored user identification information with calculated user identification information. In addition, the user identification unit 100 may update the stored user identification information by use of newly calculated user identification information.
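
For illustration only, the comparison of stored identification information with newly calculated identification information may be sketched as follows. The function names, the cosine-similarity measure, and the 0.9 threshold are assumptions introduced here for the sketch, not details of the disclosure.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def identify_user(stored, query, threshold=0.9):
    """Return the user ID whose stored facial feature vector best matches
    `query`, or None if no stored vector is similar enough."""
    best_id, best_sim = None, threshold
    for user_id, vec in stored.items():
        sim = cosine_similarity(vec, query)
        if sim >= best_sim:
            best_id, best_sim = user_id, sim
    return best_id
```

In this sketch, a query vector sufficiently similar to a stored vector identifies the corresponding user, and an unmatched query returns None, which could trigger enrollment of a new user and, as described above, the stored vector of a matched user could be updated with the newly calculated one.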

The posture recognition unit 200 may calculate user posture information by extracting information related to each of designated body parts of the user from the input image.

Since the input image includes both an image of the user and an image of the background, only the user region, that is, the image of the user, is extracted from the input image.

In addition, the extracted user region is divided to extract information about the position or movement of each of the user's body parts, thereby calculating information about the user's horseback riding posture. That is, even though the horseback riding posture varies with the gait type of a horse, such as a walk, a canter or a trot, the user's horseback riding posture may be recognized by extracting the positions of the user's face, hands and feet along with a profile of the user.

The posture recognition unit 200 may include a user region extraction unit 210 and a posture information calculation unit 220. The user region extraction unit 210 may extract a user region from an input image. In addition, the posture information calculation unit 220 may calculate the user posture information, which includes a position, an angle and a movement of each of designated body parts of the user, by dividing the user region and using profile information of one side of the user's face.

For example, a user region as well as a designated body part of a user, such as hands and feet, may be extracted from an input image through a background image generation and update. The user region and the designated body part of the user may be extracted from an image acquired by use of the visual sensor 10 that is installed at a lateral side of the user.

In addition, whether the user's eyes are oriented in the same direction as the movement of the horse may be checked depending on whether the front of the user's face is detected. The user posture information according to a gait pattern of the horse may be detected by extracting, through the user's profile, the angles formed by the user's waist and back with respect to the horse.

In addition, the posture of a user according to a gait pattern of a horse may be detected by detecting information about the positions and movement of hands and feet of the user.
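
For illustration, the angle of the user's back relative to vertical may be derived from two profile keypoints, such as a shoulder point and a hip point, as sketched below. The keypoint names, the coordinate convention (image coordinates with y increasing downward), and the 10-degree threshold are hypothetical assumptions, not part of the disclosure.

```python
import math

def trunk_angle_deg(shoulder, hip):
    """Angle of the hip-to-shoulder line, measured from vertical, in
    degrees; keypoints are (x, y) tuples in image coordinates."""
    dx = shoulder[0] - hip[0]
    dy = hip[1] - shoulder[1]  # shoulder is normally above the hip
    return math.degrees(math.atan2(dx, dy))

def leaning_forward(shoulder, hip, threshold_deg=10.0):
    """True if the trunk is inclined forward beyond the threshold."""
    return trunk_angle_deg(shoulder, hip) > threshold_deg
```

A per-frame sequence of such angles, together with hand and foot positions, would give the position, angle and movement components of the user posture information described above.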

The coaching unit 300 may provide a user with instruction in horseback riding posture, based on the user identification information and the user posture information. That is, the coaching unit 300 may check whether a horseback riding posture is proper posture, according to the gait pattern of a horse, by recognizing the user posture information.

The coaching unit 300 may provide a user with instruction in horseback riding posture by allowing information about a position and angle of a designated body part of a user to be represented together with information about a position and angle of a designated body part of a specialist according to a standard horseback riding posture.

The coaching unit 300 may include a movement data storage unit 310, a model matching unit 320 and a horseback riding instruction unit 330.

In detail, the movement data storage unit 310 may store a user movement model including the user identification information and the user posture information, the model matching unit 320 may calculate a posture error by matching the user movement model to a standard movement model, and the horseback riding instruction unit 330 may provide a user with instruction in horseback riding posture through speech or images based on the posture error. The standard movement model may include information about a standard horseback riding posture according to a gait mode of the horseback riding mechanism 20.

In addition, in the case in which the posture error is equal to or greater than a predetermined value, the horseback riding instruction unit 330 may provide the user with a warning message. For example, when estimation of the matching error between the user movement model and a standard movement model generated from a specialist's horseback riding posture indicates an unexpected event, the horseback riding instruction unit 330 delivers a warning message to the user, thereby providing horseback riding instruction in a safe environment.
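
The error calculation and warning decision may be sketched with a toy error measure, as follows. The representation of a movement model as per-body-part angles, the root-mean-square error, and the 15-degree warning threshold are illustrative assumptions; the disclosure does not fix a particular error measure.

```python
import math

WARNING_THRESHOLD = 15.0  # degrees; illustrative value only

def posture_error(user_model, standard_model):
    """RMS angle difference over the body parts shared by both models,
    each model being a dict mapping body-part name to an angle in degrees."""
    parts = user_model.keys() & standard_model.keys()
    if not parts:
        return 0.0
    sq = sum((user_model[p] - standard_model[p]) ** 2 for p in parts)
    return math.sqrt(sq / len(parts))

def coach(user_model, standard_model):
    """Return "warning" if the posture error reaches the threshold,
    otherwise "ok" (standing in for normal speech/image instruction)."""
    err = posture_error(user_model, standard_model)
    return "warning" if err >= WARNING_THRESHOLD else "ok"
```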

In addition, the coaching unit 300 may interoperate with an output device to provide a user with instruction, and the output device may include an audio device or a display device.

The sensory realization unit 400 may control the horseback riding mechanism 20, based on a user-intended posture calculated through analysis of the user posture information. That is, in order to achieve a realistic simulation, the sensory realization unit 400 may convert a mode of the horseback riding mechanism 20 (a gait pattern of a horse) so as to correspond to the user-intended posture calculated through analysis of the user posture information.

In detail, the sensory realization unit 400 may include an intended posture calculation unit 410 and a mechanism control unit 420. The intended posture calculation unit 410 may calculate the user-intended posture by analyzing a gait pattern based on user posture information, and the mechanism control unit 420 may control the horseback riding mechanism 20 according to a gait mode including a walk, a canter and a trot, while corresponding to the user-intended posture.

For example, the intended posture calculation unit 410 may receive user posture information from the posture recognition unit 200 and analyze the received user posture information. That is, the intended posture calculation unit 410 may recognize the intention of a user, such as a stop, deceleration, acceleration or change in direction of the horseback riding mechanism 20, by analyzing the user posture information.

In addition, the mechanism control unit 420 may automatically convert a mode of the horseback riding mechanism 20, through recognition of user's intention. That is, the mechanism control unit 420 may interoperate with the horseback riding mechanism 20.
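
The mode conversion may be sketched as a transition over an ordered list of modes driven by the recognized intention. The ordering used here (walk, then trot, then canter, since the trot is the intermediate natural gait) and the intention names are assumptions for illustration; the disclosure lists the modes as a walk, a canter and a trot without specifying transitions.

```python
GAITS = ["stop", "walk", "trot", "canter"]  # assumed ordering by speed

def next_gait(current, intention):
    """Return the gait mode after applying the recognized user intention."""
    i = GAITS.index(current)
    if intention == "stop":
        return "stop"
    if intention == "accelerate":
        return GAITS[min(i + 1, len(GAITS) - 1)]
    if intention == "decelerate":
        return GAITS[max(i - 1, 0)]
    return current  # unknown intention: keep the current mode
```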

Although the user identification unit 100, the posture recognition unit 200, the coaching unit 300 and the sensory realization unit 400 according to an example embodiment of the present invention are illustrated as being independent of one another in the above description, the user identification unit 100, the posture recognition unit 200, the coaching unit 300 and the sensory realization unit 400 may be provided as a single physical device. In addition, each of the user identification unit 100, the posture recognition unit 200, the coaching unit 300 and the sensory realization unit 400 may be embodied as a plurality of physical devices or groups, rather than a single physical device or group.

FIG. 3 is a flowchart describing identification of a user and calculation of user posture information according to an example embodiment of the present invention, and FIG. 4 is a conceptual diagram describing calculation of the user posture information according to an example embodiment of the present invention.

Referring to FIGS. 3 and 4, according to an example embodiment of the present invention, a user is identified and a posture of the user is recognized by having an image acquired through the multiple visual sensors 10 as an input image.

The input image includes an image of the user (a user region) and an image of the background (a background region). Accordingly, in order to extract the user region, the background image is acquired, and a difference method is performed on the background image and the input image acquired through the visual sensor 10, thereby extracting the user region.
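
A minimal version of such a difference method may be sketched as follows using NumPy; grayscale 8-bit images and the threshold value of 25 are illustrative assumptions.

```python
import numpy as np

def extract_user_mask(frame, background, threshold=25):
    """Boolean mask that is True where the input frame differs from the
    stored background image by more than `threshold` gray levels."""
    # Cast to a signed type so the subtraction cannot wrap around.
    diff = np.abs(frame.astype(np.int16) - background.astype(np.int16))
    return diff > threshold
```

In practice the background image would also be updated over time, as noted in the description of FIG. 2, so that slow lighting changes are not mistaken for the user.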

First, a procedure to identify a user is as follows.

Facial information about a user is acquired from a region corresponding to a head part of the user in the user region to identify the user.

Here, the image characteristics of the input image may be distorted by changes in the surrounding lighting, so image pre-processing may be performed on the input image.

A facial region of the user may be extracted from the pre-processed image, and user identification information may be calculated; this information may be represented as a facial feature vector derived from a facial feature point based on the contour of the user's face located in the extracted facial region.

Thereafter, in a procedure to calculate user posture information, a user region is divided such that a designated body part of the user is recognized, and information about the position, angle and movement of the recognized designated body part of the user may be calculated.

For example, in the user region, the front of the user's face is detected by use of an image acquired by the visual sensor 10 located at the front side of the horseback riding mechanism 20, and by using the detected front of the face, it is checked whether the user is looking forward.

In addition, the positions of the hands and feet, as well as a profile of the user corresponding to the back and the waist, may be extracted by use of the visual sensor 10 located at the lateral side of the horseback riding mechanism 20, thereby extracting the user posture information.

FIG. 5 is a flowchart describing a method for horseback riding simulation according to an example embodiment of the present invention.

Referring to FIG. 5, a method for horseback riding simulation according to an example embodiment of the present invention includes identifying a user (100), calculating user posture information (200), providing the user with instruction in horseback riding posture (300), and controlling the horseback riding mechanism 20 (400).

In identifying the user (100), the user may be identified through user identification information according to facial information about the user extracted from an input image.

A facial region of a rectangular shape may be extracted by recognizing the front of the user's face from the input image. In addition, a facial feature point based on a contour of the user's face located in the facial region may be extracted, and the user identification information may be calculated by use of a facial feature vector that is based on directivity characteristics and histogram distribution characteristics of pixels distributed in a local area that is set in advance on the basis of the facial feature point.

In detail, a facial image for identification of a user may be acquired using the visual sensor 10 installed at the front of the user.

Image pre-processing technology is applied to reduce the influence of the surrounding lighting, a front region of the user's face is detected by use of a facial detection algorithm, and a normalization process is performed so that facial images input in various sizes become uniform in size.

In order to achieve robust facial recognition even when facial characteristics change according to lighting and time of day, a facial feature point lying on a contour of the user's face, such as the eyes, nose and mouth, is extracted, and a facial feature vector is extracted based on the directivity characteristics of pixels distributed in a local area around the facial feature point, as well as on a histogram representing the distribution of those pixels.
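
The described feature, combining the directivity of local pixels with a histogram of their distribution, resembles a gradient orientation histogram. A minimal sketch of such a descriptor is given below; the patch size, the bin count of 8, and the L2 normalization are assumptions for illustration, not the disclosed feature.

```python
import numpy as np

def orientation_histogram(patch, bins=8):
    """Histogram of gradient orientations over a local image patch,
    weighted by gradient magnitude and L2-normalized -- a minimal
    HOG-style descriptor of pixel directivity and distribution."""
    gy, gx = np.gradient(patch.astype(float))
    mag = np.hypot(gx, gy)                     # gradient magnitude
    ang = np.mod(np.arctan2(gy, gx), np.pi)    # orientations in [0, pi)
    hist, _ = np.histogram(ang, bins=bins, range=(0.0, np.pi), weights=mag)
    norm = np.linalg.norm(hist)
    return hist / norm if norm > 0 else hist
```

Concatenating such histograms over the local areas around each facial feature point would yield one feature vector per face, which the identification step can then compare against stored vectors.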

In calculating the user posture information (200), the user posture information may be calculated by extracting information related to each of designated body parts of the user from an input image.

A user region may be extracted from the input image. The user region may be divided, and the user posture information including information about a position, an angle and a movement of each of the designated body parts of the user may be calculated using profile information relative to the front of the user's face.

For example, in order to detect a user region that is distinguished from a background region, a background image is acquired, and a difference method may be performed on the background image and an input image acquired by the visual sensor 10.

A front of the face of a user may be detected from a user region of an image acquired through the visual sensor 10 installed at a front side. In addition, a designated body part of the user, such as hands and feet, may be extracted from a user region of an image acquired through the visual sensor 10 installed at a lateral side, and a profile of the user corresponding to a back and a waist of the user may be extracted.

Whether the head of the user is upright and the eyes of the user are oriented in the gait direction of the horse may be detected from the extracted front region of the user's face. Whether the user holds the waist erect, or bends the waist at a predetermined angle according to the gait pattern of the horse, may be checked by use of the user profile information.
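The waist-erectness check could be sketched from profile landmarks as the angle formed at the waist by the shoulder and hip: an angle near 180° means the torso is held straight. The landmark names, tolerance and function names are illustrative assumptions.

```python
import math

def waist_angle(shoulder, waist, hip):
    # angle (degrees) at the waist between the waist->shoulder and
    # waist->hip segments; points are (x, y) profile landmarks
    v1 = (shoulder[0] - waist[0], shoulder[1] - waist[1])
    v2 = (hip[0] - waist[0], hip[1] - waist[1])
    dot = v1[0] * v2[0] + v1[1] * v2[1]
    cos = dot / (math.hypot(*v1) * math.hypot(*v2))
    return math.degrees(math.acos(max(-1.0, min(1.0, cos))))

def posture_flags(angle_deg, erect_tolerance=10.0):
    # erect if the torso deviates from a straight line by no more
    # than the tolerance; also report the bend in degrees
    return {"erect": abs(180.0 - angle_deg) <= erect_tolerance,
            "bend": 180.0 - angle_deg}
```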

In providing the user with instruction in horseback riding posture (300), the user is given instruction regarding horseback riding posture based on the user identification information and the user posture information.

A user movement model including the user identification information and the user posture information may be acquired, and a posture error may be calculated by matching the user movement model to a standard movement model. In addition, instruction in horseback riding posture may be provided to the user through speech or images, based on the posture error. A warning message may be provided to the user if the posture error is equal to or greater than a predetermined value.
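The matching and warning logic above can be sketched as follows, assuming the movement models are dictionaries of joint angles and the posture error is a root-mean-square deviation; the error metric, threshold and messages are assumptions, as the embodiment specifies none of them.

```python
import math

def posture_error(user_model, standard_model):
    # RMS deviation across corresponding entries (e.g. joint angles)
    # of the user movement model and the standard movement model
    diffs = [user_model[k] - standard_model[k] for k in standard_model]
    return math.sqrt(sum(d * d for d in diffs) / len(diffs))

def coach(user_model, standard_model, warn_threshold=15.0):
    # issue a warning when the posture error reaches the threshold
    err = posture_error(user_model, standard_model)
    if err >= warn_threshold:
        return "warning: posture deviates significantly from the standard"
    return "posture within tolerance"
```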

In controlling the horseback riding mechanism 20 (400), the horseback riding mechanism 20 may be controlled according to a gait mode corresponding to a user-intended posture calculated through analysis of the user posture information.

The user-intended posture may be calculated by analyzing a gait pattern based on the user posture information, and the horseback riding mechanism 20 may be controlled according to a gait mode including a walk, a canter and a trot while corresponding to the user-intended posture.
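The mapping from a user-intended posture to a gait mode could be sketched as a simple rule table over posture cues such as forward lean and rein tension. These cues, thresholds and the three-way mapping are illustrative assumptions, not the embodiment's disclosed control law.

```python
def select_gait_mode(lean_deg, rein_tension):
    # map inferred rider posture cues to a gait mode of the
    # horseback riding mechanism: walk, trot or canter
    if lean_deg > 20 and rein_tension < 0.3:
        return "canter"   # strong forward lean with loose reins
    if lean_deg > 10:
        return "trot"
    return "walk"
```

The sensory realization unit would then drive the mechanism's actuators according to the selected gait mode.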

The method for horseback riding simulation according to the example embodiment of the present invention may be realized through the horseback riding simulator described above, and redundant description of its components is omitted here.

The horseback riding simulator and the method for horseback riding simulation according to the example embodiment of the present invention can provide user-customized instruction through identification of the user and recognition of the user's horseback riding posture, by use of the multiple visual sensors. In addition, the operation mode of the horseback riding mechanism is switched to correspond to the user-intended posture, thereby enabling interaction between the user and the horseback riding mechanism.

While the example embodiments of the present invention and their advantages have been described in detail, it should be understood that various changes, substitutions and alterations may be made herein without departing from the scope of the invention.

Claims

1. A horseback riding simulator comprising:

a user identification unit configured to identify a user through user identification information based on user facial information that is extracted from an input image;
a posture recognition unit configured to calculate user posture information by extracting information related to each of designated body parts of the user from the input image; and
a coaching unit configured to provide the user with instruction in horseback riding posture, based on the user identification information and the user posture information.

2. The horseback riding simulator of claim 1, further comprising a sensory realization unit configured to control a horseback riding mechanism based on a user-intended posture that is calculated through analysis of the user posture information.

3. The horseback riding simulator of claim 1, wherein the user identification unit comprises:

a face detection unit configured to extract a facial region of a rectangular shape by recognizing a front of a user's face from the input image; and
a facial feature extraction unit configured to extract a facial feature point based on a contour of the user's face located in the facial region, and to calculate the user identification information using a facial feature vector that is based on directivity characteristics and histogram distribution characteristics of pixels distributed in a local area that is set in advance on the basis of the facial feature point.

4. The horseback riding simulator of claim 1, wherein the posture recognition unit comprises:

a user region extraction unit configured to extract a user region from the input image; and
a posture information calculation unit configured to divide the user region and calculate the user posture information, which includes information about a position, an angle and a movement of each of the designated body parts of the user, by use of profile information relative to the front of the user's face.

5. The horseback riding simulator of claim 1, wherein the coaching unit comprises:

a movement data storage unit configured to store a user movement model including the user identification information and the user posture information;
a model matching unit configured to calculate a posture error by matching the user movement model to a standard movement model; and
a horseback riding instruction unit configured to provide the user with instruction in horseback riding posture through speech or images, based on the posture error.

6. The horseback riding simulator of claim 5, wherein the standard movement model includes information related to a standard horseback riding posture according to a gait mode of a horseback riding mechanism.

7. The horseback riding simulator of claim 5, wherein the horseback riding instruction unit is configured to provide the user with a warning message if the posture error is equal to or greater than a predetermined value.

8. The horseback riding simulator of claim 2, wherein the sensory realization unit comprises:

an intended posture calculation unit configured to calculate the user-intended posture by analyzing a gait pattern based on the user posture information; and
a mechanism control unit configured to control the horseback riding mechanism according to a gait mode including a walk, a canter and a trot while corresponding to the user-intended posture.

9. A method for horseback riding simulation using a horseback riding simulator, the method comprising:

identifying a user through user identification information based on user facial information that is extracted from an input image;
calculating user posture information by extracting information related to each of designated body parts of the user from the input image; and
providing the user with instruction in horseback riding posture, based on the user identification information and the user posture information.

10. The method of claim 9, further comprising controlling a horseback riding mechanism according to a gait mode corresponding to a user-intended posture that is calculated through analysis of the user posture information.

11. The method of claim 9, wherein the identifying of the user comprises:

extracting a facial region of a rectangular shape by recognizing a front of a user's face from the input image; and
extracting a facial feature point based on a contour of the user's face located in the facial region, and calculating the user identification information by use of a facial feature vector that is based on directivity characteristics and histogram distribution characteristics of pixels distributed in a local area that is set in advance on the basis of the facial feature point.

12. The method of claim 9, wherein the calculating of the user posture information comprises:

extracting a user region from the input image; and
dividing the user region and calculating the user posture information, which includes information about a position, an angle and a movement of each of the designated body parts of the user, by use of profile information relative to the front of the user's face.

13. The method of claim 9, wherein the providing of instruction in horseback riding posture to the user comprises:

acquiring a user movement model including the user identification information and the user posture information, and calculating a posture error by matching the user movement model to a standard movement model; and
providing the user with instruction in horseback riding posture through speech or images, based on the posture error.

14. The method of claim 13, wherein in the providing of instruction in horseback riding posture to the user, a warning message is provided to the user if the posture error is equal to or greater than a predetermined value.

Patent History
Publication number: 20140093851
Type: Application
Filed: Sep 19, 2013
Publication Date: Apr 3, 2014
Applicant: Electronics & Telecommunications Research Institute (Daejeon)
Inventors: Kye Kyung KIM (Daegu), Sang Seung KANG (Daejeon), Woo Han YUN (Daejeon), Su Young CHI (Daejeon), Jae Hong KIM (Daejeon), Jong Hyun PARK (Daejeon)
Application Number: 14/031,531
Classifications
Current U.S. Class: Physical Education (434/247)
International Classification: A63B 69/04 (20060101);