Motion Analysis Device

- Yamaha Corporation

A motion analysis device includes an observation data acquisition unit that acquires observation data indicating a trajectory of a target observation point which moves in conjunction with motion of a user (for example, a specified point in a club used by the user); a comparison unit that compares each of a plurality of reference data indicating a predetermined trajectory of the target observation point with the observation data acquired by the observation data acquisition unit; and an audio control unit that generates an audio signal according to a comparison result obtained by using the comparison unit.

Description
BACKGROUND OF THE INVENTION

1. Field of the Invention

The present invention relates to a technology for analyzing a motion of a user. Priority is claimed on Japanese Patent Application No. 2012-279501, filed on Dec. 21, 2012, the content of which is incorporated herein by reference.

2. Description of Related Art

In the related art, various technologies have been proposed in order to analyze a motion of a user. For example, Japanese Unexamined Patent Application, First Publication No. H06-39070 discloses a technology which displays a moving image of a swing motion of the user on the same screen, concurrently with a pre-recorded moving image of a reference swing motion (for example, a swing motion of a professional golfer).

SUMMARY OF THE INVENTION

According to the technology disclosed in Japanese Unexamined Patent Application, First Publication No. H06-39070, the user analyzes their own swing motion by visually comparing it with the reference swing motion. In practice, however, it is difficult to compare the motion of the user with the reference motion accurately and precisely, and thus to understand the difference between the two motions, while visually checking the moving images on the screen. In view of the above-described circumstances, the present invention aims to enable the user to easily understand the difference between the motion of the user and the reference motion.

In order to solve the above-described problem, a motion analysis device of the present invention includes observation data acquisition means for acquiring observation data which indicates a trajectory of a target observation point moving in conjunction with a motion of a user; comparison means for comparing reference data which indicates a predetermined trajectory of the target observation point with the observation data acquired by the observation data acquisition means; and audio control means for generating an audio signal according to a comparison result from the comparison means.

According to the present invention, the audio signal can be generated according to the comparison result between the observation data and the reference data. Therefore, the user can easily understand a difference between the trajectory of the target observation point and the predetermined trajectory indicated by the reference data.

BRIEF DESCRIPTION OF THE DRAWINGS

FIG. 1 is an external view illustrating a motion analysis system according to a first embodiment of the present invention.

FIG. 2 is a block diagram illustrating the motion analysis system according to the first embodiment of the present invention.

FIG. 3 illustrates a target observation point according to the first embodiment of the present invention.

FIG. 4 is a schematic diagram illustrating observation data according to the first embodiment of the present invention.

FIG. 5 is a schematic diagram illustrating a reference data series according to the first embodiment of the present invention.

FIG. 6 is a schematic diagram illustrating comparison data according to the first embodiment of the present invention.

FIG. 7 is a flowchart illustrating a comparison process performed by a comparison unit according to the first embodiment of the present invention.

FIG. 8 is a graph illustrating an audio signal generated by an audio control unit according to the first embodiment of the present invention.

FIG. 9 is a graph illustrating an audio signal generated by an audio control unit according to a second embodiment of the present invention.

FIG. 10 is a block diagram illustrating a motion analysis system according to a third embodiment of the present invention.

DETAILED DESCRIPTION OF THE INVENTION

First Embodiment

FIG. 1 is an external view of a motion analysis system 100 according to a first embodiment of the present invention. FIG. 2 is a block diagram of the motion analysis system 100.

As illustrated in FIGS. 1 and 2, the motion analysis system 100 includes a motion analysis device 10 and an acceleration sensor 20. The motion analysis device 10 analyzes a motion of a user U and notifies the user U of an analysis result, and is preferably used when practicing a specific action in various sports. The motion analysis device 10 of the first embodiment analyzes a motion of the user U swinging a golf club C (hereinafter, referred to as a “swing motion”). More specifically, the motion analysis device 10 analyzes movement of a point P which moves in conjunction with the swing motion of the user U (hereinafter, referred to as a “target observation point”). The target observation point P of the first embodiment is a specific point in the club C used by the user U.

More specifically, as illustrated in FIG. 3, a tip portion of a grip Cg fixed to a shaft Cs of the club C (the end portion on the head Ch side) is set to be the target observation point P. Other points of the club C (for example, a point on the head Ch or on the shaft Cs) or a point within the body of the user U which moves in conjunction with the swing motion can also be set to be the target observation point P.

The acceleration sensor 20 in FIGS. 1 and 2 is a detector which detects movement of the target observation point P (the swing motion of the user U), and sequentially generates a sensor output Da corresponding to the movement of the target observation point P at a predetermined cycle. As illustrated in FIG. 3, the acceleration sensor 20 of the present embodiment is a three-axis acceleration sensor that detects acceleration in each direction of three axes (X-axis, Y-axis and Z-axis) which are fixed to the target observation point P and are orthogonal to one another. The Z-axis is an axis which is parallel to a longitudinal direction of the shaft Cs of the club C. The X-axis and the Y-axis are axes on a plane which is orthogonal to the Z-axis. One sensor output Da is configured to include acceleration Ax in the X-axis direction, acceleration Ay in the Y-axis direction and acceleration Az in the Z-axis direction. Each sensor output Da which is sequentially generated by the acceleration sensor 20 is transmitted to the motion analysis device 10 in a time-series manner. The acceleration sensor 20 and the motion analysis device 10 perform data communication with each other in a wireless manner, but may also perform the data communication by wire.
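
As a rough illustration, one sensor output Da can be modeled as a triple of per-axis accelerations. The sketch below is a minimal Python representation; the type and field names are illustrative assumptions, not part of the patent.

```python
from typing import NamedTuple

class SensorOutput(NamedTuple):
    """One sensor output Da (illustrative model): accelerations along the
    X-, Y- and Z-axes fixed to the target observation point P."""
    ax: float  # acceleration Ax in the X-axis direction
    ay: float  # acceleration Ay in the Y-axis direction
    az: float  # acceleration Az in the Z-axis direction

# Example: one output generated at the predetermined cycle
da = SensorOutput(ax=0.12, ay=-0.03, az=9.81)
```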

As illustrated in FIG. 2, the motion analysis device 10 is realized by a computer system which includes an arithmetic processing unit 12, a storage device 14 and a sound emitting device 16. The storage device 14 stores programs executed by the arithmetic processing unit 12 and various data items used by the arithmetic processing unit 12 (for example, audio data W or a reference data series Sref). A known recording medium such as a semiconductor storage medium or a magnetic recording medium, or a combination of multiple types of recording media, may be employed as the storage device 14. The sound emitting device 16 is audio equipment (for example, a speaker) which reproduces a sound wave corresponding to an audio signal S generated by the arithmetic processing unit 12.

The arithmetic processing unit 12 realizes a plurality of functions (an observation data acquisition unit 32, a comparison unit 34 and an audio control unit 36) for analyzing the motion of the user U by executing a program stored in the storage device 14. It is also possible to distribute the functions of the arithmetic processing unit 12 among a plurality of devices.

The observation data acquisition unit 32 sequentially acquires observation data Db indicating a trajectory (hereinafter, referred to as an “observation trajectory”) Oa of the target observation point P corresponding to the swing motion of the user U. More specifically, the observation data acquisition unit 32 sequentially generates the observation data Db supplied by the acceleration sensor 20 in the time-series manner. As illustrated in FIG. 4, one observation data item Db is configured to include an observation value Bx, an observation value By and an observation value Bz. The observation value Bx is a difference (variation) in the acceleration Ax between two sensor outputs Da which are generated in succession, the observation value By is a difference in the acceleration Ay between two sensor outputs Da which are generated in succession, and the observation value Bz is a difference in the acceleration Az between two sensor outputs Da which are generated in succession. The cycle where the observation data acquisition unit 32 acquires the observation data Db is set to be a sufficiently short time (for example, one millisecond) as compared to the time for the user U to perform the swing motion.
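
A minimal sketch of how one observation data item Db might be derived from two successive sensor outputs Da, following the differencing rule described above; the names and types are illustrative assumptions.

```python
from typing import NamedTuple

class ObservationData(NamedTuple):
    """One observation data item Db: per-axis differences (variations) in
    acceleration between two successively generated sensor outputs Da."""
    bx: float  # Bx: difference in Ax
    by: float  # By: difference in Ay
    bz: float  # Bz: difference in Az

def make_observation(prev_da, curr_da) -> ObservationData:
    """prev_da and curr_da are successive (Ax, Ay, Az) sensor outputs Da."""
    return ObservationData(
        bx=curr_da[0] - prev_da[0],
        by=curr_da[1] - prev_da[1],
        bz=curr_da[2] - prev_da[2],
    )
```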

The storage device 14 in FIG. 2 stores the audio data W and the reference data series Sref. The audio data W of the present embodiment is data which indicates a specific audio waveform. For example, a wind noise generated by the club C during the swing motion is recorded, and digital data obtained by sampling the recording at a predetermined frequency (for example, 44.1 kHz) is stored in the storage device 14 in advance as the audio data W.

FIG. 5 is a schematic diagram of the reference data series Sref. The reference data series Sref indicates a trajectory (hereinafter, referred to as a “reference trajectory”) Oref of the target observation point P over a predetermined time duration. As illustrated in FIG. 5, the reference data series Sref is a time series of a plurality of reference data Dref. Each reference data Dref is compared with each observation data Db in order to evaluate the swing motion of the user U, and is configured to include a reference value Rx, a reference value Ry and a reference value Rz.

The reference trajectory Oref serves as a standard for the observation trajectory Oa specified by each observation data Db. For example, a trajectory of the target observation point P when a performer skilled in the swing motion, such as a professional golfer, performs a standard swing motion is preferably employed as the reference trajectory Oref. More specifically, when this performer performs the swing motion, the time series of the plurality of observation data Db generated by the observation data acquisition unit 32 is stored in the storage device 14 in advance as the reference data series Sref (each reference data Dref). Therefore, the reference value Rx of each reference data Dref corresponds to a change amount of the acceleration Ax when performing the standard swing motion, the reference value Ry corresponds to a change amount of the acceleration Ay, and the reference value Rz corresponds to a change amount of the acceleration Az.

The comparison unit 34 in FIG. 2 compares each observation data Db acquired by the observation data acquisition unit 32 with each reference data Dref of the reference data series Sref stored in the storage device 14. More specifically, the comparison unit 34 reads out the reference data Dref from the reference data series Sref in the storage device 14 in chronological order each time the observation data acquisition unit 32 acquires the observation data Db, and generates comparison data Dc by calculating the difference between the observation data Db and the reference data Dref.

As illustrated in FIG. 6, one comparison data item Dc is configured to include a comparison value ΔTx, a comparison value ΔTy and a comparison value ΔTz. The comparison value ΔTx is a difference between the observation value Bx of the observation data Db and the reference value Rx of the reference data Dref. Similarly, the comparison value ΔTy is a difference between the observation value By and the reference value Ry, and the comparison value ΔTz is a difference between the observation value Bz and the reference value Rz. The time series of the plurality of observation data Db corresponds to the observation trajectory Oa, and the reference data series Sref corresponds to the reference trajectory Oref. Accordingly, the comparison data Dc corresponds to data indicating a difference between the observation trajectory Oa and the reference trajectory Oref.
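
Under the same illustrative conventions as above, one comparison data item Dc reduces to a per-axis subtraction; the tuple layout is an assumption for the sketch.

```python
def make_comparison(db, dref):
    """One comparison data item Dc = (dTx, dTy, dTz): per-axis differences
    between an observation data item Db = (Bx, By, Bz) and a reference data
    item Dref = (Rx, Ry, Rz). Signedness and scaling are assumptions."""
    return tuple(b - r for b, r in zip(db, dref))
```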

The comparison unit 34 of the present embodiment compares the observation data Db with the reference data Dref in a predetermined analysis section within the section from the start to the completion of the swing motion performed by the user U. The analysis section is a section from the point of time when the user U starts the downswing motion (the action of swinging the club C down) after the takeaway action in the backswing (hereinafter, referred to as an “action start point”) until a predetermined time duration T elapses.

The time duration T of the analysis section is set according to the time duration from the action start point of the user U until the user U completes the follow-through action (the finishing action in swinging the club C). The time duration from an actual action start point until the swing is completed varies depending on a swing speed of the user U. In the present embodiment, an average swing speed of the user U is calculated based on the time series of the observation data Db measured multiple times in advance. The time duration T of the analysis section corresponding to the average swing speed is selected for each user U and is stored in the storage device 14.

FIG. 7 is a flowchart of a process in which the comparison unit 34 compares each observation data Db with each reference data Dref (hereinafter, referred to as a “comparison process”). For example, when the user U instructs the start of the analysis by operating an input device (not illustrated), the comparison process in FIG. 7 is performed.

The comparison unit 34 detects the action start point by utilizing each observation data Db (S1). Considering that the change amount in the acceleration of the target observation point P tends to increase immediately after the start of the downswing motion, the comparison unit 34 of the first embodiment detects the action start point according to a change amount ΔA of the acceleration indicated by each observation data Db.

More specifically, the comparison unit 34 sequentially determines whether or not the change amount ΔA in the acceleration indicated by the observation data Db, which is sequentially supplied from the observation data acquisition unit 32, exceeds a predetermined threshold value ATH. For example, the change amount ΔA is the sum of an absolute value of the observation value Bx, an absolute value of the observation value By and an absolute value of the observation value Bz. The comparison unit 34 repeats Step S1 while the change amount ΔA does not exceed the threshold value ATH (S1: NO), and detects the point in time when the change amount ΔA exceeds the threshold value ATH as the action start point (S1: YES). Then, the process proceeds to Step S2.
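
A sketch of Step S1 under the rule just stated (the change amount ΔA as the sum of the absolute observation values, compared against the threshold ATH); the streaming interface is an assumption.

```python
def detect_action_start(db_stream, a_th):
    """Step S1 (sketch): consume observation data items Db = (Bx, By, Bz)
    in arrival order and return the index of the first item whose change
    amount dA = |Bx| + |By| + |Bz| exceeds the threshold ATH; that point
    in time is treated as the action start point."""
    for i, (bx, by, bz) in enumerate(db_stream):
        if abs(bx) + abs(by) + abs(bz) > a_th:
            return i
    return None  # no action start detected in the stream
```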

The comparison unit 34 stretches (expands or contracts) the reference data series Sref on a time axis according to the time duration T of the analysis section which is selected and stored in advance based on the average swing speed of the user U (S2). More specifically, the time duration Tref from the foremost reference data Dref of the reference data series Sref to the backmost reference data Dref (the time duration from the start point to the end point of the reference trajectory Oref) is adjusted to the time duration T.

More specifically, when the time duration T is longer than the time duration Tref, the comparison unit 34 increases the amount of reference data Dref by performing an interpolation process on the reference data series Sref, thereby equalizing the amount of reference data Dref with the amount of observation data Db. A known technology (for example, a linear interpolation process or a spline interpolation process) may be optionally employed for the interpolation process of the reference data series Sref.

On the other hand, when the time duration T is shorter than the time duration Tref, the comparison unit 34 decreases the amount of reference data Dref by performing a thinning process on the reference data series Sref, thereby equalizing the amount of reference data Dref with the amount of observation data Db. A known technology may be optionally employed for the thinning process of the reference data series Sref. The reference trajectory Oref itself does not vary in the process of Step S2.
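
A sketch of Step S2: resampling the reference data series Sref so that its length matches the number of observation data items expected in the analysis section. Linear interpolation is used here as one of the known techniques the text permits; the array layout is an assumption.

```python
import numpy as np

def stretch_reference(sref, n_target):
    """Step S2 (sketch): resample Sref, an (n, 3) array with one row per
    Dref = (Rx, Ry, Rz), to n_target rows. Linear interpolation covers both
    cases: n_target > n interpolates (adds data), n_target < n thins
    (removes data). The reference trajectory Oref itself is unchanged."""
    sref = np.asarray(sref, dtype=float)
    old_t = np.linspace(0.0, 1.0, len(sref))   # normalized time of each Dref
    new_t = np.linspace(0.0, 1.0, n_target)    # normalized time of each Db
    return np.column_stack(
        [np.interp(new_t, old_t, sref[:, k]) for k in range(3)]
    )
```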

The comparison unit 34 compares each observation data Db which is sequentially generated by the observation data acquisition unit 32 with each of the reference data Dref of the reference data series Sref after adjustment in Step S2 (S3).

More specifically, the comparison unit 34 reads out each reference data Dref of the reference data series Sref after the adjustment, starting from the foremost, in chronological order each time the observation data acquisition unit 32 generates the observation data Db, and sequentially generates the comparison data Dc by setting a difference between the reference data Dref and the observation data Db to be the comparison data Dc. As illustrated in FIG. 7, the generation of the comparison data Dc (S3) is repeated from the action start point detected in Step S1 until it is determined in Step S4 that the time duration T of the analysis section has elapsed (S4: NO).

When it is determined that the time duration T has elapsed from the action start point (S4: YES), the comparison unit 34 completes the comparison process. As will be appreciated from the above description, the comparison data Dc indicating a difference between the observation trajectory Oa and the reference trajectory Oref is sequentially generated within the analysis section while the swing motion is performed.

The audio control unit 36 in FIG. 2 generates an audio signal S according to the comparison data Dc (comparison result of each observation data Db and each reference data Dref) which is sequentially generated by the comparison unit 34. More specifically, from the action start point detected by the comparison unit 34, the audio control unit 36 sequentially acquires each sample of audio data W from the storage device 14 in chronological order, and converts pitch and/or tempo of each sample of the audio data W according to the comparison data Dc generated by the comparison unit 34 immediately before each reading. The audio signal S which is generated by the audio control unit 36 is supplied to the sound emitting device 16 to be reproduced as a sound wave. A D/A converter which converts the digital audio signal S into the analog audio signal S is not illustrated for convenience.

More specifically, the audio control unit 36 changes a pitch in each sample of the audio data W according to the comparison data Dc. For example, the audio control unit 36 converts the audio data W so that the change in the pitch in each sample is larger as each comparison value (ΔTx, ΔTy and ΔTz) of the comparison data Dc is greater (that is, as the difference between the observation trajectory Oa and the reference trajectory Oref is larger). Each sample of the audio data W is sequentially (on a real-time basis) converted and output concurrently with the swing motion of the user U. That is, within the analysis section of the swing motion performed by the user U, the pitch in the reproduced sound varies moment by moment according to the difference between the observation trajectory Oa and the reference trajectory Oref.

Here, a known method may be used for the pitch adjustment process that modulates each sample of the audio data W. As an example, a pitch adjustment method of adjusting the reading-out speed of the audio data W will be described below.

The sampled audio data W, which is waveform data having a predetermined time duration, is configured to have a plurality of frames. Each frame corresponds to one section within a plurality of sections configuring the analysis section. In this case, when the user U performs the swing motion, the comparison unit 34 generates the comparison data Dc corresponding to each section. Based on the corresponding comparison data Dc, the audio control unit 36 determines the speed at which the frame corresponding to each section is read out from the storage device 14, and reads out the audio data W of the corresponding frame from the storage device 14 at the determined reading speed. Here, when a reading speed faster than the standard reading speed is determined according to a value of the comparison data Dc, the sound of the corresponding frame is reproduced with a pitch higher than the reference pitch. In this case, reading-out of the entire frame is completed before the time duration of the corresponding frame elapses; at the point of time when reading-out of the entire frame is completed at the fast speed, the reading process re-starts from the foremost sample of the corresponding frame, and is continuously repeated until the time duration of the corresponding frame elapses. In contrast, when a reading speed slower than the standard reading speed is determined, reading of the entire frame is not completed before the time duration of the corresponding frame elapses. In this case, a method can be considered in which reading of the corresponding frame is stopped at the point of time when the end of the time duration of the corresponding frame is reached, and the process proceeds to a new reading process for the samples of the subsequent frame. In both the fast and the slow reading-speed cases, the waveform can be discontinuous at the connecting portion between frames; however, smooth waveform connection between the frames can be achieved by using a known cross-fade process.

The above-described pitch adjustment method is also called a cut-and-splice method, and is disclosed in, for example, U.S. Pat. No. 5,952,596.
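
A heavily simplified sketch of this cut-and-splice behavior (cross-fading omitted): each frame is read at a speed determined from the comparison data while its output duration stays fixed, so a fast read wraps back to the frame's first sample and a slow read is cut off at the frame boundary. The array-based interface is an assumption.

```python
import numpy as np

def read_frame_cut_and_splice(frame, rate):
    """Read one frame of audio data W at `rate` times the standard speed
    (sketch). The output length equals the frame length, so the frame's
    time duration is preserved: rate > 1 finishes early and restarts from
    the foremost sample (the modulo wrap); rate < 1 never reaches the end
    and is stopped at the frame boundary. Pitch scales with `rate`."""
    frame = np.asarray(frame, dtype=float)
    n = len(frame)
    read_pos = (np.arange(n) * rate) % n   # fractional read positions, wrapped
    return frame[read_pos.astype(int)]     # nearest-sample read, no interpolation
```

In a real implementation, the known cross-fade process mentioned above would smooth the discontinuities at the wrap point and at the connections between frames.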

FIG. 8 is an explanatory view of a pitch in a reproduced sound according to a difference between the observation trajectory Oa and the reference trajectory Oref.

In FIG. 8, portions of observation trajectories Oa (Oa1, Oa2) in the analysis section and the time variation in the pitch Pa (Pa1, Pa2) of the reproduced sound are illustrated in parallel. A ball hitting point Q in FIG. 8 corresponds to the target observation point P at the point of time when the head Ch hits a ball. In FIG. 8, each observation trajectory Oa is shown together with the reference trajectory Oref. Each pitch Pa is illustrated as a value relative to a reference pitch Pref, which is the pitch of the unmodified audio data W.

As illustrated in FIG. 8, the audio control unit 36 converts the pitch of the audio data W according to each comparison data Dc so that the pitch Pa in the reproduced sound is raised as the observation trajectory Oa deviates toward the user U side when viewed from the reference trajectory Oref, and is lowered as the observation trajectory Oa deviates toward the side opposite to the user U when viewed from the reference trajectory Oref. A more specific description is as follows.

The pitch Pa1 in FIG. 8 is the pitch of the reproduced sound when the target observation point P is moved on the observation trajectory Oa1. The observation trajectory Oa1, before passing through the ball hitting point Q, is positioned at the opposite side to the user U when viewed from the reference trajectory Oref, and after passing through the ball hitting point Q, is positioned at the user U side when viewed from the reference trajectory Oref (from outside to inside). Therefore, when the target observation point P is moved on the observation trajectory Oa1, the pitch Pa1 of the reproduced sound is higher than the reference pitch Pref before the target observation point P passes through the ball hitting point Q, and is lowered as the target observation point P is closer to the ball hitting point Q. Then, the pitch Pa1 is lower than the reference pitch Pref after the target observation point P passes through the ball hitting point Q.

On the other hand, the pitch Pa2 in FIG. 8 is the pitch of the reproduced sound when the target observation point P is moved on the observation trajectory Oa2. The observation trajectory Oa2, before passing through the ball hitting point Q, is positioned at the user U side when viewed from the reference trajectory Oref, and after passing through the ball hitting point Q, is positioned at the opposite side to the user U side when viewed from the reference trajectory Oref (from inside to outside). Therefore, when the target observation point P is moved on the observation trajectory Oa2, the pitch Pa2 of the reproduced sound is lower than the reference pitch Pref before the target observation point P passes through the ball hitting point Q, and is raised as the target observation point P is closer to the ball hitting point Q. Then, the pitch Pa2 is higher than the reference pitch Pref after the target observation point P passes through the ball hitting point Q.
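
As a rough numerical illustration of this mapping, the sketch below converts a signed displacement of the observation trajectory Oa from the reference trajectory Oref into a pitch relative to the reference pitch Pref. The sign convention of the displacement, its units and the scale factor are all assumptions for illustration; FIG. 8 fixes the actual orientation.

```python
def pitch_from_displacement(d, pref_hz=440.0, semitones_per_unit=2.0):
    """Map a signed displacement d of Oa from Oref to a reproduced pitch Pa
    in Hz (sketch). d = 0 yields the reference pitch Pref; the sign of d
    (which side of Oref counts as positive) and the semitone scaling are
    illustrative assumptions, not values from the patent."""
    return pref_hz * 2.0 ** (semitones_per_unit * d / 12.0)
```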

According to the above-described configuration, by checking a change in the pitch Pa in the reproduced sound, the user U can intuitively understand how the difference between the observation trajectory Oa and the reference trajectory Oref is changed at each point in time (with the lapse of time).

As described above, in the first embodiment, the audio signal S is generated according to the comparison result between the observation data Db and the reference data Dref. Therefore, the user U can easily understand the difference between the observation trajectory Oa of the target observation point P and the reference trajectory Oref indicated by the reference data Dref.

In addition, the audio signal S is generated with respect to the swing motion on a real-time basis. Therefore, as compared to a configuration where a sound is reproduced after the swing motion is performed, the user U can intuitively understand the relationship between the actual swing motion and the difference between the observation trajectory Oa and the reference trajectory Oref.

That is, the comparison unit 34 sequentially compares the observation data with the reference data concurrently with the swing motion of the user U, and the audio control unit 36 generates the audio signal concurrently with each comparison performed by the comparison unit 34. In the above-described configuration, an audio signal is generated with respect to the action of a user on a real-time basis. Therefore, as compared to a configuration where the audio signal is generated after the action to be analyzed is performed, the user can intuitively understand the relationship between the actual action and a difference between a trajectory of a target observation point and a predetermined trajectory.

In a configuration where the time duration of the reference data series Sref is fixed, when the time duration of the swing motion is different from the time duration of the reference data series Sref, the observation trajectory Oa may be evaluated as different from the reference trajectory Oref even though the observation trajectory Oa itself approximates the reference trajectory Oref. In the present embodiment, the reference data series Sref is stretched on the time axis according to the average swing speed of the user U. Therefore, it is possible to appropriately evaluate the difference between the observation trajectory of the swing motion of the user U and the reference trajectory Oref.

That is, the comparison unit 34 stretches the time series of the reference data on a time axis and compares each stretched reference data with the observation data. In the above-described configuration, the time series of the reference data is stretched on the time axis. Therefore, for example, if the time series of the reference data is stretched according to an action speed of a user, it is possible to appropriately evaluate a difference between a trajectory of a target observation point and a predetermined trajectory, as compared to a case where the time duration of the time series of the reference data is fixed.

Second Embodiment

A second embodiment of the present invention will be described below. In the following description, elements having the same operations and functions as those in the first embodiment are given the reference numerals used in the above description, and a detailed description thereof will be appropriately omitted here.

The storage device 14 of the second embodiment stores three types of audio data W (Wx, Wy and Wz) indicating waveforms of different sounds (for example, warning sounds such as “beeping sounds” having different pitches or sound qualities). The audio control unit 36 of the present embodiment controls the audio data Wx to be reproduced/stopped according to a comparison result where the comparison unit 34 compares the observation trajectory Oa with the reference trajectory Oref in the X-axis direction (the comparison value ΔTx), controls the audio data Wy to be reproduced/stopped according to a comparison result in the Y-axis direction (the comparison value ΔTy), and controls the audio data Wz to be reproduced/stopped according to a comparison result in the Z-axis direction (the comparison value ΔTz). The audio signal S is generated by adding the audio data Wx, the audio data Wy and the audio data Wz.

More specifically, when the comparison value ΔT (ΔTx, ΔTy or ΔTz) in an axis direction is below the predetermined threshold value TH (when the difference between the observation value B and the reference value R is small), the audio control unit 36 stops the reproduction of the audio data W corresponding to the associated axis direction. When the comparison value ΔT exceeds the threshold value TH (when the difference between the observation value B and the reference value R is large), the audio control unit 36 reproduces the audio data W. It is also possible to set the threshold value TH individually for each axis direction.

FIG. 9 is an explanatory view illustrating the reproduction/stop of the audio data W for each period of time (t1, t2 and t3) of the observation trajectory Oa. Within the observation trajectory Oa, during the period of time t1 while the comparison value ΔTx and the comparison value ΔTz are below the threshold value TH and the comparison value ΔTy exceeds the threshold value TH, only the audio data Wy corresponding to the Y-axis direction is reproduced, and the reproduction of the audio data Wx and the audio data Wz is stopped. Similarly, during the period of time t2 while all of the comparison value ΔTx, the comparison value ΔTy and the comparison value ΔTz are below the threshold value TH, none of the audio data Wx to Wz is reproduced. During the period of time t3 while the comparison value ΔTx and the comparison value ΔTy exceed the threshold value TH and the comparison value ΔTz is below the threshold value TH, a mixed sound of the audio data Wx and the audio data Wy is reproduced and the audio data Wz is not reproduced.
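
A sketch of this per-axis gating: each comparison value gates its own audio data item, and the audio signal S is the sum of whichever items are active. Array-based audio buffers and an absolute-value comparison against TH are assumptions.

```python
import numpy as np

def mix_axis_audio(dc, wx, wy, wz, th):
    """Second embodiment (sketch): reproduce Wx, Wy, Wz (equal-length
    sample arrays) only while the corresponding comparison value of
    Dc = (dTx, dTy, dTz) exceeds the threshold TH, and sum the active
    items into the audio signal S. All values below TH yields silence."""
    out = np.zeros(len(wx))
    for delta, w in zip(dc, (wx, wy, wz)):
        if abs(delta) > th:
            out += np.asarray(w, dtype=float)
    return out
```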

Even in the second embodiment, the same advantageous effects can be achieved as those in the first embodiment. In addition, in the second embodiment, the comparison result between the observation trajectory Oa and the reference trajectory Oref is individually reflected on the audio signal S in each axis direction. Therefore, the user U can recognize which direction of three axis directions has caused the difference between the observation trajectory Oa and the reference trajectory Oref.

That is, the audio control unit 36 selects audio data from among a plurality of audio data items indicating different sounds according to an instruction from the user, and converts the pitch and/or tempo of the selected audio data according to the comparison result obtained by using the comparison unit, thereby generating the audio signal. In the above-described configuration, it is possible to diversify the types of the reproduced sound of the audio signal as compared to a configuration of generating the audio signal by modulating one type of audio data.

Third Embodiment

FIG. 10 is a block diagram of the motion analysis system 100 according to a third embodiment. As illustrated in FIG. 10, the motion analysis system 100 of the third embodiment is configured by adding a delay device 15 to the motion analysis system 100 of the first embodiment. The delay device 15 delays the audio signal S by a delay time δ. Therefore, the audio signal S is reproduced by the sound emitting device 16 after the delay time δ elapses from when the audio control unit 36 starts the generation. The generation of the audio signal S (the generation of the comparison data Dc) is started at the action start point. Accordingly, the reproduction of the audio signal S is started at the point of time when the delay time δ elapses from the action start point. That is, the audio signal S is not reproduced from the action start point until the delay time δ elapses. An element (a buffer) which temporarily holds and then outputs the audio signal S is used as the delay device 15.
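
A minimal sketch of such a delay element as a fixed-length sample buffer; the per-sample interface is an assumption.

```python
from collections import deque

class DelayDevice:
    """Delay device 15 (sketch): outputs each sample of the audio signal S
    delay_samples cycles after it is written, so reproduction begins only
    once the delay time delta has elapsed from the action start point."""
    def __init__(self, delay_samples: int):
        self._buffer = deque([0.0] * delay_samples)

    def process(self, sample: float) -> float:
        self._buffer.append(sample)
        return self._buffer.popleft()  # zeros (silence) until delta elapses
```

With audio sampled at 44.1 kHz as in the first embodiment, a delay time δ of 500 milliseconds would correspond to roughly 22,050 samples.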

Even in the third embodiment, the same advantageous effects can be achieved as those in the first embodiment. In addition, in the third embodiment, the reproduction of the audio signal S is started at the point of time when the delay time δ elapses from the action start point. Accordingly, it is possible to prevent the concentration of the user U from being hindered before and after the action start point. For example, when the time duration from the action start point until ball hitting is 500 milliseconds, setting the delay time δ to approximately 500 milliseconds has the advantageous effect of preventing the user U from being disturbed during the action from the action start point until the ball hitting, which is the time when the user U particularly needs to concentrate their attention. The configuration of the third embodiment (the delay device 15) can also be applied to the second embodiment.

That is, the motion analysis device of the third embodiment includes the delay device 15 which delays the audio signal generated by the audio control unit 36. In this configuration, since the audio signal is delayed, it is possible to prevent the concentration of the user from being hindered during the period from when the generation of the audio signal is started until the delay time elapses, for example.

Modification Example

The above-described respective embodiments can be modified in various ways. Specific modification aspects will be described below. Two or more aspects optionally selected from the following examples can be appropriately combined with one another.

(1) In each embodiment described above, the change amount in the acceleration (Ax, Ay and Az) in each axis direction is used as the observation data Db as an example. However, it is also possible to use the acceleration (Ax, Ay and Az) itself as the observation data Db. Similarly, a numerical value itself of the acceleration in each direction can be used as the reference data Dref.

In addition, an element (detector) for detecting the movement of the target observation point P (the swing motion of the user U) is not limited to the acceleration sensor 20. For example, instead of the acceleration sensor 20 (or together with the acceleration sensor 20), it is also possible to use a speed sensor which detects a speed of the target observation point P or a direction sensor (for example, a gyro sensor) which detects the direction of the movement of the target observation point P.

In addition, it is also possible to identify the observation trajectory Oa from video images in which the swing motion of the user U is captured using a video camera.

As will be appreciated from the above description, the observation data Db may be time-series data indicating the observation trajectory Oa of the target observation point P. Similarly, the reference data Dref may be time-series data indicating the reference trajectory Oref.

(2) In each embodiment described above, the pitch in the reproduced sound is changed according to the difference between the observation trajectory Oa and the reference trajectory Oref, but the method of modulating the audio data W may be freely selected. For example, it is also possible to change the sound volume of the audio data W according to the difference between the observation trajectory Oa and the reference trajectory Oref (each comparison data Dc). In addition, in a configuration where the audio control unit 36 provides the audio data W with various sound effects (for example, an echo effect), it is also possible to control the extent of the sound effect provided to the audio data W according to the difference between the observation trajectory Oa and the reference trajectory Oref.

As will be appreciated from the above description, the audio control unit 36 may generate the audio signal S according to the comparison result (the comparison data Dc) obtained by using the comparison unit 34, and the specific content of the process is not limited to the above examples.

(3) It is also possible to selectively use a plurality of audio data W indicating different sounds. More specifically, it is preferable that the audio control unit 36 is configured to select the audio data W from among the plurality of audio data W according to an instruction of the user U, and to convert the pitch and/or tempo of the selected audio data W. For example, a plurality of audio data W indicating the wind noises generated by different types of the club C during the swing motion are stored in the storage device 14. The audio control unit 36 selects from the storage device 14 the audio data W according to the type of the club C used by the user U, and generates the audio signal S by modulating the selected audio data W according to the comparison data Dc. The type of the club C (for example, a driver or an iron) is specified by the user to the motion analysis device 10 by operating, for example, an input device.

According to the above-described configuration, it is possible to diversify the types of the reproduced sound.

(4) It is also possible to reproduce specific sounds (for example, sound effects) when the observation trajectory Oa approximates the reference trajectory Oref. For example, the storage device 14 stores sound effect data indicating a waveform of the sound effects. For example, the sound effects are sounds such as the sound of a ball entering a hole cup, a shout of joy, or applause.

The audio control unit 36 counts the amount N of the comparison data Dc, out of the comparison data Dc sequentially generated by the comparison unit 34, in which each comparison value ΔT (ΔTx, ΔTy and ΔTz) exceeds a threshold value. When the amount N after the completion of the swing motion is below a predetermined threshold value (that is, when the observation trajectory Oa approximates the reference trajectory Oref), the audio control unit 36 acquires the sound effect data from the storage device 14 and supplies the sound effect data to the sound emitting device 16 as the audio signal S. That is, the audio signal S is reproduced so that the sound effects are added immediately after the sound indicated by the audio data W (the wind noise generated by the club C during the swing motion).

In the above-described configuration, the sound effects are reproduced when the observation trajectory Oa approximates the reference trajectory Oref. Therefore, there is an advantage in that the user U can intuitively recognize from the sound effects whether or not the observation trajectory Oa of their own swing motion is good.

Conversely, the sound effects can be added to the audio signal S when the observation trajectory Oa is different from the reference trajectory Oref (when the above-described amount N exceeds the threshold value). That is, the audio control unit 36 controls whether to add the sound effects to the audio signal S according to a degree of approximation between the observation trajectory Oa and the reference trajectory Oref.

That is, in this modification example, according to a degree of approximation between a trajectory specified by observation data and a predetermined trajectory, the audio control unit 36 generates an audio signal in which predetermined sound effects are added to a sound according to a comparison result obtained by using the comparison unit 34. In this configuration, according to the degree of approximation between the trajectory of the target observation point and the predetermined trajectory, the sound according to the comparison result obtained by using the comparison unit 34 and the predetermined sound effects are reproduced. Therefore, there is an advantage in that the user can intuitively recognize whether the action is good or not.
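
A sketch of the counting and gating described in this modification example: tally the comparison data items whose values exceed a per-item threshold, and append the sound effects when the final count N is below a threshold. All thresholds, names and data layouts are illustrative assumptions.

```python
def append_effect_if_close(signal, dc_series, th, n_th, effect):
    """Modification (4) sketch: N counts the comparison data items Dc whose
    comparison values (dTx, dTy, dTz) exceed the threshold; if N after the
    swing is below the threshold n_th (i.e. Oa approximates Oref), the
    sound effect data is appended immediately after the swing sound."""
    n = sum(1 for dc in dc_series if any(abs(v) > th for v in dc))
    return list(signal) + list(effect) if n < n_th else list(signal)
```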

(5) In the third embodiment, the delay device 15 delays the audio signal S by the predetermined delay time δ, but the delay time δ can also be controlled variably. For example, in a configuration where the comparison unit 34 detects the point of time of the ball hitting by the club C according to a temporal change of the observation data Db (or the comparison data Dc), a configuration can be employed where the delay device 15 delays the audio signal S from the action start point to the point of time the ball is hit (that is, the delay time δ is set to the time from the action start point to the point of time the ball is hit). In the above-described configuration, the sound is not reproduced from the action start point to the point of time the ball is hit, but is reproduced after the point of time the ball is hit.

(6) Each element described above as an example can be appropriately omitted. For example, it is possible to omit the storage device 14 by acquiring various data items from an external device separate from the motion analysis device 10. In addition, the sound emitting device 16 can be omitted in a configuration where the audio signal S generated by the audio control unit 36 is transmitted to an external device via a communication network or a portable recording medium and is reproduced by the sound emitting device 16 of the external device.

(7) In the first embodiment, the observation data acquisition unit 32 sequentially generates the observation data Db by using the sensor output Da supplied from the acceleration sensor 20. However, a configuration can also be employed where the observation data acquisition unit 32 receives the observation data Db which is sequentially generated by the acceleration sensor 20. That is, an element acquiring the observation data Db (observation data acquisition means) includes both an element that itself generates the observation data Db from the detection result obtained by the acceleration sensor 20 and an element that receives the observation data Db from an external device (the acceleration sensor 20).

(8) In each embodiment described above, the motion analysis device 10 which analyzes the swing motion of the golf club C has been described as an example. However, a motion to which the motion analysis device 10 can be applied is not limited to an action in golf. For example, the motion analysis system 100 (the motion analysis device 10) can also be used when analyzing a swing motion of a bat in baseball, a swing motion of a racket in tennis, or a casting action of a fishing rod in fishing.

(9) It is also possible to change the amount of reference data Dref for each unit time (sampling cycle) within the analysis section. For example, with regard to a section immediately before or immediately after an impact within the analysis section, it is preferable to increase the amount of the reference data Dref per unit time as compared with other sections. As a section has a larger amount of reference data Dref within the unit time, the comparison between the observation data Db and the reference data Dref is performed at a shorter interval, and the observation trajectory Oa and the reference trajectory Oref are more closely compared with each other. Therefore, it is possible to analyze in detail a difference between the trajectories in the section immediately before and immediately after the impact, for example. In addition, since the amount of reference data Dref is increased only in a certain portion within the analysis section, there is an advantage in that the amount of data can be reduced as compared to a configuration of increasing the amount over the entire analysis section.

A configuration may be considered where the amount of reference data Dref is changed in advance inside and outside a predetermined section in the analysis section. Alternatively, when the comparison unit 34 increases or decreases the amount of reference data Dref by way of the interpolation process or the thinning process for the reference data series Sref, it is also possible to change the amount of reference data Dref for each unit of time inside and outside the predetermined section in the analysis section.

In addition, it is also possible to decrease the amount of reference data Dref for each unit time with regard to a section where a detailed analysis is not required within the analysis section.
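
A sketch of one way to realize this non-uniform density: resample Sref through an inverse cumulative-weight mapping so that sections with larger weight (for example, immediately before and after the impact) receive more reference data per unit time. The weighting interface and positive-weight assumption are illustrative, not from the patent.

```python
import numpy as np

def resample_with_density(sref, weights, n_target):
    """Modification (9) sketch: resample Sref ((n, 3), one row per Dref) to
    n_target rows with per-sample relative density given by `weights`
    (assumed positive). Equal steps in cumulative-weight space map back to
    time, so new sample times cluster where the density is large."""
    sref = np.asarray(sref, dtype=float)
    w = np.asarray(weights, dtype=float)
    cdf = np.cumsum(w) / np.sum(w)                 # normalized cumulative weight
    old_t = np.linspace(0.0, 1.0, len(sref))
    new_t = np.interp(np.linspace(cdf[0], 1.0, n_target), cdf, old_t)
    return np.column_stack(
        [np.interp(new_t, old_t, sref[:, k]) for k in range(3)]
    )
```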

The motion analysis device according to each embodiment described above is realized by hardware (an electronic circuit) such as a digital signal processor (DSP) dedicated to analyzing motions of a user, or by cooperation between a program and a general-purpose arithmetic processing unit such as a central processing unit (CPU).

A program of the present invention causes a computer to implement an observation data acquisition process for acquiring observation data indicating a trajectory of a target observation point which moves in conjunction with action of a user; a comparison process for comparing each of a plurality of reference data indicating a predetermined trajectory of the target observation point with the observation data acquired by the observation data acquisition process; and an audio control process used to generate an audio signal according to a result of the comparison process.

The above-described program is provided by being stored in a computer-readable recording medium and is installed in the computer. For example, the recording medium is a non-transitory recording medium, and is preferably an optical recording medium (an optical disk) such as a CD-ROM. However, the recording medium can include any known type of recording medium such as a semiconductor recording medium or a magnetic recording medium.

In addition, the program of the present invention can be distributed via a communication network, for example by a distribution server device, and then installed in the computer.

While preferred embodiments of the invention have been described and illustrated above, it should be understood that these are exemplary of the invention and are not to be considered as limiting. Additions, omissions, substitutions, and other modifications can be made without departing from the spirit or scope of the present invention. Accordingly, the invention is not to be considered as being limited by the foregoing description, and is only limited by the scope of the appended claims.

Claims

1. A motion analysis device comprising:

at least one processor; and
at least one memory including computer program instructions, the at least one memory and the computer program instructions being configured to, in cooperation with the at least one processor, cause the motion analysis device to:
acquire observation data indicating a trajectory of a target observation point which moves in conjunction with motion of a user;
compare reference data indicating a predetermined trajectory with the observation data to generate a comparison result; and
generate an audio signal according to the comparison result.

2. The motion analysis device according to claim 1, wherein the at least one memory and the computer program instructions are configured to cause the motion analysis device further to delay an output of the audio signal.

3. The motion analysis device according to claim 1, wherein time series of the reference data is expanded or contracted on a time axis and each expanded or contracted reference data is compared with the observation data.

4. The motion analysis device according to claim 1, wherein, according to a degree of approximation between a trajectory specified by the observation data and the predetermined trajectory, an additional audio signal is generated in which predetermined sound effects are added to the audio signal according to the comparison result.

5. The motion analysis device according to claim 1, wherein, according to the comparison result, a pitch in the audio signal is changed.

6. The motion analysis device according to claim 1, wherein:

a pitch in the audio signal is raised when a trajectory of the target observation point is closer to the user than a reference trajectory obtained based on the reference data; and
the pitch in the audio signal is lowered when the trajectory of the target observation point is farther from the user than the reference trajectory.

7. The motion analysis device according to claim 1, further comprising a storage device which stores mutually different audio data items in correspondence with components in X-axis, Y-axis and Z-axis directions which configure the observation data and the reference data, wherein the at least one memory and the computer program instructions are configured to cause the motion analysis device further to

compare the observation data with the reference data for each component in the X-axis, Y-axis and Z-axis directions, select the audio data corresponding to the component in the X-axis, Y-axis or Z-axis direction according to a comparison result of each component in the X-axis, Y-axis and Z-axis directions, read the audio data out from the storage device, and output an audio signal according to the selected audio data.

8. The motion analysis device according to claim 1, wherein motion of the user is a golf swing.

9. A motion analysis method performed by one or more processors comprising:

causing an observation data acquisition unit to acquire observation data indicating a trajectory of a target observation point which moves in conjunction with motion of a user;
causing a comparison unit to compare reference data indicating a predetermined trajectory with the observation data acquired by the observation data acquisition unit; and
causing an audio control unit to generate an audio signal according to a comparison result obtained by using the comparison unit.

10. The motion analysis method according to claim 9, wherein an output of the audio signal generated by the audio control unit is delayed.

11. The motion analysis method according to claim 9, further comprising causing the comparison unit to expand or contract time series of the reference data on a time axis and to compare each expanded or contracted reference data with the observation data.

12. The motion analysis method according to claim 9, further comprising causing the audio control unit to generate an audio signal in which predetermined sound effects are added to the audio signal according to the comparison result obtained by using the comparison unit, according to a degree of approximation between a trajectory specified by the observation data and the predetermined trajectory.

13. The motion analysis method according to claim 9, further comprising causing the audio control unit to change a pitch in the audio signal according to a comparison result obtained by using the comparison unit.

14. The motion analysis method according to claim 9, further comprising:

causing the audio control unit to raise a pitch in the audio signal when a trajectory of the target observation point is closer to the user than a reference trajectory obtained based on the reference data; and
causing the audio control unit to lower the pitch in the audio signal when the trajectory of the target observation point is farther from the user than the reference trajectory.

15. The motion analysis method according to claim 9, further comprising:

storing mutually different audio data items in a storage unit in correspondence with components in X-axis, Y-axis and Z-axis directions which configure the observation data and the reference data;
comparing the observation data with the reference data for each component in the X-axis, Y-axis and Z-axis directions;
selecting the audio data corresponding to the component in the X-axis, Y-axis or Z-axis direction according to a comparison result of each component in the X-axis, Y-axis and Z-axis directions and reading out the audio data from the storage unit; and
outputting an audio signal according to the selected audio data.

16. The motion analysis method according to claim 9, wherein motion of the user is a golf swing.

Patent History
Publication number: 20140180632
Type: Application
Filed: Dec 18, 2013
Publication Date: Jun 26, 2014
Applicant: Yamaha Corporation (Hamamatsu-shi)
Inventor: Koji YATAKA (Hamamatsu-shi)
Application Number: 14/132,531
Classifications
Current U.S. Class: 3d Orientation (702/153); Orientation Or Position (702/150)
International Classification: A63B 24/00 (20060101); A63B 69/36 (20060101);