IMAGE ANALYSIS METHOD, APPARATUS AND PROGRAM
An image analysis method for analyzing the motion of an object by using time-series image data. The method comprises inputting time-series image data, generating time-series object image data by extracting an object region from the time-series image data, and analyzing the motion of the object from the time-series object image data.
This application is based upon and claims the benefit of priority from the prior Japanese Patent Applications Nos. 2002-281206 and 2003-136385, filed Sep. 26, 2002 and May 14, 2003, respectively, the entire contents of which are incorporated herein by reference.
BACKGROUND OF THE INVENTION
1. Field of the Invention
The present invention relates to an image analysis method, apparatus and program for analyzing the movement of an object from images thereof contained in time-series image data.
2. Description of the Related Art
In general, a technique for focusing on a certain object in an image and analyzing the movement of the object is an important technique concerning images. This technique is utilized to provide TV viewers with effective synthesized images during a sports program, or to analyze the movement of players in a tennis or golf school for quantitative instruction, or to enhance the techniques of athletes.
“Multi-motion-Gymnast Form/Trajectory Display System” (Image Laboratory, vol. 9, no. 4, pp. 55-60, published by Nippon Kogyo Publisher in April 1998) is a conventional example of a technique for analyzing the movement of an object from images thereof contained in time-series image data.
In this technique, firstly, only the region (object region) including a target object is extracted from time-series image data. The object region is extracted by, for example, computing differences between frames and binarizing regions of large difference. Alternatively, the object region can be extracted by a chroma-key method, in which a moving object is pre-photographed against a one-color background and the object region is extracted utilizing a difference in color; by a manual method, in which a user inputs the outline of the object with a mouse or graphic tablet; or by a method for automatically correcting a rough outline of the object input by a user, as disclosed in Jpn. Pat. Appln. KOKAI Publication No. 2001-14477.
As described above, only an object is extracted from time-series image data, to thereby create object image data comprising time-series images of an object region.
After that, the time-series object image data items are superposed and synthesized in order, thereby creating a time-series image synthesized in a stroboscopic manner.
Thus, the trajectory of an object can be grasped from the display of synthesized time-series object image data. In other words, the movement of the object can be visually analyzed.
However, the image analysis method of displaying extracted time-series object image data items superposed on each other only enables viewers to see the movement of an object as successive images. This analysis is just a qualitative analysis as to how the position of the object changes, but not a quantitative analysis as to the degree of change in the position of the object.
BRIEF SUMMARY OF THE INVENTION
An advantage of an aspect of the present invention is to provide an image analysis method, apparatus and program capable of analyzing the movement of an object in a quantitative manner from images thereof contained in time-series image data.
According to an aspect of the present invention, there is provided an image analysis method comprising: inputting time-series image data; generating time-series object image data by extracting an object region from the time-series image data; and analyzing motion of an object from the time-series object image data.
Embodiments of the invention will be described in detail with reference to the accompanying drawings.
As seen from
The image analysis control section 10 has a function for receiving time-series image data from the input unit 12 or storage 16, and quantitatively analyzing the motion of an object from images thereof contained in the time-series image data. This function includes a function for measuring a change in the movement of an object, the speed of a particular point in the object, the trajectory of the particular point, and the angle (curvature) of the trajectory or the region defined in an image when the particular point moves, and a function for expressing these quantities numerically or graphically for analysis.
The image analysis control section 10 is a function realized by the processor executing the image analysis program stored in the memory. The image analysis control section 10 comprises function sections, such as a time-series image data input section 20, object region extracting section 22, motion analysis section 24 (object image data adjusting section 26, comparison analysis section 28), analysis result display section 30, etc. Each function section executes the following processing.
The time-series image data input section 20 receives time-series image data as an analysis target from the input unit 12 (image pickup means) or storage 16. The time-series image data input section 20 can also handle image data in which a characterizing color has been attached beforehand to a particular point of an object, or measure the position of a particular point of the object while picking up image data of the object (seventh embodiment).
The object region extracting section 22 extracts an object region as a motion analysis target from the time-series image data input by the time-series image data input section 20, thereby generating object image data.
The motion analysis section 24 performs a motion analysis of an object on the basis of information concerning the shape of the object region contained in the time-series object image data generated by the object region extracting section 22. In the motion analysis, the motion analysis section 24 performs processing such as: superposition and synthesis of background image data formed of additional line data and the time-series object image data; detection of the position of the center of gravity in an object region contained in the time-series object image data; superposition and synthesis of two time-series object image data items (first and second object image data items); superposition and synthesis of the time-series object image data and a line indicative of the trajectory of the object region (particular point); detection of the movement speed of the particular point; computation of the curvature of the trajectory of the particular point; measurement of the closed region defined by the trajectory of the particular point; and computation of the inclination of the trajectory of the particular point.
The motion analysis section 24 comprises an object image data adjusting section 26 and comparison analysis section 28. The object image data adjusting section 26 performs deformation processing on the two time-series object image data items (first and second object image data items) extracted by the object region extracting section 22. The comparison analysis section 28 superposes and synthesizes the two time-series object image data items (first and second object image data items).
The analysis result display section 30 displays the analysis result of the motion analysis section 24. For example, the analysis result display section 30 synthesizes and displays time-series object image data, and displays the analysis result using a numerical value or graph.
The input unit 12 comprises a keyboard for inputting, for example, user's control instructions to the image analysis apparatus, a pointing unit, such as a mouse, and image pickup means, such as a video camera, for inputting time-series image data to be analyzed.
The display unit 14 displays, for example, the analysis result of the image analysis control section 10, object image data synthesized by the comparison analysis section 28, and an analysis result digitized or graphed by the analysis result display section 30.
The tablet 15 is used by a user to perform an input operation for analyzing an object included in an image in the image analysis control section 10, and allows the user to point at the position of the object in the image with a pen. The tablet 15 is superposed on the display surface of the display unit 14, and hence the user can directly designate a position on an image displayed on the display unit 14.
The storage 16 stores time-series image data 18 to be analyzed, as well as various programs and data items. The time-series image data 18 includes first and second time-series image data items corresponding to different objects.
The embodiments of the invention will now be described.
FIRST EMBODIMENT
Firstly, at the step A1 at which time-series image data is input by the time-series image data input section 20, time-series image data obtained by, for example, the input unit 12 (image pickup means such as a video camera), or time-series image data 18 prestored in the storage 16 is input. At this step, the time-series image data as shown in
Subsequently, at the step A2 at which an object region is extracted by the object region extracting section 22, the object region is extracted from the time-series image data shown in
As a method for extracting an object region, the following may be utilized: a method for computing differences between frames and binarizing regions of large difference to extract an object region; a chroma-key method for pre-photographing a moving object using a one-color background, and extracting the object region utilizing a difference in color; a manual method in which a user inputs the outline of an object with a mouse or graphic tablet; or a method for automatically correcting the rough outline of an object input by a user, as disclosed in Jpn. Pat. Appln. KOKAI Publication No. 2001-14477. At the object region extracting step (step A2), the object region is extracted from the image data utilizing one of these methods.
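As an illustration of the first of these methods, the following is a minimal sketch in Python using OpenCV; the threshold value and the morphological kernel size are assumptions chosen for illustration, not values from this disclosure.

```python
import cv2
import numpy as np

def extract_object_mask(prev_frame, frame, thresh=30):
    """Binary mask of regions that changed between two consecutive frames."""
    prev_gray = cv2.cvtColor(prev_frame, cv2.COLOR_BGR2GRAY)
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    diff = cv2.absdiff(gray, prev_gray)          # per-pixel frame difference
    _, mask = cv2.threshold(diff, thresh, 255, cv2.THRESH_BINARY)
    # Morphological closing fills small holes inside the moving region
    # (the 5x5 kernel is an illustrative assumption).
    kernel = np.ones((5, 5), np.uint8)
    return cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)

def extract_object_image(frame, mask):
    """Object image data: the frame with everything outside the mask removed."""
    return cv2.bitwise_and(frame, frame, mask=mask)
```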
After that, at the step A3 at which a motion analysis is performed by the motion analysis section 24, the motion of the object is analyzed from shape information contained in the extracted time-series object image data.
In an object motion analysis, in most cases, if the trajectory of motion is observed while it is compared with a certain reference line, the characteristics of the motion can be clearly seen.
In the case of, for example, golf swing, it is considered desirable to twist the body about the backbone when swinging a golf club. Therefore, it is important for the analysis of a swing whether or not the swing axis is kept constant during the swing motion.
Firstly, the motion analysis section 24 computes, in a reference computing step, the position of a reference line for the motion on the basis of object image data generated at the object region extracting step (step A3a1). In the case of image data corresponding to a golf swing, the swing axis of an object (player) is detected from shape information contained in, for example, the object image data as shown in
Subsequently, in a reference-additional-line synthesis step, the motion analysis section 24 superposes and synthesizes the object image data and an additional line indicative of the swing axis, utilizing the position computed at the reference computing step (step A3a2).
The position of the additional line 50 may be determined not only from the object region in the initial frame, but also utilizing any one of the frames ranging from the initial one to the final one. For example, the frame in which the head of the club hits a ball, or a frame near this frame, is detected, and the additional line is determined on the basis of this frame. The frame near the hit time can be detected by detecting, for example, object image data indicating that the head of the golf club is lowest. Further, the club head can be detected if it is designated as a particular point, as in the fourth embodiment described later.
The analysis result display section 30 sequentially superposes and synthesizes time-series object image data items, each of which is obtained by synthesizing object image data and the additional line 50 in the reference additional line synthesis step using the motion analysis section 24. The display section 30 displays the synthesized object image data on the display unit 14. Thus, the image as shown in
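The superposition at this step might be sketched as follows, assuming per-frame binary masks produced by the extraction step and a vertical additional line; the line color and thickness are illustrative assumptions.

```python
import cv2
import numpy as np

def synthesize_strobe_with_line(frames, masks, axis_x):
    """Superpose time-series object images stroboscopically and draw a
    vertical additional line (e.g. the swing axis) at column axis_x."""
    canvas = np.zeros_like(frames[0])
    for frame, mask in zip(frames, masks):
        # Later frames overwrite earlier ones only where the object exists.
        canvas[mask > 0] = frame[mask > 0]
    height = canvas.shape[0]
    cv2.line(canvas, (axis_x, 0), (axis_x, height - 1), (0, 0, 255), 2)
    return canvas
```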
Various additional lines other than the additional line 50 indicative of the swing axis shown in
For example, the upper end of the object (the top of the head) is detected from the shape information (concerning the player) contained in the object image data shown in
The upper end of the object (the top of the head) can be detected, for example, in the following manner. The motion analysis section 24 horizontally scans, for example, the initial frame of object image data, beginning from the uppermost portion of the frame, thereby detecting an object image. When an object image is detected, it is determined whether the object image is the head or part of the golf club. The object image corresponding to the golf club (club head or shaft) is much narrower than the object image corresponding to the head, and this difference in width is used to determine which of the two has been detected. For example, when the object image data shown in
Further, concerning the golf club, its color may be prestored so that whether or not a detected object image corresponds to the golf club can be determined from its color, as will be described in detail in the fourth embodiment.
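The row scanning described above might be sketched as follows, assuming a binary object mask; the width threshold separating the thin club shaft from the head is an illustrative assumption.

```python
import numpy as np

def find_head_top(mask, min_width=15):
    """Scan rows from the top of the frame; the first row whose object span
    is wider than min_width is taken as the top of the head (a narrower
    span is assumed to be the club head or shaft)."""
    for y in range(mask.shape[0]):
        xs = np.flatnonzero(mask[y])
        if xs.size and (xs[-1] - xs[0] + 1) >= min_width:
            return y          # row index of the top of the head
    return None               # no sufficiently wide object found
```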
Furthermore, when the horizontal additional line 52 is displayed, the position of the additional line 52 may be determined not only from the object region in the initial frame, but also utilizing any one of the frames ranging from the initial one to final one.
As a result, as shown in
Also, additional lines 54 arranged in the form of a lattice and time-series object image data may be superposed and synthesized for analyzing the motion of an object, as shown in
The position of one of the vertical lines included in the additional lines 54 shown in
In addition, in
As described above, in the first embodiment, synthesis of an additional line and time-series object image data enables the motion of an object to be quantitatively analyzed from images of the object contained in time-series image data. In other words, with reference to the additional lines 50, 52 and 54, the movement of a player swinging a golf club can be measured.
SECOND EMBODIMENT
In a second embodiment, the position of the center of gravity in an object region extracted as time-series object image data is detected and used for the motion analysis of the object in a motion analysis step similar to that of the first embodiment.
In a center-of-gravity computing step, the motion analysis section 24 computes the position of the center of gravity in an object region from the time-series object image data extracted from time-series image data by the object region extracting section 22 (step A3b1), and then superposes a point (center-of-gravity point) indicative of the position of the center of gravity on the object image data to obtain synthesis image data (step A3b2).
Further, to enable vertical changes in the center of gravity to be confirmed, object image data can be displayed together with a horizontal additional line that passes through the position of the center of gravity in the object region of the object image data.
Furthermore, to enable horizontal changes in the position of the center of gravity to be confirmed, object image data can be displayed together with a vertical additional line that passes through the position of the center of gravity in the object region of the object image data.
In addition, a dispersion value concerning the position of the center of gravity for each frame of time-series object image data can be obtained and displayed, as sketched below. If the displayed dispersion value is low, it is determined that changes in the position of the center of gravity fall within a narrow range, and therefore that the golf swing is stable. On the other hand, if the displayed dispersion value is high, it is determined that the range of changes in the position of the center of gravity is wide, and therefore that the golf swing is unstable.
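A minimal sketch of the center-of-gravity computation and the dispersion value, assuming one binary object mask per frame:

```python
import numpy as np

def center_of_gravity(mask):
    """Centroid (x, y) of the object pixels in one binary mask (step A3b1)."""
    ys, xs = np.nonzero(mask)
    return float(xs.mean()), float(ys.mean())

def centroid_dispersion(masks):
    """Variance of the centroid position over all frames; a small value
    indicates a stable swing, as described above."""
    points = np.array([center_of_gravity(m) for m in masks])
    return points.var(axis=0)    # (horizontal variance, vertical variance)
```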
Part of an object region can be detected utilizing, for example, color information contained in object image data. The motion analysis section 24 horizontally scans object image data, thereby generating a color histogram corresponding to each scanning line. In a color histogram for a head region, torso region or leg region, there is a tendency for a particular color to have a high value. For example, in the head region, the color of a cap or hair has a high value, while in the torso region, the color of clothing on the upper half of the body has a high value. The motion analysis section 24 detects, from the color histogram, a horizontal position at which a characterizing change in color occurs, and divides the object image at this horizontal position, thereby detecting a partial region corresponding to the head region or torso region. After that, the position of the center of gravity in each partial image is computed in the same manner as described above, whereby an image obtained by synthesizing each partial image with a center-of-gravity point indicative of the position of the center of gravity is displayed.
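The scan-line color analysis might be sketched as follows; using the mean hue per row as the “characterizing change in color,” and working in HSV space, are both assumptions.

```python
import cv2
import numpy as np

def find_color_split_row(frame, mask):
    """Mean hue of the object pixels on each horizontal scan line; the row
    with the largest hue jump is returned as a head/torso-style boundary."""
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    row_hue = np.full(frame.shape[0], np.nan)
    for y in range(frame.shape[0]):
        xs = np.flatnonzero(mask[y])
        if xs.size:
            row_hue[y] = hsv[y, xs, 0].mean()
    jumps = np.abs(np.diff(row_hue))     # rows with no object stay NaN
    return int(np.nanargmax(jumps))      # row index of the strongest change
```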
If the position of the center of gravity in each partial region (head region, torso region, etc.) is obtained in the same manner as in the entire object region (the entire player region), it can be used as useful data. Further, if, for example, the tip of a golf club is designated, the position of the club head is automatically computed.
As described above, in the second embodiment, the position of the center of gravity in an object region contained in extracted time-series object image data is detected and used for the motion analysis of an object corresponding to the object region. As a result, the motion of the object contained in time-series image data can be analyzed in a quantitative manner.
THIRD EMBODIMENT
In a third embodiment, first and second time-series object image data items are compared with each other. Specifically, the first time-series object image data shown in
In general, when the motions of different objects are photographed, the position or size differs between the objects, as illustrated in
At the object image data adjusting step (step B1), first and second time-series object image data items are adjusted so that they can be compared with each other. Specifically, the positions and/or sizes of the objects are adjusted, the dominant hands of the objects are adjusted (reverse of images), etc.
The object in
The object image data adjusting section 26 can also perform time-based adjustment. For example, two time-series object image data items indicative of respective golf swings can be adjusted in time so that the ball-hitting moments during the swings can be simultaneously displayed. Further, if two time-series object image data items have different frame rates, a synthesizing technique, such as morphing, is used to obtain synthesized images free from time differences, which facilitates the comparison between the two time-series object image data items.
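Full morphing is beyond a short example, but a crude stand-in, resampling one clip to the frame count of the other by linear blending of neighboring frames, might look like this.

```python
import numpy as np

def resample_clip(frames, n_out):
    """Resample a list of frames to n_out frames by linearly blending
    neighboring frames; a crude substitute for the morphing mentioned
    above, adequate only for roughly aligned motions."""
    out = []
    positions = np.linspace(0, len(frames) - 1, n_out)
    for t in positions:
        i = int(t)
        j = min(i + 1, len(frames) - 1)
        frac = t - i
        blend = ((1.0 - frac) * frames[i].astype(np.float32)
                 + frac * frames[j].astype(np.float32))
        out.append(blend.astype(np.uint8))
    return out
```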
Furthermore, it is not necessary to limit the first and second time-series object image data items to object image data items corresponding to different objects. They may be time-series object image data items obtained by photographing one motion of a single object in different directions.
Although in the above description, first and second time-series object image data items corresponding to an object are compared to thereby analyze the motion of the object, the same analysis method as the above can be used for three or more time-series object image data items.
After a first group of time-series object image data items are adjusted to a second group of time-series object image data items to enable comparison therebetween at the object image data adjusting step (step B1), the first group of adjusted time-series object image data items are arranged in parallel with the second group of adjusted time-series object image data items, and in each group, the object image data items are superposed and synthesized. The synthesized images of the two groups are displayed as object motion analysis results on the display unit 14 as shown in
As described above, in the third embodiment, a first group of time-series object image data items are adjusted to a second group of time-series object image data items to enable comparison therebetween, and the first group of time-series object image data items are synthesized and arranged in parallel with the second group of time-series object image data items similarly synthesized, thereby enabling a quantitative analysis of objects from images thereof contained in different groups of time-series object image data.
FOURTH EMBODIMENT
In the motion analysis step employed in the first embodiment, extracted time-series object image data items can be superposed and synthesized, while displaying, in the form of a line and/or dots, the trajectory of a particular point in an object region indicated by each extracted time-series object image data item.
Firstly, the motion analysis section 24 detects, in a particular region tracing step, a particular region from object image data extracted by the object region extracting section 22. For example, concerning object image data indicative of a golf swing, the head of a golf club is detected as the particular region. The head as the particular region is detected as the distal portion of an object region in the object image data corresponding to a golf club, or detected by directly pointing the head on the screen using a pointing device such as a graphic tablet, as is shown in
In a tracing result synthesizing step, the motion analysis section 24 adds a line and/or dots indicative of the trajectory 70 of the particular region, superposes time-series object image data items, and displays the resultant synthesis image on the display unit 14 (step A3c2).
The particular region can be detected by the following method, as well as the above-described one. For example, the color of the particular region is pre-stored, and the region with the same color as the stored color is detected as a particular region from object image data. In the above-described example, the color of the club head is pre-registered, and the club head is detected as a particular region. The color designated as a particular region may be input by a user's manual operation through the input unit 12, or may be pointed on the display unit 14 using a pointing device (see a seventh embodiment).
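A sketch of this pre-registered-color detection, assuming OpenCV and HSV bounds for the registered color; the example bounds for a red club head and the minimum area are illustrative assumptions.

```python
import cv2
import numpy as np

def detect_particular_region(frame, lower_hsv, upper_hsv, min_area=20):
    """Locate the particular region whose color was pre-registered; returns
    the centroid (x, y) of the largest matching region, or None."""
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, np.array(lower_hsv), np.array(upper_hsv))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    contours = [c for c in contours if cv2.contourArea(c) >= min_area]
    if not contours:
        return None
    m = cv2.moments(max(contours, key=cv2.contourArea))
    return (m["m10"] / m["m00"], m["m01"] / m["m00"])

# Illustrative call for a red club head (the HSV values are assumptions):
# head_pos = detect_particular_region(frame, (0, 120, 70), (10, 255, 255))
```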
Alternatively, the initial frame of time-series image data (object image data) may be displayed on the display unit 14, thereby causing a user to designate the position (region) to be set as a particular region on the frame by a manual operation. After that, on the basis of the color of the designated region, the particular region in the subsequent object image data may be traced.
Further, a zone (expected zone) in the form of, for example, a stripe, through which a particular region may pass, may be set in an image, thereby determining, as the particular region, a region of a predetermined size or more contained in the zone. In the case of a golf swing, a curved stripe zone, through which the head of a club is expected to pass, is set as an expected zone. The expected zone may be designated by a user's manual operation using the input unit 12 (pointing device), or may be predetermined to be a certain distance from an object image.
Furthermore, in the display example of
Although in the above description, a particular region is traced, a certain point in this region may be traced.
As described above, in the fourth embodiment, extracted time-series object image data items are superposed and synthesized while the trajectory of a particular point in an object region indicated by each extracted object image data item is displayed using a dot train or line, thereby enabling a quantitative analysis of the motion of an object from images thereof contained in time-series image data.
FIFTH EMBODIMENT
In the image analysis method for analyzing the movement of an object from images thereof contained in time-series image data, a change concerning a particular point in the object, such as its movement speed or the curvature of its trajectory, is computed from the movement of the particular point. For example, the motion analysis section 24 detects, in the motion analysis step, a particular point corresponding to a club head from object image data extracted by the object region extracting section 22 (in a manner similar to that employed in the fourth embodiment), computes the movement speed or the curvature of the trajectory from the detected particular point, and displays the computation result as an object motion analysis result.
In a movement distance computing step (step A3d1), the motion analysis section 24 computes, from time-series object image data, the distance through which a particular region has moved. In the subsequent movement speed computing step (step A3d2), the motion analysis section 24 computes the movement speed of the particular region from the movement distance of the particular region computed at the movement distance computing step.
The distance through which the club head (particular point) moves corresponds to the distance between club head positions 72 and 73 in
Further, when the distance between club head positions of different time points is short, there is no problem even if the length 75 of the straight line that connects the club head positions 72 and 73 is approximated as the distance through which the club head moves. From the above, the distance L [m] through which the club head moves within the predetermined time period T [sec] is determined, thereby obtaining the head speed V=L/T [m/sec].
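The V=L/T computation itself is straightforward; a sketch follows, assuming the pixel-to-meter scale comes from one of the calibrations described next.

```python
def head_speed(p1, p2, frames_apart, fps, meters_per_pixel):
    """Club head speed V = L / T, approximating the short arc between two
    head positions p1, p2 (in pixels) by the straight line between them."""
    dx = p2[0] - p1[0]
    dy = p2[1] - p1[1]
    distance_m = ((dx * dx + dy * dy) ** 0.5) * meters_per_pixel
    time_s = frames_apart / fps
    return distance_m / time_s        # [m/sec]
```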
In the movement distance computing step, the following method can also be utilized.
For example, the motion analysis section 24 detects the position of a ball in an image, and detects object image data items that contain respective particular regions (indicating the club head) closest to the ball position on the left and right sides. On the basis of the positions of the particular regions of the two object image data items, the movement distance of the particular region is determined.
Alternatively, time-series object image data items may be superposed and displayed on the display unit 14, and a user may manually designate particular regions for computing the movement distance, using the input unit 12 (pointing device).
Furthermore, an absolute movement speed may be computed from a relative movement distance on an image.
For example, the height (cm) of a photographed player is input, and the number of pixels corresponding to the height of the player in an image (object image) is detected. On the basis of the ratio between the number of pixels corresponding to the movement distance of a particular region in the image and the number of pixels corresponding to the height of the player, the actual movement distance (m) of the club head is computed. During a swing, the player may bend down or kneel, so the apparent height of the player in an image is lower than the actual height. To compensate for this, a coefficient value may be preset and the height of the player corrected beforehand using the coefficient value. Instead of inputting the height (cm) of a photographed player, the height of an object region corresponding to the player may be preset to a certain value, thereby computing the movement distance of the club head in the same manner as the above. Further, instead of an object region corresponding to a player, the movement distance of the club head (particular region) may be computed in the same manner as the above on the basis of a dimension (e.g. the length) of the club head in an image. In this case, the dimension of the club head may be manually input by a user or may be set to a predetermined value.
In addition, a predetermined object region may be detected as a reference region from (e.g. the initial frame of) time-series image data, thereby computing the movement distance of the club head from the size of the reference region. For example, the region corresponding to a ball in an image is detected as the reference region. The golf ball has a predetermined size according to the golf rules, and is usually colored in white. Accordingly, a white circular region in an image is detected as a ball region. After that, the number of pixels corresponding to the diameter of the ball region is detected, whereby the actual movement distance (m) of the club head is computed on the basis of the ratio between the number of pixels corresponding to the diameter and the number of pixels by which a particular region in the image has moved.
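A sketch of this ball-based calibration; detection of the white circular region itself is omitted here, and the real diameter of about 42.67 mm follows from the golf rules.

```python
def meters_per_pixel_from_ball(ball_diameter_px, ball_diameter_m=0.04267):
    """Pixel-to-meter scale from the detected ball diameter in pixels;
    0.04267 m is the minimum ball diameter fixed by the golf rules."""
    return ball_diameter_m / ball_diameter_px
```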
Also, when a golf swing is photographed at a predetermined place, since the installation position of a camera and the position of a ball can be fixed, the size of a region corresponding to the ball in an image can be determined. On the basis of the predetermined size of the region corresponding to the ball, the movement distance of the club head can be computed in the same manner.
Firstly, in a trajectory generating step, the motion analysis section 24 generates, on the basis of time-series object image data, a trajectory that indicates changes in the position of a particular region (step A3e1). In the trajectory generating step, the method described referring to the flowchart of
In light of this, the motion analysis section 24 computes, from a synthesized image, the curvature R1 (first curvature) of a first-half swing arc 80 and the curvature R2 (second curvature) of a second-half swing arc 82.
Specifically, the motion analysis section 24, for example, detects, as a reference point, the position in the object region about which the club is swung, and computes the curvature R1 of the first-half swing arc 80 on the basis of the detected position of rotation and particular points 83 and 84 corresponding to the opposite ends of the first-half swing arc 80. Similarly, the motion analysis section 24 computes the curvature R2 of the second-half swing arc 82 on the basis of particular points 86 and 87 corresponding to the opposite ends of the second-half swing arc 82.
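The disclosure computes each curvature from a reference point and the arc ends; as a plainly named substitute, the following sketch instead fits a circle through three sampled trajectory points (the two ends and one interior point) and returns its curvature 1/R.

```python
import math

def arc_curvature(p1, p2, p3):
    """Curvature 1/R of the circle through three trajectory points,
    using the circumradius formula R = abc / (4 * triangle area)."""
    a = math.dist(p1, p2)
    b = math.dist(p2, p3)
    c = math.dist(p3, p1)
    # Twice the triangle area, via the cross product of two edge vectors.
    cross = ((p2[0] - p1[0]) * (p3[1] - p1[1])
             - (p2[1] - p1[1]) * (p3[0] - p1[0]))
    if cross == 0:
        return 0.0                 # collinear points: a straight trajectory
    radius = (a * b * c) / (2.0 * abs(cross))
    return 1.0 / radius
```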
Although in the above description, the shoulder of a player is used as a reference point, another position, such as the position of the center of gravity computed from the initial frame of an object image, may be used as the reference point. Further, although in the above-described example, the swing arc before the impact position is compared with that after the impact position, the curvature ratio may be computed using any other portions of the trajectory. For example, a user may manually designate any portions of the trajectory in an image.
Further, the motion analysis section 24 can use the ratio between the curvatures R1 and R2 of the first-half and second-half swing arcs 80 and 82 as a parameter for quantitatively estimating the degree of maturity of a golf swing.
It is advisable for a user to perform swing exercises to establish a swing arc relationship of R1>>R2, while confirming the motion analysis results of the motion analysis section 24.
As described above, in the fifth embodiment, the movement speed of a particular point in an object or the curvature of the trajectory of the particular point is computed from the movement of the particular point in the object, images of the object being contained in time-series image data. As a result, the movement of the object can be quantitatively analyzed.
Although the fifth embodiment employs the object image extracting step (step A2) at which time-series object image data is extracted, this step can be omitted if the trajectory of a particular region is computed from time-series image data.
For example, the method (particular region tracing step) described in the fourth embodiment is executed using time-series image data as a target. This enables the trajectory of a particular region corresponding to a club head to be obtained from time-series image data. After that, the movement speed of the particular region, the curvature of the trajectory of the region, and the curvature ratio between portions of the trajectory can be computed as in the fifth embodiment.
In this case, the motion analysis section 24 uses, for example, the movement speed of the particular region, the curvature of the trajectory of the region, and the curvature ratio as estimation values for determining whether or not a golf swing is good. The motion analysis section 24 compares the estimation values with preset reference values, thereby estimating the motion of an object image, i.e., the golf swing, and displaying an analysis result based on the estimation.
For example, the golf swing may be estimated to be good if the computed curvature ratio indicates that the first curvature R1 is greater than the second curvature R2, whereby a message indicative of this estimation may be displayed on the display unit 14 as an analysis result.
SIXTH EMBODIMENT
In the image analysis method for analyzing the movement of an object from images thereof contained in time-series image data, a characterizing amount, such as the area of a closed region defined by the trajectory of a particular point in the object, or the inclination of the trajectory, is computed from the movement of the particular point. For example, the motion analysis section 24 detects, in the motion analysis step, a particular point from object image data extracted by the object region extracting section 22 (in a manner similar to that employed in the fourth embodiment), obtains a closed region defined by the trajectory of the detected particular point, computes the area of the closed region or the inclination of the trajectory, and displays the computation result as an object motion analysis result.
The motion analysis section 24 generates, in a trajectory generating step, a trajectory indicative of changes in the position of a particular region on the basis of time-series object image data (step A3f1). The trajectory generating step can employ the method illustrated in the flowchart of
In general, it is considered that the narrower (i.e., the closer to a line) the swing plane when viewed from behind, the better the swing.
In this embodiment, time-series data obtained by photographing a swing from behind as shown in
The motion analysis section 24 detects a particular point corresponding to a club head from object image data extracted by the object region extracting section 22 (in a manner similar to that employed in the fourth embodiment), thereby obtaining a closed region defined by the trajectory of the detected particular point, i.e., the swing plane area 90 as shown in
It is advisable for a user to perform golf swing exercises to make the area S of the swing plane area 90 closer to 0, while confirming the motion analysis result of the motion analysis section 24.
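The area S can be computed with the shoelace formula, assuming the sampled trajectory of the particular point is treated as a closed polygon; a minimal sketch:

```python
def enclosed_area(points):
    """Shoelace formula for the area of the closed region defined by the
    trajectory points [(x0, y0), (x1, y1), ...] of the particular point."""
    total = 0.0
    n = len(points)
    for i in range(n):
        x1, y1 = points[i]
        x2, y2 = points[(i + 1) % n]   # wrap around to close the polygon
        total += x1 * y2 - x2 * y1
    return abs(total) / 2.0
```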
Further, the inclination of the swing plane in
The motion analysis section 24 detects, from particular points contained in object image data, an uppermost particular point 91 and lowermost particular point 92, for example, and computes the inclination of the swing plane from the particular points 91 and 92.
If it is detected that the trajectory of a particular point has an intersection, it can be determined from the trajectory whether the swing is an “outside-in” or “inside-out” swing. Furthermore, when a swing of a right-handed player is photographed, if the trajectory of a particular point travels clockwise, it is detected that the swing is an “outside-in” swing, while if the trajectory of a particular point travels counterclockwise, it is detected that the swing is an “inside-out” swing (see
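The clockwise/counterclockwise test can reuse the shoelace sum without the absolute value: in image coordinates (y increasing downward), a positive sum corresponds to clockwise travel. The mapping to swing type below follows the description above for a right-handed player and should be treated as an assumption to verify against real data.

```python
def swing_direction(points):
    """Sign of the signed shoelace sum over the trajectory points; positive
    means clockwise in image coordinates, mapped here to "outside-in"."""
    total = sum(x1 * y2 - x2 * y1
                for (x1, y1), (x2, y2) in zip(points, points[1:] + points[:1]))
    return "outside-in" if total > 0 else "inside-out"
```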
As described above, in the sixth embodiment, the area of a closed region defined by the trajectory of a particular point of an object whose images are contained in time-series image data, or the inclination of the trajectory is computed from the movement of the particular point, thereby enabling the motion of the object to be quantitatively analyzed.
Although the sixth embodiment employs the object image extracting step (step A2) at which time-series object image data is extracted, this step can be omitted if the trajectory of a particular region is computed from time-series image data, and the area of a closed region defined by the trajectory is computed.
For example, the method (particular region tracing step) described in the fourth embodiment is executed using time-series image data as a target. This enables the trajectory of a particular region corresponding to a club head to be obtained from time-series image data. After that, the area of the closed region defined by the trajectory of the particular region can be computed as in the sixth embodiment.
In this case, the motion analysis section 24 uses, for example, the area of the closed region defined by the trajectory of the particular region, as an estimation value for determining whether or not a golf swing is good. The motion analysis section 24 compares the estimation value with a preset reference value, thereby estimating the golf swing, and displaying an analysis result based on the estimation.
SEVENTH EMBODIMENT
In the time-series image data input step employed in the first embodiment, in order to set the position of a particular point used for a motion analysis, a characterizing color can be attached beforehand to a particular point in an object to be photographed, or the position of the particular point in the object can be measured while picking up image data concerning the object.
For example, if, in the time-series image data input section 20, the head of a golf club is marked in red and the gloves of a player are marked in yellow, the object region extracting section 22 can automatically detect a particular point from time-series image data at the object region extracting step, by executing image processing for detecting these colors in an image.
The particular point detection can be performed without image processing. For example, if time-series image data and range data are simultaneously acquired using an instrument such as a range finder, or if a motion capture device capable of position detection is installed in a club head, a particular point in an object can be detected.
In addition, a user can input a particular point in an object if the object region extracting step performed by the object region extracting section 22 in the first embodiment incorporates an object image data generating step for extracting an object region from time-series image data, and a particular point position setting step for enabling users to designate and set the position of a particular point in the extracted object region.
At this time, a user uses a pointing device, such as a mouse or graphic tablet, to designate a particular point position in object image data contained in an image. As a result, the object region extracting section 22 inputs the particular point position for a motion analysis. After that, the section 22 marks, in a predetermined color (e.g., red), an object region corresponding to the particular point position.
As described above, in the seventh embodiment, a particular point position used for a motion analysis can be obtained from time-series object image data, with the result that an object whose images are contained in time-series image data can be analyzed quantitatively.
EIGHTH EMBODIMENT
Firstly, at the time-series image data input step (step C1), time-series image data is obtained by image pickup means such as a video camera, and is input. Subsequently, at the object region extracting step (step C2), an object region is extracted from the time-series image data, thereby generating time-series object image data. At the motion analysis step (step C3), a motion analysis of an object is performed on the basis of the shape information contained in the extracted time-series object image data. Lastly, at the analysis result display step (step C4), the motion analysis result obtained at the motion analysis step (step C3) is digitized and displayed.
For example, as described in the second embodiment, in the motion analysis method for detecting the position of the center of gravity in an object region, vertical (or horizontal) changes in the center of gravity can be displayed using a numerical value or graph as shown in
Further, also in the case of computing, from the movement of a particular point of an object whose images are contained in time-series image data, the movement speed of the particular point, the curvature of the trajectory of the particular point, the area of the closed region defined by the trajectory of the particular point, and/or the inclination of the trajectory (swing plane) of the particular point, the computed values can be displayed directly or using a graph, as is described in the fifth and sixth embodiments.
To estimate a swing analysis result, personal data concerning the player or golf club photographed as time-series image data may be pre-registered and used for the swing estimation. The estimation standards, such as the area S of the swing plane area 90, the inclination of the swing plane, and the shapes of the swing arcs (first-half arc, second-half arc), vary according to, for example, the height of the player, the length of the player's arms, and/or the length of the golf club.
The analysis result display section 30 changes the estimation standards in accordance with personal data, performs estimation on the basis of the changed estimation standards, and displays the estimation result in, for example, the area 100.
Further, the analysis result display section 30 may display the trajectory of a particular point obtained by the motion analysis section 24, in accordance with the direction of the movement of the particular point. For example, as described in the sixth embodiment, if the motion analysis section 24 has detected that the trajectory of the particular point travels clockwise or counterclockwise, therefore that the swing is an “inside-out” swing or “outside-in” swing, the detection result is displayed as an analysis result in the area 100.
Whether the swing is an “inside-out” swing or “outside-in” swing is determined by analyzing changes in the position of the particular point on the trajectory as shown in
If the motion analysis section 24 determines that the swing is either an “inside-out” swing or an “outside-in” swing, the analysis result display section 30 can display the swing plane area 90 in a color corresponding to the swing.
Also, the analysis result display section 30 can display the trajectory during a “down swing” and that during a “follow through” in respective colors.
As described above, in the eighth embodiment, the motion analysis result of an object is displayed using a numerical value or graph. Thus, the motion of the object can be quantitatively analyzed from images thereof contained in time-series image data.
The respective image analysis methods described in the first to eighth embodiments can be combined appropriately.
Furthermore, the methods described in the above embodiments can be written, as image analysis programs executable by a computer, to various recording media such as magnetic disks (flexible disks, hard disks, etc.), optical disks (CD-ROM, DVD, etc.), semiconductor memories, etc., whereby the methods can be provided for various apparatuses. In addition, the methods can be transmitted to various apparatuses via a communication medium. A computer for realizing the methods reads the image analysis program stored in a recording medium, or receives the program via a communication medium, and performs the above-described processes under the control of the program.
Additional advantages and modifications will readily occur to those skilled in the art. Therefore, the invention in its broader aspects is not limited to the specific details and representative embodiments shown and described herein. Accordingly, various modifications may be made without departing from the spirit or scope of the general inventive concept as defined by the appended claims and their equivalents.
Claims
1-20. (canceled)
21. An image analysis method comprising:
- inputting time-series image data;
- generating time-series object image data by extracting an object region from the time-series image data; and
- analyzing motion of an object from the time-series object image data.
22. The method according to claim 21, wherein:
- the inputting includes inputting object image data including first time-series object image data and second time-series object image data;
- the analyzing includes: adjusting at least one of the first time-series object image data and the second time-series object image data; synthesizing items of the first time-series object image data into first synthesis image data, and items of the second time-series object image data into second synthesis image data; and displaying the first synthesis image data and the second synthesis image data in parallel.
23. The method according to claim 21, wherein the analyzing includes:
- tracing a particular region in the object region from the time-series object image data; and
- synthesizing a line indicative of a trajectory of the particular region with the time-series object image data.
24. The method according to claim 21, wherein the analyzing includes:
- computing a movement distance of the object region from the time-series object image data; and
- computing a movement speed of the object region from the movement distance of the object region.
25. The method according to claim 21, wherein the analyzing includes:
- generating a trajectory of the object region from the time-series object image data; and
- computing a curvature of a portion of the trajectory.
26. The method according to claim 21, wherein the analyzing includes:
- generating a trajectory of the object region from the time-series object image data;
- computing a first curvature of a first portion of the trajectory, and a second curvature of a second portion of the trajectory; and
- computing a ratio between the first curvature and the second curvature.
27. The method according to claim 21, wherein the analyzing includes:
- generating a trajectory of the object region from the time-series object image data; and
- computing an area of a region defined by the trajectory of the object region.
28. The method according to claim 21, wherein the generating includes:
- extracting an object region from the time-series image data;
- allowing a position in the object region to be designated; and
- setting a position of a particular point in accordance with the designated position.
29. The method according to claim 21, further comprising synthesizing results of the motion analysis with the time-series image data and displaying a synthesis result.
30. The method according to claim 29, wherein the results of the motion analysis are graphed and displayed.
31. The method according to claim 29, wherein the results of the motion analysis are digitized and displayed.
32. An image analysis method comprising:
- inputting time-series image data;
- tracing a particular region based on the time-series image data;
- generating a trajectory of the particular region; and
- computing a curvature of a portion of the trajectory.
33. An image analysis method comprising:
- inputting time-series image data;
- tracing a particular region based on the time-series image data;
- generating a trajectory of the particular region; and
- computing an area of a region defined by the trajectory of the particular region.
Type: Application
Filed: Mar 3, 2008
Publication Date: Aug 20, 2009
Inventors: Nobuyuki MATSUMOTO (Kawasaki-shi), Osamu Hori (Yokohama-shi), Takashi Ida (Kawasaki-shi), Hidenori Takeshima (Ebina-shi)
Application Number: 12/041,212
International Classification: G06K 9/00 (20060101);