AUTOMATIC FRONTAL-VIEW GAIT SEGMENTATION FOR ABNORMAL GAIT QUANTIFICATION

- Xerox Corporation

A computer-implemented method for gait analysis of a subject includes obtaining visual data from an image capture device positioned in front of or behind the subject, the visual data comprising at least two image frames of the subject over a period of time walking toward or away from the image capture device, the at least two image frames capturing at least a portion of the gait of the subject; detecting, within the at least two image frames, body parts as two-dimensional landmarks using a pose estimation algorithm on each of the at least two frames; generating a joint model depicting the location of at least one joint in each of the at least two frames; using the joint model to segment a gait cycle for the at least one joint; and comparing the gait cycle to a threshold value to detect abnormal gait.

Description
CROSS REFERENCE TO RELATED PATENTS AND APPLICATIONS

This application claims priority to and the benefit of the filing date of U.S. Provisional Patent Application Ser. No. 62/297,341, filed Feb. 19, 2016, which application is hereby incorporated by reference.

BACKGROUND

Human gait, a biometric aimed at recognizing individuals by the way they walk, has recently come to play an increasingly important role in applications such as access control and visual surveillance. Although no two body movements are ever the same, gait is a characteristic of an individual, analogous to other biometrics. Psychological, medical, and biomechanical studies support the notion that humans effortlessly recognize people by the way they walk, and that basic gait patterns are unique to each individual. In contrast to many established biometric modalities such as face, fingerprint, or retina, gait can be analyzed from a distance and can be observed without notifying the subject or requiring the subject's compliance. In fact, the considerable attention toward this biometric has been due to its ability to ascertain a person's identity at a distance while being noninvasive and imperceptible to the subject.

However, human gait analysis and assessment involves challenging issues due to the highly flexible structure and self-occlusion of the human body. These issues mandate using complicated processes for the measurement and analysis of gait in marker-less video sequences. For instance, footwear, physical conditions such as pregnancy, leg or foot injuries, or even drunkenness can change the manner of walking. Like most biometrics, gait will inherently change with age. Therefore, gait can disclose more than identity. As there are numerous applications for the detection of abnormal gait, it seems worthwhile to explore techniques that can accomplish this goal.

Human gait constitutes an essential metric related to a person's health and well-being. Degradation of a person's walking pattern decreases quality of life for the individual and may result in falls and injuries. In one estimate, one out of every three older adults (over the age of 65) falls each year, and the related injuries cost $20 billion per year in the United States. Different types of physiological and anatomical factors can adversely affect gait, such as neurological maladies (e.g., Parkinson's disease or multiple sclerosis), degradation of the bones, joints, or muscles, lower limb injuries or pain, and geriatric diseases, such as osteoporosis, which affect a large percentage of the population. The common symptoms in these cases include a slow pace, unstable standing, tilted walking, mini-step walking, and altered velocity, stride length, and cadence. Therefore, passive monitoring of a person's gait and the detection of deviations from normal patterns can support current frailty assessments, leading to improved and earlier detection of many diseases, or provide valuable information for rehabilitation. On the other hand, assessment is important for recuperative efforts. For example, improvement in a person's gait can be monitored, and is expected, when therapeutic actions are taken, such as adjustment of medication, physical therapy, and joint replacement. It is very desirable to enable frequent, objective assessments to continuously understand a person's condition, as well as to perform fall prediction when gait changes significantly over a short period of time.

The traditional scales used to analyze gait parameters in clinical conditions are semi-subjective, carried out by specialists who observe the quality of a patient's gait by making him/her walk. This is sometimes followed by a survey in which the patient is asked to give a subjective evaluation of the quality of his/her gait. The disadvantage of these methods is that they give subjective measurements, particularly concerning accuracy and precision, which have a negative effect on the diagnosis, follow-up and treatment of the pathologies.

Wearable sensors are being developed to add objectivity and to move the assessment into a passive (e.g., home) setting, rather than relying on costly, infrequent clinical assessments. The various wearable sensor-based systems that have been proposed use sensors located on several parts of the body, such as the feet, knees, thighs, or waist. Different types of sensors are used to capture the various signals that characterize human gait. However, their major disadvantage is the need to place devices on the subject's body, which may be uncomfortable or intrusive. Also, the use of wearable sensors allows analysis of only a limited number of gait parameters. In addition, the analysis of the signals is computationally complex and suffers from excessive noise.

Other than wearable or ground sensors, cameras are also used to analyze gait. Prior camera-based approaches have included the following:

Marker based: this method requires the subject to wear easily detectable markers on the body, usually at joint locations. The 2D or 3D locations of the markers are extracted in a monocular or multi-camera system. The marker locations, or the relationships between them, are then used to segment each stride/step.

Marker-less: this category of methods can be divided into two sub-categories: holistic (usually model free) and model based. For holistic methods, human subjects are usually first detected, tracked, and segmented; gait is then usually characterized by the statistics of the spatiotemporal patterns generated by the silhouette of the walking person. A set of features/gait signatures is then computed from the patterns for segmentation, recognition, etc. One approach analyzed the autocorrelation signals of the image sequence. Another approach used XT and YT slices for gait analysis. Model-based methods apply human body/shape or motion models to recover features of gait mechanics and kinematics. The relationships between body parts are then used to segment each stride/step or for other purposes. Models include generative and discriminative models.

For most gait analysis methods, segmenting the gait cycle precisely is one of the most important steps and building blocks. Stride-to-stride measurement of gait signals is essential for diagnosing and monitoring diseases such as Parkinson's disease. As such diseases usually progress over a long period of time, it is very desirable to enable frequent and objective assessments to continuously understand such patients' ambulatory condition. Gait signals can come from wearable devices or camera data. Current methods for gait analysis include manual or automatic segmentation based on gait signals such as feet distance or knee angles. Visual inspection of gait from real-time actions or video recordings is subjective and requires a costly trained professional to be present, thereby limiting the frequency at which evaluations can be performed. Wearables capture only a portion of the gait signal (depending on where the sensors are positioned) and require the compliance of a patient to consistently wear the device if day-to-day measurements are to be taken. Current computer vision techniques can be categorized into marker-based and marker-less approaches. Similar to wearables, marker-based technologies require precise positioning of markers on subjects, which is not feasible for day-to-day monitoring. Monocular marker-less technologies often require identifying human body parts first, which is very challenging due to variations in viewing angle and appearance. Hence, current monocular marker-less methods are usually performed in clinical settings where the viewing angle and camera-to-subject distance are fixed, and may not be robust enough in an assisted living or traditional home setting.

Marker-based technologies require precise positioning of markers on subjects, which is not feasible in day-to-day monitoring. Monocular marker-less technologies are often performed in a clinical side-view, open space setting, where lateral views are possible. Lateral views may not be readily obtainable in an assisted living or traditional home setting.

INCORPORATION BY REFERENCE

The following references, which were filed concurrently herewith and the disclosures of which are incorporated by reference herein in their entireties, are mentioned:

U.S. application Ser. No. 15/283,629, filed Oct. 3, 2016, by Xu et al., (Attorney Docket No. XERZ 203330US01), entitled “A COMPUTER VISION SYSTEM FOR AMBIENT LONG-TERM GAIT ASSESSMENT”; and, U.S. application Ser. No. 15/283,663, filed Oct. 3, 2016, by Wu et al., (Attorney Docket No. XERZ 203336US01), entitled “SYSTEM AND METHOD FOR AUTOMATIC GAIT CYCLE SEGMENTATION”.

The following reference, the disclosure of which is incorporated by reference herein in its entirety, is mentioned:

U.S. application Ser. No. 14/963,602, filed Dec. 9, 2015, by Bernal, et al., (Attorney Docket No. XERZ 203256US01), entitled “COMPUTER-VISION-BASED GROUP IDENTIFICATION”.

BRIEF DESCRIPTION

In accordance with one aspect, a computer-implemented method for gait analysis of a subject comprises obtaining visual data from an image capture device positioned in front of or behind the subject, the visual data comprising at least two image frames of the subject over a period of time walking toward or away from the image capture device, the at least two image frames capturing at least a portion of the gait of the subject, detecting within the at least two images body parts as two-dimensional landmarks using a pose estimation algorithm on each of the at least two frames, generating a joint model depicting the location of the at least one joint in each of the at least two frames, using the joint model to segment a gait cycle for the at least one joint, and comparing the gait cycle to a threshold value to detect abnormal gait.

The method can further comprise, prior to generating the joint model, estimating a three-dimensional shape of the subject using the two-dimensional landmarks, and estimating the at least one joint location based on the three-dimensional shape. The joint model can include a deformable parts model. The at least one joint can include an ankle, a knee, a hip, or other joint. The gait cycle can include a distance between two consecutive peaks in a trajectory of a joint. The gait cycle can include a distance between consecutive peaks in an angle of a joint or body part. The obtaining visual data from an image capture device can include using a camera mounted in an elongated hallway in which the subject can walk toward and away from the camera.

In accordance with another aspect, a system for gait analysis of a subject comprises an image capture device operatively coupled to a data processing device and positioned in front of or behind the subject, the image capture device configured to capture visual data comprising at least two image frames of the subject over a period of time walking toward or away from the image capture device, the at least two image frames capturing at least a portion of the gait of the subject, a processor-usable medium embodying computer code, said processor-usable medium being coupled to said data processing device, said computer code comprising instructions executable by said data processing device and configured for: detecting within the at least two images body parts as two-dimensional landmarks using a pose estimation algorithm on each of the at least two frames; generating a joint model depicting the location of the at least one joint in each of the at least two frames; using the joint model to segment a gait cycle for the at least one joint; and comparing the gait cycle to a threshold value to detect abnormal gait.

The instructions can further comprise, prior to generating the joint model, estimating a three-dimensional shape of the subject using the two-dimensional landmarks, and estimating the at least one joint location based on the three-dimensional shape. The joint model can include a deformable parts model. The at least one joint can include an ankle, a knee, a hip, or other joint. The gait cycle can include a distance between two consecutive peaks in a trajectory of a joint. The gait cycle can include a distance between consecutive peaks in an angle of a joint or body part. The image capture device can be mounted in an elongated hallway in which the subject can walk toward and away from the camera.

In accordance with another aspect, a non-transitory computer-usable medium for gait analysis of a subject is set forth, said computer-usable medium embodying a computer program code, said computer program code comprising computer executable instructions configured for: obtaining visual data from an image capture device positioned in front of or behind the subject, the visual data comprising at least two image frames of the subject over a period of time walking toward or away from the image capture device, the at least two image frames capturing at least a portion of the gait of the subject; detecting within the at least two images body parts as two-dimensional landmarks using a pose estimation algorithm on each of the at least two frames; generating a joint model depicting the location of the at least one joint in each of the at least two frames; using the joint model to segment a gait cycle for the at least one joint; and comparing the gait cycle to a threshold value to detect abnormal gait.

The instructions can further comprise, prior to generating the joint model, estimating a three-dimensional shape of the subject using the two-dimensional landmarks, and estimating the at least one joint location based on the three-dimensional shape. The joint model can include a deformable parts model. The at least one joint can include an ankle, a knee, a hip, or other joint. The gait cycle can include a distance between two consecutive peaks in a trajectory of a joint or a distance between consecutive peaks in an angle of a joint or body part. The obtaining visual data from an image capture device can include using a camera mounted in an elongated hallway in which the subject can walk toward and away from the camera.

BRIEF DESCRIPTION OF THE DRAWINGS

FIG. 1 is a flowchart of an exemplary method in accordance with the present disclosure;

FIG. 2 is a schematic block diagram of an exemplary system in accordance with the present disclosure;

FIG. 3A illustrates a series of images with detected body parts superimposed thereon for a first subject condition;

FIG. 3B illustrates a series of images with detected body parts superimposed thereon for a second subject condition;

FIG. 4A is a series of 2D images and 3D shapes estimated therefrom represented as a linear combination of rotatable basis shapes for a first field of depth within the same field of view;

FIG. 4B is a series of 2D images and 3D shapes estimated therefrom represented as a linear combination of rotatable basis shapes for a second field of depth within the same field of view;

FIG. 5 graphically illustrates calculated features from the reconstructed 3D model of DPM landmarks (DPM3D) compared with the features of a 3D model built from manually annotated joints (GT3D);

FIG. 6A graphically illustrates, for a first experiment (a), a comparison between the two main features displayed in FIG. 5 (foot distance and knee angle), with the features calculated from the reconstructed 3D model of DPM landmarks (DPM3D) compared with the features of a 3D model built from manually annotated joints (GT3D);

FIG. 6B graphically illustrates, for a second experiment (b), a comparison between the two main features displayed in FIG. 5 (foot distance and knee angle), with the features calculated from the reconstructed 3D model of DPM landmarks (DPM3D) compared with the features of a 3D model built from manually annotated joints (GT3D); and,

FIG. 7 graphically illustrates the variation of stride duration of the subject for the different conditions of FIGS. 6A and 6B.

DETAILED DESCRIPTION

The present disclosure sets forth systems and methods for performing an objective evaluation of different gait parameters by applying computer vision techniques that can use existing monitoring systems without substantial additional cost or equipment. Aspects of the present disclosure can perform assessment during a user's daily activity without the requirement to wear a device (e.g., a sensor or the like) or special clothing (e.g., uniform with distinct marks on certain joints of the person). Computer vision in accordance with the present disclosure can allow simultaneous, in-depth analysis of a higher number of parameters than current wearable systems. Unlike approaches utilizing wearable sensors, the present disclosure is not restricted or limited by power consumption requirements of sensors. The present disclosure can provide a consistent, objective measurement of gait parameters, which reduces error and variability incurred by subjective techniques. To achieve these goals, a body and gait representation is generated that can provide gait characteristics of an individual, while applying generally to classification and quantification of gait across individuals.

The present disclosure sets forth the following approach to the problem of frontal-view gait abnormality detection, which can be performed with a single non-calibrated camera and extracts unique signatures from descriptors of the body's deformation. As in a real-life scenario, the subjects walk in a hallway toward or away from a camera. Aspects of the method can include: 1) detection of body parts as 2D landmarks by employing a pose estimation algorithm in each frame; 2) refining joint locations; 3) estimation of the 3D shape of each subject, given the set of 2D landmarks detected; 4) using the estimated 3D joint positions, calculating the variation of different features, such as knee angle or the distance between the right and left feet; 5) extraction of multiple gait cycles from each sequence of features by detecting consecutive peaks in the aforementioned signals; and 6) using stride length, stride duration, and average amplitude of knee angle as features for quantification of gait status.

Aspects of the present disclosure are aimed at passive assessment of health conditions for settings, such as in-home and assisted living. The technologies can also apply to a clinical setting. In one embodiment, the system and method are directed to quantification from common walking settings, such as a hallway, where frontal views are available. The quantification can be used to understand the current state or progression of a degenerative condition or the recuperation after a medical procedure, such as knee replacement.

The methods and systems described below further address the problem of segmenting gait cycles from video in the natural setting where the subject moves toward or away from the camera. The present method addresses this important imaging setting because it allows monitoring in real life home or assisted living settings where cameras can be mounted in hallways and multiple step cycles can be observed. Existing lateral-view methods are not well suited for this imaging condition. In one embodiment, a single frontal view point is used and a special pose or camera calibration is not used, which differentiates aspects of the present disclosure from existing technology. The method accommodates the change of scale as the individual walks toward (or away from) the camera. The gait cycles are extracted from descriptors of the body's deformation.

With reference to FIG. 1, a flow chart illustrates an exemplary process 2 in accordance with an aspect of the present disclosure. The exemplary method begins in step 10 wherein images of one or more subjects are acquired. This is typically performed by recording video or capturing multiple still images. It should be appreciated that step 10 includes acquiring a series of frames per gait cycle. While more frames per gait cycle can provide more information on details of movement within a cycle, at least two frames per cycle are needed to quantify the duration of a stride.

In step 12, detection of body parts as 2D landmarks is performed by employing a pose estimation algorithm in each frame. In step 14, the 2D joint locations are refined. In step 16, estimation of the 3D shape of each subject, given the set of refined 2D joint locations, is performed. In step 18, using the estimated 3D joint positions, the variation of different features, such as knee angle or the distance between the right and left feet, is calculated. In step 20, extraction of multiple gait cycles from each sequence of features is performed by detecting consecutive peaks in the aforementioned signals. In step 22, gait status quantification is performed using stride length, duration, and average amplitude of knee angle as features. Each of steps 10-22 is further described below.
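The overall flow of steps 10-22 can be sketched as follows. All function names, joint indices, and data are hypothetical stand-ins for the modules described in the following paragraphs, wired together only to show how the per-frame stages feed a per-frame feature signal:

```python
import numpy as np

N_PARTS = 18                 # parts in the assumed part model
L_ANKLE, R_ANKLE = 16, 17    # hypothetical joint indices

def detect_landmarks_2d(frame):
    # Step 12: a pose estimator would run here; stand-in returns zeros.
    return np.zeros((N_PARTS, 2))

def refine_joints(landmarks_2d):
    # Step 14: regression-based refinement (stand-in: identity).
    return landmarks_2d

def estimate_shape_3d(landmarks_2d):
    # Step 16: lift 2D joints to 3D (stand-in: zero depth).
    return np.hstack([landmarks_2d, np.zeros((len(landmarks_2d), 1))])

def extract_feature(shape_3d):
    # Step 18: one example feature, the left/right ankle distance.
    return np.linalg.norm(shape_3d[L_ANKLE] - shape_3d[R_ANKLE])

def feature_signal(frames):
    # Steps 20-22 would segment and quantify this per-frame signal.
    sig = []
    for frame in frames:
        lm = refine_joints(detect_landmarks_2d(frame))
        sig.append(extract_feature(estimate_shape_3d(lm)))
    return np.asarray(sig)

signal = feature_signal([None] * 30)   # 30 dummy frames
```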

In FIG. 2, an exemplary system 110 in accordance with the present disclosure is illustrated in block diagram form in connection with a patient space 122 such as a hallway, waiting room, or the like. It will be appreciated that patient space 122 is exemplary, and that the system 110 can be implemented in virtually any location or setting (e.g., public or private spaces, etc.) provided suitable images of a subject approaching and/or departing can be obtained. In the exemplary embodiment, a plurality of cameras C1, C2 and C3 are positioned at different locations within the patient space 122. However, any number of cameras can be utilized.

The cameras C1, C2 and C3 are connected to a computer 130 and supply visual data comprising one or more image frames thereto via a communication interface 132. It will be appreciated that the computer 130 can be a standalone unit configured specifically to perform the tasks associated with the aspects of this disclosure. In other embodiments, aspects of the disclosure can be integrated into existing systems, computers, etc. The communication interface 132 can be a wireless or wired communication interface depending on the application. The computer 130 further includes a central processing unit 136 coupled with a memory 138. Stored in the memory 138 are various modules including an image acquisition module 140, a gait analysis module 142, and a gait segmentation module 144. Visual data received from the cameras C1, C2 and C3 can be stored in memory 138 for processing by the CPU 136 in accordance with this disclosure. It will further be appreciated that the various modules can be configured to carry out the functions described in detail in the following paragraphs.

With reference to FIGS. 3A and 3B, and returning to the description of method 2 in FIG. 1, a pose estimation algorithm is employed at step 12 to find the approximate position of joints in each frame of the video. This is accomplished, for example, using a Flexible Part Model applied to each frame independently. In one approach, the model focuses on the torso and lower limbs and consists of eighteen (18) parts total, with basic parts including the head, neck, shoulders, waist, hips, knees, and ankles. The number of shape mixtures per part varies and was estimated using hierarchical clustering. In one approach, five (5) mixtures were employed for the lower limbs, two (2) for the shoulders, three (3) for the head and neck, and one (1) for the rest of the joints.

Then, the N-best pose solution is found per frame using the following method. Starting with a scoring function such as the one in Eq. (1), where zi is the location of part i, φ(zi) is a local part score, and ψ(zi, zj) is a pairwise deformation model, one can find the best configuration by backtracking from the root location with the highest score.


S(z)=Σi∈Vφ(zi)+Σij∈Eψ(zi, zj)   Eq. (1)
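As an illustration of the scoring in Eq. (1), a toy tree-structured configuration can be evaluated as follows. The part scores and deformation penalties are invented values, not a trained model; only the sum-over-parts plus sum-over-edges structure matches Eq. (1):

```python
import numpy as np

def part_score(i, z):
    # phi(z_i): appearance score of part i at location z (toy values).
    return -0.1 * np.sum((np.asarray(z) - 5) ** 2)

def deformation(zi, zj):
    # psi(z_i, z_j): quadratic penalty on displacement between linked parts.
    d = np.asarray(zi) - np.asarray(zj)
    return -0.5 * np.sum(d ** 2)

def configuration_score(locations, edges):
    # Eq. (1): sum of local part scores plus pairwise deformation terms.
    s = sum(part_score(i, z) for i, z in enumerate(locations))
    s += sum(deformation(locations[i], locations[j]) for i, j in edges)
    return s

edges = [(0, 1), (1, 2)]          # toy 3-part chain (tree structure)
locs = [(5, 5), (5, 6), (5, 8)]   # candidate part locations
score = configuration_score(locs, edges)
```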

Using the N-best algorithm, configurations are iteratively returned ordered by score. Finally, by exploiting temporal context from neighboring frames, the poses are associated to find the best track through the whole video. The selected track is the smoothest track covering the whole temporal span of the video. To do this, for each frame t in the video, N candidate poses are generated, and for a particular pose one seeks to maximize the score in Eq. (2), where Local(kt) is the score of the candidate pose computed by Eq. (1).


Score(k)=ΣtLocal(kt)+αPairwise(kt, kt−1)   Eq. (2)
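The track selection of Eq. (2) can be sketched as a standard Viterbi-style dynamic program over candidate poses. The scores below are synthetic, and the fixed (N, N) `pairwise` matrix is a simplification of the Pairwise(kt, kt−1) term, which in practice depends on the actual candidate poses at each pair of frames:

```python
import numpy as np

def best_track(local, pairwise, alpha=1.0):
    """Pick one candidate per frame maximizing Eq. (2) by dynamic
    programming. local: (T, N) per-frame candidate scores; pairwise:
    (N, N) compatibility scores between candidates of adjacent frames."""
    T, N = local.shape
    dp = local[0].copy()                # best score ending at each candidate
    back = np.zeros((T, N), dtype=int)  # backpointers for track recovery
    for t in range(1, T):
        new_dp = np.empty(N)
        for k in range(N):
            trans = dp + alpha * pairwise[:, k]
            back[t, k] = int(np.argmax(trans))
            new_dp[k] = trans[back[t, k]] + local[t, k]
        dp = new_dp
    track = [int(np.argmax(dp))]
    for t in range(T - 1, 0, -1):       # backtrack from the best end state
        track.append(int(back[t, track[-1]]))
    return track[::-1]

# Toy example: 3 frames, 2 candidates, no temporal term.
local = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 0.0]])
track = best_track(local, np.zeros((2, 2)))
```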

At step 14, the part detection module provides an estimation of the selected parts in the form of bounding boxes. Then, using this estimation, accurate locations of the joints are found. The corresponding landmarks are found in the 2D image using a set of regression models based on part locations estimated from a Deformable Part Model (DPM), for example. A regression model is trained for the x and y positions of each landmark separately, given the location of the detected bounding box of the corresponding landmark.
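A minimal sketch of this per-landmark regression, under the assumption that each landmark coordinate is predicted by least squares from simple bounding-box features (center, width, height), is shown below with synthetic training data; the feature choice and ground-truth weights are invented for illustration:

```python
import numpy as np

# Synthetic training set: 200 detections, each described by 4 box
# features (center x, center y, width, height), with an annotated
# landmark x-coordinate generated from invented ground-truth weights.
rng = np.random.default_rng(1)
X = rng.standard_normal((200, 4))
true_w = np.array([1.0, 0.2, -0.5, 0.3])
y = X @ true_w + 0.01 * rng.standard_normal(200)

# One least-squares model per coordinate (the y-coordinate model
# would be trained analogously, per the text).
w, *_ = np.linalg.lstsq(X, y, rcond=None)
refined_x = X @ w
```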

FIGS. 3A and 3B show examples of DPM overlays for frontal views of two individuals walking along a hall.

An optional method that has the potential to improve feature extraction uses 3D shape reconstruction at step 16. To utilize this method, a convex formulation is applied to reconstruct the 3D shape of each subject given a set of 2D landmarks in each frame.

The method employs a shape-space model, in which a 3D shape is represented as a linear combination of rotatable basis shapes. Equation (3) below shows that the estimated shape S is a linear combination of k basis shapes Bi learned from training data, each rotated by a rotation matrix Ri and scaled by a coefficient ci. The model is trained based on, for example, seven subjects from a dataset such as the Carnegie Mellon University Motion Capture Database (CMU MoCap dataset). The selected subjects perform different activities such as jumping, boxing, and running, as well as walking. In one embodiment, an 11-joint model was trained by learning a dictionary of size 200 (k=200) from the training shapes aligned by the Procrustes method.


S=Σi=1kciRiBi   Eq. (3)
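The shape synthesis of Eq. (3) can be sketched numerically as follows. The basis shapes, coefficients, and rotations are random stand-ins for a dictionary learned from MoCap data, and only z-axis rotations are used for brevity:

```python
import numpy as np

rng = np.random.default_rng(0)
k, n_joints = 4, 11                        # dictionary size, joint count
B = rng.standard_normal((k, 3, n_joints))  # basis shapes B_i (3 x P each)
c = rng.standard_normal(k)                 # coefficients c_i

def rotation_z(theta):
    # Rotation about z only, for brevity; the model uses general R_i.
    ct, st = np.cos(theta), np.sin(theta)
    return np.array([[ct, -st, 0.0], [st, ct, 0.0], [0.0, 0.0, 1.0]])

R = [rotation_z(th) for th in rng.uniform(0, np.pi, k)]

# Eq. (3): S = sum_i c_i * R_i * B_i
S = sum(c[i] * (R[i] @ B[i]) for i in range(k))
```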

Then, a convex relaxation approach is used to solve Eq. (4), and the 3D shape is reconstructed using Eq. (3) from the recovered coefficients and rotation matrices of the basis shapes.


minM1, . . . , Mk ½∥W−Σi=1kMiBi∥F2+λΣi=1k∥Mi∥2   Eq. (4)

Examples of 3D estimated shapes generated from 2D image frames are illustrated in FIGS. 4A and 4B for the same subject at different fields of depth.

In step 18, features are extracted from the 3D joint positions estimated from the 3D shape. Features are estimated both in 3D space, such as the distance between the right and left knees, the distance between the right and left feet, and the variation of the left knee angle (affected leg), which is the angle between the knee-hip and knee-ankle segments, and in 2D space, such as the oscillation of the head. Some of these features are displayed in FIG. 5 for a sample sequence. The selected sequence depicts a pattern for normal walking. In FIG. 5, the features calculated from the reconstructed 3D model of DPM landmarks (DPM3D) are compared with the features of a 3D model built from manually annotated joints (GT3D).
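The knee-angle and foot-distance features described above can be computed from 3D joint positions as follows; the joint coordinates in the example are invented for illustration:

```python
import numpy as np

def knee_angle(hip, knee, ankle):
    # Angle (degrees) between the knee-hip and knee-ankle segments.
    u = np.asarray(hip, float) - np.asarray(knee, float)
    v = np.asarray(ankle, float) - np.asarray(knee, float)
    cos_a = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return np.degrees(np.arccos(np.clip(cos_a, -1.0, 1.0)))

def foot_distance(left_foot, right_foot):
    # Euclidean distance between the two feet in 3D.
    return np.linalg.norm(np.asarray(left_foot, float)
                          - np.asarray(right_foot, float))

# Nearly straight leg: hip above the knee, ankle slightly forward below it.
angle = knee_angle([0, 1, 0], [0, 0, 0], [0, -1, 0.2])
dist = foot_distance([0.1, 0, 0.5], [-0.1, 0, 0.0])
```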

Gait cycle segmentation is performed in step 20. Each extracted feature represents a gait cycle in a slightly different way from the others. One cycle for knee angle would be the distance between two consecutive peaks in the trajectory created by the whole sequence, while for foot distance the distance between two adjacent peaks defines a stride. Hence, from all or a selected subset of features, one can segment out gait cycles (peak-to-peak distances) from the sequence. From the segmented gait cycles, one can then estimate a set of metrics, such as stride duration, stride length, and cadence, that have demonstrated clinically significant differences for various diseases or injuries. For example, some research indicates that stride length decreases with progression of Parkinson's disease, while stride duration (time) tends not to decrease.
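The peak-to-peak segmentation of step 20 can be sketched on a synthetic foot-distance signal as follows. The 30 fps frame rate is an assumption, and `find_peaks_simple` is a minimal stand-in for a robust peak detector:

```python
import numpy as np

def find_peaks_simple(x, height):
    # Minimal local-maximum detector above a height threshold.
    return [i for i in range(1, len(x) - 1)
            if x[i] > height and x[i] >= x[i - 1] and x[i] > x[i + 1]]

fps = 30                                   # assumed frame rate
t = np.arange(0, 6, 1 / fps)               # 6 seconds of walking
foot_distance = np.abs(np.sin(np.pi * t))  # synthetic: ~1 stride per second

peaks = find_peaks_simple(foot_distance, height=0.5)
stride_durations = np.diff(peaks) / fps    # seconds per stride
cadence = 60.0 / stride_durations.mean()   # strides per minute
```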

At step 22, the estimated set of metrics along with other features can then be employed for abnormal gait quantification depending on the application.

To simulate different types of abnormalities and test how well the selected features differentiate between them, experiments were performed in which subjects walk with various ankle weights. The subjects walk back and forth in a hallway where two cameras are mounted at the front and at the end. Each subject wears an ankle weight of 2.5 lb. in one sequence and 7.5 lb. in another. Finally, a sequence of normal gait, in which no ankle weight is worn, is recorded for each subject.

FIGS. 6A and 6B display a comparison between the two main features displayed in FIG. 5 (foot distance and knee angle) for two different conditions (a) and (b) of the same subject. The changes in stride duration (horizontal axis) caused by increasing the weight are clearly evident. These changes are summarized in FIG. 7 as the variation of stride duration of the same subject.

FIGS. 6A, 6B, and 7 illustrate but one example of the manner in which aspects of the present disclosure can be used to analyze gait characteristics.

It should now be appreciated that the system and method described herein provide the following advantages:

    • An approach for detection of human gait abnormality in a frontal-view scenario.
    • Using DPM to locate joints for abnormal gait detection.
    • Using the reconstructed 3D model of the human body in each frame as depth information of detected joints.
    • Employing the variations of joint trajectories in 3D as features that abstract away individual gait characteristics while allowing for the classification of gait across individuals.
    • Achieving objective evaluation of different gait parameters.
    • The system further provides repeatability, reproducibility and less external factor inference by facilitating passive monitoring of subjects over long time periods and/or on multiple occasions.
    • Being non-intrusive, with no need to place any device or markers on the subject during the experiments.
    • Using a low-cost camera without expensive setups and expertise in operating the software.

It will be appreciated that variants of the above-disclosed and other features and functions, or alternatives thereof, may be combined into many other different systems or applications. Various presently unforeseen or unanticipated alternatives, modifications, variations or improvements therein may be subsequently made by those skilled in the art which are also intended to be encompassed by the following claims.

Claims

1. A computer-implemented method for gait analysis of a subject comprising:

obtaining visual data from an image capture device positioned in front of or behind the subject, the visual data comprising at least two image frames of the subject over a period of time walking toward or away from the image capture device, the at least two image frames capturing at least a portion of the gait of the subject;
detecting within the at least two images body parts as two-dimensional landmarks using a pose estimation algorithm on each of the at least two frames;
generating a joint model depicting the location of the at least one joint in each of the at least two frames;
using the joint model to segment a gait cycle for the at least one joint; and
comparing the gait cycle to a threshold value to detect abnormal gait.

2. The computer-implemented method for gait analysis of a subject as set forth in claim 1, further comprising, prior to generating the joint model, estimating a three-dimensional shape of the subject using the two-dimensional landmarks, and estimating the at least one joint location based on the three-dimensional shape.

3. The computer-implemented method for gait analysis of a subject as set forth in claim 1, wherein the joint model includes a deformable parts model.

4. The computer-implemented method for gait analysis of a subject as set forth in claim 1, wherein the at least one joint includes an ankle, a knee, a hip, or other joint.

5. The computer-implemented method for gait analysis of a subject as set forth in claim 4, wherein the gait cycle includes a distance between two consecutive peaks in a trajectory of a joint.

6. The computer-implemented method for gait analysis of a subject as set forth in claim 4, wherein the gait cycle includes a distance between consecutive peaks in an angle of a joint or body part.

7. The computer-implemented method for gait analysis of a subject as set forth in claim 1, wherein the obtaining visual data from an image capture device includes using a camera mounted in an elongated hallway in which the subject can walk toward and away from the camera.

8. A system for gait analysis of a subject comprising:

an image capture device operatively coupled to a data processing device and positioned in front of or behind the subject, the image capture device configured to capture visual data comprising at least two image frames of the subject over a period of time walking toward or away from the image capture device, the at least two image frames capturing at least a portion of the gait of the subject;
a processor-usable medium embodying computer code, said processor-usable medium being coupled to said data processing device, said computer code comprising instructions executable by said data processing device and configured for:
detecting within the at least two image frames body parts, including at least one joint, as two-dimensional landmarks using a pose estimation algorithm on each of the at least two frames;
generating a joint model depicting the location of the at least one joint in each of the at least two frames;
using the joint model to segment a gait cycle for the at least one joint; and
comparing the gait cycle to a threshold value to detect abnormal gait.

9. The system set forth in claim 8, wherein the instructions further comprise, prior to generating the joint model, estimating a three-dimensional shape of the subject using the two-dimensional landmarks, and estimating the at least one joint location based on the three-dimensional shape.

10. The system set forth in claim 8, wherein the joint model includes a deformable parts model.

11. The system set forth in claim 8, wherein the at least one joint includes an ankle, a knee, a hip, or other joint.

12. The system set forth in claim 11, wherein the gait cycle includes a distance between two consecutive peaks in a trajectory of a joint.

13. The system set forth in claim 11, wherein the gait cycle includes a distance between consecutive peaks in an angle of a joint or body part.

14. The system set forth in claim 8, wherein the image capture device is mounted in an elongated hallway in which the subject can walk toward and away from the camera.

15. A non-transitory computer-usable medium for gait analysis of a subject, said computer-usable medium embodying a computer program code, said computer program code comprising computer executable instructions configured for:

obtaining visual data from an image capture device positioned in front of or behind the subject, the visual data comprising at least two image frames of the subject over a period of time walking toward or away from the image capture device, the at least two image frames capturing at least a portion of the gait of the subject;
detecting within the at least two image frames body parts, including at least one joint, as two-dimensional landmarks using a pose estimation algorithm on each of the at least two frames;
generating a joint model depicting the location of the at least one joint in each of the at least two frames;
using the joint model to segment a gait cycle for the at least one joint; and
comparing the gait cycle to a threshold value to detect abnormal gait.

16. The non-transitory computer-usable medium as set forth in claim 15, wherein the instructions further comprise, prior to generating the joint model, estimating a three-dimensional shape of the subject using the two-dimensional landmarks, and estimating the at least one joint location based on the three-dimensional shape.

17. The non-transitory computer-usable medium as set forth in claim 15, wherein the joint model includes a deformable parts model.

18. The non-transitory computer-usable medium as set forth in claim 15, wherein the at least one joint includes an ankle, a knee, a hip, or other joint.

19. The non-transitory computer-usable medium as set forth in claim 18, wherein the gait cycle includes a distance between two consecutive peaks in a trajectory of a joint or a distance between consecutive peaks in an angle of a joint or body part.

20. The non-transitory computer-usable medium as set forth in claim 15, wherein the obtaining visual data from an image capture device includes using a camera mounted in an elongated hallway in which the subject can walk toward and away from the camera.

Patent History
Publication number: 20170243354
Type: Application
Filed: Oct 3, 2016
Publication Date: Aug 24, 2017
Applicant: Xerox Corporation (Norwalk, CT)
Inventors: Faezeh Tafazzoli (Louisville, KY), Beilei Xu (Penfield, NY), Wencheng Wu (Webster, NY), Robert P. Loce (Webster, NY)
Application Number: 15/283,603
Classifications
International Classification: G06T 7/00 (20060101); G06K 9/00 (20060101);