AUGMENTED NEUROMUSCULAR TRAINING SYSTEM AND METHOD
An augmented neuromuscular training system and method for providing feedback to a user in order to reduce movement deficits associated with injury risk, prior injury or disease pathology.
The present application is a continuation of U.S. patent application Ser. No. 15/811,513, filed Nov. 13, 2017, which claims priority to U.S. Provisional Patent Application Ser. No. 62/420,119, filed Nov. 10, 2016, the disclosures of which are expressly incorporated herein by reference.
BACKGROUND OF THE DISCLOSURE
The present disclosure relates to an augmented neuromuscular training system and method, and more particularly, to a system and method for modifying movement deficits associated with injury risk, prior injury or disease pathology, such as risk factors for anterior cruciate ligament injuries in athletes, through real-time visual feedback.
Movement deficits associated with injury risk, prior injury or disease pathology present a significant medical concern. For example, anterior cruciate ligament (ACL) injuries are a growing public health problem in the United States, with associated healthcare costs exceeding $2 billion annually. Females are more likely to incur an ACL injury, and in recent years adolescent females (i.e., 14-17 year olds) have experienced the largest increase in ACL injury rate. A large amount of research has investigated and identified several potential risk factors for ACL injuries in females. Prevention of ACL injuries has emerged as a priority, but current injury prevention programs suffer from several problems, such as noncompliance and limited reductions in injury risk, and thus fail to adequately address the rising rates of ACL injuries.
A long-term goal is to reduce injuries due to movement deficits and reverse the debilitating sequelae associated with prior injury or disease. Experience supports that there is a potential to reduce and repair aberrant biomechanics via regimented (i.e., non-targeted) neuromuscular training combined with subjective, delayed (i.e., delivered after the movement or exercise) verbal feedback. More particularly, experience indicates that objective, real-time, individualized (i.e., targeted), analytic-driven biofeedback improves on previous methods by inducing immediate neuromuscular adaptations that transfer across tasks. The system and method of the present disclosure supports the central tenet that sensorimotor biofeedback will improve localized joint mechanics and reduce global injury risk, as reflected in evidence-based measures collected in laboratory tasks and in realistic, sport-specific virtual reality scenarios. The overall objective of the present disclosure is to implement and test innovative augmented neuromuscular training (aNMT) techniques to enhance sensorimotor learning and more effectively reduce movement deficits including, for example, biomechanical risk factors for ACL injury. Such aNMT biofeedback illustratively integrates biomechanical screening with a user display of real-time feedback. The feedback maps complex biomechanical variables onto simple visual stimuli that participants intuitively “control” via their own movements. The rationale is that effective biofeedback will improve the potential to decelerate injury rates.
An objective of the system of the present disclosure is to determine the efficacy of aNMT biofeedback to improve movement deficits associated with realistic tasks of daily living and human performance. More particularly, an objective of the illustrative system including aNMT (neuromuscular training+targeted, real-time biofeedback) is to yield a greater response as assessed through enhanced adaptation relative to a sham cohort (neuromuscular training exercises but with sham biofeedback).
A further objective of the system of the present disclosure is to demonstrate the efficacy of aNMT biofeedback to improve transfer of biomechanical adaptations to realistic human movement with actions performed in fully immersive virtual reality. More particularly, an objective of the illustrative system is to test for improved mechanics during realistic, sport-specific actions performed in high-fidelity, free-ambulatory, immersive virtual environments. A further objective is to demonstrate that aNMT biofeedback produces greater transfer of improved mechanics in realistic immersive virtual reality scenarios compared to sham biofeedback.
Additional features and advantages of the present invention will become apparent to those skilled in the art upon consideration of the following detailed description of the illustrative embodiment exemplifying the best mode of carrying out the invention as presently perceived.
SUMMARY OF THE DISCLOSURE
The present disclosure relates to a real-time feedback system and method that targets and improves the movement biomechanics associated with desirable movement techniques and human performance. The real-time feedback system of the present disclosure is configured to overcome problems associated with prior injury prevention programs by utilizing objective feedback that: (1) can be provided largely independent of an expert's (e.g., a physical therapist) presence and active involvement with individual athletes, (2) is interactive and personalized, which may enhance athlete motivation and compliance, (3) may improve learning and performance by directing athletes' attentional focus to an external source, and (4) engages implicit motor learning strategies that may result in faster learning and improved transfer.
The illustrative real-time visual feedback system of the present disclosure is designed so that objective information about multiple biomechanical variables related to ACL injury risk can be displayed concurrently in real-time to participants. These biomechanical variables are illustratively an assortment of kinematic and kinetic variables, some of which are determined through inverse dynamics, and which are known to be related to injury risk. As these variables change dynamically throughout participant movements, the feedback display is updated relative to these movements and displayed to participants in real time. The display essentially takes the current values of the biomechanical variables and maps them to the display through a predetermined influence on a geometric stimulus shape and a set gain parameter that determines the magnitude of the influence of the biomechanical variable on the change in the shape. Participants receive this real-time feedback during the performance of simple exercises (e.g., bodyweight squats), with their instruction being to perform the squat in such a way as to make the stimulus shape as rectangular as possible. The system and method of the present disclosure offers not only pragmatic advantages, such as removing the need for detailed feedback from an expert; the feedback design and presentation are also based on fundamental theoretical principles in perception-action that may be promising for injury prevention, human performance and rehabilitation interventions.
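By way of illustration only, the following sketch shows how one biomechanical variable's deviation from its optimum might displace selected stimulus coordinate points in proportion to a gain; the point layout, vertex assignment, and gain value are hypothetical and not specified by the disclosure.

```csharp
// Hedged sketch: one biomechanical variable's deviation from its optimum
// displaces selected vertices of the rectangular stimulus in proportion
// to a gain. Coordinates, vertex choices, and gains are illustrative only.
public class StimulusMapping
{
    // Six (x, y) coordinate points defining the rectangle boundary:
    // three along the top edge, three along the bottom edge.
    public double[,] Points =
    {
        { -1.0,  1.0 }, { 0.0,  1.0 }, { 1.0,  1.0 },   // top edge
        { -1.0, -1.0 }, { 0.0, -1.0 }, { 1.0, -1.0 }    // bottom edge
    };

    // Shift the top-edge x-coordinates in proportion to trunk lean
    // (degrees from midline); the gain sets the display sensitivity.
    public void ApplyTrunkLean(double trunkLeanDeg, double gain)
    {
        double shift = gain * trunkLeanDeg;   // 0 deg => perfect rectangle
        for (int i = 0; i < 3; i++)
            Points[i, 0] += shift;            // rectangle "leans" sideways
    }
}
```

An optimal trunk lean of 0° leaves the rectangle undistorted; larger deviations, scaled by the gain, produce proportionally larger distortions of the shape.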
The illustrative aNMT system and method of the present disclosure takes advantage of well-studied linkages between sensory perception and motor control to enable athletes to achieve complex movements by “controlling” the shape of a feedback stimulus. aNMT biofeedback is created by calculating kinematic and kinetic data in real-time from the athlete's own movements. These values determine real-time transformations of the stimulus shape the participant views via a user interface during movement performance. The participant's task is to move so as to create (“animate”) a particular stimulus shape that corresponds to desired values of the biomechanical parameters targeted by the intervention. Further, the illustrative aNMT system and method is a self-guided, self-organized process; it is not explicitly coached and the sensorimotor adaptations are learned implicitly. Additionally, the illustrative aNMT system and method automates the delivery of targeted and analytic-driven biofeedback. This will remove reliance on specific injury prevention and biomechanical specialists to support feedback delivery. The cumulative advancements are expected to significantly enhance the effectiveness and efficiency of current injury prevention approaches.
Based on principles of motor learning, aNMT biofeedback is expected to improve retention and transfer of desired injury prevention adaptations to realistic human performance and activities of daily living.
The illustrative aNMT system and method is significant and innovative, and represents a new and substantive departure from the status quo through the introduction of real-time, analytic-driven, personalized, visual biofeedback optimized for neuromuscular training. By targeting the underlying sources of maladaptive movement strategies with prophylactic biofeedback interventions during the periods when biomechanical deficits evolve, it is expected to improve movement and/or reduce the occurrence of bodily injuries. The illustrative aNMT system and method utilizes rapidly processed visual feedback and effective implementation strategies, based on individualized movement deficit biomechanical profiles. The logical connection of individualized movement deficits with the most beneficial intervention will optimize injury prevention and rehabilitation strategies of the future. The use of scientifically validated, objective biofeedback positions us to make a large impact on sport injury prevention training and rehabilitation for all movement disorders, and has utility for preventive strategies related to other common injuries. Beyond the benefits of immediate reduction in health care costs, reduced injuries and better rehabilitation would promote continued health benefits achieved through active lifestyles and avoid subsequent complications of osteoarthritis in all ages and populations.
According to an illustrative embodiment of the present disclosure, an augmented neuromuscular training system for providing real-time feedback to a participant performing exercises includes a biomechanical acquisition system, and a motion analysis and feedback system in communication with the biomechanical acquisition system. The biomechanical acquisition system is configured to track movement of the participant and generate a biomechanical data structure including position data indicative of the movement of the participant. The motion analysis and feedback system includes a controller configured to receive the biomechanical data structure from the biomechanical acquisition system. The controller includes an exercise processing sequence for generating a stimulus data structure in response to the biomechanical data structure. A user interface is in communication with the motion analysis and feedback system and includes a display visible to the participant. This display includes a goal reference and a graphical stimulus having a boundary that is defined by a plurality of stimulus coordinate points. The plurality of stimulus coordinate points are defined by the stimulus data structure.
According to another illustrative embodiment of the present disclosure, a motion analysis and feedback system is in communication with the biomechanical acquisition system, and includes a controller configured to receive a biomechanical data structure from the biomechanical acquisition system. The controller includes an exercise processing sequence for generating a stimulus data structure in response to the biomechanical data structure, and for defining a plurality of stimulus coordinate points. A user interface is in communication with the controller and includes a display visible to the participant, the display including a goal reference and a graphical stimulus having a boundary that is defined by the plurality of stimulus coordinate points.
According to another illustrative embodiment of the present disclosure, a user interface for use with a motion analysis and feedback system includes a display visible to the participant, the display including a goal reference and a graphical stimulus having a boundary that is defined by at least six stimulus coordinate points. A stimulus data structure includes a plurality of biomechanical variables identified as injury risk factors and/or aberrant movement strategies. The graphical stimulus is a rectangle in an initial configuration. The relative position of at least one of the stimulus coordinate points is configured to vary relative to the goal reference in response to the biomechanical variables, such that a size and a shape of the graphical stimulus vary in response to the biomechanical variables.
Additional features and advantages of the present invention will become apparent to those skilled in the art upon consideration of the following detailed description of the illustrative embodiment exemplifying the best mode of carrying out the invention as presently perceived.
The detailed description of the drawings particularly refers to the accompanying figures in which:
For the purposes of promoting an understanding of the principles of the present disclosure, reference will now be made to the embodiments illustrated in the drawings, which are described herein. The embodiments disclosed herein are not intended to be exhaustive or to limit the invention to the precise form disclosed. Rather, the embodiments are chosen and described so that others skilled in the art may utilize their teachings. Therefore, no limitation of the scope of the claimed invention is thereby intended. The present invention includes any alterations and further modifications of the illustrated devices and described methods and further applications of principles in the invention which would normally occur to one skilled in the art to which the invention relates.
Augmented Neuromuscular Training System
With initial reference to
Illustratively, the augmented neuromuscular training system 10 includes a biomechanical acquisition system 12 configured to receive biomechanical data from a user or participant 14, and in communication with a motion analysis and feedback system 16. As further detailed herein, the motion analysis and feedback system 16 receives signals representative of motion of the user 14 and provides real-time visual feedback to the user 14 in the form of a stimulus 18 on a user interface 20. As further detailed herein, the stimulus 18 may be simultaneously displayed on an operator interface 22.
The biomechanical acquisition system 12 illustratively comprises a motion analysis system including a plurality of user markers 24 worn by the participant 14 and configured to be detected by an image acquiring device 26. A motion acquisition controller 28 receives the relative positions of the markers 24 as detected by the image acquiring device 26, and is configured to generate a biomechanical data structure (e.g., a three dimensional (3D) model) based upon such position data.
The motion acquisition controller 28 is illustratively operably coupled with a motion analysis and feedback controller 30. The motion analysis and feedback controller 30 is in communication with a memory 34 and illustratively includes a microprocessor configured to execute machine readable instructions stored in the memory 34. A user display connector 36, illustratively a wireless transceiver, provides communication between the motion analysis and feedback controller 30 and the user interface 20.
The illustrative visual feedback stimulus 18 may be constructed and presented to participants in real time using an assortment of hardware and software (see
With reference to
The biomechanical acquisition system 12 illustratively further includes left and right load or force sensors 38a and 38b. Illustratively, the force sensors 38a and 38b comprise two embedded BP600900 force platforms (available from AMTI of Watertown, Mass.) to collect separate ground reaction forces from each foot of the participant 14. The data recorded by the sensors 38a and 38b may be integrated and synchronized via Cortex (available from Motion Analysis Corp. of Santa Rosa, Calif.). The synchronized data (i.e., biomechanical data structure) is then relayed to the motion analysis and feedback system 16 for generating the visual feedback or stimulus 18 on the user interface 20. An illustrative program used to generate the visual feedback display, including stimulus 18, is a custom-written C++ program designed in Microsoft Visual Studio Professional 2015 (Microsoft Corp., Redmond, Wash.) and incorporating OpenGL (Khronos Group, Beaverton, Oreg.) as the graphics application programming interface. This program, as further detailed herein, is stored in memory 34 and is responsible for importing the live data stream from the Cortex SDK and exporting the finished visual display to participants 14 on the user interface 20.
The user interface 20 illustratively includes goggles or glasses 40 including a visor or display screen 42 configured to be supported on the head of the participant 14 and provide visible instructions or feedback to the participant 14 (including the stimulus 18). A speaker 44 may also be supported by the goggles 40 to provide audible feedback to the participant 14. The user interface 20 further illustratively includes a battery 46 configured to supply power to the display 42 and the speaker 44. A transceiver 48 is illustratively supported by the user interface 20 and is configured to provide wireless communication between the user interface 20 and the user display connector 36, and thereby the motion analysis and feedback controller 30, of the motion analysis and feedback system 16. One illustrative user interface 20 may be the HoloLens mixed reality smartglasses available from Microsoft of Redmond, Wash. While the illustrative user interface 20 is shown being worn by the participant 14, it should be appreciated that other types of displays may be utilized, including a TV screen, a projector screen, a monitor, a smart phone, a tablet, a laptop screen, a virtual reality headset, an augmented reality headset, etc.
Visual Feedback Stimulus
With reference to
At the start of an exercise, the stimulus 18 is in an initial or default configuration. Illustratively, the stimulus 18 in this initial configuration has the shape of a rectangle. This shape is depicted in
The display 42 also illustratively includes a count or repetition (rep) indicator 56. The rep indicator 56 illustratively includes a plurality (e.g., ten) of grey circles 58 towards the bottom of the stimulus 18. The circles 58 are used for counting the number of exercises (e.g., squats) within a block or set. As participants perform each exercise (e.g., squat), the circles 58 change appearance (e.g., from grey to green).
A height indicator 60 may also be shown on the display 42 in cooperation with the stimulus 18. The height indicator 60 may be a background rectangle including an upper edge 62 that may be raised and lowered as the biomechanical data structure processed by the motion analysis and feedback system 16 indicates that the participant is lowering her body. Depicted in
Trunk lean is illustratively defined as the angle of deviation from the midline of the body. Changes in trunk lean cause the stimulus 18 to lean to the respective side (
In addition to the previously described variables, the number of exercises (e.g., squats) performed by a participant may be tracked by a variable measuring knee flexion-extension. For example, a squat may be considered complete when a participant 14 achieves a knee angle below 90° during the squat and then returns to the original standing position (see
Real-Time Biofeedback Trials
An exemplary study was conducted to determine the effectiveness of a real-time biofeedback stimulus 18 that maps to a comprehensive movement profile for reducing biomechanics related to ACL injury risk. The stimulus 18 maps onto a wider range of biomechanical variables (e.g., knee, trunk, hip, etc.) than previous biofeedback investigations (e.g., knee only). To compare real-time biofeedback to traditional interventions, a novel sham feedback apparatus was designed to limit the amount of useful feedback information available to participants during training of squat movements. It was hypothesized that participants would exhibit significantly greater squatting performance, as indexed through a novel heat map analysis, throughout acquisition and during retention (mid and post testing) when using real-time biofeedback compared to the sham feedback stimulus. This heat map system can be used to provide participants with a movement score for each of the exercises. In this example, the area that is not captured with desired movement over the period of exercise would be deducted from a reference perfect score, providing an objective assessment of the movement quality for a particular set of exercises. This information would be provided following a movement training session with the aNMT system 10.
Twenty participants were recruited to participate in the exemplary study (M age=19.7±1.34 yrs; M height=1.74±0.09 m; M weight=72.16±12.45 kg). All participants were female collegiate athletes. Participants had no history of neurological disorders (including any neuromuscular disabilities), musculoskeletal disabilities or disorders, or balance problems. Participants were free of any injuries within the last five years that impaired movement or the ability to stand.
The real-time feedback display used in the study was designed so that objective information about multiple kinematic and kinetic variables related to ACL injury risk could be displayed concurrently and in real time to participants. The stimulus was designed specifically to map onto a wider range of biomechanical variables (e.g., knee, trunk, hip, etc.) than previous biofeedback investigations that were isolated to a single variable (e.g., knee only). The shape of the display was a simple, two-dimensional rectangle defined by six points (see
Table 1 below summarizes each variable's definition, optimal value, and effect on the stimulus 18:
The participants 14 in the study were instructed to squat so as to keep the stimulus shape as close to a perfect, symmetrical rectangle as possible. This was achieved by moving so as to produce optimum values (i.e., values associated with low ACL injury risk) of the aforementioned variables, but participants were not told this, and the biomechanical variables and their optimum values were not explained to them. As the values of the variables neared or fell within optimum ranges specific to the given variable(s), a more symmetric rectangle was obtained; alternatively, the rectangle became systematically distorted by increasing amounts as the values of the variables deviated from the optimum values.
The number of squats performed by the participant 14 was tracked by a variable measuring knee flexion angle. A squat was considered “complete” when a participant 14 achieved a knee flexion angle below 90° during the squat and then returned to the original standing position (see
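A minimal sketch of this rep-counting logic follows; the 90° depth criterion is as described above, while the near-extension angle used to detect the return to standing is an assumption.

```csharp
// Hedged sketch of the rep-counting rule described above: a squat is
// credited once the knee angle passes below 90 degrees and the
// participant then returns near the standing (extended) angle.
// The 170-degree standing threshold is an assumption.
public class RepCounter
{
    private bool _reachedDepth;
    public int CompletedReps { get; private set; }

    private const double DepthThresholdDeg = 90.0;      // sufficient depth
    private const double StandingThresholdDeg = 170.0;  // near full extension

    // Call once per motion-capture frame with the current knee angle.
    public void Update(double kneeAngleDeg)
    {
        if (kneeAngleDeg < DepthThresholdDeg)
        {
            _reachedDepth = true;            // depth achieved this rep
        }
        else if (_reachedDepth && kneeAngleDeg > StandingThresholdDeg)
        {
            CompletedReps++;                 // rep complete at standing
            _reachedDepth = false;           // re-arm for the next rep
        }
    }
}
```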
In addition to the real-time feedback display just described, a feedback display was also developed for the sham condition. The real-time and sham feedback displays presented identical stimulus shapes that responded, at least in part, to the same biomechanical parameters. However, the sham feedback was designed to limit the amount of useful feedback information available to participants during the squat movements. This was accomplished using a stepwise gain manipulation (
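One plausible realization of such a stepwise gain manipulation, sketched below, holds the gain at a preset level within each step and jumps it between steps, so the stimulus continues to respond to movement while no stable movement-to-shape mapping can be learned; the step levels and durations are assumptions rather than the study's actual schedule.

```csharp
// Hedged sketch of a stepwise gain manipulation for the sham display:
// the gain applied to the biomechanical signal switches between preset
// levels at fixed intervals, degrading the usable information in the
// feedback. Step levels and the step duration are assumptions.
public class StepwiseShamGain
{
    private static readonly double[] GainSteps = { 1.0, 0.25, 1.75, 0.5 };
    private readonly double _stepDurationSec;
    private double _elapsedSec;

    public StepwiseShamGain(double stepDurationSec = 5.0)
    {
        _stepDurationSec = stepDurationSec;
    }

    // Scale the incoming signal by the gain of the current step.
    public double Apply(double signal, double dtSec)
    {
        _elapsedSec += dtSec;
        int step = (int)(_elapsedSec / _stepDurationSec) % GainSteps.Length;
        return signal * GainSteps[step];   // same stimulus, unstable mapping
    }
}
```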
Participants' movements were recorded using the image acquiring device 26 as detailed above, illustratively a 10-camera motion capture system (Raptor-E, Motion Analysis Corp., Santa Rosa, Calif.) sampling at 240 Hz. In conjunction with the motion capture system, two force sensors 38 as detailed above (embedded force platforms 38 (BP600900, AMTI, Watertown, Mass.) sampling at 1200 Hz) were used to collect ground reaction forces from each foot of the participant 14 and were synchronized with the motion system 26. The synchronized marker trajectory and force data were accessed via a custom software program, including machine readable instructions stored in memory 34 and executed by controller 30, that was designed to calculate and map the above variables to generate the visual feedback display or stimulus 18.
The visual display 42 was wirelessly transmitted from a desktop computer to participants using an ARIES Pro Wireless HDMI Transmitter and Receiver 36 (Nyrius, Niagara Falls, ON, Canada). The ARIES Pro is capable of transmitting uncompressed 1080p signals up to 160 feet with a latency of <1 ms, which allowed for maximum mobility of participants without degradation of feedback quality. Participants viewed the real-time feedback through a pair of video eyewear glasses 40 (Wrap 1200 DX-VR; Vuzix Corp., Rochester, N.Y.), which had a 60 Hz screen refresh rate (a new frame appeared approximately every 16.67 ms). The glasses 40 presented the feedback display or stimulus 18 in a fixed position relative to the participants' eyes and encompassed their entire field of view. Both the ARIES Pro and glasses 40 were powered by a portable battery pack 46 (PowerGen Mobile Juice Pack 12000; PowerGen, Kwai Chung, Hong Kong, PRC). The wireless transmitter 48 and battery pack 46 were stored in a modified hydration pack designed for running (CamelBak Products, LLC, Petaluma, Calif.). The backpack provided minimal interference to natural movement as it held the equipment securely against the body and was relatively small (length×width×height: 33 cm×27 cm×7.6 cm).
In order to compare the effects of the real-time and sham feedback displays, an AB/BA or two-treatment crossover design was utilized. As the name implies, half of the participants received one condition first (e.g., ‘A’) and another condition second (e.g., ‘B’), while the other half of participants received the same two conditions but in the opposite order. We randomly assigned participants to one of two groups: real-time biofeedback first (i.e., ‘A’) or sham feedback first (i.e., ‘B’). First, participants completed a block of 10 squats without any feedback (pre-test). Then, participants completed four blocks of 10 squats using their assigned condition (i.e., real-time biofeedback or sham) (acquisition phase 1). Before switching to the next feedback type, a block with no feedback was administered (mid retention test), followed by four blocks of 10 squats using the opposite feedback type as used in acquisition phase 1 (acquisition phase 2). Finally, participants completed a block of 10 squats without any feedback (post retention test). In total, each participant completed 110 squats: 40 training squats for each feedback type (80 total) and 10 squats during each test period (30 total; see
Each participant 14 was outfitted with 30 retroreflective markers 24, with a minimum of three tracking markers 24 on each segment, and the backpack containing the wireless transmitter and battery pack. Markers were placed on the sacrum between the L5 and S1 vertebrae, and bilaterally on the acromio-clavicular joint, anterior superior iliac spine, posterior superior iliac spine, greater trochanter, mid-thigh, medial and lateral femoral condyles, tibial tubercle, lateral and distal aspects of the shank, and medial and lateral malleoli, the heel, and central forefoot (between the second and third metatarsals). After the initial experimental preparation, all participants 14 received identical instructions about the squat exercise. The instructions were purposefully kept very basic so as to allow for implicit discovery of how their movements related to the stimulus shape during the squat exercise; they were told only to “maintain the goal stimulus shape and size as closely as possible” and, as a secondary instruction, “to squat to sufficient depth”, as indicated by the depth indicator and circle counter at the bottom of the stimulus. Participants were also asked to keep their arms crossed in front of their chest and to avoid covering any markers. A set of squats was considered complete once all ten circles' colors changed from grey to green, indicating that ten sufficiently deep squats were performed. If participants were unable to intuitively achieve the appropriate depth, they were explicitly instructed that they must squat lower. This happened solely during the first feedback trial that participants experienced; no participants needed to be reminded again after the first trial. No other instructions were provided regarding the squats or the stimulus 18.
The recorded raw, three-dimensional marker positions, ground reaction forces, and center of pressure acquired from both feet were first exported from Cortex and imported into MATLAB for preprocessing. Preprocessing consisted of visual inspection of a virtual mid-shoulder marker (defined as the averaged position of the left and right shoulder markers) for each squat trial (pre-, mid-, and post-test and training trials). During the visual inspection, time series of the mid-shoulder marker's vertical position were plotted and trimmed. Only the portions of a trial where the participant was performing a squat were retained for analysis. All other marker and force data were trimmed according to the time points that were identified from the mid-shoulder marker. This procedure resulted in a time series for each squat rep across every squat set.
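A minimal sketch of this step follows; the automated descent threshold below stands in for the visual inspection actually used in the study and is an assumption.

```csharp
// Hedged sketch of the preprocessing described above: the virtual
// mid-shoulder marker is the average of the left and right shoulder
// markers, and a trial is trimmed to the frames where its vertical
// position drops below a standing baseline. The 2% cutoff approximates
// the visual inspection used in the study.
public static class Preprocess
{
    // leftZ/rightZ: per-frame vertical positions of the shoulder markers.
    public static (int start, int end) TrimToSquat(double[] leftZ, double[] rightZ)
    {
        int n = leftZ.Length;
        var mid = new double[n];
        for (int i = 0; i < n; i++)
            mid[i] = 0.5 * (leftZ[i] + rightZ[i]);   // virtual mid-shoulder

        double standing = mid[0];              // baseline at trial start
        double threshold = 0.98 * standing;    // assumed descent cutoff

        int start = 0, end = n - 1;
        while (start < n && mid[start] > threshold) start++;   // descent
        while (end > start && mid[end] > threshold) end--;     // recovery
        return (start, end);                   // frames spanning the squat
    }
}
```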
In order to quantify participants' ability to control the stimulus shape, heat map analysis was performed on the squat data during the middle four training sets and on “reconstructed” feedback shapes obtained from the raw position and force data in the pre- and post-test sets. The heat maps provided a global assessment of squatting performance by indicating how the movement patterns of the biomechanical variables associated with ACL injury related to the target feedback shape (i.e., a rectangle). Specifically, the heat maps portrayed the percentage of time a defined space was occupied by the feedback stimulus. The heat map analysis consisted of two steps: (1) the construction of the heat maps and (2) the calculation of each heat map's correctly occupied space. Heat maps were created using the MATLAB function inpolygon. The calculation of each heat map's correctly occupied space consisted of first calculating the proportion of occupied space within the goal stimulus and then calculating the proportion of occupied space outside of the goal shape. The proportion of occupied space outside of the goal shape was finally subtracted from the proportion of occupied space within the goal stimulus. The possible results of this operation range from −1.00 to 1.00. A score of −1.00 indicated that the stimulus never occupied a correct location in the display while always occupying an incorrect location. A score of 1.00 indicated that the stimulus shape never deviated from the goal shape and size, which meant that the relevant biomechanical parameters were achieving the desired optimal values associated with lower injury risk. These scores are transformed and presented as percentages in the following sections for ease of interpretation, with higher percentages indicating better squatting performance.
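A minimal sketch of the scoring step follows, assuming the display has already been discretized into cells with a per-cell fraction of time occupied; this grid stands in for the MATLAB inpolygon-based construction.

```csharp
// Hedged sketch of the heat map scoring: the occupied proportion of the
// space inside the goal rectangle minus the occupied proportion outside
// it, yielding a value in [-1.00, 1.00] that is reported as a percentage.
public static class HeatMapScore
{
    // occupancy[i]: fraction of time display cell i was covered by the stimulus.
    // insideGoal[i]: true if cell i lies inside the goal shape.
    public static double Score(double[] occupancy, bool[] insideGoal)
    {
        double inSum = 0, outSum = 0;
        int inCells = 0, outCells = 0;
        for (int i = 0; i < occupancy.Length; i++)
        {
            if (insideGoal[i]) { inSum += occupancy[i]; inCells++; }
            else               { outSum += occupancy[i]; outCells++; }
        }
        double inside = inCells > 0 ? inSum / inCells : 0.0;
        double outside = outCells > 0 ? outSum / outCells : 0.0;
        return inside - outside;   // 1.00 = always on goal, never off goal
    }
}
```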
In order to test for varying levels of fatigue caused by differences in the number of squats performed by each participant group, the total number of squats performed by the real- and sham-first feedback groups were compared using an independent samples t-test. This step was necessary because participants required a few squat repetitions in order to explore and determine the appropriate squat depth, which may or may not have affected participants' fatigue level and subsequent squatting performance.
To assess squatting performance, each trial block was first averaged to produce a single heat map percentage score. Then, to assess differences in squatting performance during training (i.e., with visual feedback present), separate 2 (condition)×4 (trial block [1, 2, 3, 4]) mixed ANOVAs with repeated measures on the last factor were conducted for acquisition phase 1 and acquisition phase 2. To assess learning (i.e., with visual feedback absent), a 2 (order)×3 (test phase; pre-test, mid-test, post-test) mixed ANOVA with repeated measures on the last factor was conducted. Bonferroni adjustments were used when appropriate, and an alpha level of p<0.05 indicating significance was selected a priori.
Heat map analyses revealed that participants' mean improvement in the produced stimulus shape (i.e., more closely resembling the optimal shape) from the pre- to post-test blocks was 7.70%. This improvement was significant, t(10)=6.63, p<0.01, with scores rising from an average of 77.17% (SD=3.80%) in the pretest to an average of 84.87% (SD=3.12%) in the posttest. Additionally, participants demonstrated a trend of increasing heat map scores over the four feedback training blocks. Participants produced an average heat map score of 78.76% (SD=3.40%), 80.91% (SD=2.20%), 81.65% (SD=1.57%), and 85.71% (SD=2.11%) for feedback blocks one through four, respectively. See
There were no significant differences in the number of squats performed between the real-first feedback group (M=111.80, SD=7.15) and the sham-first feedback group (M=114.30, SD=5.76), t(18)=0.86, p=0.40. There were no significant differences in overall squatting performance as measured by the heat map percentage scores for condition or trial block, nor a condition×trial block interaction during acquisition phase 1 or phase 2 (all p>0.05). Thus, we averaged the heat map percentage scores across the eight trial blocks for each condition respectively and performed a paired-samples t-test to assess differences during training per feedback type. That test revealed that squatting performance during the real-time biofeedback trials (M=60.73%, SD=6.47%) was better than during the sham feedback trials (M=56.62%, SD=8.42%), t(19)=3.06, p=0.006. There were no significant differences in squatting performance for order or trial block, nor an order×trial block interaction during the three test phases that assessed learning. Thus, the availability of the interactive feedback shape during the squat training trials was beneficial to performance compared to when only the sham stimulus was available.
An objective of the illustrative study was to determine the effectiveness of a real-time biofeedback system compared to a sham feedback system for improving biomechanics related to ACL injury risk. Squatting performance, as measured through heat map analysis, was significantly better when participants interacted with the real-time biofeedback system compared to the sham. The study incorporates three distinct innovations. First, the illustrative system employed an interactive, real-time stimulus that implicitly guided performance while promoting an external focus of attention, factors which, as noted previously, have been identified as improving motor performance and learning. Second, the illustrative real-time biofeedback system mapped multiple biomechanical variables associated with ACL injury risk onto a single stimulus. The illustrative system will be applicable to all similar feedback approaches that target aberrant movement deficits associated with other injury risks, prior injury and pathology. Unlike previous systems that are isolated to one factor such as knee abduction/adduction, the system uniquely presents participants with a global biomechanical profile associated with movement deficits and, in this case, ACL injury risk, including lateral trunk flexion, knee-to-hip joint moment of force ratio, knee abduction moment of force, and vertical ground reaction force. Third, the inclusion of the sham feedback system demonstrated that any increased engagement or motivation associated with real-time biofeedback is not alone sufficient to improve performance; rather, an accurate mapping from kinematics and kinetics to the feedback is necessary for performance gains.
The results of this study suggest that ACL injury prevention, rehabilitation programs and human performance could be improved by integrating a real-time, interactive biofeedback stimulus that engages a control-of-external-feedback methodology. Prevention programs could use this stimulus to improve performance of prophylactic exercises, which may lead to decreased ACL injury risk. Likewise, following an ACL injury, our approach may be particularly beneficial as a rehabilitation tool for those in the recovery stages. An external focus of attention has already demonstrated efficacy for those who have undergone ACL reconstruction (Gokeler et al., 2015), and our stimulus elicited a similar benefit without the need for an expert to deliver instruction. The integration of new factors associated with ACL injury risk suggests that it may also be possible to design similar real-time biofeedback systems to target other movement dysfunction. For example, previous investigations using real-time biofeedback for gait retraining may benefit by integrating further factors to supplement previously seen motor skill improvements.
The illustrative method of the present disclosure of delivering augmented feedback departs from traditional methods in that it can be delivered to subjects in real-time through the use of multiple integrated technologies. The force plate data and lower extremity joint position data generated from the 3D passive optical motion capture system are delivered to a central hub for integration. The data are processed via a custom pipeline to determine multi-planar/multi-dimensional biomechanical measures of interest and then telemetrically streamed to the smart-eye headset to optimize system interaction and negate latency between data input and visual display output. The desired outcome for participants to achieve while performing each of the intervention exercises is to move so as to produce a rectangular shape and make the shape as large as possible. This is achieved when each of the targeted biomechanical variables is at the desired value. Deviations of the variables from desired values result in specific and systematic changes to the feedback shape: 1) Lateral trunk flexion causes the object to lean to the respective side (
After receiving basic instruction about how to accomplish the exercises, each athlete must discover the movement pattern that produces a stimulus shape as close to the desired rectangle as possible and maintain the stimulus in a large rectangular shape as best she can on each repetition. No explicit directions will be provided to athletes on their movement other than instruction to achieve the goal “rectangle” shape. Based on our preliminary studies, we expect that the aNMT protocol will be especially beneficial to an athlete who can respond to self-guided, implicit learning strategies to correct multiple deficits that are likely cumulative in the exacerbation of injury risk. Given the automated, objectively prescribed mapping between the athlete and the stimulus, there is no interaction between the technician and the stimulus during aNMT delivery. This ensures blinding of the technician.
Virtual Reality (VR) may be an effective tool utilized with the system and method of the present disclosure to analyze training transfer to realistic motion, including sport related, performance. Unlike outdoor motion capture solutions, VR offers a fully standardized, controlled environment and, in combination with untethered/unencumbered freedom of movement, can induce a sense of immersion to facilitate athlete responses that parallel real-world sport responses. VR scenarios may provide controlled environments uniquely equipped to test sport-specific skill transfer following aNMT intervention. These scenarios utilize sport-specific tasks in virtual environments, embedded in the sport's context, that require strings of neuromuscular training-specific movements to accomplish sport-relevant task goals. This allows assessment of training transfer to the biomechanics of maneuvers that map onto specific neuromuscular training tasks. It also enables systematic development of biomechanical profiles across a variety of sport-specific events and stimuli.
System Operation and Operator Interface
In the following description, reference will be made to the operator interface 22 of
The process continues to block 112, where the controller 30 executes processing sequences “Point.cs” and “UserHandler.cs”. Processing sequence “Point.cs” is responsible for manipulations performed on the stimulus coordinate points, such as resetting them to original values and/or averaging multiple frames. Processing sequence “UserHandler.cs” is responsible for tracking individual user information, such as group assignment, demographic and anthropometric data, exercise progression, individual gains, scores, etc. The executable files are further shown as labels 15.A through 15.E in
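By way of illustration, the following sketch shows the kinds of operations attributed to “Point.cs”; the data layout and smoothing window are assumptions.

```csharp
// Hedged sketch of "Point.cs"-style operations: restore the stimulus
// coordinate points to their original (default) values, and smooth the
// points by averaging over recent frames. Layout and window size are
// illustrative assumptions.
using System.Collections.Generic;
using System.Linq;

public class StimulusPoints
{
    private readonly double[][] _defaults;
    private readonly Queue<double[][]> _history = new Queue<double[][]>();
    public double[][] Current;

    public StimulusPoints(double[][] defaults)
    {
        _defaults = defaults;
        Current = defaults.Select(p => (double[])p.Clone()).ToArray();
    }

    // Reset all coordinate points to their original values.
    public void Reset()
    {
        Current = _defaults.Select(p => (double[])p.Clone()).ToArray();
    }

    // Average the most recent frames of points to smooth the display.
    public void AverageFrames(double[][] newFrame, int window = 5)
    {
        _history.Enqueue(newFrame);
        while (_history.Count > window) _history.Dequeue();
        for (int i = 0; i < Current.Length; i++)
            for (int j = 0; j < 2; j++)
                Current[i][j] = _history.Average(f => f[i][j]);
    }
}
```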
At the input stream step (block 114), the biomechanical data structures are received from the biomechanical acquisition system 12. More particularly, the biomechanical acquisition system samples data at block 116 from the image acquiring device 26 and the force sensors 38. The controller 28 then generates the biomechanical data structures (“bioM_Data”). These biomechanical data structures are then transmitted at block 118 to the motion analysis and feedback system 16 at block 114. As noted above, the motion acquisition controller 28 may be Cortex. Processing sequence “amnt.cs” is responsible for establishing a connection to the source of the biomechanical data structures, importing the data structures, and terminating the connection to the data source (i.e., the biomechanical acquisition system). The executable files are further shown as labels 1.A through 1.C in
At step 2 (block 120), the process continues by executing processing sequences “AudioHandler.cs” and “Tracker.cs”. Processing sequence “AudioHandler.cs” is responsible for audio control of the program. For example, this processing sequence illustratively selects a random encouragement audio clip and/or an appropriate warning audio clip to be broadcast by the speaker of the user interface. The audio clip is illustratively played based upon input from the “Tracker.cs” processing sequence, which is responsible for tracking participant movements and calculating anthropometric data. Illustratively, input to the “Tracker.cs” processing sequence includes biomechanical data structures, while output includes data about participants and their respective movements. The executable files are further shown as labels 9.A through 9.F in
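A minimal sketch of the clip-selection logic attributed to “AudioHandler.cs” follows; the clip file names and warning-flag convention are placeholders.

```csharp
// Hedged sketch of "AudioHandler.cs"-style selection: pick a random
// encouragement clip, or a warning clip keyed to a condition flagged by
// the tracker. File names are placeholders, not part of the disclosure.
using System;

public class AudioHandler
{
    private static readonly string[] Encouragement =
        { "great_job.wav", "keep_it_up.wav", "nice_rep.wav" };

    private readonly Random _rng = new Random();

    // warningFlag: null for encouragement, otherwise a tracker condition
    // (e.g., "depth") mapped to a corresponding warning clip.
    public string SelectClip(string warningFlag = null)
    {
        if (warningFlag != null)
            return warningFlag + "_warning.wav";
        return Encouragement[_rng.Next(Encouragement.Length)];
    }
}
```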
At step 3 (block 122), the process continues by executing processing sequences “RepCounter.cs” and “Score.cs”. Processing sequence “RepCounter.cs” is responsible for tracking the number of completed successful and unsuccessful exercise repetitions. Input to this processing sequence includes the biomechanical data structures, while output includes the number of completed successful and unsuccessful exercise repetitions. The executable files are further shown as labels 11.A through 11.C in
At step 4 (block 124), the illustrative process continues based upon input from the operator interface 22. As further detailed herein, one of a plurality of different exercises may be selected, including overhead squat, pistol squat, squat and/or squat jump. In response, the controller 30 executes an exercise processing sequence associated with the selected exercise. All of the illustrative exercise processing sequences share a majority of the same source code (e.g., around 80%). Each exercise, as well as the sham condition, also includes some additional functions specific to that particular exercise/condition.
If the overhead squat exercise is selected at the operator interface 22, then the processing sequence “OH Squat.cs” is executed. This processing sequence generates the stimulus for training the overhead squat. Input to this processing sequence includes variables to generate the stimulus (e.g., biomechanical data structures) along with gains input by the operator. Output from this processing sequence includes the current stimulus coordinates. The executable files are further shown as labels 4.A through 4.L in
If the pistol squat exercise is selected at the operator interface, then the processing sequence “Pistol Squat.cs” is executed. This processing sequence generates the stimulus for training the pistol squat. Input to this processing sequence includes variables to generate the stimulus (e.g., biomechanical data structures) along with gains input by the operator. Output from this processing sequence includes the current stimulus coordinates. The executable files are further shown as labels 7.A through 7.I in
If the squat exercise is selected at the operator interface, then the processing sequence “Squat.cs” is executed. This processing sequence generates the stimulus for training the squat. Input to this processing sequence includes variables to generate the stimulus (e.g., biomechanical data structures) along with gains input by the operator. Output from this processing sequence includes the current stimulus coordinates. The executable files are further shown as labels 6.A through 6.J in
If the squat jump exercise is selected at the operator interface, then the processing sequence “Squat Jump.cs” is executed. This processing sequence generates the stimulus for training the squat jump. Input to this processing sequence includes variables to generate the stimulus (e.g., biomechanical data structures) along with gains input by the operator. Output from this processing sequence includes the current stimulus coordinates. The executable files are further shown as labels 8.A through 8.J in
The sham function is illustratively executed at step 5 (block 126). More particularly, the processing sequences “Sham.cs” and “ImportSham.cs” are executed. The “Sham.cs” processing sequence modifies the stimulus coordinates by adding noise to the signal representing the biomechanical data structure. Input to this processing sequence includes the stimulus coordinates and a noise level input. Output from this processing sequence includes current sham stimulus coordinates. The executable files are further shown as labels 5.A through 5.D in
The “ImportSham.cs” processing sequence is responsible for importing a randomly selected text file consisting of the numerical values used to create sham feedback. Values from this processing sequence are used in creating the sham stimulus. The executable files are further shown as label 2.A in
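A minimal sketch combining the two sham processing sequences follows; the text-file format, value indexing, and scaling are assumptions, as the disclosure states only that imported values and a noise-level input perturb the stimulus coordinates.

```csharp
// Hedged sketch of the sham perturbation: pre-recorded values (imported
// from a randomly selected text file) are scaled by the operator's
// noise-level input and added to the live stimulus coordinates.
using System;
using System.IO;
using System.Linq;

public static class ShamStimulus
{
    // Import numeric values from a randomly chosen sham file.
    public static double[] ImportSham(string[] candidateFiles, Random rng)
    {
        string file = candidateFiles[rng.Next(candidateFiles.Length)];
        return File.ReadAllLines(file).Select(double.Parse).ToArray();
    }

    // Add scaled sham values to each (x, y) stimulus coordinate.
    public static double[][] AddNoise(double[][] coords,
                                      double[] shamValues,
                                      double noiseLevel)
    {
        var result = new double[coords.Length][];
        for (int i = 0; i < coords.Length; i++)
        {
            result[i] = new double[2];
            for (int j = 0; j < 2; j++)
                result[i][j] = coords[i][j]
                    + noiseLevel * shamValues[(2 * i + j) % shamValues.Length];
        }
        return result;   // current sham stimulus coordinates
    }
}
```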
The process then continues to step 6 (block 128) by executing processing sequences “XMLMap.cs” and “TCPServerXML.cs”. The “XMLMap.cs” processing sequence defines and constructs the data structures to be used in communicating between the displays of the operator interface and the user interface and other portions of the program. Output from this processing sequence illustratively includes the data structures used to communicate with and operate the displays. The “TCPServerXML.cs” processing sequence is responsible for establishing and maintaining a connection with the displays of the operator interface and the user interface. Input includes information used in making a connection to the displays of interfaces 20 and 22, while output includes stimulus values sent to the displays of interfaces 20 and 22. The display connection is shown as block 128 and is facilitated by execution of the processing sequence “DeformationHandler.cs” (which helps display the stimulus 18, but has a primary function of interacting with the interfaces 20 and 22).
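By way of illustration, the sketch below serializes the current stimulus coordinates into a small XML payload and writes it to a connected display over TCP; the XML schema is an assumption, as the disclosure specifies only that XML-mapped data structures carry stimulus values to the displays of interfaces 20 and 22.

```csharp
// Hedged sketch of the display-communication step: wrap the stimulus
// coordinate points in XML and send them to a display client over TCP.
// The element and attribute names are illustrative assumptions.
using System.Globalization;
using System.Net.Sockets;
using System.Text;

public static class StimulusXmlSender
{
    public static void Send(TcpClient client, double[][] points)
    {
        var sb = new StringBuilder("<stimulus>");
        foreach (var p in points)
            sb.Append("<pt x=\"")
              .Append(p[0].ToString(CultureInfo.InvariantCulture))
              .Append("\" y=\"")
              .Append(p[1].ToString(CultureInfo.InvariantCulture))
              .Append("\"/>");
        sb.Append("</stimulus>");

        byte[] payload = Encoding.UTF8.GetBytes(sb.ToString());
        client.GetStream().Write(payload, 0, payload.Length);  // push frame
    }
}
```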
As further detailed herein, the stimulus coordinates as determined by the exercise processing sequences will define the graphical stimulus 18 shown on the displays of the interfaces 20 and 22. A participant 14 will attempt to maintain the stimulus 18 within the goal reference on the display 42.
The illustrative operator interface 22 is shown in different configurations in
As further shown in
Additional options are provided for initializing the calculation of anthropometric data (functions: 9.A-9.G and 13.A-13.H). A foot width button 230 is provided for calculating the relative foot width of the participant 14 based on input from the force sensors 38. An enter weight button 232 and an enter height button 234 are provided for calculating the weight and the height of a participant 14, based on input from the force sensors 38 and markers 24, respectively. More particularly, weight is measured from the force sensors 38, while height is determined by the position (x, y, z) of the markers 24 attached to a participant's body. This information is illustratively provided by the biomechanical acquisition system 12. While this is illustratively the Cortex program, it could be from any source capable of 3D tracking. The processing sequence “Tracker.cs” manipulates this data. These calculations are generally implemented as algorithms contained in machine executable code.
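A minimal sketch of these calculations follows; the gravity constant and the use of the peak vertical marker coordinate for height are assumptions.

```csharp
// Hedged sketch of the anthropometric calculations described above:
// weight from the summed vertical ground reaction forces during quiet
// standing, height from the highest tracked marker position.
using System.Linq;

public static class Anthropometrics
{
    // Convert summed vertical force (N) to mass (kg) during quiet standing.
    public static double WeightKg(double leftVgrfN, double rightVgrfN)
    {
        return (leftVgrfN + rightVgrfN) / 9.81;
    }

    // Estimate height as the peak vertical (z) coordinate among markers.
    public static double HeightM(double[][] markerPositions)
    {
        return markerPositions.Max(p => p[2]);
    }
}
```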
With reference now to
At decision block 334, the controller 30 determines whether the trial on button 224 has been activated. If no, then the controller 30 waits for further input. If yes, then the controller 30 waits for an exercise selection at block 336. The exercise selection may be made at field 236 shown in
If the squat exercise is selected at block 338, then the controller 30 proceeds to function 6.a at block 340. If the squat jump is selected at block 342, then the controller 30 continues to function 8.a at block 344. If the pistol squat is selected at block 346, then the controller 30 continues to function 7.a at block 348. If the overhead squat is selected at block 350, then the controller 30 continues to function 4.a at block 352. Finally, if sham is selected at block 354, then the controller 30 proceeds to functions 5.a/5.b at block 356.
If the squat exercise is selected, the process continues to function 6.a at block 340. The controller 30 then looks for input from the variables and gain section 212 of the operator interface 22. If trunk lean is entered at block 358, then a new function 6.c is executed at block 360. The controller 30 then inquires at block 362 if the transform button of input 248 has been activated. If not, the process continues to function 6.g at block 366. If the transform button of input 248 has been activated, then the gain is transformed or modified at block 364. More particularly, by activating the transform button of input 248 the controller 30 may make the feedback gains linear, quadratic or cubic. Additionally, manipulation of the slide bar of input 248 varies the amount of gain applied to the trunk lean variable. The process then continues to function 6.g at block 366.
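A minimal sketch of this gain transformation follows; the sign-preserving handling of the quadratic case is an assumption.

```csharp
// Hedged sketch of the transform selected at input 248: the slide bar
// sets the base gain and the transform button selects a linear,
// quadratic, or cubic response to the biomechanical error signal.
using System;

public enum GainTransform { Linear, Quadratic, Cubic }

public static class FeedbackGain
{
    public static double Apply(double error, double gain, GainTransform t)
    {
        switch (t)
        {
            case GainTransform.Quadratic:
                // Preserve the error's sign so the lean direction survives.
                return gain * Math.Sign(error) * error * error;
            case GainTransform.Cubic:
                return gain * error * error * error;   // sign preserved
            default:
                return gain * error;                   // linear response
        }
    }
}
```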
At block 370, the controller 30 looks for input from the VGRF button 258. If such input is detected, then the controller 30 continues to function 6.d at block 372. Again, the controller 30 then inquires at block 374 if the transform function has been activated. If not, the controller 30 continues to function 6.h at block 378. If the transform function has been activated, then the gain is transformed at block 376. The process continues to function 6.h at block 378.
If knee button 242 is activated at block 380, then the controller 30 continues to function 6.e at block 382. Again, the controller 30 then asks at block 384 if the transform function has been activated. If not, the system continues to function 6.i at block 388. If the transform function has been activated, then the gain is transformed at block 386. The process continues to function 6.i at block 388.
Finally, if the knee/hip button 260 has been entered at block 390, then the function continues to block 6.f at block 392. Again, the controller 30 then asks at block 394 if the transform function has been activated. If not, the controller 30 continues to function 6.j at block 398. If the transform function has been activated, then the gain is transformed at block 396. The process continues to function 6.j at block 398.
Following blocks 366, 378, 388 and 398, the process continues at block 368, where function 16.d is executed, causing data to be sent to the displays 42 and 210 (via processing sequence “TCPServerXML.cs”). Next, the number of reps is checked at block 400 (functions 11.a-11.c of
Based upon different flagged values in “amnt.cs”, Table 2 below illustrates potential audio statements that may be played by speaker 44 at block 414:
Turning now to
If the VGRF button 258 is activated at input 428, then the controller 30 continues to function call 8.d at block 430. Again, the controller 30 then asks at block 432 if the transform function has been activated at input 262. If not, the controller 30 continues to function call 8.h at block 436. If the transform function has been activated, then the gain is transformed at block 434. The process continues to function call 8.h at block 436.
If the knee button 242 is activated at input 438, then the controller 30 continues to function 8.e at block 440. Again, the controller 30 then asks at block 442 if the transform function has been activated. If not, the controller 30 continues to function call 8.i at block 446. If the transform function has been activated, then the gain is transformed at block 444. The process continues to function call 8.i at block 446.
If the knee/hip button 260 is activated at block 448, then the process continues to function 8.f at block 450. Again, the controller 30 then asks at block 452 if the transform function has been activated. If not, the controller 30 continues to function 8.j at block 456. If the transform function has been activated, then the gain is transformed at block 454. The process continues to function call 8.j at block 456.
If in-air input (biometric data from the markers 24 and/or force sensors 38) is received at block 458, then the process continues to function 9.b at block 460. At block 462, the stimulus 18 is removed from the display 210 (e.g., during the time that the user is detected as not being in contact with the force sensors 38).
Following blocks 424, 436, 446 and 456, the process continues, as detailed above in connection with
Turning now to
If the pelvis button 266 is activated at input 474, then the controller 30 continues to function call 7.e at block 476. Again, the controller 30 then asks at block 478 if the transform function has been activated at input 270. If not, the controller 30 continues to function call 7.g at block 482. If the transform function has been activated, then the gain is transformed at block 480. The process continues to function 7.g at block 482.
If the knee button 242 is activated at input 484, then the controller 30 continues to function 7.c at block 486. Again, the controller 30 then asks at block 488 if the transform function has been activated. If not, the controller 30 continues to function call 7.h at block 492. If the transform function has been activated, then the gain is transformed at block 490. The process continues to function 7.h at block 492.
If the hip button 268 is activated at block 494, then the process continues to function 7.d at block 496. Again, the controller 30 then asks at block 498 if the transform function has been activated. If not, the controller 30 continues to function 7.f at block 502. If the transform function has been activated, then the gain is transformed at block 500. The process continues to function 7.f at block 502.
If a right leg input button 269 is activated, then based upon input from the force sensors 38, the controller 30 decides at block 504 whether the participant 14 is standing on her right leg. If not, then the process continues to block 506 where the controller 30 orients the stimulus 18 for the participant's left leg. If so, then the process continues to block 508 where the controller 30 orients the stimulus 18 for the participant's right leg.
Following blocks 472, 482, 492 and 502, the process continues, as detailed above in connection with
Turning now to
If the VGRF button 258 is activated at input 520, then the controller 30 continues to function call 4.e at block 522. Again, the controller 30 then asks at block 524 if the transform function has been activated at input 262. If not, the controller 30 continues to function call 4.k at block 528. If the transform function has been activated, then the gain is transformed at block 526. The process continues to function call 4.k at block 528.
If the knee button 242 is activated at input 530, then the controller 30 continues to function 4.f at block 532. Again, the controller 30 then asks at block 534 if the transform function has been activated. If not, the controller 30 continues to function call 4.j at block 538. If the transform function has been activated, then the gain is transformed at block 536. The process continues to function 4.j at block 538.
If the knee/hip button 260 is activated at block 540, then the process continues to function 4.g at block 542. Again, the controller 30 then asks at block 544 if the transform function has been activated. If not, the controller 30 continues to function 4.j at block 548. If the transform function has been activated, then the gain is transformed at block 546. The process continues to function call 4.j at block 548.
If the arm button 274 is activated at block 550, then the process continues to function 4.c at block 552. Again, the controller 30 then asks at block 554 if the transform function has been activated. If not, the controller 30 continues to function 4.i at block 558. If the transform function has been activated, then the gain is transformed at block 556. The process continues to function call 4.i at block 558.
Following blocks 518, 528, 538, 548 and 558, the process continues, as detailed above in connection with
If the sham function is selected, then the controller 30 continues to functions 5.a/5.b as shown in
If the VGRF button 258 is activated at input 570, then the controller 30 continues to function call 6.d at block 572. Again, the controller 30 then asks at block 574 if the transform function has been activated at input 262. If not, the controller 30 continues to function 6.h at block 578. If the transform function has been activated, then the gain is transformed at block 576. The process continues to function 6.h at block 578.
If the knee valgus button 278 is activated at input 580, then the controller 30 continues to function 6.e at block 582. Again, the controller 30 then asks at block 584 if the transform function has been activated. If not, the controller 30 continues to function 6.i at block 588. If the transform function has been activated, then the gain is transformed at block 586. The process continues to function 6.i at block 588.
If the added noise gain button 244 and slide bar 254 have been activated at block 590, then function 5.b is executed at block 592. Similarly, if the sham on angle button 246 and slide bar 256 have been activated at block 594, then function 9.a is executed at block 596.
The process continues to function 16.d at block 368. The sham is displayed at block 597, followed by execution of function 5.c at block 598. The process then continues to block 402 in the manner detailed above in connection with
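One plausible way to realize the sham branch described above is to corrupt the true biomechanical value with operator-scaled noise and an angular offset, so that the displayed stimulus no longer faithfully tracks the participant's performance. The sketch below is an assumption about the noise model; the disclosure does not specify how functions 5.b and 9.a perturb the stimulus.

```python
import random

def sham_stimulus(true_value, noise_gain=0.0, angle_offset=0.0):
    """Hypothetical sham feedback: the displayed value is the true
    biomechanical value plus operator-scaled Gaussian noise and a
    fixed angular offset, decoupling the display from performance.
    noise_gain   ~ added noise gain slide bar 254
    angle_offset ~ sham on angle slide bar 256"""
    return true_value + noise_gain * random.gauss(0.0, 1.0) + angle_offset
```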
With reference to
An advantage of creating individualized feedback gains in the manner detailed above is that a participant who performs atypically (e.g., below or above an average level of performance) could interact with a stimulus display that is tailored to her own needs. Effectively, the gains could be used to increase or decrease the sensitivity of the display and, therefore, make it easier or more difficult to maintain the goal feedback shape and size. The gains could then be adjusted over the course of training to introduce a progression of exercise difficulty appropriate to a given individual's performance; as the participant masters exercise form at one gain setting, the exercise could be progressed to further challenge the participant. The initial individual gains could be determined from a statistical distribution of participant pre-test performances, where the location of the participant's performance relative to the distribution determines the feedback gains used to generate the feedback display. From the same distribution it would also be possible to determine acceptable ranges for the biomechanical variables. For example, it may be counterproductive to provide feedback on trunk lean values that are within ±1.0° of 0.0° (i.e., where the trunk lean is negligible). Lastly, additional exercises could be programmed that target complementary biomechanical variables, such as the single-leg Romanian deadlift, in which the participant balances on one leg, bends over at the waist, and touches the ground with her fingers. This exercise may lead to greater trunk control, more stable hip joint dynamics, and improved balance beyond the effects of the unweighted squat.
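As a concrete illustration of the gain-individualization scheme described above, the following sketch maps a participant's location in the pre-test distribution onto a feedback gain and suppresses feedback inside the ±1.0° trunk-lean dead zone. The linear percentile-to-gain mapping and the gain range are assumptions; the disclosure states only that the participant's location in the pre-test distribution determines the feedback gains.

```python
def individual_gain(pretest_scores, participant_score,
                    min_gain=0.5, max_gain=2.0):
    """Map a participant's pre-test performance onto a feedback gain.
    The linear mapping and the 0.5-2.0 gain range are assumptions."""
    rank = sum(s <= participant_score for s in pretest_scores)
    percentile = rank / len(pretest_scores)
    return min_gain + percentile * (max_gain - min_gain)

def trunk_lean_feedback(trunk_lean_deg, gain, dead_zone_deg=1.0):
    """Suppress feedback inside the +/-1.0 degree dead zone around
    upright suggested above; otherwise scale the error by the gain."""
    if abs(trunk_lean_deg) <= dead_zone_deg:
        return 0.0
    return gain * trunk_lean_deg

# Example: a participant at the median of the pre-test distribution.
gain = individual_gain([1.0, 2.0, 3.0, 4.0], 2.5)   # -> 1.25
print(trunk_lean_feedback(5.0, gain))                # -> 6.25
print(trunk_lean_feedback(0.5, gain))                # -> 0.0 (dead zone)
```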
The present interactive, real-time biofeedback system effectively engages implicit motor learning mechanisms and promotes an external focus of attention. The heat map results revealed a positive change in participants' squatting performance from the pre-test to the post-test period. It is envisioned that the system of the present disclosure may provide a more efficient method for reducing ACL injury risk in high-risk athlete populations.
Although the invention has been described in detail with reference to certain preferred embodiments, variations and modifications exist within the spirit and scope of the invention as described and defined in the following claims.
Claims
1. An augmented neuromuscular training system comprising:
- a sensor for tracking movement of a participant;
- a processor for generating a biomechanical data structure based on data obtained from the sensor;
- a controller configured to generate a stimulus data structure based on a goal reference and a plurality of biomechanical variables from the biomechanical data structure, wherein the stimulus data structure includes a plurality of stimulus coordinate points configured to vary relative to the goal reference in response to the plurality of biomechanical variables; and
- a display visible to the participant, the display presenting a graphical stimulus that is defined by the plurality of stimulus coordinate points and that is a geometric shape in an initial configuration.
2. The training system of claim 1, wherein the sensor is part of a mobile device.
3. The training system of claim 2, wherein the mobile device is one of a smartphone, a tablet, a virtual reality headset, or a smartwatch.
4. The training system of claim 2, wherein the goal reference is one of a plurality of different exercise processing sequences for generating respective stimulus data structures in response to different exercises performed by the participant.
5. The training system of claim 1, wherein any of a size and a shape of the graphical stimulus varies in response to the biomechanical variables.
6. The training system of claim 1, wherein the display includes a headset configured to be worn by the participant, the headset including a wireless receiver for communication with the controller.
7. The training system of claim 6, wherein the headset further includes a speaker to transmit audible instructions from the controller to the participant.
8. The training system of claim 1, wherein the goal reference is one of a plurality of different exercise processing sequences for generating respective stimulus data structures in response to different exercises performed by the participant.
9. The training system of claim 8, further comprising an operator interface in communication with the controller, the operator interface including the display and an operator input for selecting one of the different exercises.
10. An augmented neuromuscular training system comprising:
- a biomechanical acquisition system including a sensor for tracking movement of a participant, and a processor for generating a biomechanical data structure based on data obtained from the sensor;
- a motion analysis and feedback system in communication with the biomechanical acquisition system, the motion analysis and feedback system including a controller configured to generate a stimulus data structure based on a goal reference and a plurality of biomechanical variables from the biomechanical data structure, wherein the stimulus data structure includes a plurality of stimulus coordinate points configured to vary relative to the goal reference in response to the plurality of biomechanical variables; and
- a display visible to the participant, the display presenting a graphical stimulus defined by at least six of the stimulus coordinate points, wherein the at least six stimulus coordinate points define more than one line.
11. The training system of claim 10, wherein the sensor is part of a mobile device.
12. The training system of claim 11, wherein the mobile device is one of a smartphone, a tablet, a virtual reality headset, or a smartwatch.
13. The training system of claim 10, wherein any of a size and a shape of the graphical stimulus varies in response to the biomechanical variables.
14. The training system of claim 10, wherein the display includes a headset configured to be worn by the participant, the headset including a wireless receiver for communication with the controller.
15. The training system of claim 14, wherein the headset further includes a speaker to transmit audible instructions from the controller to the participant.
16. The training system of claim 10, wherein the goal reference is one of a plurality of different exercise processing sequences for generating respective stimulus data structures in response to different exercises performed by the participant.
17. The training system of claim 16, further comprising an operator interface in communication with the controller, the operator interface including the display and an operator input for selecting one of the different exercises.
18. A biomechanical data acquisition system, comprising:
- a plurality of user markers attached to a participant;
- an image acquiring device; and
- a controller configured to receive the relative positions of the markers and generate a biomechanical data structure based upon the position data, wherein the biomechanical data structure is a three-dimensional model.
19. The biomechanical data acquisition system of claim 18, wherein the plurality of user markers are virtual markers.
20. The biomechanical data acquisition system of claim 19, wherein the virtual markers are calculated based on a geometrical offset from a known location.
21. The biomechanical data acquisition system of claim 20, wherein one of the virtual markers is attached to the user's hip.
22. The biomechanical data acquisition system of claim 18, wherein the biomechanical data structure comprises a plurality of biomechanical variables.
23. The biomechanical data acquisition system of claim 22, wherein the plurality of biomechanical variables includes at least one of the group consisting of trunk lean, knee-to-hip movement ratio, knee abduction moment, and vertical ground reaction force ratio.