25 years of lower limb joint kinematics by using inertial and magnetic sensors: a review of methodological approaches

Pietro Picerno

Corresponding Author:
Pietro Picerno
School of Sport and Exercise Sciences, Faculty of Psychology, "eCampus" University
Via Isimbardi 10, 22060 Novedrate (CO), Italy
E-mail: [email protected]
RESEARCH HIGHLIGHTS
a 25-year report of methodological contributions to the estimation of lower limb joint kinematics using inertial sensors is presented
basic prerequisites for a reliable estimate of joint kinematics are discussed
contributions are classified according to the computational approach and the relevant instrumental technique
these methodological approaches are discussed from the point of view of their usability in a practical clinical context
Abstract: The measurement of joint kinematics is typically confined to the laboratory environment, where the restricted capture volume may hinder the execution of the motor tasks under analysis. Conversely, clinicians often require the analysis of motor acts in non-standard environments and for long periods of time, such as in ambulatory settings or during daily life activities. The miniaturisation of motion sensors and electronic components, generally associated with wireless communications technology, has opened up a new perspective: movement analysis can be carried out outside the laboratory and at a relatively lower cost. Wearable inertial measurement units (embedding 3D accelerometers and gyroscopes), possibly combined with magnetometers, allow one to estimate segment orientation and joint angular kinematics by exploiting the laws governing the motion of a rotating rigid body. The first study that formalised the problem of estimating joint kinematics using inertial sensors dates back to 1990. Since then, a variety of methods have been presented over the past 25 years for the estimation of 2D and 3D joint kinematics using inertial and magnetic sensors. The aim of the present review is to describe these approaches from a purely methodological point of view, to provide the reader with a comprehensive understanding of all the instrumental, computational and methodological issues related to the estimation of joint kinematics when using such sensor technology.
Keywords: joint kinematics; wearable inertial sensors; accelerometers; gyroscopes; methodological approach.
1. Lower limb joint angular kinematics in gait analysis: aims, current practice and new trends

Cappozzo and colleagues stated that "Human movement analysis aims at gathering quantitative information about the mechanics of the musculo-skeletal system during the execution of a motor task" [1]. When the latter refers to human gait, a variety of kinematic variables and related parameters can be used for characterising this motor task [2]. One of these variables is joint angular kinematics (simply "joint kinematics" from now on), which is considered a key descriptor for discriminating between a normal and a pathological gait [3]. Gait analysis has therefore become a tool widely used by an increasing number of clinicians as a quantitative approach for evaluating the effects of a surgical or therapeutic intervention and the progression of a patient's rehabilitation [5-7]. Besides the characterisation of an altered joint kinematics from time-domain signal analysis, a number of indices can be retrieved from joint kinematics for characterising what is called interjoint or interlimb coordination [8]: an altered coordination pattern indeed characterises numerous musculoskeletal [9, 10] and neurological diseases [11-14].

Joint kinematics during walking is typically measured in laboratory settings, where a multi-camera stereophotogrammetric system is used to measure the three-dimensional (3D) position of point markers placed on palpable anatomical landmarks. Unfortunately, laboratory-based movement analysis has turned out to be inappropriate for investigating a motor task in a natural environment over a long period of time. From a clinical point of view, motor capacity, as measured in laboratory settings, may not accurately reflect functional ability in daily-life environments, since the behaviour of patients in a laboratory is not necessarily representative of their daily-life behaviour.

An alternative to laboratory-based techniques for the estimation of joint angular kinematics is the use of inertial sensors such as accelerometers and gyroscopes. The standard technologies of motion analysis (stereophotogrammetry) provide body segment position and orientation relative to a fixed reference frame. Other kinematic quantities, such as linear accelerations or angular velocities, are generally computed as the time-derivative of the measured linear and angular displacement [15], but they can also be measured without the need for any external reference system, because the principle of inertia can be exploited. The linear acceleration and the angular velocity of a rigid body can indeed be directly measured on board by using inertial sensors fixed to the body. The term "inertial" refers to the fact that this class of sensors measures its own movement (or the movement of the rigid body to which the sensor is fixed) by exploiting the reluctance to move (inertia) of a proof mass contained in the sensor while the latter is accelerated by an external force (accelerometer) or rotated by a force couple (gyroscope). "Inertial sensors" refers to a family of sensors comprising linear acceleration sensors (accelerometers) and angular rate sensors (gyroscopes). Accelerometers and gyroscopes measure linear acceleration and angular velocity along and about a so-called "sensitive axis", respectively. Recent technological advances have led to the miniaturisation of these sensors, which can be electronically assembled and contained in small cases. The sensitive axis of the sensor is generally aligned with one of the geometrical axes of the case, so that the linear acceleration and the angular velocity can then be referred to the sensor's housing. Three single-axis inertial sensors can be assembled mutually orthogonal to each other so that the linear accelerations and the angular velocities can be measured, respectively, along and about a 3D sensor-embedded frame. Such an assembly is generally referred to as an Inertial Measurement Unit (IMU).
Commercially available inertial sensors come as small, lightweight wireless units that can be fixed to a body segment without affecting its movement. Furthermore, depending upon their application, inertial sensors are generally equipped with Bluetooth/wireless transmitters for real-time data streaming or with SD cards for long-term on-board data recording. Finally, inertial sensors can be fixed on a single body segment, or a network of two or more inertial sensors can be used to retrieve data from multiple body segments synchronously so that joint kinematics can be estimated.
Inertial sensors are now commonly used by researchers in a wide range of applications as an alternative to conventional movement analysis systems [16]. The first study which formalised the problem of estimating joint kinematics by using inertial sensors dates back to 1990. Since then, a variety of methods have been presented over the past 25 years for the estimation of 2D and 3D joint kinematics by using wearable inertial sensors. To the best of the author's knowledge, papers reviewing inertial sensor-based applications for the estimation of lower limb joint kinematics have focussed either on an overall description of applications for gait analysis [17] or on a systematic description of studies related to lower limb biomechanics [18]. None of them, however, has focussed its analysis on the relevant methodological and mathematical approaches. The aim of the present paper is to review the literature by classifying and describing the methodological approaches used so far for estimating lower limb joint kinematics by means of wearable inertial sensors. This is to provide the reader with a comprehensive understanding of all the instrumental, computational and methodological issues related to the estimation of joint kinematics when using such sensor technology.

1.1 Preamble

Although the present review was not meant to be systematic, the most popular health-, biomechanics- and engineering-related electronic databases (Medline, Scopus, ISI Web of Knowledge, IEEE Xplore and SportDiscus) were queried in November 2015. The search was limited to English language research articles. The search keyword string was (joint kinematics) AND (gait analysis) AND (lower limb OR knee OR hip OR ankle) AND (inertial sensor OR magnetic sensor OR accelerometer OR gyroscope OR magnetometer), contained in the title, abstract and keyword fields. Only papers presenting a novel methodological approach for the estimation of joint kinematics using inertial or inertial and magnetic sensors were included. A total of 12 different and original approaches were found from 1990 to 2015.

After a brief historical excursus on the emerging use of wearable inertial sensors for the estimation of joint kinematics since their first application in the early nineties (Section 2), the following sections focus on the prerequisites for estimating 2D and 3D joint kinematics by using such technology and, in particular: a) the computational bases for determining the sensor's orientation in 2D and 3D space and the relevant issues (Section 3); b) the alignment of the sensor's axes to the body segment's axes for the sake of functional readability of the relevant joint kinematics in the 2D (Section 4.1) and 3D (Section 4.2) cases. Finally, Section 5 is the core section, featuring a review of all of the methodological approaches used thus far for estimating 2D (Section 5.1) and 3D (Section 5.2) joint kinematics by means of inertial sensors.

2. Inertial sensors for estimating a body segment's orientation: a brief "historical" perspective

A common way to introduce inertial sensors in research papers proposing novel methodologies for human movement analysis (including the present paper) is to present inertial sensors as a recent alternative to standard laboratory-based motion analysis technologies.
Inertial sensors actually made their appearance in parallel with camera-based approaches [19]: the first studies on the estimation of segmental orientation indeed date back to 1973, when Morris used six uniaxial accelerometers mounted on a rigid bar for solving the equation governing the motion of a rotating rigid body and determining its angular acceleration [20]. Angular velocity and displacement (i.e., orientation) were then computed by first and double numerical integration of the angular acceleration, respectively. To eliminate the drift introduced by the numerical integration, which affected the angular displacement, Morris identified the beginning and the end of the walking cycles and matched the signal at the beginning and at the end of each cycle. A direct measurement of the angular velocity had actually been carried out by Bortz two years earlier, using a gyroscope, while the orientation of the object was estimated by numerical integration of the measured angular rate [21]. At that time, Morris's approach represented a novel methodology with respect to that of Bortz because gyroscopes were much more cumbersome, expensive and power-consuming than accelerometers, which could also exhibit a higher dynamic range and resolution [22]. Estimating angular kinematics using accelerometers was, hence, more convenient. Starting with the previously mentioned work of Morris, a variety of so-called "gyro-free IMU" approaches have been proposed for estimating rigid body angular motion [23], but they have had little success in movement analysis and are now almost reduced to being a mere analytical exercise. The poor applicability of gyro-free IMUs in human movement analysis was probably due to three main reasons: 1) the cumbersome setup required by such an approach (at least six uniaxial accelerometers per segment); 2) the widespread adoption of Micro Electro-Mechanical Systems (MEMS) which, from a hardware point of view, made gyroscopes much better in measurement range and resolution, cheaper, miniaturised and low power-consuming; 3) the widespread adoption of sensor fusion algorithms which, from a software point of view, allowed for robust correction of the drift affecting the numerically integrated angular velocity, together with a more accurate orientation estimate of rigid bodies [24]. The advent of MEMS technology is probably the reason why inertial sensors are typically presented as a recent alternative to standard motion analysis techniques. The commercial availability of MEMS inertial sensors is the key factor that truly opened up new perspectives in human movement analysis and that justifies the significant growth of the related literature from the beginning of the 2000s until the present day [18].

3. On the estimate of the sensor's orientation

Since joint kinematics has been defined as the relative orientation of two adjacent body segments, its estimation first requires knowledge of the orientation of each body segment, obtained by using inertial sensors. In the two-dimensional (2D) (i.e., planar) case, joint kinematics is then usually computed as the difference between the planar orientations of the adjacent body segments. In the 3D case, just as in stereophotogrammetry, the 3D orientation of the two adjacent body segments is needed. Both in the 2D and 3D cases, relating the orientation of the sensor to that of the body segment on which the sensor is fixed is not straightforward. This will be the main topic of the following section, and it will be further discussed as the various methodological approaches available in the literature for the estimation of joint kinematics are presented. In the 2D case, joint kinematics can actually also be estimated by comparing the equivalent accelerations at the hinge joint connecting two adjacent body segments whose radial and tangential accelerations, along with the geometry of the system, are known. This approach will be discussed later in Section 5.1.
In a static condition, the orientation of a rigid body can be measured by an accelerometer: since the latter senses solely the acceleration due to gravity, its signal is proportional to the deviation of its sensitive axis from the vertical direction, and its orientation with respect to gravity can be determined by simple trigonometry directly from the accelerometer's reading [25]. A triaxial accelerometer is needed for an unambiguous representation of the orientation of an object with respect to gravity. This orientation representation, typically referred to as "attitude" in inertial navigation terminology, is expressed by the roll and pitch angles.

In a dynamic condition, the rigid body orientation can be computed by numerical integration of the angular velocity. The latter can either be directly measured by a gyroscope or can be determined by using gyro-free IMUs composed of a minimum of 12 distributed uniaxial accelerometers [20]. As previously discussed, gyro-based IMUs have been preferred for human movement analysis purposes. Whether gyro-based or gyro-free IMUs are used, the angular displacement is characterised by a time-increasing drift that strongly affects the reliability of the sensor's orientation [26]. This is because of the intrinsic nature of the signal measured by MEMS inertial sensors, which is characterised by an unpredictable low-frequency random-walk noise that becomes "amplified" in the numerical integration process. Drift affecting the angular displacement can be reset every time an a priori known kinematical state of the system can be detected. This can be done either by using an event identification (e.g., a shank that is vertical at midstance during gait) or by using the accelerometer's readings every time a static or a quasi-static condition is detected. The practice of fusing gyroscope and accelerometer signals for the estimation of the IMU orientation is typically referred to as "sensor fusion". Sensor fusion algorithms are usually already implemented in commercially available off-the-shelf IMUs. Accelerometers can correct the orientation computed from the numerical integration of the angular velocity solely with respect to gravity (roll and pitch angles), but they provide no reference information about the sensor's orientation about the vertical direction (heading or yaw angle). The latter can be provided by a compass, which senses the local magnetic north and can be used as an absolute reference to reset the drift about the vertical direction, so that a full 3D orientation of the sensor in space can be obtained. This justifies the use of a magnetometer in combination with the IMU to enhance sensor fusion algorithms [27]. Unfortunately, the presence of ferromagnetic disturbances distorts the sensing of the local magnetic north. This negatively affects the reliability of the estimated sensor's orientation and may, thus, compromise the usability of such applications in clinical settings, which are normally characterised by ferromagnetic materials and related interferences.
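To make these building blocks concrete, the following minimal sketch (not taken from any of the reviewed papers; variable names and the filter gain are illustrative) shows how roll and pitch can be obtained from a triaxial accelerometer by simple trigonometry, and how a basic complementary filter can blend the drifting gyroscope-integrated angles with the noisy but drift-free accelerometer-based tilt, as a simplified stand-in for the Kalman-type sensor fusion algorithms implemented in commercial IMUs.

```python
import numpy as np

def tilt_from_accelerometer(acc):
    """Roll and pitch (rad) of the sensor frame with respect to gravity from a
    triaxial accelerometer reading acc = [ax, ay, az] taken in a static or
    quasi-static condition. Heading (yaw) cannot be observed from gravity alone."""
    ax, ay, az = acc
    roll = np.arctan2(ay, az)
    pitch = np.arctan2(-ax, np.hypot(ay, az))
    return roll, pitch

def complementary_filter(gyro, acc, dt, alpha=0.98):
    """Drift-limited roll/pitch estimate fusing gyroscope and accelerometer data.
    gyro: (N, 3) angular velocities (rad/s) about the sensor axes
    acc:  (N, 3) accelerations along the sensor axes
    alpha: weight of the gyroscope-propagated estimate (illustrative value).
    Small-angle simplification: body rates are treated as Euler angle rates."""
    roll, pitch = tilt_from_accelerometer(acc[0])
    angles = [(roll, pitch)]
    for k in range(1, len(gyro)):
        # Propagate by integrating the angular rate (smooth but drifting)
        roll_g, pitch_g = roll + gyro[k, 0] * dt, pitch + gyro[k, 1] * dt
        # Gravity-based correction (noisy but drift-free)
        roll_a, pitch_a = tilt_from_accelerometer(acc[k])
        roll = alpha * roll_g + (1 - alpha) * roll_a
        pitch = alpha * pitch_g + (1 - alpha) * pitch_a
        angles.append((roll, pitch))
    return np.array(angles)
```

A magnetometer would play the same corrective role for the heading (yaw) angle, with the caveats about ferromagnetic disturbances discussed above.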
4. Segment-to-sensor alignment

When estimating joint kinematics by using inertial sensors, the sensor-to-segment axis alignment is a crucial factor to take into account. As previously said, obtaining the orientation of the body segment from the known orientation of a sensor fixed on the segment is not straightforward. Sensor-to-segment axes alignment is needed for the sake of the functional readability of the measured or derived information content: to be called "joint kinematics", it has to be the relative orientation between the anatomical axes of two adjacent body segments rather than solely the relative orientation between the axes of two adjacent body-fixed sensors.

4.1 2D kinematics
When measuring a static segment's orientation with an accelerometer, the sensitive axis of the sensor has to be aligned with the segment's anatomical axis in order to reference the orientation measurements to the body segment. When joint kinematics is estimated by comparing the equivalent accelerations at a hinge-modelled joint, the radial axes of the pairs of biaxial accelerometers used for determining the segment's rotational motion must intersect the joint. This can either be done manually [28], or the alignment can be aided by digital pictures [29].

Unlike the accelerometer-based orientation, the gyroscope-based orientation is relative to the sensor's own orientation at the first instant of the numerical integration process. Joint kinematics will then be defined as the difference between the distal and the proximal body segment's orientation. In this case, the sensor-to-segment axes alignment is, hence, not required, since joint kinematics will consist of a variation of orientation whose initial state is assumed to be zero (e.g., the knee flexion-extension angle is zero in the upright posture). The only precaution is to mount the gyroscope on the body segment so that its sensitive axis is perpendicular to the plane of rotation. The same holds for those approaches where the angular motion of the segment is measured by using linear accelerations instead of gyroscopes [30]. Accelerometers can also be used for initialising the numerical integration of the angular rate in order to express the orientation of the sensor case with respect to a gravity-based inertial reference system. In this case, provided that the accelerometer's axis has been aligned with the segment's axis, the initial joint angle (i.e., the offset angle) is no longer zero but represents the orientation of the anatomical axis with respect to gravity at the first instant of the numerical integration process (e.g., the upright posture before starting to walk). A further axis can be defined during selected segment rotational movements by using the direction of the measured angular velocity vector, which can be assumed to coincide with the joint axis of rotation [31].
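As a concrete illustration of the gyro-based planar case described above, the following minimal sketch (in the spirit of the approaches reviewed in Section 5.1, not a published implementation; names and conventions are illustrative) computes the knee flexion-extension angle as the difference between the integrated sagittal-plane rotations of the thigh and the shank, with optional accelerometer-derived offset angles.

```python
import numpy as np

def planar_segment_angle(gyro_z, dt, initial_angle=0.0):
    """Planar orientation (rad) of a body segment obtained by numerically
    integrating the angular velocity (rad/s) measured by a uniaxial gyroscope
    whose sensitive axis is perpendicular to the plane of rotation.
    initial_angle = 0 gives the orientation relative to the starting posture;
    an accelerometer-derived tilt gives the orientation with respect to gravity."""
    return initial_angle + np.cumsum(gyro_z) * dt

def knee_flexion_extension(thigh_gyro_z, shank_gyro_z, dt,
                           thigh_offset=0.0, shank_offset=0.0):
    """Knee flexion-extension as the difference between the planar orientations
    of the thigh and the shank. Drift correction (kinematical reset, filtering
    or sensor fusion, see Section 5.1) is intentionally omitted here."""
    thigh = planar_segment_angle(thigh_gyro_z, dt, thigh_offset)
    shank = planar_segment_angle(shank_gyro_z, dt, shank_offset)
    return thigh - shank
```

In practice, the integrated angles drift over time, and one of the drift-correction strategies reviewed in Section 5.1 would be applied on top of this basic scheme.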
4.2 3D kinematics

In order to estimate a functionally meaningful 3D joint kinematics by using inertial sensors, for each involved body segment the orientation of the axes of the anatomical reference system representing the orientation of the body segment has to be known with respect to the orientation of the sensor-embedded reference system. This relationship is assumed to be time-invariant and can, hence, be established once the sensors are mounted on the segment through ad hoc calibration procedures. This will be explained in detail in the next section. Once the time-invariant relation between the sensor and the anatomical axes is known, it is sufficient, during walking, to record solely the time-varying orientation of the sensor-embedded reference frame.

5. Estimating joint kinematics by using inertial sensors

The first study which formalised the problem of the estimation of joint kinematics by using inertial sensors dates back to 1990 [28]. Since then, a variety of methodological approaches have been presented over the past 25 years to estimate 2D and 3D joint kinematics by using wearable inertial sensors. Several levels of classification are used in the present paper to organise the relevant research from 1990 to the present day (Fig. 1). A first level consists in distinguishing between 2D and 3D joint kinematics.
In the 3D case, joint kinematics is essentially determined as the relative orientation between the proximal and distal sensor-embedded frames (both expressed with respect to the same absolute reference frame). To be precise, one segment is expressed relative to the adjacent segment by multiplying the transposed rotation matrix of the proximal segment by that of the distal segment. A joint orientation matrix is thereby obtained. Finally, joint kinematics is retrieved by decomposing the joint orientation matrix into three consecutive rotations about specified anatomical axes following a specified order of rotation (as normally done in stereophotogrammetry). While the 3D orientation of the sensor-embedded frame with respect to an absolute reference frame is generally readily available as provided by the manufacturers, the key point of 3D joint kinematics is the sensor-to-segment alignment: two different approaches, an anatomical approach [32] and a functional approach [33-35], have been classified and will be further discussed.

With regard to the estimate of 2D joint kinematics, four different approaches have been classified. In particular, 2D joint kinematics has been estimated:
1) by comparing the equivalent accelerations of the proximal and the distal body segments at the connecting hinge joint (a sub-classification is possible between the studies solving the equation governing the motion of a rotating rigid body by using gyro-free IMUs, where the angular acceleration is determined from the radial and tangential accelerations measured at two points of the rigid body [28], and those using a gyroscope-aided accelerometer system, where the angular acceleration is computed from the measured angular velocity of the rigid body [29]);
2) as the difference between the planar orientations of two adjacent body segments (a sub-classification is possible between gyro-based [36, 37] and gyro-free [30] segment orientation measurement techniques, while the gyro-based technique can be further classified depending on the drift correction method used: kinematical reset [36] or sensor fusion [37]);
3) by using a mixed approach combining the two previous methodologies [31];
4) by using neural network prediction [38].

FIGURE 1 AROUND HERE

5.1 2D kinematics

As previously said, the first attempt to estimate joint kinematics by using inertial sensors dates back to 1990, when Willemsen used two pairs of biaxial accelerometers mounted on a bar and then fixed to the lateral aspect of the thigh and shank. This was done to estimate the angular acceleration of the two segments and to determine the radial and the tangential acceleration of the knee using the equation governing the motion of a rotating rigid body. The knee was modelled as a hinge joint, and the distances between the sensors and the joint were assumed to be constant. The knee flexion-extension angle was, hence, computed by comparing the body-fixed equivalent accelerations at the connecting joint [28]. The advantage of this method is that no numerical integration is required. The disadvantage is the high-frequency error characterising the joint angle signal due to fixation-related issues: the vibration of the accelerometers introduces unwanted high-frequency accelerations, especially at ground impact. This issue was mitigated by low-pass filtering the estimated knee flexion-extension signal.
Furthermore, care must be taken in placing the accelerometer pairs on the body segments, so that the radial axes of the pairs of biaxial accelerometers used for determining the segment's rotational motion intersect the joint. This alignment was performed manually and, in addition, it can hardly be considered effective during walking, where the knee does not behave exactly as a hinge.
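The equivalent-acceleration principle can be sketched as follows. This is a simplified, hedged illustration rather than the exact formulation of [28]: it assumes that each segment carries two biaxial accelerometers with their x-axes pointing from the joint along the segment (radial direction) and y-axes tangential, and it omits the low-pass filtering and other processing of the published method.

```python
import numpy as np

def joint_specific_force_2d(acc_at_r1, acc_at_r2, r1, r2):
    """Equivalent acceleration (specific force) at a hinge joint, expressed in
    the frame of the segment carrying the sensors.
    acc_at_r1, acc_at_r2: [ax, ay] readings of two biaxial accelerometers placed
    at distances r1 and r2 from the joint along the segment axis
    (x = radial, pointing from the joint towards the sensors; y = tangential)."""
    ax1, ay1 = acc_at_r1
    ax2, ay2 = acc_at_r2
    d = r1 - r2
    omega_sq = (ax2 - ax1) / d   # from ax(r) = aJx - omega^2 * r
    alpha = (ay1 - ay2) / d      # from ay(r) = aJy + alpha * r
    return np.array([ax1 + omega_sq * r1, ay1 - alpha * r1])

def hinge_joint_angle(joint_acc_thigh_frame, joint_acc_shank_frame):
    """Planar joint angle (rad) as the rotation mapping the joint acceleration
    expressed in the shank frame onto the same physical vector expressed in the
    thigh frame. Gravity affects both vectors identically (same physical point),
    so the comparison remains valid; the estimate is ill-conditioned when the
    joint's specific force is close to zero."""
    vs, vt = joint_acc_shank_frame, joint_acc_thigh_frame
    return np.arctan2(vs[0] * vt[1] - vs[1] * vt[0],
                      vs[0] * vt[0] + vs[1] * vt[1])
```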
The same analytical method was then proposed by Dejnabadi and colleagues, but using a hybrid approach that allowed the number of sensors to be reduced to one biaxial accelerometer per segment. The segment's angular velocity and angular acceleration in the lateral plane (which are needed to translate the acceleration from the sensor's location to the joint) were, hence, respectively measured and derived by using a uniaxial gyroscope mounted perpendicularly to the biaxial accelerometer [29]. This latter approach, however, required the numerical differentiation of the angular velocity. Sensors were arbitrarily mounted on the segments, and the sensor-to-segment axis alignment was performed by using a photograph. A small root mean square error (1.3°) and a high correlation coefficient (0.997) were found with respect to photogrammetry. Essentially the same approach as Dejnabadi's was used by Takeda and colleagues [39], but knee kinematics was also extended to the frontal plane (knee adduction-abduction angle), since a triaxial accelerometer and gyroscope were used. Sensor-to-segment alignment was performed neither manually nor by means of images, but by using ad hoc calibration postures, so that the sensor could be placed in an arbitrary orientation on the body segment. The authors reported a root mean square error (RMSE) of 6.79° and a correlation coefficient of 0.92 with respect to a standard motion analysis.

The major boost in the assessment of joint kinematics with inertial sensors came from the so-called "strapdown approach". Although Bortz's study dates back to the early 70s, the strapdown approach started to become the preferred and most convenient solution when, in the late 90s, miniature MEMS gyroscopes became commercially available and could be integrated in a small electronic assembly. Tong and Granat proposed "a practical gait analysis" using a uniaxial gyroscope mounted on the thigh and on the shank, in order to estimate segment sagittal orientation by numerical integration of the angular velocity [36]. The drift affecting the segment's planar rotation was compensated either by resetting the numerical integration every time the thigh and the shank could be considered vertical during the gait cycle (kinematical reset) or by high-pass filtering. The knee flexion-extension angle was then computed by subtracting the rotation of the shank from that of the thigh. Since no external reference can be obtained with gyroscopes, the knee flexion-extension angle was considered to be zero at the beginning of the trial, when the numerical integration process started (standing posture). The advantage of this approach is certainly the light setup required for the analysis. Furthermore, no sensor-to-segment axis alignment is required since the initial angle (initial condition of the numerical integration) is simply zero. No offset (posture) angles can be obtained, however. The authors reported a RMSE of 6.42° and a correlation coefficient of 0.93 with respect to a standard motion analysis. This approach was further enhanced when the resetting of the gyro-based segment orientation was performed by a sensor fusion algorithm based on the signals measured by a triaxial IMU: a Kalman filter was proposed by Cooper and colleagues [37] that used accelerometer readings, in addition to joint constraints, to continuously correct the drift affecting the gyro-based segment orientation.
The method indeed proved to be robust and accurate: errors of 0.7° during walking and 3.4° during 5 mph running were found against a standard motion analysis during 5-minute trials. The use of joint constraints consisted in "locking" the knee so as to allow rotations solely in the lateral plane: this allowed a heading reference (i.e., magnetometers) to be excluded and, hence, avoided the limitations due to ferromagnetic disturbances. On the other hand, modelling the knee as a hinge joint reduced the degrees of freedom of the joint solely to flexion-extension.

A similar approach to that used by Tong and Granat was proposed by Djuric-Jovicic and colleagues [30]: the knee flexion-extension angle was computed by subtracting the rotation of the shank from that of the thigh. The segment rotation was computed, though, by double numerical integration of the planar angular acceleration, as determined by a pair of biaxial accelerometers used for measuring the radial and the tangential segment accelerations. The drift affecting the numerical integration process was corrected by high-pass filtering. The advantage of this approach is that a single set of biaxial accelerometers was used for solving the segment's angular motion equation (instead of two pairs, as done by Willemsen). The authors reported a root mean square error below 6° and a correlation coefficient of 0.97 with respect to a standard motion analysis.

A regression algorithm was used by Findlow and colleagues to train a neural network for predicting sagittal plane joint kinematics of the lower limb from the measured body segment 3D linear accelerations and angular velocities [38]. Neural network training was performed by using joint kinematics as measured by a standard motion analysis system. As expected by the authors, the best results were obtained for intra-subject predictions, which resulted in very high correlations (0.99) and low mean absolute deviations (2.3°) between the measured and the predicted joint kinematics. In contrast, the worst scenario resulted from the inter-subject prediction, which produced poorer correlations (0.88) and larger errors (7.8°).

The latest methodology presented for the estimation of 2D joint kinematics is the one proposed by Seel and colleagues [31]: it consists in a mixed approach where the knee flexion-extension angle was computed both by numerical integration of the angular velocity and by using the method proposed by Dejnabadi. The final joint kinematics resulted from a Kalman filter-mediated weighted average between the two flexion-extension angles: during the high-frequency events of the gait cycle (e.g., heel strike), where the segmental kinematics derived from accelerations may be unstable and characterised by high-frequency noise, the filter assigns more importance to the drifting but smoother joint kinematics estimated by numerical integration of the angular velocity, and vice versa. The authors reported errors of less than 1° against a standard motion analysis. The IMU could be placed in an arbitrary orientation on the body segment, since the sensor-to-segment axis alignment was carried out using ad hoc segment rotational movements where the direction of the angular velocity vector (as defined with respect to the sensor-embedded frame) was used as the joint rotation axis. The main innovation of this paper, however, is the very first attempt to estimate the position of the joint rotation centre with respect to the sensor from arbitrary movements by using kinematic constraints. This removes the need for manual measurement of sensor-to-hinge distances, as required by the previously mentioned approaches.
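The weighted-average idea underlying this mixed approach can be illustrated with the following schematic sketch; it is not the published algorithm of [31], and the constant weight used here is a crude stand-in for the adaptive, Kalman-mediated weighting described above.

```python
import numpy as np

def fused_knee_angle(rel_gyro_z, acc_based_angle, dt, weight=0.01, angle0=0.0):
    """Blend two planar knee-angle estimates: a smooth but drifting one obtained
    by integrating the relative (thigh minus shank) angular velocity, and a noisy
    but drift-free one derived from accelerations (e.g., the equivalent-
    acceleration method). 'weight' is the constant fraction assigned to the
    acceleration-based estimate at each step (illustrative value)."""
    fused = np.empty(len(rel_gyro_z))
    fused[0] = angle0
    for k in range(1, len(rel_gyro_z)):
        gyro_pred = fused[k - 1] + rel_gyro_z[k] * dt   # integrate angular rate
        fused[k] = (1 - weight) * gyro_pred + weight * acc_based_angle[k]
    return fused
```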
5.2 3D kinematics

3D joint angular kinematics requires the estimation of the 3D sensor orientation in space. For this reason, these solutions are based on sensor fusion algorithms applied to the IMU signals [34], and most of them feature the use of a magnetometer [32, 33, 35]. In some cases the sensor orientation is readily furnished by proprietary "black-boxed" sensor fusion algorithms provided by the vendor of the sensor [32, 35], while in other cases the sensor orientation algorithm was proposed by the authors and explained in the paper [33, 34]. Once the 3D orientation of a sensor rigidly associated with a body segment can be tracked, the orientation of the bone-embedded frame with respect to the sensor-embedded frame is required in order to estimate joint angular kinematics. This procedure is commonly referred to as anatomical calibration and can be carried out using two different approaches.

An anatomical approach was proposed by Picerno and colleagues in 2008, where the directions of a set of axes defined by selected palpable anatomical landmarks were measured by using a magnetometer-aided IMU mounted on a specific pointing device [32]. These directions were used to determine the orientation of the bone with respect to a second sensor placed on the body segment and then used to track its movement, assuming that the two sensors shared the same global reference frame. The orientation of the bone-embedded frame can thus be expressed with respect to the global reference frame, and its orientation can be computed relative to the contiguous bone-embedded frame determined with the same procedure. 3D joint kinematics was estimated at the hip, knee and ankle joints during gait. The highest RMSE values between the estimated waveforms and those measured by a reference stereophotogrammetric system were 1.9°, 2.8° and 3.6° in the lateral, the frontal and the transverse plane, respectively.

O'Donovan and colleagues used a functional approach to determine the orientation of the ankle joint by using the direction of the angular velocity vector measured by a sensor fixed on the foot while the latter performed selected segment rotational movements about the functional joint axes [33]. The reported RMSE values (as computed against a reference stereophotogrammetric system during gait) were 0.55°, 2.3° and 4.09° in the lateral, the frontal and the transverse plane, respectively. The same functional approach was used by Cutti and colleagues, who extended the assessment of joint kinematics to the whole lower limb [35]: the accuracy with respect to a stereophotogrammetric system during a walking trial was evaluated in terms of waveform similarity using the coefficient of multiple correlation (CMC) [40]. The latter was reported as ranging from 0.95 to 1, from 0.92 to 0.95 and from 0.68 to 0.92 for the lateral, the frontal and the transverse plane kinematics, respectively. A final mention goes to the work of Favre and colleagues in 2009 who, differently from the work of O'Donovan and Cutti, proposed a functional approach for estimating 3D knee kinematics without using magnetometers [34]. The reported mean differences and CMC between the estimated kinematics and that measured by a reference stereophotogrammetric system during gait were 8.1° (CMC=1), 6.2° (CMC=0.76) and 4° (CMC=0.85) for the lateral, the frontal and the transverse plane kinematics, respectively.

All of the mentioned methodological approaches for the estimation of 3D joint kinematics showed a high reliability with respect to a video-based stereophotogrammetric system assumed to be the gold standard. Even if these methods are not fully comparable, owing to the different statistical tools used for the reliability assessment, a common trend is that the error increases in the transverse plane. The sensor's orientation in the transverse plane, relying mainly on magnetometers, is the most exposed to errors related to ferromagnetic disturbances.
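The generic computational pipeline shared by these 3D approaches can be sketched as follows. This is a hedged outline rather than any specific published protocol: rotation-matrix conventions are assumed, the constant sensor-to-anatomy rotation is supposed to come from an anatomical or functional calibration, and the 'ZXY' decomposition sequence is only an example of an anatomically ordered rotation sequence.

```python
from scipy.spatial.transform import Rotation

def anatomical_orientation(R_sensor_to_global, R_anat_to_sensor):
    """Time-varying orientation of the anatomical (bone-embedded) frame in the
    global frame: the constant sensor-to-anatomy rotation obtained from the
    calibration is applied to the sensor orientation streamed during walking."""
    return R_sensor_to_global @ R_anat_to_sensor

def joint_angles(R_prox_anat, R_dist_anat, sequence="ZXY"):
    """3D joint kinematics from the orientations of the proximal and distal
    anatomical frames (both expressed in the same global frame): the joint
    orientation matrix expresses the distal frame relative to the proximal one
    (transposed proximal matrix times distal matrix) and is then decomposed into
    three ordered rotations (Euler/Cardan angles, in degrees)."""
    R_joint = R_prox_anat.T @ R_dist_anat
    return Rotation.from_matrix(R_joint).as_euler(sequence, degrees=True)
```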
6. Discussion and Conclusions

A 25-year methodological review of studies related to the estimation of joint kinematics by means of inertial sensors has been performed. A list of the methodological contributions, along with their main peculiarities and distinctive features, can be found in Table 1.

TABLE 1 AROUND HERE

Generally, all of the analysed approaches were found to be accurate with respect to standard motion analysis techniques. For this reason, the following discussion addresses the practical and "ecological" aspects rather than measurement reliability matters. For instance, a relevant factor is the usability of the method in real-time applications: where the clinical context requires a real-time description of joint kinematics, all of the methodologies which rely on post-processing operations (e.g., high-pass filtering for drift correction [30, 36] or low-pass filtering for high-frequency noise removal [28]) cannot be considered a suitable solution. Another factor, crucial for the clinical applicability of any methodology, is the measurement setup: approaches requiring a single sensor per body segment are preferable to gyro-free techniques [28, 30], which are typically characterised by a cumbersome setup of several accelerometers per segment. Moreover, a gyro-free approach is affected by high-frequency errors, even if, at the same time, it avoids any numerical integration process for estimating joint kinematics. However, considering the latest advances in sensor fusion algorithms, the drift affecting the numerical integration underlying gyro-based segment orientation no longer represents a big concern, especially during low-frequency cyclic movements like walking.

The simplest solution for assessing joint kinematics is the approach proposed by Tong and colleagues, which makes use of a single uniaxial gyroscope per segment [36]. This solution is fine if sagittal plane kinematics (e.g., knee flexion-extension) is the sole quantity of interest. With regard to the consequent drift affecting the estimated joint angle, sensor fusion might be more "elegant" from a computational point of view than the kinematical reset required by Tong's approach: the automatic event identification needed for performing the kinematical reset might not be straightforward, especially in pathological gait [41], and the accuracy of the joint kinematics would then depend on the accuracy of the event identification. A sensor fusion solution would, though, require adding a triaxial accelerometer to the sensor assembly [37], even if the final assembly would still result in a single device. Nevertheless, it is not known whether a kinematical reset would be more or less efficient than a sensor fusion solution in compensating for drift, because no such comparison has been conducted yet. Since drift increases over time during the numerical integration process, any comparison should be performed on long-distance gait trials. The advantage of adding a triaxial accelerometer, beyond drift correction, is the possibility of assessing offset (posture) angles. An alternative approach for the assessment of 2D joint kinematics using a single IMU per segment is represented by the proposals of Takeda [39] and Dejnabadi [29], in which joint kinematics is computed without any numerical integration. Furthermore, the approach proposed by Takeda and colleagues allows the assessment to be extended to the frontal plane.
Finally, with regard to sensor-to-segment alignment in 2D joint kinematics, a functional approach [31] is probably easier and less time-consuming to execute than performing the alignment using a picture [29].
With regard to 3D joint kinematics, both the anatomical and the functional calibrations proved to be reliable with respect to standard motion analysis techniques. Nevertheless, the resulting joint kinematics is intrinsically different, since the two approaches lead to different anatomical axes definitions. For this reason, while no comparison in terms of accuracy can be made, future investigations should address their functional and clinical significance. However, the biggest issue related to the estimation of the 3D sensor orientation (and the related joint kinematics) remains the compensation for ferromagnetic disturbances. A magnetometer-based sensor orientation may expose the estimation of joint kinematics to errors related to any alteration of the local magnetic field. If it is true that a number of papers have demonstrated the efficacy of sensor fusion algorithms in compensating for ferromagnetic disturbances [43-45], it is also true that clinical ambulatory settings are unpredictable and not very standardisable from this point of view [46]. The best solution thus far seems to be, when possible, to avoid using magnetometers altogether, "settling" for a two-plane rather than a full 3D joint kinematics. In this regard, the signal-to-noise ratio characterising transverse plane kinematics could indeed be so low that differences due to a pathology or an intervention are barely appreciable or can even be misleading. Furthermore, the actual requirements of clinicians are sometimes limited to lateral plane kinematic analysis.

To conclude, wearable inertial sensors represent a good solution for estimating joint kinematics when conventional gait analysis is too expensive or, above all, when the analysis has to be carried out outside the laboratory in clinical ambulatory settings. Commercially available solutions are usually provided with software that returns the 3D sensor orientation, or even with implemented protocols for estimating 3D joint kinematics during gait. In this case, the user must be aware of the issues related to ferromagnetic disturbances, to sensor-to-segment alignment and to the accuracy of the proprietary sensor fusion algorithm when estimating the 3D sensor orientation. This latter issue can be assessed with ad hoc spot checks performed prior to data collection [47].
Conflicts of Interest: The author declares no conflict of interest.
Acknowledgments: No funding was received for the preparation of this paper.
Supplementary Materials: none
References
1. Cappozzo A, della Croce U, Leardini A, Chiari L. Human movement analysis using stereophotogrammetry. Part 1: theoretical background. Gait Posture 2005;21:186-96.
2. Cimolin V, Galli M. Summary measures for clinical gait analysis: a literature review. Gait Posture 2014;39:1005-10.
3. Saunders JB, Inman VT, Eberhart HD. The major determinants in normal and pathological gait. J Bone Joint Surg Am 1953;35:543-58.
4. Ortega JD, Farley CT. Minimizing center of mass vertical movement increases metabolic cost in walking. J Appl Physiol 2005;99:2099-107.
5. Wren TA, Gorton GE, Ounpuu S, Tucker CA. Efficacy of clinical gait analysis: A systematic review. Gait Posture 2011;34:149-53.
6. Whittle MW. Clinical gait analysis: A review. Hum Mov Sci 1996;15:369-87.
7. Sutherland DH. The evolution of clinical gait analysis. Part II kinematics. Gait Posture 2002;16:159-79.
8. Burgess-Limerick R, Abernethy B, Neal RJ. Relative phase quantifies interjoint coordination. J Biomech 1993;26:91-4.
9. Drewes LK, McKeon PO, Paolini G, Riley P, Kerrigan DC, et al. Altered ankle kinematics and shank-rearfoot coupling in those with chronic ankle instability. J Sport Rehabil 2009;18:375-88.
10. Chiu SL, Lu TW, Chou LS. Altered inter-joint coordination during walking in patients with total hip arthroplasty. Gait Posture 2010;32:656-60.
11. Winogrodzka A, Wagenaar RC, Booij J, Wolters EC. Rigidity and bradykinesia reduce interlimb coordination in Parkinsonian gait. Arch Phys Med Rehabil 2005;86:183-89.
12. Kwakkel G, Wagenaar RC. Effect of duration of upper- and lower-extremity rehabilitation sessions and walking speed on recovery of interlimb coordination in hemiplegic gait. Phys Ther 2001;82:432-48.
13. Hutin E, Pradon D, Barbier F, Gracies J, Bussel MB, Roche N. Lower limb coordination patterns in hemiparetic gait: factors of knee flexion impairment. Clin Biomech 2011;26:304-11.
14. Fowler EG, Goldberg EJ. The effect of lower extremity selective voluntary motor control on interjoint coordination during gait in children with spastic diplegic cerebral palsy. Gait Posture 2009;29:102-7.
15. Woltring HJ. On optimal smoothing and derivative estimation from noisy displacement data in biomechanics. Hum Mov Sci 1985;4:229-45.
16. Cuesta-Vargas AI, Galán-Mercant A, Williams JM. The use of inertial sensors system for human motion analysis. Phys Ther Rev 2010;15:462-73.
17. Tao W, Liu T, Zheng R, Feng H. Gait analysis using wearable sensors. Sensors (Basel) 2012;12:2255-83.
18. Fong DT, Chan YY. The use of wearable inertial motion sensors in human lower limb biomechanics studies: a systematic review. Sensors (Basel) 2010;10:11556-65.
19. Sutherland DH, Hagy JL. Measurement of gait movements from motion picture film. J Bone Joint Surg Am 1972;54:787-97.
20. Morris JRW. Accelerometry - A technique for the measurement of human body movements. J Biomech 1973;6:729-36.
21. Bortz JE. A new mathematical formulation for strapdown inertial navigation. IEEE Trans Aerosp Electron Syst 1971;7:61-6.
22. McGinnis RS, Perkins NC, King K. Reconstructing free-flight angular velocity from miniaturized wireless accelerometer. J Appl Mech 2012;1:274.
23. Park S, Hong SK. Angular rate estimation using a distributed set of accelerometers. Sensors (Basel) 2011;11:10444-57.
24. Sabatini AM. Estimating three-dimensional orientation of human body parts by inertial/magnetic sensing. Sensors (Basel) 2011;11:1489-525.
25. Luczak S, Oleksiuk W, Bodnicki M. Sensing tilt with MEMS accelerometers. IEEE Sensors J 2006;6:1669-75.
26. Woodman OJ. An introduction to inertial navigation. Technical Report UCAM-CL-TR-696: University of Cambridge 2007.
27. Roetenberg D, Luinge HJ, Baten CT, Veltink PH. Compensation of magnetic disturbances improves inertial and magnetic sensing of human body segment orientation. IEEE Trans Neural Syst Rehabil Eng 2005;13:395-405.
28. Willemsen AT, van Alste JA, Boom HB. Real-time gait assessment utilizing a new way of accelerometry. J Biomech 1990;23:859-63.
29. Dejnabadi H, Jolles BM, Aminian K. A new approach to accurate measurement of uniaxial joint angles based on a combination of accelerometers and gyroscopes. IEEE Trans Biomed Eng 2005;52:1478-84.
30. Djuric-Jovicic MD, Jovicic NS, Popovic DB. Kinematics of gait: new method for angle estimation based on accelerometers. Sensors (Basel) 2011;11:10571-85.
31. Seel T, Raisch J, Schauer T. IMU-based joint angle measurement for gait analysis. Sensors (Basel) 2014;14:6891-909.
32. Picerno P, Cereatti A, Cappozzo A. Joint kinematics estimate using wearable inertial and magnetic sensing modules. Gait Posture 2008;28:588-95.
33. O'Donovan KJ, Kamnik R, O'Keeffe DT, Lyons GM. An inertial and magnetic sensor based technique for joint angle measurement. J Biomech 2007;40:2604-11.
34. Favre J, Aissaoui R, Jolles BM, de Guise JA, Aminian K. Functional calibration procedure for 3D knee joint angle description using inertial sensors. J Biomech 2009;42:2330-5.
35. Cutti AG, Ferrari A, Garofalo P, Raggi M, Cappello A, Ferrari A. 'Outwalk': a protocol for clinical gait analysis based on inertial and magnetic sensors. Med Biol Eng Comput 2010;48:17-25.
36. Tong K, Granat M. A practical gait analysis system using gyroscopes. Med Eng Phys 1999;21:87-94.
37. Cooper G, Sheret I, McMillian L, Siliverdis K, Sha N, Hodgins D, et al. Inertial sensor-based knee flexion/extension angle estimation. J Biomech 2009;42:2678-85.
38. Findlow A, Goulermas JS, Nester C, Howard D, Kenney LPJ. Predicting lower limb joint kinematics using wearable motion sensors. Gait Posture 2008;28:120-6.
39. Takeda R, Tadano S, Todoh M, Morikawa M, Nakayasu M, Yoshinari S. Gait analysis using gravitational acceleration measured by wearable sensors. J Biomech 2009;42:223-33.
40. Ferrari A, Cutti AG, Garofalo P, Raggi M, Heijboer M, Davalli A. First in vivo assessment of "Outwalk": a novel protocol for clinical gait analysis based on inertial and magnetic sensors. Med Biol Eng Comput 2010;48:1-15.
41. Trojaniello D, Cereatti A, Pelosin E, Avanzino L, Mirelman A, et al. Estimation of step-by-step spatio-temporal parameters of normal and impaired gait using shank-mounted magneto-inertial sensors: application to elderly, hemiparetic, parkinsonian and choreic gait. J Neuroeng Rehabil 2014;11:152.
42. Roetenberg D, Baten CT, Veltink PH. Estimating body segment orientation by applying inertial and magnetic sensing near ferromagnetic materials. IEEE Trans Neural Syst Rehabil Eng 2007;15:469-71.
43. Yadav N, Bleakley C. Accurate orientation estimation using AHRS under conditions of magnetic distortion. Sensors (Basel) 2014;24:20008-24.
44. de Vries WH, Veeger HE, Baten CT, van der Helm FC. Magnetic distortion in motion labs, implications for validating inertial magnetic sensors. Gait Posture 2009;29:535-41.
45. Picerno P, Cereatti A, Cappozzo A. A spot check for assessing static orientation consistency of inertial and magnetic sensing units. Gait Posture 2011;33:373-8.
FIGURES

CAPTIONS TO FIGURES

Figure 1. Outline of the methodological approaches used for the estimate of 2D (four approaches) and 3D (one approach) joint angular kinematics.
TABLES

Table 1. Main characteristics of the methodological approaches, arranged in chronological order, proposed for the estimate of joint angular kinematics ("accel" stands for "accelerometer"; "mag-aided" stands for "magnetometer-aided"; "gyro" stands for "gyroscope").

STUDY | WHAT | DEVICES/SEGMENT | HOW | DRIFT CORRECTION | ALIGNMENT
Willemsen 1990 | knee flex-ext | 2 (biaxial accel) | by comparing the equivalent accelerations of the proximal and distal body segments at the connecting hinge joint; joint's acceleration from the rigid body angular motion equation | not required | manual
Tong 1999 | knee flex-ext | 1 (uniaxial gyro) | difference between the planar orientations of the two adjacent body segments; segment's orientation computed by numerical integration of the angular velocity | kinematic reset | not necessary
Dejnabadi 2005 | knee flex-ext | 1 (biaxial accel + monoaxial gyro) | as Willemsen [28] | not required | picture
O'Donovan 2007 | 3D ankle kinematics | 1 (mag-aided IMU) | relative orientation between the proximal and distal segment's 3D frames | sensor fusion | functional
Findlow 2008 | hip, knee, ankle flex-ext | 1 (IMU) | neural network-based prediction from measured segmental linear accelerations and angular velocities | not required | not necessary
Picerno 2008 | hip, knee, ankle 3D kinematics | 1 (mag-aided IMU) | relative orientation between the proximal and distal segment's 3D frames | sensor fusion | anatomical
Favre 2009 | 3D knee kinematics | 1 (IMU) | relative orientation between the proximal and distal segment's 3D frames | sensor fusion | functional
Takeda 2009 | knee flex-ext & abd-add | 1 (IMU) | as Dejnabadi [29] | not required | functional
Cooper 2009 | knee flex-ext | 1 (IMU) | as Tong [36] | sensor fusion + joint constraints | not necessary
Cutti 2010 | hip, knee, ankle 3D kinematics | 1 (mag-aided IMU) | relative orientation between the proximal and distal segment's 3D frames | sensor fusion | functional
Djuric-Jovicic 2011 | knee flex-ext | 2 (biaxial accel) | as Tong [36]; segment's orientation computed by double numerical integration of the angular acceleration determined from the rigid body angular motion equation | high-pass filtering | not necessary
Seel 2014 | knee flex-ext | 1 (IMU) | knee flexion-extension angles computed according to [29] and [36] | weighted average of the two estimates | functional