Copyright © IFAC Nonlinear Control Systems, Stuttgart, Germany, 2004
NONLINEAR SYSTEM IDENTIFICATION OF MUSCLE CONTRACTIONS INDUCED BY REPETITIVE PERIPHERAL MAGNETIC STIMULATION ¹

Bernhard Angerer *   Dierk Schröder *   Albrecht Struppler **

* Institute for Electrical Drive Systems, Technische Universität München, Germany, www.eat.ei.tum.de
** Sensorimotor Research Group, Klinikum rechts der Isar, München, Germany, www.netstim.de

¹ Supported by the "Deutsche Forschungsgemeinschaft" (German Research Foundation)
Abstract: Repetitive peripheral magnetic stimulation (RPMS) is an innovative approach in the treatment of central paresis, e.g. after a stroke, by inducing muscle contractions and relaxations. The therapeutic effect can be increased by a closed loop control that induces coordinated movements of the forearm and the (index) finger. Therefore a nonlinear system identification based on the Volterra theory, the general regression neural network and the Hammerstein model is used to obtain an appropriate model of the muscle contractions induced by RPMS. Copyright © 2004 IFAC

Keywords: Biomedical application, Identification algorithms, Magnetic stimulation, Neural network, Radial base function networks, Volterra series
1. INTRODUCTION - THERAPEUTIC BACKGROUND

In central paresis, e.g. after a stroke, a paresis of the arm and/or hand reduces the quality of life dramatically, while a paresis of the leg can be compensated more easily. Studies on large clinical cohorts using standard therapeutic methods showed that approximately 45 % of patients with completed stroke have persistent hemiparesis [Gresham and Stanson 1998].

Cortical reorganization probably forms the basis of relearning lost motor functions. Hereby the integrity of both the executive motor structures and the afferent sensory loop is important for motor recovery after a stroke. In central paresis, morphological and functional investigations revealed that even the mature brain is capable of considerable, partly also structural, modifications (reorganization, neuroplasticity) [Weiller and Rijntjes 1999].

In order to activate a beneficial reorganization process in central paresis, the lost (reduced) proprioceptive input should be compensated. Currently, physiotherapy aims to achieve such a compensation through externally applied passive movements. When the lost movements are induced by muscle stimulation, the associated proprioceptive input is much higher and corresponds more closely to the lost voluntary action patterns.

In this context, functional electrical stimulation (fES) is a well-known method to induce movements by muscle stimulation (e.g. [Gollee et al. 2001]). However, fES activates not only somatosensory nerve fibers but also cutaneous receptors. This causes pain and spasticity through skin irritation. Therefore the usability of fES for therapeutic purposes is strongly limited, although fES is useful in the case of paraplegia (review in [Agarwal et al. 2003]).

As a new, deeper penetrating, focused and painless stimulation method, repetitive peripheral magnetic stimulation (RPMS) is used. Hereby the muscle contractions are elicited by depolarization of the terminal motor branches. The therapeutic concept of RPMS is the activation of reorganization processes in the central nervous system (CNS) by induction of a proprioceptive input to the CNS that corresponds physiologically to the lost input during active movements. In clinical experimental studies (references in [Struppler et al. 2004]) on spasticity, cognitive functions, cerebral activation, a postural motor performance component and disturbed goal-directed motor performances, it could be shown that the sensorimotor deficits due to brain lesions can be remarkably improved by RPMS. Together these findings lead to the conclusion that the proprioceptive inflow evoked by RPMS modifies control systems at the spinal and the cortical level. This is essential for the therapeutic effect of RPMS.

To optimize the proprioceptive inflow, it is necessary to induce coordinated movements by stimulation of multiple muscle groups. Hence a nonlinear closed loop control of the position of the finger tips, as depicted in fig. 1, is the overall target [Havel and Struppler 2001].

Fig. 1. Nonlinear closed loop control of the position of the index finger tip - therapeutic RPMS application

Since the force induced by RPMS depends on various factors like the position and orientation of the stimulation coil, an online system identification will be used for the adaptation of the nonlinear closed loop control.

In this paper a nonlinear system identification algorithm based on the Volterra theory, the general regression neural network and the Hammerstein model is used to generate a model of the force at the forearm induced by RPMS of the biceps muscle.

1.1 RPMS Application

To induce movements by RPMS, the magnetic field impulses are applied to the terminal motor branches of the stimulated muscle as depicted in fig. 2.

Fig. 2. Principal RPMS application

The applied field impulses are sinusoidal half-waves with a fixed duration of 100 µs and a variable amplitude called stimulation intensity. The maximum stimulation intensity of 100 % corresponds to a magnetic flux density of 2.0 T and a current at the stimulation coil of approx. 3 kA. These impulses are repeated at a frequency of 20 Hz. The field impulses are generated by a self-built stimulation device.
2. NONLINEAR SYSTEM IDENTIFICATION

In this paper a new approach for the identification of a Hammerstein model based on the Volterra theory is presented. Conventional methods use a polynomial description of the static nonlinear function, but polynomials are not suited to approximate static nonlinear functions with a saturation characteristic (e.g. the description of isometric muscle contractions). Therefore a neural network is used instead of the polynomial approximation of the nonlinear static function. To reduce the number of parameters of the Volterra kernels, orthonormal basis functions (OBFs) are used in the presented approach.

2.1 Hammerstein Model

The Hammerstein model consists of a nonlinear static function followed by a linear time invariant (LTI) transfer function as displayed in fig. 3.

Fig. 3. Structure of the Hammerstein model

It is assumed that the nonlinear static function can be described by a polynomial v = NL(u) = a_0 u^0 + ... + a_q u^q of qth order, while the LTI transfer function is characterized by the convolution y[k] = \sum_{i=0}^{\infty} h[i] v[k-i] of the impulse response. By inserting the polynomial into the convolution, a mathematical description of the Hammerstein model is obtained:

y[k] = a_0 \sum_{i=0}^{\infty} h[i]\, u^0[k-i] + \ldots + a_q \sum_{i=0}^{\infty} h[i]\, u^q[k-i]    (1)
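Eq. (1) can be checked numerically. The following minimal Python sketch evaluates the Hammerstein structure as a static polynomial followed by an FIR convolution; the coefficients a and the impulse response h are illustrative assumptions, not values taken from the paper.

```python
import numpy as np

# Minimal sketch of the Hammerstein structure of eq. (1): a static polynomial
# v = a_0 u^0 + ... + a_q u^q followed by a convolution with a truncated
# impulse response h[i]. All numerical values are illustrative assumptions.
a = np.array([0.0, 1.2, -0.4])                   # polynomial coefficients a_0, a_1, a_2
h = np.array([0.0, 0.3, 0.25, 0.2, 0.15, 0.1])   # truncated impulse response h[0..m]

def hammerstein(u, a, h):
    """Static polynomial nonlinearity followed by convolution, cf. eq. (1)."""
    v = np.polyval(a[::-1], u)            # v[k] = sum_l a_l * u[k]**l
    return np.convolve(v, h)[:len(u)]     # y[k] = sum_i h[i] * v[k-i]

u = np.random.default_rng(0).uniform(0.0, 1.0, 200)   # stimulation intensity signal
y = hammerstein(u, a, h)
```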
The general discrete Volterra equation [Schetzen 1980]

y[k] = c + \sum_{i_1=0}^{\infty} g[i_1]\, u[k-i_1] + \sum_{i_1=0}^{\infty} \sum_{i_2=0}^{\infty} g[i_1, i_2]\, u[k-i_1]\, u[k-i_2] + \ldots + \sum_{i_1=0}^{\infty} \cdots \sum_{i_q=0}^{\infty} g[i_1, \ldots, i_q]\, u[k-i_1] \cdots u[k-i_q]    (2)

has to be taken into account for deriving the mathematical description which has to be used for the identification of a Hammerstein model using OBFs.
In eq. (2), g[i] ∈ ℝ are the elements of the Volterra kernels and c ∈ ℝ is the steady state value. The number of multiplied input variables u[k-i] corresponding to each Volterra kernel is characteristic for its order.

If eq. (1) and eq. (2) are compared with each other, it becomes apparent that not all elements of the Volterra kernels have to be used to describe the Hammerstein model. For a second order Volterra kernel, for example, it can be stated that

g[i_1, i_2] = a_2\, h[i_1] \;\; \forall\ i_1 = i_2 \qquad \text{and} \qquad g[i_1, i_2] = 0 \;\; \forall\ i_1 \neq i_2

Hence eq. (2) can be simplified to

y[k] = c + \sum_{l=1}^{q} \sum_{i=0}^{\infty} g_l[i]\, u^l[k-i]    (3)

where c = a_0 and g_l[i] = g[i_1, \ldots, i_l] = a_l\, h[i] with i_1 = \ldots = i_l = i.

Since the impulse response of a stable system will decay, the upper limit of the sums can be substituted by the length of the truncated impulse response m ∈ ℕ⁺. If only systems are allowed where the input signal does not directly influence the output, the lower limit of the sums is i = 1 instead of i = 0. These simplifications lead to the discrete Volterra type equation for a Hammerstein model given by [Hofmann et al. 2002]:

\hat{y}[k] = c + \sum_{l=1}^{q} \sum_{i=1}^{m} \hat{g}_l[i]\, u^l[k-i]    (4)

Due to the truncated impulse response, an approximation error remains in comparison to eq. (3). Hence the output of the Hammerstein model ŷ[k] and the elements of the Volterra kernel ĝ_l[i] are marked as estimated.
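The relation between eq. (1) and eq. (4) can be illustrated with a short sketch: choosing g_l[i] = a_l h[i] (with c = a_0 = 0 and h[0] = 0, i.e. no direct feedthrough) the double sum of eq. (4) reproduces the Hammerstein output of eq. (1). The values reuse the illustrative assumptions of the previous sketch.

```python
import numpy as np

# Sketch of the Volterra-type description of eq. (4) for a Hammerstein system.
# With g_l[i] = a_l * h[i], c = a_0 = 0 and h[0] = 0 the double sum equals the
# polynomial-plus-convolution form of eq. (1). Values are illustrative.
a = np.array([0.0, 1.2, -0.4])
h = np.array([0.0, 0.3, 0.25, 0.2, 0.15, 0.1])
g = np.outer(a[1:], h[1:])                # g[l-1, i-1] = a_l * h[i]

def volterra_hammerstein(u, g):
    """y[k] = sum_{l=1}^{q} sum_{i=1}^{m} g_l[i] * u^l[k-i], cf. eq. (4)."""
    y = np.zeros(len(u))
    for k in range(len(u)):
        for i in range(1, g.shape[1] + 1):
            if k - i >= 0:
                y[k] += sum(g[l - 1, i - 1] * u[k - i] ** l
                            for l in range(1, g.shape[0] + 1))
    return y
```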
Since polynomials are not well-suited to approximate static nonlinear functions with a saturation characteristic, a general regression neural network (GRNN) [Schröder et al. 2001] is used instead of the polynomial approximation of the nonlinear static function:

v = \hat{NL}(u) = \hat{\theta}_{NL}^T A(u) = \sum_{l=1}^{p} \hat{\theta}_{NL,l}\, A_l(u)    (5)

θ̂_NL,l ∈ ℝ are the unknown network weights of the GRNN and A_l(u) ∈ ℝ are the normalized activation functions:

A_l(u) = \frac{C_l}{\sum_{j=1}^{p} C_j} \qquad \text{with} \qquad C_l = \exp\!\left(-\frac{(u - \chi_l)^2}{2\sigma^2}\right)    (6)

χ_l ∈ ℝ are the centers of the activation functions and σ ∈ ℝ⁺ is a smoothing parameter for the Gaussian radial basis functions.

By inserting eq. (5) into the convolution y[k], the mathematical description of the Hammerstein model using the GRNN is given by:

y[k] = \sum_{i=1}^{m} \sum_{l=1}^{p} h[i]\, \hat{\theta}_{NL,l}\, A_l(u[k-i])    (7)

At first sight eq. (7) is completely different from eq. (4). In comparison to the approach using the polynomial, the input variable u[k-i] is now the argument of the normalized activation functions A_l(u[k-i]). If these normalized activation functions are considered as input variables, it is possible to find an analogy to the discrete Volterra equation by assuming

\hat{g}_l[i] = h[i]\, \hat{\theta}_{NL,l}, \qquad q = p \quad \text{and} \quad c = 0    (8)

Corresponding to eq. (4), the mathematical description of the Hammerstein model is now given by [Treichl et al. 2002]:

\hat{y}[k] = \sum_{l=1}^{p} \sum_{i=1}^{m} \hat{g}_l[i]\, A_l(u[k-i])    (9)

With the definition of the parameter vector θ̂ ∈ ℝ^{m·p}

\hat{\theta}^T = \left[\hat{g}_1[1] \ldots \hat{g}_1[m] \;\ldots\; \hat{g}_p[1] \ldots \hat{g}_p[m]\right]    (10)

and the measurement vector x[k] ∈ ℝ^{m·p}

x^T[k] = \left[A_1^T[k] \;\ldots\; A_p^T[k]\right]    (11)

with the activation vectors A_l[k] ∈ ℝ^m

A_l^T[k] = \left[A_l(u[k-1]) \;\ldots\; A_l(u[k-m])\right]    (12)

eq. (9) can be rewritten in vector notation

\hat{y}[k] = \hat{\theta}^T x[k]    (13)
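A brief Python sketch may clarify the GRNN static nonlinearity of eqs. (5) and (6). The base point number p = 21 and the smoothing parameter σ = 0.03 follow tab. 1 below; the example weights theta_nl are an assumed saturating characteristic, not an identification result.

```python
import numpy as np

# Sketch of the GRNN static nonlinearity of eqs. (5) and (6): normalized
# Gaussian activation functions with centers chi_l and smoothing parameter
# sigma. p and sigma follow tab. 1; the weights below are assumed examples.
p, sigma = 21, 0.03
chi = np.linspace(0.0, 1.0, p)            # centers of the activation functions

def activations(u):
    """Normalized activation functions A_l(u), cf. eq. (6)."""
    C = np.exp(-(u - chi) ** 2 / (2.0 * sigma ** 2))
    return C / C.sum()

def grnn(u, theta_nl):
    """Static nonlinearity v = sum_l theta_NL,l * A_l(u), cf. eq. (5)."""
    return theta_nl @ activations(u)

theta_nl = np.tanh(3.0 * chi)             # assumed weights with saturation shape
v = grnn(0.4, theta_nl)
```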
Apart from the simplifications of the general discrete Volterra equation for the Hammerstein model, the number (m · p) of unknown parameters (Volterra kernel elements) in θ̂ is still too large for practical use. Therefore a weighted superposition of OBFs is used to approximate the truncated impulse responses in the Volterra kernel. By this superposition the number of unknown parameters is reduced to (m_r · p) with m_r ∈ ℕ⁺ as the number of OBFs. In order to achieve a parameter reduction, the number m_r of OBFs must be smaller than the length m of the truncated impulse response, as can be seen in fig. 4.

The OBFs used in this paper are distorted sinusoidal functions ² since weakly damped as well as strongly damped systems can be described with this type of functions [Killich 1991]. The distorted sinusoidal functions R ∈ ℝ^{m_r × m} are defined by

r_{ji} = \sin\!\left(j \pi \left(1 - \exp\!\left(-\frac{i - 0.5}{\zeta}\right)\right)\right) \qquad \forall\ j = 1 \ldots m_r \quad \text{and} \quad \forall\ i = 1 \ldots m    (14)

² Other OBFs for the description of a truncated impulse response exist, e.g. Kautz functions [Wahlberg 1994].

and orthonormalized by the Cholesky decomposition C C^T = R R^T according to R̃ = C^{-1} R with R̃ ∈ ℝ^{m_r × m}.

With the form factor ζ ∈ ℝ and m_r, the OBFs can be adjusted to the dynamics of the system. In fig. 4, for example, three distorted sinusoidal functions R and the OBFs R̃ are displayed.

Fig. 4. Distorted sinusoidal functions with m = 20 and m_r = 3 (left) and OBFs (small dots, right) reconstructing the truncated impulse response (big dots, right) by a weighted superposition
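The construction of the OBFs can be sketched as follows, using the settings of tab. 1 (m = 40, m_r = 8, ζ = 10). Any constant scaling of the rows of R is irrelevant here, since the Cholesky step rescales them.

```python
import numpy as np

# Sketch of the distorted sinusoidal functions of eq. (14) and their row-wise
# orthonormalization via the Cholesky factor of R R^T. Values follow tab. 1.
m, m_r, zeta = 40, 8, 10.0
i = np.arange(1, m + 1)
j = np.arange(1, m_r + 1)[:, None]
R = np.sin(j * np.pi * (1.0 - np.exp(-(i - 0.5) / zeta)))  # R in R^(m_r x m)

C = np.linalg.cholesky(R @ R.T)        # C C^T = R R^T
R_tilde = np.linalg.solve(C, R)        # OBFs: R_tilde R_tilde^T = identity

assert np.allclose(R_tilde @ R_tilde.T, np.eye(m_r), atol=1e-6)
```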
Apart from an additional approximation error, which is minimized by the learning algorithm (see section 2.2), the parameter vector θ̂ can be rewritten using the OBFs in R̃:

\hat{\theta}^T = \left[\,[\hat{g}'_1[1] \ldots \hat{g}'_1[m_r]]\, \tilde{R} \;\ldots\; [\hat{g}'_p[1] \ldots \hat{g}'_p[m_r]]\, \tilde{R}\,\right]    (15)

Since the OBFs are known, it is not useful to include them in the unknown parameter vector. Hence a reduced parameter vector θ̂' ∈ ℝ^{m_r·p} is defined

\hat{\theta}'^T = \left[\hat{g}'_1[1] \ldots \hat{g}'_1[m_r] \;\ldots\; \hat{g}'_p[1] \ldots \hat{g}'_p[m_r]\right]    (16)

and a new measurement vector including the OBFs is given:

x'^T[k] = \left[A_1^T[k]\, \tilde{R}^T \;\ldots\; A_p^T[k]\, \tilde{R}^T\right]    (17)

Together this leads to a simplified description of the Hammerstein model with an expedient number of parameters for practical use:

\hat{y}[k] = \hat{\theta}'^T[k]\, x'[k]    (18)

2.2 System Identification

In comparison to identification methods based on difference equations, the identification algorithm based on the Volterra theory requires no model or system feedback. The output signal of the model is not used as an additional input signal for the identification algorithm, but only to calculate an error signal as depicted in fig. 5. That means that stability of the identification algorithm can be proven [Treichl et al. 2002] and that the identification algorithm is very robust against measurement noise [Hofmann et al. 2002].

In order to use eq. (18) for the purpose of system identification, an adaptation law has to be applied. For this purpose it is assumed that the system can be represented formally by

y[k] = \theta'^{*T} x'[k] + \Delta e[k]    (19)

where θ'* is the optimal parameter vector and x'[k] the measurement vector. Δe[k] is the inherent approximation error which results from the truncated impulse response and the OBFs. This error disappears with a rising number of parameters, which also means a rising number of OBFs.

The output signal of the identification algorithm ŷ[k] is defined by eq. (18) with the unknown parameter vector θ̂'. The error signal

e[k] = y[k] - \hat{y}[k]

is used to adjust the parameter vector θ̂'. As learning algorithm for online identification, the recursive least squares (RLSQ) algorithm [Ljung 1999] is well-suited, and if the system is persistently excited, the unknown parameters converge to their optimal values.
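A minimal sketch of one RLSQ update for the reduced parameter vector θ̂' of eq. (18) is given below; it follows the standard recursion described in [Ljung 1999]. The forgetting factor and the initial covariance are assumptions, not values from the paper.

```python
import numpy as np

# Minimal sketch of a recursive least squares (RLSQ) update for the reduced
# parameter vector theta' of eq. (18), y_hat[k] = theta'^T x'[k], where x'[k]
# stacks A_l^T[k] R_tilde^T for l = 1..p (eq. (17)). lam and the initial
# covariance are assumed values, not taken from the paper.
def rls_step(theta, P, x, y, lam=1.0):
    """One RLSQ step; returns updated estimate, covariance and a-priori error."""
    e = y - theta @ x                          # identification error e[k]
    k_gain = P @ x / (lam + x @ P @ x)         # gain vector
    theta = theta + k_gain * e                 # parameter update
    P = (P - np.outer(k_gain, x @ P)) / lam    # covariance update
    return theta, P, e

n = 8 * 21                                     # m_r * p parameters (cf. tab. 1)
theta, P = np.zeros(n), 1e3 * np.eye(n)        # assumed initialization
```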
2.3 Parameter Reconstruction

Since the parameter vector θ̂' contains the network weights of the GRNN as well as the truncated impulse response of the LTI transfer function, a parameter reconstruction has to be performed in order to use the static nonlinear characteristic for the design of an (adaptive) nonlinear closed loop control [Angerer et al. 2004].

For the parameter reconstruction it is assumed that the gain of the LTI transfer function is one. If this assumption is not fulfilled, the gain of the transfer function will be reassigned to the static nonlinear characteristic by assuming \sum_{i=1}^{m} \hat{h}[i] = 1 for the reconstructed impulse response. With this assumption and through eq. (8), (10), (15) and (16), the identification result for the network weights of the GRNN can be written as

\hat{\theta}_{NL,l} = \sum_{i=1}^{m} \hat{g}_l[i] \qquad \forall\ l = 1 \ldots p    (20)

and, using the OBF representation of eq. (15),

\hat{\theta}_{NL,l} = \sum_{i=1}^{m} \sum_{j=1}^{m_r} \hat{g}'_l[j]\, \tilde{r}_{ji} \qquad \forall\ l = 1 \ldots p    (21)

With this equation the identification result for the truncated impulse response of the LTI transfer function is

\hat{h}_l[i] = \frac{1}{\hat{\theta}_{NL,l}} \sum_{j=1}^{m_r} \hat{g}'_l[j]\, \tilde{r}_{ji} \qquad \forall\ l = 1 \ldots p    (22)
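The reconstruction of eqs. (20)-(22) can be sketched compactly; g_prime denotes the identified reduced parameters arranged row-wise per network weight and R_tilde the OBFs from the sketch above, both assumed to be available from the identification step.

```python
import numpy as np

# Sketch of the parameter reconstruction of eqs. (20)-(22). g_prime has shape
# (p, m_r), R_tilde has shape (m_r, m); both are assumed identification results.
def reconstruct(g_prime, R_tilde):
    g = g_prime @ R_tilde                 # g_l[i] = sum_j g'_l[j] * r~_ji
    theta_nl = g.sum(axis=1)              # eqs. (20)/(21), using sum_i h[i] = 1
    h_l = g / theta_nl[:, None]           # eq. (22): one impulse response per l
    return theta_nl, h_l
```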
Theoretically this leads to the same impulse response for all network weights θ̂_NL,l (l = 1 ... p). However, in real applications this leads to p impulse responses which are slightly different. Therefore the squared error cost functions

E_i = \sum_{l=1}^{p} \left(\hat{h}[i] - \hat{h}_l[i]\right)^2 \;\rightarrow\; \min    (23)

are used to determine the identification result ĥ[i] for the LTI transfer function. The minimization of E_i leads to

\hat{h}[i] = \frac{1}{p} \sum_{l=1}^{p} \hat{h}_l[i]    (24)

which is the average of the different impulse responses.
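As a brief illustration of eqs. (23) and (24): for each sample i the quadratic cost E_i is minimized by the mean of the p slightly different impulse responses, so the averaging step reduces to one line (h_l as returned by the reconstruction sketch above).

```python
import numpy as np

# Sketch of eq. (24): the minimizer of the quadratic cost E_i of eq. (23) is
# the mean over the p reconstructed impulse responses h_l[i].
def average_impulse_response(h_l):
    return h_l.mean(axis=0)               # h[i] = (1/p) * sum_l h_l[i]
```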
3. IDENTIFICATION RESULTS

According to the application described in the introduction, the identification algorithm based on the Volterra theory and the GRNN is used to identify the recruitment behavior and the activation dynamics of the biceps muscle stimulated by RPMS. The identification structure for this purpose is depicted in fig. 5, and the settings of the identification algorithm are summarized in tab. 1.

Fig. 5. Identification structure

Table 1. Settings of the nonlinear identification algorithm

Parameter | Description                               | Value
p         | number of GRNN base points                | 21
σ         | smoothing parameter of the GRNN           | 0.03
m         | length of the truncated impulse response  | 40
m_r       | number of OBFs                            | 8
ζ         | form factor                               | 10
T_s       | sample time                               | 0.01 s

First of all, the system identification is performed in a simulation environment with a randomized stimulation intensity and a PT-behavior (lag element) as LTI transfer function. The estimated output ŷ[k] and the identification error e[k] are depicted in fig. 6. The identification result for the nonlinear static characteristic N̂L can be seen together with the desired characteristic NL and the network weights of the GRNN θ̂_NL in fig. 7. The parameters θ̂' during the identification process are depicted in fig. 8. As can be seen in fig. 6, after 50 seconds the identification error tends to its minimum, while the parameters (fig. 8) converge to their optimal values.

Fig. 6. Estimated output ŷ[k] and identification error e[k]; (simulation)

Fig. 7. Identification result of the static nonlinear characteristic; NL: desired, N̂L: identification result, θ̂_NL: GRNN weights; (simulation)

Fig. 8. Identified parameters θ̂'; (simulation)
After the simulation, the algorithm has been tested in an experimental environment. Eight healthy subjects were stimulated over 160 seconds with a randomized stimulation intensity, and the induced isometric force at the forearm has been measured. In order to evaluate the identification results, the recruitment behavior has been measured in steady state experiments. The results of the system identification and the steady state experiments are depicted in fig. 9, while the identification result for the LTI transfer function can be seen in fig. 10. In this figure also an assumed transfer function is depicted. This function is the result of prior experiments using single stimulations in order to determine the impulse response.

Fig. 9. Recruitment behavior of the biceps muscle; NL: measured, N̂L: identification result, θ̂_NL: GRNN weights
The parameters θ̂' during the identification process are depicted in fig. 11. The estimated force and the error signal during the identification process are depicted in fig. 12. As can be seen, the error signal reaches a minimum (apart from measurement noise).

Fig. 10. Result for the LTI transfer function ĥ[i] and assumed result h[i] known from prior work

Fig. 11. Identified parameters θ̂'

Fig. 12. Estimated output ŷ[k] and identification error e[k] during the identification process
4. CONCLUSIONS AND FUTURE WORK

The experiments have shown that the presented nonlinear system identification algorithm works very well for all tested subjects. The ripple in the identification result of the recruitment behavior can be traced back to muscle fatigue and voluntary activation, since the tested subjects are not completely relaxed. This is confirmed by the simulation of muscle fatigue, which also leads to a ripple in the identification result of the recruitment behavior, as can be seen in fig. 13.

Fig. 13. Simulation of muscle fatigue; NL: desired, N̂L: identification result, θ̂_NL: GRNN weights

Since the presented identification algorithm runs under isometric conditions, the algorithm will be enhanced in order to identify the recruitment behavior during movements of the forearm. Hence it will be possible to implement a nonlinear position control of the forearm using the identification results.
REFERENCES

Agarwal, S., R.J. Triolo, R. Kobetic, M. Miller, C. Bieri, S. Kukke, L. Rohde and J.A. Davis Jr. (2003). Long-term user perceptions of an implanted neuroprosthesis for exercise, standing, and transfers after spinal cord injury. Journal of Rehabilitation Research and Development 40(3), 241-252.

Angerer, B., Ch. Hintz and D. Schröder (2004). Online identification of a nonlinear mechatronic system. Control Engineering Practice (in press).

Gollee, H., D.J. Murray-Smith and J.C. Jarvis (2001). A nonlinear approach to modeling of electrically stimulated skeletal muscle. IEEE Transactions on Biomedical Engineering 48(4), 406-415.

Gresham, G.E. and W.B. Stanson (1998). Stroke - Pathophysiology, Diagnosis and Management. Chap. Section V: Stroke Therapy: Rehabilitation of the Stroke Survivor, pp. 1389-1401. 3rd ed. Churchill Livingstone. New York, New York (USA).

Havel, P. and A. Struppler (2001). First steps in functional magnetic stimulation (FMS) - movements of forearm and fingers induced by closed-loop controlled FMS. Acta Physiologica & Pharmacologica Bulgarica 26(3), 185-188.

Hofmann, S., Th. Treichl and D. Schröder (2002). Identification of nonlinear dynamic MISO systems on fundamental basics of the Volterra theory. In: Proceedings of 39th Intelligent Motion Conference (IMC 2002). PCIM-Europe. Nürnberg, Germany. pp. 231-236.

Killich, A. (1991). Prozeßidentifikation durch Gewichtsfolgenschätzung. Vol. 268 of Fortschritt-Berichte VDI, Reihe 8. VDI-Verlag. Düsseldorf, Germany.

Ljung, L. (1999). System Identification - Theory for the User. Prentice Hall Information and System Sciences Series. 2nd ed. Prentice Hall PTR. Upper Saddle River, New Jersey (USA).

Schetzen, M. (1980). The Volterra and Wiener Theories of Nonlinear Systems. John Wiley & Sons. Indianapolis, Indiana (USA).

Schröder, D., Ch. Hintz and M. Rau (2001). Intelligent modeling, observation, and control for nonlinear systems. IEEE/ASME Transactions on Mechatronics 6(2), 122-131.

Struppler, A., B. Angerer, Ch. Gündisch and P. Havel (2004). Modulatory effect of repetitive peripheral magnetic stimulation (RPMS) on the skeletal muscle tone (stabilization of the elbow joint) on healthy subjects. Experimental Brain Research 157(1), 59-66.

Treichl, Th., S. Hofmann and D. Schröder (2002). Identification of nonlinear dynamic systems with multiple inputs and single output using discrete-time Volterra type equations. In: Proceedings of 15th International Symposium on Mathematical Theory of Networks and Systems (MTNS 2002). Notre Dame, Indiana (USA). pp. TUM6: 1-13.

Wahlberg, B. (1994). System identification using Kautz models. IEEE Transactions on Automatic Control 39(6), 1276-1282.

Weiller, C. and M. Rijntjes (1999). Learning, plasticity, and recovery in the central nervous system. Experimental Brain Research 128(1/2), 134-138.