Copyright © IFAC Identification and System Parameter Estimation 1985, York, UK, 1985

A NEW RECURSIVE IDENTIFICATION TECHNIQUE

F. Z. Unton

Systems Research Institute, Polish Academy of Sciences, 01-447 Warszawa, ul. Newelska 6, Poland

Abstract. A new way of constructing recursive identification algorithms is proposed in order to improve the identification accuracy. The estimates obtained are formulated to be closer to their off-line counterparts. The general approach proposed is worked out for the Generalized Least Squares Identification.

Keywords. Identification; linear systems; on-line operations.
INTRODUCTION

Usually, to obtain recursive algorithms, the sequence of off-line criterion functions is approximated by a sequence of functions whose minima are computed recursively. Most often, the sum of squares together with the Recursive Least Squares (RLS) algorithm is used for this purpose.

It has been observed experimentally that on-line algorithms are generally not as accurate as off-line methods. Thus, the aim of the paper is to propose another technique, using a better approximation to the off-line criterion functions than RLS.

The technique proposed consists of the following steps:

1) Construct approximating functions whose minima are solutions to a linear equation sequence. The equation parameters are required to be computed recursively.

2) Use the algorithm proposed by Unton (1984) to estimate the above minima in order to reduce the computational burden.

This technique allows a greater variety of approximations and can provide better results than RLS. The crux of the matter lies in the fact that there is no way to say how to choose an appropriate approximating sequence in the general case.

The results obtained in the paper are specific to the Generalized Least Squares (GLS) model structure. The paper is outlined as follows. In the next section an interpretation of the basic RGLS method and an acceleration technique are given. In Section 3 the choice of the linear equation sequence (according to step 1) is proposed. The algorithm for estimating the solutions of the linear equation sequence is described in Section 4. Remarks on the comparison of computational burdens and on asymptotic convergence are given in Sections 5 and 6 respectively. In Section 7 simulation results are presented.

GENERALIZED LEAST SQUARES IDENTIFICATION

Let us consider the sum of squares function of the form

    V(θ;t) = Σ_{k=1}^{t} (y(k) − ψᵀ(k−1)θ)²                                  (1)

where ψ(k−1) and θ are vectors and y(k) is a scalar. It can be minimized recursively by means of the formulae (for λ(t) = 1)

    θ̂(t) = θ̂(t−1) + P(t)ψ(t−1)(y(t) − ψᵀ(t−1)θ̂(t−1))                        (2a)

    P(t) = [ P(t−1) − P(t−1)ψ(t−1)ψᵀ(t−1)P(t−1) / (λ(t) + ψᵀ(t−1)P(t−1)ψ(t−1)) ] / λ(t)   (2b)

The choice λ(t) = λ < 1 leads to the modified algorithms with exponential forgetting memory. Another possibility is to use the following scheme for defining λ(t):

    λ(t) = λ₀λ(t−1) + (1 − λ₀)                                               (3)

Throughout this paper some notation will be used. The polynomials associated with the true parameter vector

    θ* = [θ₁*ᵀ, θ₂*ᵀ]ᵀ
    θ₁* = [a₁*, ..., a_na*, b₁*, ..., b_nb*]ᵀ
    θ₂* = [c₁*, ..., c_nc*]ᵀ

will be denoted by A*(q⁻¹), B*(q⁻¹), C*(q⁻¹):

    A*(q⁻¹) = 1 + a₁*q⁻¹ + ... + a_na* q^(−na)
    B*(q⁻¹) = b₁*q⁻¹ + ... + b_nb* q^(−nb)
    C*(q⁻¹) = 1 + c₁*q⁻¹ + ... + c_nc* q^(−nc)

(q⁻¹ is the delay operator: q⁻¹y(t) = y(t−1)).
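For concreteness, a minimal numpy sketch of the recursion (2) with the forgetting scheme (3) follows; the class name, the initialization P(0) = 10⁶·I and θ̂(0) = 0 are illustrative choices, not taken from the paper.

    import numpy as np

    class RLS:
        """Recursive least squares, eq. (2), with forgetting factor, eq. (3)."""

        def __init__(self, n, lam0=0.99):
            self.theta = np.zeros(n)        # parameter estimate theta(t)
            self.P = 1e6 * np.eye(n)        # P(t); large P(0) = diffuse prior
            self.lam = self.lam0 = lam0     # lambda(0) = lambda_0

        def update(self, psi, y):
            # eq. (3): lambda(t) = lam0*lambda(t-1) + (1 - lam0), tends to 1
            self.lam = self.lam0 * self.lam + (1.0 - self.lam0)
            Ppsi = self.P @ psi
            denom = self.lam + psi @ Ppsi
            # eq. (2b)
            self.P = (self.P - np.outer(Ppsi, Ppsi) / denom) / self.lam
            # eq. (2a)
            self.theta = self.theta + self.P @ psi * (y - psi @ self.theta)
            return self.theta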
Moreover, the following notation will be used:

    θ(t) = [θ₁ᵀ(t), θ₂ᵀ(t)]ᵀ,  A(q⁻¹,t), B(q⁻¹,t), C(q⁻¹,t)

is the vector of estimates and the associated polynomials obtained at instant t; θ = [θ₁ᵀ, θ₂ᵀ]ᵀ and A(q⁻¹), B(q⁻¹), C(q⁻¹) denote any vector and its associated polynomials.

Consider now the Generalized Least Squares Identification. Let the asymptotically stable system be given by

    A*(q⁻¹)y(t) = B*(q⁻¹)u(t) + e(t)/C*(q⁻¹)                                 (4)

where y(t) is a scalar output, u(t) a scalar input and e(t) a white noise with zero mean, independent of u(t). The identification problem is to estimate the vector θ* using the given data u(1), y(1), u(2), y(2), ...

The off-line GLS algorithm was introduced by Clarke (1967). It can be interpreted as the following loss function minimization (see Söderström, 1974):

    W(θ₁,θ₂;t) = Σ_{k=1}^{t} l(θ₁,θ₂;k)                                      (5a)
    l(θ₁,θ₂;k) = (A(q⁻¹)C(q⁻¹)y(k) − B(q⁻¹)C(q⁻¹)u(k))²                      (5b)

The GLS method can also be interpreted as the LS method for a model of the form

    Ā y(t) = B̄ u(t) + e(t),    deg Ā = na + nc,  deg B̄ = nb + nc

with constraints on the parameters of Ā and B̄. This approach is recommended for off-line identification (see Söderström, 1974). However, for the on-line case, a minimization without constraints is more convenient.

The identification of θ₁* and θ₂* can be obtained by alternate minimization of two functions of the same form as in (1):

    W₁¹(θ₁;t) = W(θ₁,θ₂*;t)                                                  (6a)
    W₁²(θ₂;t) = W(θ₁*,θ₂;t)                                                  (6b)

The RGLS algorithm was proposed by Hastings-James and Sage (1969). It consists in the minimization of two sum-of-squares functions combined via filtering:

    θ₁(t) = arg min_{θ₁} W₂¹(θ₁;t)                                           (7a)
    W₂¹(θ₁;t) = Σ_{k=1}^{t} l(θ₁,θ₂(k−1);k)                                  (7b)
    θ₂(t) = arg min_{θ₂} W₂²(θ₂;t)                                           (7c)
    W₂²(θ₂;t) = Σ_{k=1}^{t} l(θ₁(k),θ₂;k)                                    (7d)

In such a case, W₂¹(·;·) is an approximation of W₁¹(·;·), i.e. in the k-th term of (7b) the unknown vector θ₂* is replaced by the estimate θ₂(k−1) obtained by minimization of (7d) at time instant k−1 (similarly for W₂²(·;·) and its approximation of W₁²(·;·)).

Both functions are minimized recursively via algorithms of the form (2). However, for small t, θ₁(t) and θ₂(t) are often poor estimates of θ₁* and θ₂* respectively. It seems therefore reasonable that the influence of new θ₁(·) and θ₂(·) should be greater than the influence of old ones. This is done using the algorithm (2) with the factor λ(t) of (3). Thus, λ(t) should be used not only for the identification of time-varying parameters but for constant ones as well (see e.g. Söderström et al., 1978).

The algorithm proposed is defined as follows:

    θ₁(t) = arg min_{θ₁} W₃¹(θ₁;t)                                           (8a)
    θ₂(t) = arg min_{θ₂} W₃²(θ₂;t)                                           (8b)
    W₃¹(θ₁;t) = W(θ₁,θ₂(t−1);t)                                              (8c)
    W₃²(θ₂;t) = W(θ₁(t),θ₂;t)                                                (8d)

The difference between the functions (7b) and (8c) is as follows. In the function (7b) the k-th term (k = 1,...,t) depends on θ₂(k−1), while the function (8c) depends only on the latest estimate θ₂(t−1) of θ₂*. The difference between (7d) and (8d) is similar. Because of this, the functions W₃¹(·;·) and W₃²(·;·) of (8) are better approximations of W₁¹(·;·) and W₁²(·;·) of (6) than W₂¹(·;·) and W₂²(·;·) of (7).
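To make the construction concrete, the sketch below solves (8a) and (8b) off-line, by ordinary least squares on C-filtered data; it is a direct, non-recursive reading of the criterion (5), offered only as an illustration. The function and variable names are not from the paper, and a data record y, u together with the previous estimate theta2 is assumed to be available.

    import numpy as np

    def filt(poly, x):
        # Apply a polynomial in the delay operator q^{-1} to a signal:
        # (poly(q^{-1}) x)(n) = sum_i poly[i] * x(n - i).
        y = np.zeros_like(x)
        for i, p in enumerate(poly):
            y[i:] += p * x[:len(x) - i]
        return y

    def gls_sweep(y, u, theta2, na, nb, nc):
        """One alternate-minimization sweep (8a)-(8d), solved off-line by
        ordinary least squares for clarity; the paper solves the equivalent
        linear equations (9) recursively instead."""
        C = np.r_[1.0, theta2]                      # C(q^{-1}) from theta2(t-1)
        yF, uF = filt(C, y), filt(C, u)             # C-filtered data
        m = max(na, nb, nc)
        # (8a): A(q^{-1}) yF = B(q^{-1}) uF + residual, linear in theta1
        Phi = np.array([np.r_[[-yF[k - j] for j in range(1, na + 1)],
                              [uF[k - j] for j in range(1, nb + 1)]]
                        for k in range(m, len(y))])
        theta1 = np.linalg.lstsq(Phi, yF[m:], rcond=None)[0]
        # (8b): with w = A y - B u, fit C from w(k) = -c1 w(k-1) - ... + e(k)
        a, b = theta1[:na], theta1[na:]
        w = filt(np.r_[1.0, a], y) - filt(np.r_[0.0, b], u)
        Phi2 = np.array([[-w[k - j] for j in range(1, nc + 1)]
                         for k in range(m, len(y))])
        theta2 = np.linalg.lstsq(Phi2, w[m:], rcond=None)[0]
        return theta1, theta2

Iterating gls_sweep reproduces the off-line alternate minimization; the point of the machinery developed below is to obtain the same minima without refitting from scratch at every t.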
THE CHOICE OF LINEAR EQUATION SEQUENCE

Consider now how to minimize the introduced functions. It is clear that θ₁(t) and θ₂(t) of (8a,b) are the solutions to the linear equations

    G₁(θ₂(t−1),t)θ₁(t) = g₁(θ₂(t−1),t)                                       (9a)
    G₂(θ₁(t),t)θ₂(t) = g₂(θ₁(t),t)                                           (9b)

First, it will be shown that the matrices G₁(θ₂,t) and G₂(θ₁,t), as well as the vectors g₁(θ₂,t) and g₂(θ₁,t), can be computed recursively for any θ₁, θ₂. Introducing the notations
A₁(θ₂) — the (na+nb)×(na+nb+2nc) matrix whose first na rows contain the row [1, c₁, ..., c_nc] placed in the first na+nc columns and shifted one position to the right in each successive row, and whose last nb rows contain the same row placed analogously in the last nb+nc columns,    (10a)

    a₁(θ₂) = [c₁, ..., c_nc, 0, ..., 0]ᵀ                                     (10b)

A₂(θ₁) — the nc×(na+nb+2nc) matrix whose i-th row contains [1, a₁, ..., a_na] starting in column i of the first na+nc columns and [b₁, ..., b_nb] starting in column i+1 of the last nb+nc columns,    (10c)

    a₂(θ₁) = [a₁, ..., a_na, 0, ..., 0, b₁, ..., b_nb, 0, ..., 0]ᵀ           (10d)

    φ(t−1) = [−y(t−1), ..., −y(t−na−nc), u(t−1), ..., u(t−nb−nc)]ᵀ           (10e)

the following theorem holds.

Theorem. Let the matrix R(t) and the vector r(t) be given by

    R(t) = R(t−1) + φ(t−1)φᵀ(t−1),   R(0) = 0                                (11a)
    r(t) = r(t−1) + φ(t−1)y(t),      r(0) = 0                                (11b)

Then the parameters of the equations (9) can be computed as follows:

    G₁(θ₂,t) = A₁(θ₂)R(t)A₁ᵀ(θ₂)                                             (11c)
    g₁(θ₂,t) = A₁(θ₂)(r(t) − R(t)a₁(θ₂))                                     (11d)
    G₂(θ₁,t) = A₂(θ₁)R(t)A₂ᵀ(θ₁)                                             (11e)
    g₂(θ₁,t) = A₂(θ₁)(r(t) − R(t)a₂(θ₁))                                     (11f)

Proof. For simplicity the polynomials A(q⁻¹), B(q⁻¹) and C(q⁻¹) are assumed to be of degree 1. In that case θ₁ = [a, b]ᵀ, θ₂ = c, and l(a,b,c,t) has the form

    l(a,b,c,t) = (y(t) + (a+c)y(t−1) + ac·y(t−2) − b·u(t−1) − bc·u(t−2))²

The function W(θ₁,θ₂;t) (5) can be rewritten as

    T(x,t) = Σ_{k=1}^{t} (y(k) − φᵀ(k−1)x)²

where

    x(θ) = [x₁(θ), ..., x₄(θ)]ᵀ = [a+c, ac, b, bc]ᵀ

and φ(k−1) = [−y(k−1), −y(k−2), u(k−1), u(k−2)]ᵀ. The equation (9a) is equivalent to the equality

    0 = A·(−gradₓ T(x,t))                                                    (12)

where A = [a_ij] (i = 1,2; j = 1,...,4) is the matrix

    a_ij = ∂x_j(θ)/∂θ₁(i)

with θ₁(1) = a, θ₁(2) = b. Then A = A₁(θ₂). Moreover, the above gradient can be computed as

    −gradₓ T(x,t) = Σ_{k=1}^{t} φ(k−1)(y(k) − φᵀ(k−1)x) = r(t) − R(t)x       (13)

On the other hand,

    x(θ₁,θ₂) = A₁ᵀ(θ₂)θ₁ + a₁(θ₂)                                            (14)

where, in this case, A₁(θ₂) = [[1, c, 0, 0], [0, 0, 1, c]] and a₁(θ₂) = [c, 0, 0, 0]ᵀ. Combining (12), (13) and (14) the final equation is obtained:

    A₁(θ₂)R(t)A₁ᵀ(θ₂)θ₁ = A₁(θ₂)(r(t) − R(t)a₁(θ₂))

which proves (11c) and (11d). The proof of (11e) and (11f) is similar.
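The band structure in (10) and the assembly (11) are easy to mistranscribe, so a small numpy sketch is given here; the helper names and the ordering of x (coefficients of A·C first, then of B·C) are assumptions consistent with the degree-1 example in the proof.

    import numpy as np

    def build_A1_a1(c, na, nb):
        """A1(theta2), a1(theta2) of (10a,b); c = [c1, ..., c_nc]."""
        nc = len(c); n = na + nb + 2 * nc
        A1 = np.zeros((na + nb, n)); row = np.r_[1.0, c]
        for i in range(na):                       # d(A*C)/d(a_i): shifted [1, c]
            A1[i, i:i + nc + 1] = row
        for i in range(nb):                       # d(B*C)/d(b_i): shifted [1, c]
            A1[na + i, na + nc + i:na + nc + i + nc + 1] = row
        a1 = np.zeros(n); a1[:nc] = c             # x(theta) at theta1 = 0
        return A1, a1

    def build_A2_a2(theta1, na, nb, nc):
        """A2(theta1), a2(theta1) of (10c,d); theta1 = [a; b]."""
        a, b = theta1[:na], theta1[na:]; n = na + nb + 2 * nc
        A2 = np.zeros((nc, n)); a2 = np.zeros(n)
        for i in range(nc):
            A2[i, i:i + na + 1] = np.r_[1.0, a]               # d(A*C)/d(c_i)
            A2[i, na + nc + i + 1:na + nc + i + 1 + nb] = b   # d(B*C)/d(c_i)
        a2[:na] = a; a2[na + nc:na + nc + nb] = b             # x at theta2 = 0
        return A2, a2

    def equation_parameters(R, r, theta1, theta2, na, nb, nc):
        """G1, g1, G2, g2 of (11c)-(11f), given R(t), r(t) from (11a,b)."""
        A1, a1 = build_A1_a1(theta2, na, nb)
        A2, a2 = build_A2_a2(theta1, na, nb, nc)
        G1, g1 = A1 @ R @ A1.T, A1 @ (r - R @ a1)
        G2, g2 = A2 @ R @ A2.T, A2 @ (r - R @ a2)
        return G1, g1, G2, g2

Solving G1·θ₁ = g₁ with np.linalg.solve then yields θ₁(t) of (9a) exactly; the point of the next section is to avoid this O(N³) solve.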
Notice that the computation of the matrices (11c) and (11e), as well as the solution of the equations (9a) and (9b), requires a number of arithmetic operations per time step proportional to N³ (N = na+nb+nc); therefore any modification of the algorithm (9), (10), (11) reducing the computational time would be very useful. It will be done in the next section.

THE FINAL ALGORITHM

Consider the following problem. Let for each instant t = 1, 2, ... the following linear equation (with respect to x(t)) be defined:

    D(t)x(t) = g(t)                                                          (15)

where D(t) is an N×N matrix, and x(t) and g(t) are N-vectors. Moreover, it is assumed that

    lim_{t→∞} D(t) = D   and   lim_{t→∞} g(t) = g   w.p.1                    (16)

and that D(t) (t = 1, 2, ...) and D are nonsingular. In this way, the sequence x(t) (t = 1, 2, ...) converges (w.p.1) to the solution x of the equation

    Dx = g

The problem is to find a recursive estimator x̂(t) of x(t) requiring a number of arithmetic operations per time instant proportional to N². It can be solved as follows.

Let k(t,n), t = 1, 2, ..., denote the sequence of integers 1, 2, ..., n−1, n periodically repeated, i.e.

    {k(t,n), t = 1, 2, ...} = {1, 2, ..., n−1, n, 1, 2, ..., n−1, n, ...}
Consider the linear equation system (with respect to x)

    aᵀ(i)x = y*(i),   i = 1, ..., N

where a(i) (i = 1, 2, ..., N) are N-dimensional linearly independent vectors. It can be solved in N steps by the Kaczmarz (1937) algorithm with the modification of Westphal (1978):

    x̂(i) = x̂(i−1) + z(i)/(zᵀ(i)z(i)) · (y*(i) − aᵀ(i)x̂(i−1))

    z(i) = a(i) − Σ_{j=1}^{i−1} (zᵀ(j)a(i))/(zᵀ(j)z(j)) · z(j)   for i > 1,   z(1) = a(1)

The method proposed consists in the application of the above algorithm for a special choice of a(i) (i = 1, 2, ...). Because the same number of arithmetic operations per step is required, the algorithm will be rewritten as

    z(t) = a(t) − S(t−1)a(t)                                                 (17a)
    x̂(t) = x̂(t−1) + z(t)/(zᵀ(t)z(t)) · (y*(t) − aᵀ(t)x̂(t−1))                 (17b)
    S(t) = S(t−1) + z(t)zᵀ(t)/(zᵀ(t)z(t))                                    (17c)

where S(t) is an N×N dimensional matrix satisfying

    S(t) = 0   for   t = 0, N, 2N, ...                                       (17d)

The choice of a(t) and y*(t) (t = 1, 2, ...) is as follows. Let d(t,i) denote the i-th row of the matrix D(t). Then

    {a(t): t = 1, 2, ...} = {d(1,1), d(2,2), ..., d(N,N), d(N+1,1), ..., d(2N,N), ...}    (17e)

and similarly, let g(t,i) denote the i-th element of the vector g(t). Then

    {y*(t): t = 1, 2, ...} = {g(1,1), g(2,2), ..., g(N,N), g(N+1,1), ..., g(2N,N), ...}   (17f)

The above algorithm is a small modification of the algorithm proposed by Unton (1984). It provides a good estimate of x(t) (15) if x(t) varies slowly in time and the number of instants considered is significantly greater than N (see Unton, 1984, for more details).
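A compact numpy sketch of the recursion (17) follows, with a tiny fixed system as a sanity check: for a consistent N×N system with linearly independent rows it terminates in N steps. The function and variable names are illustrative, and the small tolerance guard is an implementation convenience, not part of the paper's algorithm.

    import numpy as np

    def kaczmarz_westphal_step(xhat, S, a, ystar):
        """One step of (17a)-(17c): Kaczmarz update with the orthogonalizing
        modification of Westphal (1978). S accumulates the projector onto
        the span of the rows processed so far and must be reset to zero
        every N steps, per (17d)."""
        z = a - S @ a                                   # (17a)
        zz = z @ z
        if zz > 1e-12:                                  # row brings a new direction
            xhat = xhat + z * (ystar - a @ xhat) / zz   # (17b)
            S = S + np.outer(z, z) / zz                 # (17c)
        return xhat, S

    # Solving a fixed 2x2 system D x = g in exactly 2 steps:
    D = np.array([[4.0, 1.0], [1.0, 3.0]]); g = np.array([1.0, 2.0])
    x, S = np.zeros(2), np.zeros((2, 2))
    for i in range(2):                                  # rows cycled as in (17e,f)
        x, S = kaczmarz_westphal_step(x, S, D[i], g[i])
    print(x, np.linalg.solve(D, g))                     # identical up to rounding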
The application of the algorithm (17) to the problem (9) can now be summarized as follows. Let

- S₁(·) be an n₁×n₁ matrix and z₁(·) an n₁-vector (n₁ = na + nb),
- S₂(·) be an n₂×n₂ matrix and z₂(·) an n₂-vector (n₂ = nc),
- S₁(0) = 0, S₂(0) = 0, R(0) = 0, r(0) = 0.

Then the final version of the algorithm is defined as follows:

    R(t) = R(t−1) + λ(t)φ(t−1)φᵀ(t−1)                                        (18a)
    r(t) = r(t−1) + λ(t)φ(t−1)y(t)                                           (18b)
    a₁ᵀ(t) = [k(t,n₁)-th row of A₁(θ₂(t−1))] · R(t) · A₁ᵀ(θ₂(t−1))           (18c)
    y₁*(t) = [k(t,n₁)-th row of A₁(θ₂(t−1))] · [r(t) − R(t)a₁(θ₂(t−1))]      (18d)
    z₁(t) = a₁(t) − S₁(t−1)a₁(t)                                             (18e)
    θ₁(t) = θ₁(t−1) + z₁(t)/(z₁ᵀ(t)z₁(t)) · (y₁*(t) − a₁ᵀ(t)θ₁(t−1))         (18f)
    S₁(t) = S₁(t−1) + z₁(t)z₁ᵀ(t)/(z₁ᵀ(t)z₁(t))                              (18g)
    a₂ᵀ(t) = [k(t,n₂)-th row of A₂(θ₁(t))] · R(t) · A₂ᵀ(θ₁(t))               (18h)
    y₂*(t) = [k(t,n₂)-th row of A₂(θ₁(t))] · [r(t) − R(t)a₂(θ₁(t))]          (18i)
    z₂(t) = a₂(t) − S₂(t−1)a₂(t)                                             (18j)
    θ₂(t) = θ₂(t−1) + z₂(t)/(z₂ᵀ(t)z₂(t)) · (y₂*(t) − a₂ᵀ(t)θ₂(t−1))         (18k)
    S₂(t) = S₂(t−1) + z₂(t)z₂ᵀ(t)/(z₂ᵀ(t)z₂(t))                              (18l)

where S₁(t) and S₂(t) are reset to zero as in (17d), every n₁ and n₂ steps respectively.

The above algorithm can be used for the identification of constant parameters (λ(t) = 1) as well as of time-varying ones (λ(t) < 1).
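Putting the pieces together, one iteration of (18) might look as follows, reusing build_A1_a1, build_A2_a2 and kaczmarz_westphal_step from the earlier sketches. The state packaging, the 0-based row counter standing in for k(t,n), and the exact placement of the resets are my assumptions, offered as a schematic outline rather than the paper's implementation.

    import numpy as np

    def step18(state, phi, y, lam, na, nb, nc):
        """One iteration of algorithm (18); a schematic sketch."""
        R, r, th1, th2, S1, S2, t = state
        R = R + lam * np.outer(phi, phi)                      # (18a)
        r = r + lam * phi * y                                 # (18b)
        A1, a1 = build_A1_a1(th2, na, nb)                     # uses theta2(t-1)
        i = (t - 1) % (na + nb)                               # k(t, n1), 0-based
        arow = A1[i] @ R @ A1.T                               # (18c): row of G1
        ystar = A1[i] @ (r - R @ a1)                          # (18d): entry of g1
        th1, S1 = kaczmarz_westphal_step(th1, S1, arow, ystar)    # (18e)-(18g)
        if t % (na + nb) == 0:                                # reset, cf. (17d)
            S1 = np.zeros_like(S1)
        A2, a2 = build_A2_a2(th1, na, nb, nc)                 # uses theta1(t)
        j = (t - 1) % nc                                      # k(t, n2), 0-based
        arow2 = A2[j] @ R @ A2.T                              # (18h)
        ystar2 = A2[j] @ (r - R @ a2)                         # (18i)
        th2, S2 = kaczmarz_westphal_step(th2, S2, arow2, ystar2)  # (18j)-(18l)
        if t % nc == 0:
            S2 = np.zeros_like(S2)
        return (R, r, th1, th2, S1, S2, t + 1)

Each step is dominated by matrix-vector products with R, so its cost grows as N², in line with the operation count claimed below.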
Because (18c) and (18h) compute only one row of the matrices G₁(·,t) (9a) and G₂(·,t) (9b), the algorithm requires a number of arithmetic operations per time instant proportional to (na+nb+nc)² only. The comparison of computational burdens is given in the next section.
REMARKS ON COMPUTATIONAL BURDEN

Notice that the matrices A₁(θ₂) (10a) and A₂(θ₁) (10c) are only used to define the algorithm (18). Due to their regular form, the vectors (18c), (18d), (18h) and (18i) can be computed directly using only r(t), R(t), θ₂(t−1) and θ₁(t). The numbers of multiplications and divisions required per one instant are then

    k₁(N) = (12/18)N² + (9/2)N   — for the conventional algorithm (7)
    k₂(N) = (87/18)N² + (15/2)N  — for the algorithm (18)

Their asymptotic ratio is k₂(N)/k₁(N) → 87/12, and that is what we have to pay for the identification accuracy improvement.
REMARKS ON ASYMPTOTIC CONVERGENCE

For some of the theoretical considerations the function W(·,·;·) (5) should be replaced by its normalized version t⁻¹W(·,·;t). The above functional sequence converges (w.p.1) to a deterministic function, and the off-line estimate converges (w.p.1) to a local minimum of this limiting function (see Söderström, 1974). The convergence properties of the recursive algorithm (7) can be established by the method of Ljung (1974, 1977). Both estimates converge to the vector θ* for a sufficiently large signal-to-noise ratio (e.g. Söderström et al., 1978); in such a case the limiting function has a unique minimum. The algorithm (18) may also converge to a local minimum of the limiting function. Due to limitations of space, this problem will not be considered here.

AN EXAMPLE

The system chosen for comparison of the algorithms has the form (4) with

    A*(q⁻¹) = 1 − 1.5q⁻¹ + 0.7q⁻²
    B*(q⁻¹) = 1.0q⁻¹ + 0.5q⁻²
    C*(q⁻¹) = 1 − 1.0q⁻¹ + 0.2q⁻²
    u(t) ~ N(0, 10.0),   e(t) ~ N(0, 1)

To improve the convergence in the initial iterations the algorithms are started up in the following way:
- In the conventional one (7), θ₂(t) is substituted with 0 for the first N₀ instants; thus the RLS is computed.
- In the algorithm (18), for the first N₀ instants only R(t) and r(t) are computed; θ₂(t) is substituted with 0 and θ₁(t) is taken from the RLS method.

Ten independent realizations are generated and the results are given in Fig. 1 (N₀ = 16), Fig. 2 (N₀ = 40), Tab. 1 and Tab. 2 (N₀ = 40). The factor λ(t) (3) (with λ(0) = λ₀ = 0.99) is used for the conventional algorithm.

Two measures of accuracy are used. In Figs. 1 and 2 the normalized error

    (1/10) Σ_{i=1}^{10} ‖θⁱ(t) − θ*‖² / ‖θⁱ(0) − θ*‖²

is plotted (i being the number of the realization). In Tabs. 1 and 2 the mean and the standard deviation over the 10 realizations are computed. Moreover, the Cramér–Rao lower bound is computed; in order to unify the description, the "normalized" bound

    δ(t) = Σᵢ σ̄ᵢ²(t) / ‖θ(0) − θ*‖²

is plotted in Figs. 1 and 2, where σ̄ᵢ is the lower bound of the accuracy (standard deviation) of the i-th parameter. The Cramér–Rao lower bound was computed according to the idea in the paper of Åström (1967).

The results indicate a better accuracy of the algorithm (18) in both cases, especially in the initial iterations. For a large number of iterations the rates of convergence would be approximately the same. Thus, the algorithm (18) should be applied for small and medium numbers of instants.
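For reference, a short numpy sketch that generates data from this example system follows, reading N(0, 10.0) as zero mean and variance 10 (the paper does not state whether 10.0 is the variance or the standard deviation); the seed and array layout are arbitrary choices.

    import numpy as np
    rng = np.random.default_rng(0)

    # Simulate A y = B u + e / C, i.e. w = e / C (so C w = e), A y = B u + w.
    a = [-1.5, 0.7]; b = [1.0, 0.5]; c = [-1.0, 0.2]
    T = 900
    u = rng.normal(0.0, np.sqrt(10.0), T)    # u(t) ~ N(0, 10.0), variance 10
    e = rng.normal(0.0, 1.0, T)              # e(t) ~ N(0, 1)
    w = np.zeros(T); y = np.zeros(T)
    for t in range(T):
        w[t] = e[t] - sum(ci * w[t-1-i] for i, ci in enumerate(c) if t-1-i >= 0)
        ar = sum(ai * y[t-1-i] for i, ai in enumerate(a) if t-1-i >= 0)
        bu = sum(bi * u[t-1-i] for i, bi in enumerate(b) if t-1-i >= 0)
        y[t] = -ar + bu + w[t]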
CONCLUSIONS

A new way of constructing a recursive algorithm for the Generalized Least Squares Identification is proposed. As compared to earlier ones, it uses a better approximation of the off-line criterion functions; in such a case the estimates are solutions to a linear equation sequence. The algorithm of Unton (1984) is proposed to reduce the computational burden. Finally, the algorithm proposed requires a number of arithmetic operations per step proportional to N², where N is the number of estimated parameters. The results of a simulation example are given, showing that the proposed algorithm provides better estimates than the conventional one.
The author believes that the above way of approximating off-line criterion functions can be applied in more cases, and hence may be called a new recursive technique.
REFERENCES

Åström, K. J. (1967). On the achievable accuracy in identification problems. Preprints, 1st IFAC Symposium on Identification in Automatic Control Systems, Prague.
Clarke, D. W. (1967). Generalized least squares estimation of parameters of a dynamic model. 1st IFAC Symposium on Identification in Automatic Control Systems, Prague.
Hastings-James, R. and M. W. Sage (1969). Recursive generalised least-squares procedure for on-line identification of process parameters. IEE Proceedings, Vol. 116, pp. 2057-2062.
Kaczmarz, S. (1937). Approximate solution of systems of linear equations (in German). Bull. Int. Acad. Pol. Sci., Cl. Sci. Math. Nat., Ser. A.
Ljung, L. (1974). Convergence of recursive stochastic algorithms. Preprints, IFAC Symposium on Stochastic Control, Budapest.
Ljung, L. (1977). Analysis of recursive stochastic algorithms. IEEE Trans. Autom. Contr., AC-22, No. 4, pp. 551-575.
Söderström, T. (1974). Convergence properties of the generalised least squares identification method. Automatica, Vol. 10, pp. 617-626.
Söderström, T., Ljung, L. and I. Gustavsson (1978). A theoretical analysis of recursive identification methods. Automatica, Vol. 14, pp. 231-244.
Unton, F. Z. (1984). Recursive estimator of the solutions of a linear equation sequence. IEEE Trans. Autom. Contr., AC-29, pp. 177-179.
Westphal, L. C. (1978). An improved adaptive identifier for discrete multivariable linear systems. IEEE Trans. Autom. Contr., AC-23, pp. 860-865.
Tab. 1. Mean and standard deviation over 10 realizations for 200 instants and N₀ = 40.

    Parameter   True value   Convent. RGLS      Algorithm (18)
                             mean ± s.d.        mean ± s.d.
    a1          -1.5         -1.36 ± 0.13       -1.51 ± 0.037
    a2           0.7          0.62 ± 0.15        0.71 ± 0.018
    b1           1.0          0.94 ± 0.03        0.99 ± 0.22
    b2           0.5          0.59 ± 0.11        0.48 ± 0.009
    c1          -1.0         -0.79 ± 0.24       -1.02 ± 0.007
    c2           0.2          0.00 ± 0.21        0.18 ± 0.009

Tab. 2. Mean and standard deviation over 10 realizations for 900 instants and N₀ = 40.

    Parameter   True value   Convent. RGLS      Algorithm (18)
                             mean ± s.d.        mean ± s.d.
    a1          -1.5         -1.64 ± 0.03       -1.508 ± 0.010
    a2           0.7          0.69 ± 0.016       0.709 ± 0.009
    b1           1.0          0.97 ± 0.044       1.012 ± 0.006
    b2           0.5          0.46 ± 0.041       0.495 ± 0.006
    c1          -1.0         -0.91 ± 0.090      -0.975 ± 0.012
    c2           0.2          0.11 ± 0.075       0.181 ± 0.007
[Fig. 1. System identification: N₀ = 16. Normalized error (1/10)Σ_{i=1}^{10} ‖θⁱ(t) − θ*‖²/‖θⁱ(0) − θ*‖² versus t (up to 300 instants, logarithmic scale) for RLS, the conventional RGLS and the algorithm (18), together with the normalized Cramér–Rao lower bound.]

[Fig. 2. System identification: N₀ = 40. The same quantities as in Fig. 1, plotted up to 900 instants.]