Observer design for nonlinear system


IFAC Workshop on Adaptation and Learning in Control and Signal Processing, and IFAC Workshop on Periodic Control Systems, Yokohama, Japan, August 30 – September 1, 2004

Koichi Hidaka
Tokyo Denki University, Kanda-Nishiki-cho, Chiyoda-ku, Tokyo 101-8457, Japan
[email protected]

Abstract: A design method is presented that extends the least mean squares (LMS) algorithm to time-varying parameters by incorporating an internal model. The aim is to track the varying parameters of linear regression models in situations where the regressors contain not only slowly changing parameters but also rapidly time-varying ones. The algorithm starts from the observation that a change in a varying parameter can be approximated by a polynomial function of time of finite degree. Under this assumption, the proposed algorithm embeds an internal model of the polynomial function in a conventional LMS algorithm so as to compensate for the influence of the varying parameters. The design method is based on a loop transformation of the dynamics and the small-gain theorem. Furthermore, a sufficient condition assuring stability of the proposed algorithm is given via output strict passivity and the small-gain theorem. Compared with conventional LMS adaptive algorithms, the tracking performance for varying parameters improves in numerical simulations.

Keywords: time-varying parameter, internal model, LMS algorithm, output strict passivity.

1 Introduction

A wastewater treatment process, modeled as the dynamical behavior of a continuously stirred tank bioreactor (CSTR), is described by a general nonlinear mass-balance model [1, 2]:

  ξ̇(t) = K r(ξ(t)) − D(t)ξ(t) − Q(ξ(t)),

where ξ(t) is the vector of concentrations of the biological or chemical species inside the liquid medium, K r(ξ(t)) represents the biological and chemical conversions in the reactor according to the underlying reaction network, and r is a vector of reaction rates. The process model can be transformed into the system

  ζ̇1(t) = K1 r(Tζ) − Dζ1 − Q1(Tζ),   (1)
  ζ̇2(t) = −Dζ2 − M y2,               (2)

with

  ζ = [ζ1; ζ2] = T [ξ1; ξ2],   T := [ I_{n−p}  0_{n−p,p} ; −A  I_p ],   (3)

where ξ1 and ξ2 are the measured and unmeasured state variable vectors, respectively. The transformation separates the nonlinear subsystem with the measured states from the linear subsystem with the unmeasured states, and the unmeasured states must be estimated with an observer. The traditional
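To make the mass-balance model above concrete, the following sketch integrates ξ̇(t) = K r(ξ(t)) − D ξ(t) − Q(ξ(t)) with a hypothetical single-reaction Monod rate. The yield vector, rate constants, and the substrate feed term D·ξ_in (a standard CSTR ingredient not shown explicitly in the quoted equation) are illustrative assumptions, not values from the paper.

```python
import numpy as np

# Hypothetical two-state CSTR: xi = [substrate, biomass].
K = np.array([-0.5, 1.0])          # assumed yield matrix (substrate consumed, biomass produced)
D = 0.1                            # dilution rate, assumed constant

def r(xi):
    s, x = xi
    return 0.3 * s / (0.5 + s) * x  # Monod reaction rate (assumed parameters)

def Q(xi):
    return np.zeros(2)              # gaseous outflow, neglected in this sketch

def simulate(xi0, s_in=2.0, dt=0.01, steps=5000):
    """Forward-Euler integration of the mass-balance model."""
    xi = np.array(xi0, dtype=float)
    feed = np.array([s_in, 0.0])    # substrate fed at concentration s_in
    for _ in range(steps):
        xi = xi + dt * (K * r(xi) - D * xi + D * feed - Q(xi))
    return xi

print(simulate([1.0, 0.1]))         # concentrations remain bounded and positive
```

Since the maximum growth rate (0.3) exceeds the dilution rate (0.1), the biomass is not washed out and the state approaches a nonzero equilibrium.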

approach to estimation has been to employ constant-parameter methods such as least mean squares (LMS) or recursive least squares (RLS). In practice, however, the states change over time, and these adaptive estimation techniques have a number of limitations: they are derived under the assumption of stationary parameters and do not take the time-varying nature of the states into account. We propose an estimation algorithm that extends the LMS algorithm to time-varying states by incorporating an internal model. The aim is to track the varying states of the nonlinear process model. The algorithm starts from the observation that a change in a varying state can be approximated by a polynomial function of time of finite degree. Under this assumption, the proposed algorithm embeds an internal model of the polynomial function in a conventional LMS algorithm so as to compensate for the influence of the varying parameters. The design method is based on a loop transformation of the dynamics and the small-gain theorem. Compared with conventional LMS adaptive algorithms, the tracking performance for varying parameters improves in numerical simulations. The paper is organized as follows. In Section 2 an input-output FIR model is introduced. Section 3 sketches the estimation algorithm and introduces the stability condition of the closed-loop system. Finally, in Section 4, two simulations demonstrate that the approach can be useful for rapidly time-varying parameters.



2 Problem setting

We consider a FIR system with time-varying parameters described by

  y(n) = Σ_{k=0}^{M} u(n−k) ω(k,n) = u^T(n) ω(n),

where u(n) is an input signal, y(n) is an output signal and ω(k,n) is an unknown time-varying parameter. u(n) and ω(n) are the input vector and the time-varying parameter vector defined by u(n) = [u(n), u(n−1), ..., u(n−M)]^T and ω(n) = [ω(0,n), ω(1,n), ..., ω(M,n)]^T, respectively. The initial value of the input signal is simply given by u(0) = 0. We assume that an upper bound M on the impulse response length is known a priori. The main goal of this paper is to estimate the time-varying parameters ω(i,n) on-line. Since the changes in the time-varying parameters are smooth in practice, we assume that the movement can be approximately modeled by a polynomial function of time, such as ω(i,n) ≈ c_{i,0} + c_{i,1} n + ... + c_{i,l} n^l. Using this polynomial model, we apply the internal model principle to the estimation algorithm in order to decrease its sensitivity to the changes. Ordinary expansion-based approaches need to estimate the coefficients c_{i,0}, ..., c_{i,l}; the proposed method does not estimate the coefficients but estimates ω(i,n) directly.
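The FIR model above can be sketched numerically; the tap trajectories and input distribution below are hypothetical choices for illustration only (the paper's own parameter trajectories appear in Section 4).

```python
import numpy as np

rng = np.random.default_rng(0)
M, N = 2, 200                            # impulse-response bound (M+1 taps), horizon

# Hypothetical smooth time-varying taps omega(k, n), shape (M+1, N).
n = np.arange(N)
omega = np.stack([np.sin(0.01 * np.pi * n),
                  np.cos(0.015 * np.pi * n),
                  0.5 * np.ones(N)])

u = rng.choice([-0.2, 0.2], size=N)      # a persistently exciting binary input

def regressor(u, n, M):
    """u(n) = [u(n), u(n-1), ..., u(n-M)]^T with u(k) = 0 for k < 0."""
    return np.array([u[n - k] if n - k >= 0 else 0.0 for k in range(M + 1)])

# y(n) = u^T(n) omega(n) for each step n.
y = np.array([regressor(u, k, M) @ omega[:, k] for k in range(N)])
print(y.shape)  # (200,)
```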

3 Novel LMS adaptive estimation algorithm including an internal model

The proposed LMS adaptive estimation algorithm is given by

  (1 − q^{-1})^l [ω̂(n)] = P(q^{-1}) [e(n) u(n)],        (4)
  e(n) = y(n) − u^T(n) ω̂(n) = u^T(n) ω̃(n),             (5)

where e(n) is the output error, q^{-1} is the backward shift operator, ω̃(n) is the parameter error defined by ω̃(n) = ω(n) − ω̂(n), and the polynomial P(q^{-1}) is given by

  P(q^{-1}) = p_0 + p_1 q^{-1} + ... + p_r q^{-r},        (6)

where r ≤ l and p_0 ≠ 0. (1 − q^{-1})^l expresses an internal model of the unknown time-varying parameters. Figure 1 shows the block diagram of the error system and the estimation algorithm. e(n) in equation (5) cannot be calculated at step n because equation (5) includes the estimated parameter vector ω̂(n) at step n. By substituting equation (5) into equation (4), the error system can be rewritten as

  e(n) = y(n) − u^T(n) { [1 − (1 − q^{-1})^l] ω̂(n) + [P(q^{-1}) − p_0] e(n) u(n) + p_0 e(n) u(n) }.   (7)

Then e(n) is given by

  e(n) = − (1 / (1 + p_0 u^T(n) u(n))) (φ(n, q^{-1}) − y(n)),                                  (8)
  φ(n, q^{-1}) = u^T(n) { [1 − (1 − q^{-1})^l] ω̂(n) + [P(q^{-1}) − p_0] e(n) u(n) }.          (9)

Notice that all the signals on the right-hand side of equation (8) can be calculated from previous variables. Figure 1 shows that the unknown time-varying parameter vector ω(n) is regarded as an external disturbance. By taking advantage of the internal model principle for the LMS algorithm, we propose a novel LMS adaptive algorithm for time-varying parameters. The internal model in the new LMS algorithm is given by (1 − q^{-1})^l, where l is the degree of the polynomial function. For instance, if the parameter changes approximately as an (l−1)th-degree function of time, the new LMS algorithm should include 1/(1 − q^{-1})^l to compensate for the time-varying parameters. The condition l = 1 yields the conventional LMS algorithm: the conventional LMS algorithm is an estimation method for stationary parameters, and l = 1 corresponds to the internal model of constant parameters. Since the poles of P(q^{-1}) / (1 − q^{-1})^l are not simple roots, the stability of the closed-loop system shown in Figure 1 cannot be assured by the passivity theorem [7]. To overcome this problem, we introduce another stability theorem based on the input-output point of view.

Before presenting the stability condition, we define some terms and key results. Signals ω(i,n) and u(i) are elements of a normed space, which we call the signal space. A norm in the signal space is defined by ‖u(n)‖² = ⟨u(n), u(n)⟩ = Σ_{k=0}^{n} ‖u(k)‖₂², where ‖u(k)‖₂² = Σ_{i=1}^{M} u²(i,k). A system with input signal x_in(n) and output signal x_out(n) in the signal space is passive if ⟨x_in(n), x_out(n)⟩ ≥ 0, and output strictly passive if there exists ε > 0 such that ⟨x_in(n), x_out(n)⟩ ≥ ε ‖x_out(n)‖², where ⟨x_in(n), x_out(n)⟩ = Σ_{k=0}^{n} x_in^T(k) x_out(k).
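The internal-model factor 1 − (1 − q^{-1})^l expands by the binomial theorem into a fixed set of weights on past estimates; a small sketch (not from the paper) makes these weights explicit:

```python
from math import comb

def internal_model_taps(l):
    """Coefficients of 1 - (1 - q^-1)^l, i.e. the weights on
    omega_hat(n-1), ..., omega_hat(n-l) in the recursive update."""
    # (1 - q^-1)^l = sum_k comb(l, k) (-1)^k q^-k; subtracting from 1
    # cancels the k = 0 term and flips the sign of the remaining terms.
    return [-comb(l, k) * (-1) ** k for k in range(1, l + 1)]

print(internal_model_taps(1))  # [1]        -> conventional LMS (constant model)
print(internal_model_taps(2))  # [2, -1]    -> first-degree (ramp) model
print(internal_model_taps(3))  # [3, -3, 1] -> second-degree model used in Section 4
```

For l = 3 these taps reproduce the combination 3ω̂(n−1) − 3ω̂(n−2) + ω̂(n−3) that appears in the Section 4 equations.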

Having established the notion of output strict passivity, we can state the key results. By using the loop transformation [8], new operators S1(n) and S2 are given by S1(n) = [H1(n) + I]^{-1} [H1(n) − I] and S2 = [H2 + 1]^{-1} [1 − H2], where H1(n) = u(n) u^T(n), H2 = P(q^{-1}) / (1 − q^{-1})^l and I is an identity matrix. The input-output relation of the operator S1(n) is

  v1(n) = S1(n) z1(n) = [H1(n) + I]^{-1} [H1(n) − I] z1(n),   (10)
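Because H1(n) = u(n)u^T(n) is rank one, the eigenvalues of S1(n) at a fixed step are (λ − 1)/(λ + 1) for the eigenvalues λ of H1(n); a numeric check (an illustration, not the paper's computation) with a ±0.2 input:

```python
import numpy as np

# Loop-transformed operator at one step: S1 = (H1 + I)^{-1} (H1 - I),
# H1 = u u^T (rank one, eigenvalues {||u||^2, 0, 0}).
u = np.array([0.2, -0.2, 0.2])
H1 = np.outer(u, u)
I = np.eye(3)
S1 = np.linalg.solve(H1 + I, H1 - I)

# The zero eigenvalues of H1 map to -1, the excited direction ||u||^2
# maps to (||u||^2 - 1)/(||u||^2 + 1), magnitude < 1.
print(round(float(np.max(np.abs(np.linalg.eigvals(S1)))), 6))  # → 1.0
```

The gain is strictly below one only along the excited direction, which is why the persistent-excitation condition of Lemma 1 below is needed for the contraction that the small-gain argument exploits.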

(Step 3) Generate the input signal u(n) such that equation (12) is satisfied at any time n. (Step 4) Estimate the parameter ω̂(n) from the algorithm

  ω̂(n) = [1 − (1 − q^{-1})^l] ω̂(n) + P(q^{-1}) e(n) u(n).

Fig. 1 Block diagram of an adaptive estimation system.

Fig. 2 Loop transformation.

In equation (10), z1(n) = (1/2)(e(n)u(n) + ω̃(n)) and v1(n) = (1/2)(e(n)u(n) − ω̃(n)). With e(n)u(n) = u(n)u^T(n) ω̃(n) = H1(n) ω̃(n), we obtain

  ‖v1(n)‖² = ‖z1(n)‖² − ⟨H1(n) ω̃(n), ω̃(n)⟩.   (11)

The condition for output strict passivity of H1(n) is presented in the following lemma.

Lemma 1 [9] If u(n) satisfies

  1/ε > ‖u(n)‖₂²   (12)

for ε > 0 and any time n, then H1(n) is output strictly passive.

By using an input signal satisfying Lemma 1 and equation (11), the gain γ(S1(n)) satisfies γ(S1(n)) < 1, where

  γ(S1(n)) = sup_{z1(n) ∈ l2e(N) \ {0}} ‖S1(n) z1(n)‖ / ‖z1(n)‖.   (13)

If we obtain a stability margin ζ > 0 such that γ(S1(n)) = 1 − ζ, the freedom in the design of P(q^{-1}) increases, so that γ(S1)γ(S2) = (1 − ζ)γ(S2) < 1 may be satisfied. We are now ready to state the stability condition. When γ(S2) < 1/γ(S1(n)), the gain γ(S1(n)S2) satisfies γ(S1(n)S2) < γ(S1(n))γ(S2) < 1, where γ(S2) is defined by

  γ(S2) = ‖ ((1 − q^{-1})^l − P(q^{-1})) / ((1 − q^{-1})^l + P(q^{-1})) ‖_∞.   (14)

By the small-gain theorem, stability of the closed-loop system in Figure 1 is assured via a suitable selection of the polynomial coefficients of P(q^{-1}). The proposed adaptive estimation algorithm is obtained as follows. (Step 1) Determine a region of the parameters of P(q^{-1}) so that the denominator of equation (14) is stable. (Step 2) Determine the coefficients of P(q^{-1}) so that the infinity norm in equation (14) satisfies γ(S2) ≤ 1 + ξ (ξ > 0), where ξ is a sufficiently small positive constant.

4 Simulation results

We perform simulations to illustrate the effectiveness of the proposed algorithm. Consider the FIR system y(nT) = u^T(nT)ω(nT) = u(nT)ω(1,nT) + u((n−1)T)ω(2,nT) + u((n−2)T)ω(3,nT), where the unknown time-varying parameters are given by

  [ω(1,t), ω(2,t), ω(3,t)]^T = [sin(0.01πt), cos((0.03π/2)t), sin((0.01π/2)t)]^T.   (15)

The second parameter set, whose frequencies change at t = 5, is given by

  [ω(1,t), ω(2,t), ω(3,t)]^T =
    [sin(0.02πt), cos((0.01π/3)t + π/5), sin((0.01π/2)t)]^T,          0 ≤ t ≤ 5,
    [sin(0.01πt + 10π), cos((0.02π/3)t + 28π/15), sin((0.01π/2)t)]^T, 5 ≤ t ≤ 10.   (16)

We approximately model the parameters ω(k,t) (k = 1, 2, 3) as polynomial functions of time of second order. The internal model is then given by (1 − q^{-1})^3. The proposed adaptive LMS algorithm is designed by setting l = 3, p_0 = 19.2, p_1 = −30 and p_2 = 11, for which the gain γ(S2) = 1.0166 is obtained. In this case, the error system and the parameter estimation algorithm are

  e(n) = − (1 / (1 + p_0 u^T(n) u(n))) (φ(3, q^{-1}) − y(n)),   (17)

  φ(3, q^{-1}) = u^T(n) { 3ω̂(n−1) − 3ω̂(n−2) + ω̂(n−3) + p_1 e(n−1) u(n−1) + p_2 e(n−2) u(n−2) },

  ω̂(n) = 3ω̂(n−1) − 3ω̂(n−2) + ω̂(n−3) + p_0 e(n) u(n) + p_1 e(n−1) u(n−1) + p_2 e(n−2) u(n−2).   (18)
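The norm in equation (14) can be checked numerically for this design by a grid evaluation on the unit circle; this is an illustrative sketch, not the authors' computation:

```python
import numpy as np

# gamma(S2) for l = 3 and P(q^-1) = 19.2 - 30 q^-1 + 11 q^-2:
# peak magnitude of ((1-z)^l - P(z)) / ((1-z)^l + P(z)) over z = e^{-jw},
# where z stands for q^-1 on the unit circle.
l = 3
p = np.array([19.2, -30.0, 11.0])

w = np.linspace(0.0, np.pi, 20001)
z = np.exp(-1j * w)
Pz = p[0] + p[1] * z + p[2] * z**2
gamma_S2 = np.max(np.abs(((1 - z)**l - Pz) / ((1 - z)**l + Pz)))
print(round(float(gamma_S2), 4))   # the paper reports gamma(S2) = 1.0166 for this design
```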

According to Lemma 1, we use an input signal u(k) = ±0.2 (k = 0, 1, ...), for which ε is given by 1/‖u(n)‖₂² = 8.33. We compare the new LMS algorithm with conventional LMS algorithms:

  LMS1: ω̂(k,n) = ω̂(k,n−1) + μ1 e(n) u(n),          (19)
  LMS2: ω̂(k,n) = ω̂(k,n−1) + μ2 e(n−1) u(n−1),      (20)



Fig. 3 Unknown time-varying parameters given by equation (15).

Fig. 4 Unknown time-varying parameters given by equation (16).

where μ1 = 30, μ2 = 0.5 and ω̂(0) = 0. Notice that LMS1 satisfies passivity and LMS2 does not. The tracking results are shown in Figures 5–10. Figures 5, 7 and 9 display the tracking of ω̂(n) for the parameters given by equation (15), and Figures 6, 8 and 10 show the tracking of ω̂(n) when the frequencies of the parameters change at time t = 5. The dotted lines represent the time-varying parameters. The dashed-dotted lines and dashed lines represent the parameters estimated by the LMS1 and LMS2 algorithms, respectively, and the solid lines denote the tracking results of the proposed algorithm. The conventional approaches fail to track the time-varying parameters and their estimates are very poor. The proposed algorithm gives excellent tracking performance even when the parameters change.
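As a sketch of how such a comparison could be reproduced, the following Python implements the l = 3 update of equations (17)-(18) alongside an a-posteriori-error form of LMS1, using the parameter trajectories of equation (15). The sampling period T = 0.01 and the a-posteriori LMS1 form are assumptions for illustration; this is not the authors' code.

```python
import numpy as np

rng = np.random.default_rng(1)
N, T = 1000, 0.01                     # samples and assumed sampling period
p0, p1, p2 = 19.2, -30.0, 11.0        # l = 3 design from Section 4
mu1 = 30.0                            # LMS1 step size

t = np.arange(N) * T
# Unknown parameters per equation (15).
true_w = np.stack([np.sin(0.01 * np.pi * t),
                   np.cos(0.03 * np.pi / 2 * t),
                   np.sin(0.01 * np.pi / 2 * t)])

u = rng.choice([-0.2, 0.2], size=N)   # input satisfying Lemma 1

def U(n):
    """Regressor u(n) = [u(n), u(n-1), u(n-2)]^T, zero before time 0."""
    return np.array([u[n - k] if n - k >= 0 else 0.0 for k in range(3)])

W = np.zeros((3, N + 3))              # W[:, m+3] = w_hat(m); columns 0..2 are zero ICs
E = np.zeros(N + 2)                   # E[m+2] = e(m)
w1 = np.zeros(3)                      # LMS1 estimate

for n in range(N):
    un = U(n)
    y = un @ true_w[:, n]
    # phi of equation (9) specialised to l = 3, equations (17)-(18):
    past = (3 * W[:, n + 2] - 3 * W[:, n + 1] + W[:, n]
            + p1 * E[n + 1] * U(n - 1) + p2 * E[n] * U(n - 2))
    e = -(un @ past - y) / (1.0 + p0 * un @ un)        # equation (17)
    W[:, n + 3] = past + p0 * e * un                   # equation (18)
    E[n + 2] = e
    # LMS1 in a-posteriori form (l = 1, P = mu1), stable for any mu1 > 0.
    e1 = (y - un @ w1) / (1.0 + mu1 * un @ un)
    w1 = w1 + mu1 * e1 * un

print(np.abs(W[:, -1] - true_w[:, -1]).max())   # proposed: terminal tracking error
print(np.abs(w1 - true_w[:, -1]).max())         # LMS1 baseline
```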

References

[1] O. Bernard, Z. Hadj-Sadok and J. L. Gouze, "Observers for biotechnological processes with unknown kinetics. Application to wastewater treatment," in Proc. 2000 Conf. Decision Control, Sydney, Australia, Dec. 2000, pp. 4526-4531.
[2] G. Bastin and J. Van Impe, "Nonlinear and adaptive control in biotechnology: a tutorial," European Journal of Control, vol. 1, no. 1, pp. 1-37, 1995.
[3] M. Sternad, L. Lindbom and A. Ahlen, "Wiener design of adaptation algorithms with time-invariant gains," IEEE Trans. Signal Processing, vol. 50, no. 8, pp. 1895-1907, 2002.
[4] S. Haykin, Adaptive Filter Theory, fourth edition, Upper Saddle River, NJ: Prentice-Hall, 2002.
[5] L. Ljung, System Identification: Theory for the User, Upper Saddle River, NJ: Prentice-Hall, 1987.
[6] S. Haykin, A. H. Sayed, J. R. Zeidler, P. Yee and P. C. Wei, "Adaptive tracking of linear time-variant systems by extended RLS algorithms," IEEE Trans. Signal Processing, vol. 45, pp. 1118-1128, May 1997.
[7] K. J. Åström and B. Wittenmark, Adaptive Control, second edition, Addison-Wesley, 1995.
[8] H. K. Khalil, Nonlinear Systems, Prentice-Hall, 1996.
[9] A. Sano, K. Hidaka and H. Ohmori, "Stable accelerated adaptive tracking algorithm," in Proc. IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP), Orlando, FL, USA, pp. 1629-1632, 2002.

5 Conclusion

A new LMS adaptive estimation algorithm for time-varying parameters has been presented, using an internal model of a polynomial function of time of high order. The design procedure and stability condition are given by the loop transformation and the small-gain theorem. The validity has been illustrated in simulations.


Fig. 5 Tracking performance of ω̂(1,n) in equation (15). Dotted: ω(1,n); dashed-dotted: LMS1; dashed: LMS2; solid: new LMS.

Fig. 6 Tracking performance of ω̂(1,n) in equation (16). Dotted: ω(1,n); dashed-dotted: LMS1; dashed: LMS2; solid: new LMS.

Fig. 7 Tracking performance of ω̂(2,n) in equation (15). Dotted: ω(2,n); dashed-dotted: LMS1; dashed: LMS2; solid: new LMS.

Fig. 8 Tracking performance of ω̂(2,n) in equation (16). Dotted: ω(2,n); dashed-dotted: LMS1; dashed: LMS2; solid: new LMS.

Fig. 9 Tracking performance of ω̂(3,n) in equation (15). Dotted: ω(3,n); dashed-dotted: LMS1; dashed: LMS2; solid: new LMS.

Fig. 10 Tracking performance of ω̂(3,n) in equation (16). Dotted: ω(3,n); dashed-dotted: LMS1; dashed: LMS2; solid: new LMS.