ON MODEL REDUCTION BASED ON WEIGHTED EQUATION ERRORS

Wieslaw Krajewski ∗   Umberto Viaro ∗∗

∗ Systems Research Institute, Polish Academy of Sciences, ul. Newelska 6, 01-447 Warsaw, Poland
∗∗ Dipartimento di Ingegneria Elettrica, Gestionale e Meccanica, Università di Udine, via delle Scienze 208, 33100 Udine, Italy

Abstract: The paper deals with the problem of approximating a stable continuous-time multivariable system by minimizing the L2-norm of a weighted equation error. Necessary and sufficient conditions of optimality are presented, and some properties of the resulting approximating models, such as stability and uniqueness, are pointed out. Based on the conditions of optimality, an efficient algorithm for generating reduced-order models that retain different numbers of Markov parameters and MacLaurin expansion coefficients is derived and applied to a benchmark example. Copyright © 2007 IFAC

Keywords: Linear systems, model reduction, equation error, Markov parameters, time moments

1. INTRODUCTION AND PRELIMINARIES

The approximation of high-order systems by means of lower-order models is one of the most important topics in systems engineering and has continued to be of interest to mathematicians and applied scientists. Many approaches to finding reduced-order models have been proposed over the years: some are based on empirical considerations, others rest on more theoretical grounds. Among the latter, those leading to reduced-order models that are optimal with respect to a well-defined criterion deserve special attention. A reasonable measure of the goodness of the approximation is provided by the L2 norm of the output error. Its minimization, however, is often a very hard task, and the related reduction methods are difficult to implement (Zigic et al., 1993; Krajewski et al., 1995b) or depend crucially on the initial conditions (Ferrante et al., 1999). To avoid these difficulties, resort can be made to a slightly different approach that refers to a suitable norm of the equation error. Such an approach was suggested in (Bierman, 1973) for SISO systems and adopted also in (Eitelberg, 1981; Lepschy and Viaro, 1984) and (Regalia, 1995). Following a generalized version of this approach (Krajewski et al., 2004), this paper aims at the minimization of the L2 norm of a weighted equation error, and presents a computationally efficient algorithm to find reduced-order models of MIMO systems. Similar results can be obtained by the methods in (Yousuff et al., 1985; de Villemagne and Skelton, 1980; Krajewski et al., 1994; Krajewski et al., 1995a) using more complicated procedures.

Let us consider a linear, time-invariant stable system with m_o outputs and m_i inputs described by the state-space equations:

    dx(t)/dt = F x(t) + G u(t) ,   (1)
    y(t) = H x(t) ,                (2)

where x(t) ∈ R^n, y(t) ∈ R^{m_o}, u(t) ∈ R^{m_i}, and n is the dimension of a minimal realization. The input-output behavior of system (1)-(2) can be characterized by the impulse-response matrix W(t) or by the transfer-function matrix Ŵ(s) = L{W(t)} relating the input and output transforms:

    y(s) = Ŵ(s) u(s) .   (3)
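As a quick numerical illustration (the triple (F, G, H) below is an arbitrary example, not taken from the paper), Ŵ(s) = H (sI − F)^{-1} G can be evaluated pointwise from the state-space data:

```python
import numpy as np

def transfer_matrix(F, G, H, s):
    """Evaluate W_hat(s) = H (s I - F)^(-1) G at a complex frequency s."""
    n = F.shape[0]
    return H @ np.linalg.solve(s * np.eye(n) - F, G)

# Arbitrary stable 3rd-order example with 2 outputs and 1 input.
F = np.array([[-1.0, 1.0, 0.0],
              [0.0, -2.0, 1.0],
              [0.0, 0.0, -3.0]])
G = np.array([[0.0], [0.0], [1.0]])
H = np.array([[1.0, 0.0, 0.0],
              [0.0, 1.0, 0.0]])

W0 = transfer_matrix(F, G, H, 0.0)   # DC gain, equal to -H F^(-1) G
```

Using `np.linalg.solve` instead of forming the inverse of (sI − F) explicitly is the standard numerically safer choice.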

Similarly, let the state-space equations of the reduced-order model be:

    dx̃(t)/dt = F̃ x̃(t) + G̃ u(t) ,   (4)
    ỹ(t) = H̃ x̃(t) ,                 (5)

where x̃(t) ∈ R^ñ with ñ < n, ỹ(t) ∈ R^{m_o}, u(t) ∈ R^{m_i}, and consider the following left matrix-fraction description (MFD) of its transfer-function matrix Ŵ_f(s):

    ỹ(s) = Ŵ_f(s) u(s) = D_l^{-1}(s) N_l(s) u(s) ,        (6)
    D_l(s) = I s^q + A_{q-1} s^{q-1} + ... + A_1 s + A_0 ,  (7)
    N_l(s) = B_{q-1} s^{q-1} + ... + B_1 s + B_0 ,          (8)

where A_i ∈ R^{m_o×m_o} and B_i ∈ R^{m_o×m_i}, i = 0, ..., q−1. The McMillan degree of Ŵ_f(s) and, therefore, the dimension of its minimal realizations is less than, or equal to, the degree m_o·q of det[D_l(s)] (Kailath, 1980). Clearly, different MFDs of the same transfer-function matrix may exist, and some of them may be characterized by a smaller degree of the determinant. From (6) it follows that Ŵ_f(s) satisfies the equation:

    D_l(s) Ŵ_f(s) − N_l(s) = 0 .   (9)
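A left MFD of the form (6)-(8) is straightforward to evaluate numerically. The sketch below uses hypothetical coefficient matrices A_i, B_i chosen purely for illustration, and checks only the defining identity (9):

```python
import numpy as np

def polymat(coeffs, s):
    """Evaluate a matrix polynomial sum_i coeffs[i] * s**i."""
    return sum(C * s**i for i, C in enumerate(coeffs))

# Hypothetical q = 2 left MFD with mo = 2 outputs and mi = 1 input:
# D_l(s) = I s^2 + A1 s + A0,  N_l(s) = B1 s + B0.
A0 = np.array([[2.0, 0.0], [0.0, 3.0]])
A1 = np.array([[3.0, 0.0], [0.0, 4.0]])
B0 = np.array([[1.0], [2.0]])
B1 = np.array([[0.5], [0.0]])

s = 1.0 + 2.0j
Dl = polymat([A0, A1, np.eye(2)], s)   # (7)
Nl = polymat([B0, B1], s)              # (8)
Wf = np.linalg.solve(Dl, Nl)           # W_hat_f(s) = D_l(s)^(-1) N_l(s), cf. (6)
```

By construction, D_l(s) Ŵ_f(s) − N_l(s) vanishes at every s where D_l(s) is nonsingular, which is exactly relation (9).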

If Ŵ_f(s) in (9) is replaced by Ŵ(s), the difference D_l(s) Ŵ(s) − N_l(s) is no longer equal to 0 and represents the so-called equation error. In the following, we refer to a more general definition of the error, specifically:

    E_l(s) = (1/s^k) [D_l(s) Ŵ(s) − N_l(s)] ,   (10)

and take as its measure the L2 norm of (10). This means that matrices A_i and B_i in (7) and (8), i = 0, 1, ..., q−1, are to be chosen so as to minimize the functional:

    J = (1/2π) ∫_{−∞}^{∞} tr[E_l(jω) E_l^*(jω)] dω .   (11)

By changing k in the weighting factor 1/s^k, a family of reduced-order models is obtained. The standard equation-error approach corresponds to k = 0, which leads to a better approximation at high frequencies. Instead, to improve the approximation at low frequencies, it is convenient to set k > 0. Other interesting reduction methods can be obtained by replacing 1/s^k in (10) with another weighting function of suitable frequency shape.

Section 2 presents necessary and sufficient conditions for a reduced-order model to be optimal according to the adopted criterion. Section 3 shows some important properties of the optimal approximants, such as stability and uniqueness. To find these approximants, a computationally simple algorithm is suggested in Section 4 and applied to a meaningful example taken from the literature.

2. OPTIMALITY CONDITIONS

The transfer function Ŵ(s) of the stable system (1)-(2) admits both the asymptotic series expansion:

    Ŵ(s) = M_{−1} (1/s) + M_{−2} (1/s^2) + M_{−3} (1/s^3) + ...   (12)

and the MacLaurin series expansion:

    Ŵ(s) = M_0 + M_1 s + M_2 s^2 + M_3 s^3 + ... ,   (13)

where M_i ∈ R^{m_o×m_i} for i = ..., −1, 0, 1, ... . The coefficients with negative subscripts in (12), called Markov parameters, are such that M_{−i} is the coefficient of t^{i−1}/(i−1)! in the power series expansion of the impulse-response matrix W(t):

    W(t) = M_{−1} + M_{−2} t/1! + M_{−3} t^2/2! + ...   (14)

The coefficients with nonnegative subscripts in (13) are related to the matrix time moments ∫_0^∞ t^i W(t) dt; precisely, they are given by:

    M_i = ((−1)^i / i!) ∫_0^∞ t^i W(t) dt .   (15)
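Numerically, both coefficient families follow directly from the state-space data, since M_{−i} = H F^{i−1} G and M_i = −H F^{−(i+1)} G for a stable (hence invertible) F; these formulas reappear in step 2 of the algorithm of Section 4. A sketch with an arbitrary example system, not taken from the paper:

```python
import numpy as np

def markov_parameters(F, G, H, count):
    """M_{-i} = H F^(i-1) G, i = 1..count: coefficients of expansion (12)."""
    return [H @ np.linalg.matrix_power(F, i - 1) @ G for i in range(1, count + 1)]

def maclaurin_coefficients(F, G, H, count):
    """M_i = -H F^(-(i+1)) G, i = 0..count-1: coefficients of expansion (13)."""
    Finv = np.linalg.inv(F)
    return [-H @ np.linalg.matrix_power(Finv, i + 1) @ G for i in range(count)]

# Arbitrary stable SISO example: W_hat(s) = 1/(s+1) + 1/(s+2).
F = np.diag([-1.0, -2.0])
G = np.array([[1.0], [1.0]])
H = np.array([[1.0, 1.0]])

Mneg = markov_parameters(F, G, H, 3)       # M_-1, M_-2, M_-3
Mpos = maclaurin_coefficients(F, G, H, 2)  # M_0, M_1
```

For this example M_0 = 1.5, which matches the zeroth time moment ∫_0^∞ (e^{−t} + e^{−2t}) dt = 1 + 1/2, as predicted by (15).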

We consider now the two sequences of functions {W_i(t)} and {W_{−i}(t)} generated recursively, starting from W_0(t) = W(t), according to:

    W_i(t) = dW_{i−1}(t)/dt ,   W_{−i}(t) = ∫_∞^t W_{−i+1}(τ) dτ ,   i = 1, 2, ... .

Denoting their Laplace transforms by Ŵ_i(s) = L{W_i(t)}, i = 0, ±1, ±2, ..., for i, l = −k, −k+1, ..., q−k we may define the scalar products:

    P_il = (1/2π) ∫_{−∞}^{+∞} Ŵ_i(jω) [Ŵ_l(jω)]^* dω ,   (16)

where the star denotes the conjugate transpose, and form the m_o(q+1) × m_o(q+1) matrix:

    P_{[−k:q−k]} = [ P_{−k,−k}   ...  P_{−k,q−k}
                       ⋮          ⋱       ⋮
                     P_{q−k,−k}  ...  P_{q−k,q−k} ] .   (17)

It can easily be shown (Krajewski, 2006) that

    P_il = H F^i W_c (F^l)^T H^T   (18)

for i, l = −k, −k+1, ..., q−k, where W_c is the controllability Gramian, i.e. the solution of the Lyapunov equation

    F W_c + W_c F^T + G G^T = 0 .   (19)

This implies that

    P_{[−k:q−k]} = O_{[−k:q−k]} W_c O_{[−k:q−k]}^T ,   (20)

where

    O_{[−k:q−k]} = [ H F^{−k} ; ... ; H ; ... ; H F^{q−k} ] .   (21)
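Relations (18)-(21) reduce the frequency-domain integrals (16) to one Lyapunov equation plus matrix powers. A minimal sketch with an arbitrary example system (not from the paper):

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

def build_P(F, G, H, q, k):
    """P_[-k:q-k] = O Wc O^T, with O stacking H F^i for i = -k, ..., q-k."""
    Wc = solve_continuous_lyapunov(F, -G @ G.T)   # (19): F Wc + Wc F^T + G G^T = 0
    Finv = np.linalg.inv(F)
    O = np.vstack([H @ np.linalg.matrix_power(F if i >= 0 else Finv, abs(i))
                   for i in range(-k, q - k + 1)])  # (21)
    return O @ Wc @ O.T                             # (20)

# Arbitrary stable SISO example: W_hat(s) = 1/((s+1)(s+2)).
F = np.array([[-1.0, 1.0], [0.0, -2.0]])
G = np.array([[0.0], [1.0]])
H = np.array([[1.0, 0.0]])
P = build_P(F, G, H, q=2, k=1)   # blocks P_il = H F^i Wc (F^l)^T H^T, cf. (18)
```

By construction (20), the result is symmetric positive semi-definite, and its central entry P_{0,0} = H W_c H^T is the impulse-response energy of the system.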

By exploiting the properties of Laplace transforms and taking account of the previous definitions, the equation error (10) can be written as:

    E_l(s) = Σ_{i=0}^{k} A_i Ŵ_{−(k−i)}(s) + Σ_{i=k+1}^{q} A_i Ŵ_{i−k}(s)
           + Σ_{j=1}^{k} (1/s^j) ( Σ_{i=0}^{k−j} A_i M_{k−i−j} − B_{k−j} )
           + Σ_{j=0}^{q−k−1} s^j ( Σ_{i=k+1+j}^{q} A_i M_{j−i+k} − B_{k+j} ) ,   (22)

where A_q = I. Functional J is finite if and only if the last two addenda in (22) vanish, so that (11) can be expressed as:

    J = tr[ A P_{[−k:q−k]} A^T ] ,   (23)

where A = [A_0, A_1, ..., A_{q−1}, I]. Since matrix P_{[−k:q−k]} is positive semi-definite, J is a convex functional. On the basis of the preceding observations we can prove the following theorem, which supplies a set of necessary and sufficient conditions for optimality.

Theorem 1. For any fixed k = 0, 1, ..., q, the reduced-order model (6)-(8) minimizes functional (11) if and only if:

i) matrices B_ℓ, ℓ = 0, 1, ..., q−1, satisfy the following equations:

    Σ_{i=0}^{k−j} A_i M_{k−i−j} − B_{k−j} = 0 ,   j = 1, 2, ..., k ,       (24)
    Σ_{i=k+1+j}^{q} A_i M_{j−i+k} − B_{k+j} = 0 ,   j = 0, 1, ..., q−k−1 ;  (25)

ii) matrices A_ℓ, ℓ = 0, 1, ..., q−1, satisfy the following matrix equation:

    [A_0, A_1, ..., A_{q−1}, I] P_{[−k:q−k]} = Γ_q [0, 0, ..., 0, I]   (26)

for some matrix Γ_q ∈ R^{m_o×m_o}. Condition (26) can be obtained by applying the Lagrange multipliers approach to (23).

3. PROPERTIES OF THE REDUCED-ORDER MODELS

The following properties characterize the approximants obtained according to the generalized equation-error approach.

Property 1. For any asymptotically stable, controllable and observable linear time-invariant system (1)-(2) and fixed k, there exists a stable reduced-order model (4)-(5) minimizing functional J.

Property 2. For any asymptotically stable, controllable and observable linear time-invariant system (1)-(2) and fixed k, a reduced-order model (4)-(5) obtained by minimizing the functional J retains the q−k Markov parameters M_{−1}, M_{−2}, ..., M_{−q+k} and the k coefficients M_0, M_1, ..., M_{k−1} of the MacLaurin expansion.

Property 3. For any asymptotically stable, controllable and observable linear time-invariant system (1)-(2) and fixed k, there exists, up to a state-coordinate transformation, a unique minimal realization of the reduced-order model minimizing J.

In the case of SISO systems, it is rather easy to prove the stability and uniqueness of the reduced-order models obtained by minimizing J because the corresponding matrix P_{[−k:q−k]} is positive definite. In the MIMO case, instead, this matrix may be singular, so that (26) admits infinitely many solutions. However, by using arguments similar to those in (Inouye, 1993), it is possible to prove (Krajewski, 2006) that all these solutions correspond (up to a state-coordinate transformation) to the same minimal realization of the reduced-order model.

According to (Krajewski et al., 1994), matrix P_{[−k:q−k]} can be decomposed as

    P_{[−k:q−k]} = N_{[−k:q−k]} + E_{[−k:q−k]} ,   (27)

where N_{[−k:q−k]} and E_{[−k:q−k]} are defined as follows. The block entries N_ij of N_{[−k:q−k]} depend only on the parameters M_i according to:

i) if j − i is even or zero, then

    N_ij = Σ_{l=1}^{(j−i)/2} (−1)^l M̄_{−(i+l)} M̄_{−(j−l+1)}^T ;

ii) if j − i is odd, then

    N_ij = Σ_{l=1}^{(j−i−1)/2} (−1)^l M̄_{−(i+l)} M̄_{−(j−l+1)}^T
         + (−1)^{(j−i+1)/2} (1/2) M̄_{−(i+j+1)/2} M̄_{−(i+j+1)/2}^T ,   (28)

where

    M̄_l = M_l    if l < 0 ,
    M̄_l = −M_l   if l ≥ 0 .   (29)

Matrix E_{[−k:q−k]} is the symmetric Hankel-like block matrix:

    E_{[−k:q−k]} =
      [   ⋱          ⋱           ⋱          ⋱
        ...   E_{−2,−1}  −E_{−1,−1}  −E_{−1,0}    E_{0,0}  ...
        ...   E_{−1,−1}   E_{−1,0}   −E_{0,0}    −E_{0,1}  ...
        ...  −E_{−1,0}    E_{0,0}     E_{0,1}     E_{0,2}  ...
        ...  −E_{0,0}    −E_{0,1}     E_{1,1}    −E_{1,1}  ...
        ...   E_{0,1}     E_{0,2}    −E_{1,1}     E_{2,2}  ...
              ⋱          ⋱           ⋱          ⋱      ] ,

whose entries are defined next. If we denote by w_{hl}^i(t) the element of position (h, l) in W_i(t), then:

i) the m_o·(q+1) diagonal entries e_{hh}^{ii} of E_{ii}, i = −k, ..., 0, ..., q−k, are given by the impulse-response energies:

    e_{hh}^{ii} = Σ_{j=1}^{m_o} ∫_0^∞ (w_{hj}^i(t))^2 dt ;

ii) the (1/2)·m_o·(m_o−1)·(q+1) off-diagonal entries e_{hl}^{ii} of E_{ii}, i = −k, −k+1, ..., 0, ..., q−k, are given by:

    e_{hl}^{ii} = Σ_{j=1}^{m_o} ∫_0^∞ w_{hj}^i(t) w_{lj}^i(t) dt ;

iii) the (1/2)·m_o·(m_o−1)·(q+1) off-diagonal entries e_{hl}^{i,i+1} of the antisymmetric matrix E_{i,i+1}, i = −k, −k+1, ..., 0, ..., q−k, are given by:

    e_{hl}^{i,i+1} = Σ_{j=1}^{m_o} ∫_0^∞ w_{hj}^i(t) w_{lj}^{i+1}(t) dt .

Now we can state a fourth property of the approximants.

Property 4. For any asymptotically stable, controllable and observable linear time-invariant system (1)-(2) and fixed k, a reduced-order model (4)-(5) obtained by minimizing the functional J is uniquely determined by the q matrix coefficients M_{−q+k}, ..., M_{−1}, M_0, M_1, ..., M_{k−1} and by the m_o^2·(q+1) entries e_{hh}^{ii}, e_{hl}^{ii}, e_{hl}^{i,i+1} of E_{[−k:q−k]}, i = −k, −k+1, ..., 0, ..., q−k, h, l = 1, ..., m_o.

Properties 2 and 4 clearly imply that the approaches based on the retention of Markov parameters and energy indices can be considered as particular cases of the weighted equation-error approach. However, the latter approach leads to simpler reduction algorithms, as shown in the next section.

4. REDUCTION PROCEDURE

The necessary and sufficient conditions provided by Theorem 1 can be exploited to find the left MFD form (6)-(8) of a reduced-order model. The main steps of the resulting algorithm for fixed q and k are listed below.

4.1 Algorithm

1) Solve the Lyapunov equation: F W_c + W_c F^T + G G^T = 0.

2) Find the Markov parameters M_{−i} = H F^{i−1} G, i = 1, ..., q−k, and the coefficients of the MacLaurin expansion M_i = −H F^{−(i+1)} G, i = 0, ..., k.

3) Compute O_{[−k:q−k]} = [ H F^{−k} ; ... ; H ; ... ; H F^{q−k} ].

4) Form the matrix P_{[−k:q−k]} = O_{[−k:q−k]} W_c O_{[−k:q−k]}^T.

5) Find the matrix coefficients A_0, A_1, ..., A_{q−1} in (7) from

    [A_0, A_1, ..., A_{q−1}] P_{[−k:q−k−1]} = − [P_{−k,q−k}, P_{−k+1,q−k}, ..., P_{q−k−1,q−k}]^T ,

where P_{[−k:q−k−1]} is obtained from P_{[−k:q−k]} by deleting its last block row and column.

6) Compute the matrix coefficients B_0, B_1, ..., B_{q−1} in (8) as:

    B_ℓ = Σ_{i=0}^{ℓ} A_i M_{ℓ−i} ,     ℓ = 0, 1, ..., k−1 ,
    B_ℓ = Σ_{i=ℓ+1}^{q} A_i M_{ℓ−i} ,   ℓ = k, k+1, ..., q−1 .
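The six steps above can be sketched in a few lines for the SISO case (m_o = m_i = 1), where all blocks are scalars. The example system and the choice q = 2, k = 1 are arbitrary; this is an illustrative sketch, not the authors' implementation:

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

def weighted_eq_error_reduce(F, G, H, q, k):
    """Sketch of steps 1)-6) for a SISO system (mo = mi = 1).
    Returns ([A0, ..., A_{q-1}, 1], [B0, ..., B_{q-1}])."""
    # 1) Controllability Gramian: F Wc + Wc F^T + G G^T = 0.
    Wc = solve_continuous_lyapunov(F, -G @ G.T)
    # 2) Markov parameters M_{-i} = H F^(i-1) G and
    #    MacLaurin coefficients M_i = -H F^(-(i+1)) G.
    Finv = np.linalg.inv(F)
    M = {-i: (H @ np.linalg.matrix_power(F, i - 1) @ G).item()
         for i in range(1, q - k + 1)}
    M.update({i: (-H @ np.linalg.matrix_power(Finv, i + 1) @ G).item()
              for i in range(k + 1)})
    # 3)-4) Stack O_[-k:q-k] and form P = O Wc O^T.
    O = np.vstack([H @ np.linalg.matrix_power(F if i >= 0 else Finv, abs(i))
                   for i in range(-k, q - k + 1)])
    P = O @ Wc @ O.T
    # 5) [A0..A_{q-1}] P' = -(last row of P without its last entry);
    #    P is symmetric, so a plain solve suffices in the scalar case.
    A = list(np.linalg.solve(P[:q, :q], -P[q, :q])) + [1.0]
    # 6) Numerator coefficients from (24)-(25).
    B = [sum(A[i] * M[l - i] for i in range(l + 1)) if l < k
         else sum(A[i] * M[l - i] for i in range(l + 1, q + 1))
         for l in range(q)]
    return A, B

# Arbitrary stable 4th-order example, reduced with q = 2, k = 1.
F = np.diag([-1.0, -2.0, -3.0, -4.0])
G = np.ones((4, 1))
H = np.ones((1, 4))
A, B = weighted_eq_error_reduce(F, G, H, q=2, k=1)
```

Consistently with Property 2, the reduced model retains q − k = 1 Markov parameter (here B_1 = M_{−1} = HG) and k = 1 time moment, and by Property 1 its denominator s^2 + A_1 s + A_0 is stable.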

For any q, a family of reduced-order models can be obtained by changing k. Larger values of k lead to better approximations of the steady-state responses. However, the choice of the model to be adopted can also be based on a comparison of the entire time responses. To this purpose, reference can be made to the L2 norm of the output error W(t) − W_f(t) or to a visual inspection of the responses. An alternative algorithm that does not need the direct computation of any expansion coefficient of (12) and (13), but requires the singular value decomposition (SVD) of O_{[−k:q−k]}, is presented in (Krajewski, 2006).


Fig. 1. Impulse responses of the full-order model (solid line), the 10th-order model obtained according to the TBR method (dotted line), and the 10th-order model obtained according to the weighted equation-error approach with q = 10 and k = 6 (dashed line).

4.2 Benchmark example

Several SISO and MIMO systems have been considered to verify the efficiency of the reduction approach previously illustrated. In all cases the method based on the weighted equation error has performed well. Here we consider a benchmark example whose details can be found at http://www.icm.tu-bs.de/NICONET. Specifically, this example refers to the 48th-order model of a building of the Los Angeles University Hospital. The same example has been considered in (Antoulas et al., 2000) to compare the reduction methods based, respectively, on truncated balanced realizations (TBR), optimal Hankel-norm approximation and moment matching (Padé technique): overall, the TBR method has turned out to be the best. The analysis of the Hankel singular values of the full-order model shows that a 10th-order simplified model should lead to an acceptable approximation. We have applied the above algorithm with q = 10 and k = 0, 1, ..., 10, thus obtaining a family of 11 reduced-order models. The value of the respective L2-norms ranges between 5.7·10^−3 and 1.4·10^−3. The approximant with the largest error norm is obtained for k = 0; in this case, only Markov parameters are retained (together with a suitable number of energy indices). The approximant with the smallest error norm is obtained for k = 6: the corresponding reduced-order model retains four Markov parameters and six time moments. The L2-norm of the 10th-order model

obtained with the TBR method is almost the same as that of the model corresponding to k = 6. However, the latter preserves a number of coefficients of both the asymptotic and the MacLaurin expansions, which ensures a better approximation at both low and high frequencies. The impulse responses of the full-order model, of the 10th-order model obtained according to the TBR method, and of the 10th-order model obtained using the weighted equation-error approach with q = 10 and k = 6 are shown in Fig. 1. The step responses of the same models are shown in Fig. 2.
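The L2 error norms quoted above can be computed without simulating time responses: stacking the full and reduced models into an augmented error system, the squared L2 norm of the impulse-response error equals tr(H_e W_ce H_e^T), with W_ce the controllability Gramian of the augmented system. A sketch with arbitrary example data (not the benchmark model):

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

def impulse_error_l2(F, G, H, Fr, Gr, Hr):
    """||W - W_f||_{L2}: stack both systems into an error system and
    read the squared norm off its controllability Gramian."""
    n, nr = F.shape[0], Fr.shape[0]
    Fe = np.block([[F, np.zeros((n, nr))],
                   [np.zeros((nr, n)), Fr]])
    Ge = np.vstack([G, Gr])
    He = np.hstack([H, -Hr])
    Wce = solve_continuous_lyapunov(Fe, -Ge @ Ge.T)
    # Guard against a tiny negative trace due to rounding.
    return float(np.sqrt(max(np.trace(He @ Wce @ He.T), 0.0)))

# Example: 2nd-order system W_hat(s) = 1/(s+1) + 1/(s+2).
F = np.array([[-1.0, 0.0], [0.0, -2.0]])
G = np.array([[1.0], [1.0]])
H = np.array([[1.0, 1.0]])
err_same = impulse_error_l2(F, G, H, F, G, H)    # model vs itself: zero error
# 1st-order truncation keeping only the slow mode 1/(s+1).
err_trunc = impulse_error_l2(F, G, H, F[:1, :1], G[:1], H[:, :1])
```

For the truncated model the error impulse response is e^{−2t}, whose L2 norm is exactly 1/2, which the Gramian-based formula reproduces.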

5. CONCLUSIONS

A set of necessary and sufficient conditions for the optimality of the reduced-order models minimizing the L2 norm of the weighted equation error (10) has been provided, and the main properties of the optimal models have been pointed out. In particular, it turns out that the reduction approaches based on the retention of Markov parameters and energy indices are special cases of the weighted equation-error approach. From the optimality conditions, an efficient algorithm has been derived to find models that match different numbers of Markov parameters and time moments. A benchmark example has been worked out to show the performance of the suggested procedure.


Fig. 2. Step responses of the full-order model (solid line), the 10th-order model obtained according to the TBR method (dotted line), and the 10th-order model obtained according to the weighted equation-error approach with q = 10 and k = 6 (dashed line).

REFERENCES

Antoulas, A. C., D. C. Sorensen and S. Gugercin (2000). A survey of model reduction methods for large-scale systems. Technical Report TR0038. Dept. of Comp. and Applied Math., Rice University, Houston, Texas 77251-1892.

Bierman, G. J. (1973). Weighted least squares stationary approximation to linear systems. IEEE Trans. Aut. Control AC-17, 2126–2129.

de Villemagne, Ch. and R. E. Skelton (1980). Model reduction using a projection formulation. Int. J. Control 46(6), 1121–1127.

Eitelberg, E. (1981). Model reduction by minimizing the weighted equation error. Int. J. Control 34, 1113–1123.

Ferrante, A., W. Krajewski, A. Lepschy and U. Viaro (1999). Convergent algorithm for L2 model reduction. Automatica 35(1), 75–79.

Inouye, Y. (1993). Approximation of multivariable linear systems with impulse response and autocorrelation sequences. Automatica 19(3), 265–277.

Kailath, T. (1980). Linear Systems. Prentice Hall, Inc., Englewood Cliffs, N.J.

Krajewski, W. (2006). Selected Methods for Optimal Model Reduction (in Polish). Akademicka Oficyna Wydawnicza EXIT, Warsaw.

Krajewski, W., A. Lepschy and U. Viaro (1994). Reduction of linear continuous-time multivariable systems by matching first- and second-order information. IEEE Trans. Aut. Control 39(5), 2126–2129.

Krajewski, W., A. Lepschy and U. Viaro (1995a). Model reduction by matching Markov parameters, time moments, and impulse-response energies. IEEE Trans. Aut. Control 40(5), 949–953.

Krajewski, W., A. Lepschy and U. Viaro (2004). On MIMO model reduction by the equation error approach. In: Proceedings of the 10th IEEE International Conference on Methods and Models in Automation and Robotics (S. Domek and R. Kaszynski, Eds.). Wydawnictwa Politechniki Szczecinskiej, Miedzyzdroje, Poland, pp. 291–296.

Krajewski, W., A. Lepschy, M. Redivo-Zaglia and U. Viaro (1995b). A program for solving the L2 reduced-order model problem with fixed denominator degree. Numerical Algorithms 9, 355–377.

Lepschy, A. and U. Viaro (1984). Model reduction using the output equation error. Int. J. Systems Sci. 15(9), 1011–1021.

Regalia, P. A. (1995). Adaptive IIR Filtering in Signal Processing and Control. Marcel Dekker, Inc., New York.

Yousuff, A., D. A. Wagie and R. E. Skelton (1985). Linear system approximation via covariance equivalent realizations. J. Math. Anal. Appl. 106, 91–115.

Zigic, D., L. T. Watson, E. G. Collins and D. S. Bernstein (1993). Homotopy approaches to the H2 reduced order model problem. Journal of Mathematical Systems, Estimation, and Control 3(2), 173–205.