Applied Mathematics and Computation 191 (2007) 21–30
www.elsevier.com/locate/amc

Dimensionally reduced Krylov subspace model reduction for large scale systems

M.M. Awais a,*, Shafay Shamail a, Nisar Ahmed b

a Computer Science Department, LUMS, Opposite Sector U, DHA, Lahore, Pakistan
b Faculty of Electronic Engineering, GIK Institute of Engineering Sciences and Technology, Topi, 23460 Swabi, Pakistan

Abstract

This paper introduces a new mathematical approach that combines the concepts of dimensional reduction and Krylov subspace techniques for use in the model reduction problem for large-scale systems. Krylov subspace methods for model reduction use the Arnoldi algorithm to construct the bases for the controllability, observability, and oblique subspaces of a state-space realization. The newly developed algorithm combines principal component analysis with the Krylov oblique projection model reduction technique to provide a computationally efficient and inexpensive model reduction method. To demonstrate the effectiveness of the proposed hybrid scheme, residual error, forward error, and stability response analyses have been performed for various randomly generated large-scale systems.
© 2006 Elsevier Inc. All rights reserved.

Keywords: Krylov subspace; Oblique projection; Principal component analysis; State space models

1. Model reduction: Introduction

In control systems, a high-order, linear, time-invariant system f(s) has a state-space representation in the time domain given as

    \dot{x}(t) = A x(t) + B u(t),        (1)
    y(t) = C x(t) + D u(t),              (2)

where A \in R^{n \times n}, B \in R^{n \times p}, C \in R^{q \times n}, and D \in R^{q \times p}; x(t) is the system state vector of dimension n, and u(t) and y(t) are vector functions representing the input and the output of the system, respectively. The associated transfer function of the system is given by

    f(s) = C(sI - A)^{-1} B + D.         (3)
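Eq. (3) can be evaluated pointwise directly from the state-space matrices by solving one linear system per frequency. A minimal NumPy sketch on a small illustrative system (the system data here is invented for demonstration, not taken from the paper):

```python
import numpy as np

def transfer_function(A, B, C, D, s):
    """Evaluate f(s) = C (sI - A)^{-1} B + D at a complex frequency s."""
    n = A.shape[0]
    return C @ np.linalg.solve(s * np.eye(n) - A, B) + D

# Small illustrative example: a stable 2-state SISO system.
A = np.array([[-1.0, 0.0], [0.0, -2.0]])
B = np.array([[1.0], [1.0]])
C = np.array([[1.0, 1.0]])
D = np.array([[0.0]])

# DC gain f(0) = C (-A)^{-1} B = 1/1 + 1/2 = 1.5
print(transfer_function(A, B, C, D, 0.0))  # [[1.5]]
```

Solving with `np.linalg.solve` rather than forming (sI - A)^{-1} explicitly is the standard numerically sound choice.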

The high-order model can be reduced, with appropriate accuracy, less storage space, and minimum computational effort, to a model f_r(s) of order r (where r \ll n) having a state-space representation in the time domain as

* Corresponding author. E-mail address: [email protected] (M.M. Awais).

0096-3003/$ - see front matter © 2006 Elsevier Inc. All rights reserved. doi:10.1016/j.amc.2006.09.056


    \dot{x}_r(t) = A_r x_r(t) + B_r u(t),        (4)
    y_r(t) = C_r x_r(t) + D_r u(t),              (5)

where A_r \in R^{r \times r}, B_r \in R^{r \times p}, C_r \in R^{q \times r}, and D_r \in R^{q \times p}; x_r(t) is the state vector of dimension r, and u(t) and y_r(t) are vector functions representing the input and the output of the system, respectively. The associated transfer function of such a system is given by

    f_r(s) = C_r(sI_r - A_r)^{-1} B_r + D_r.     (6)

The above-mentioned model reduction problem can generally be posed as follows: given an nth order model f(s), find a lower order model, say of rth order (r \ll n), f_r(s), such that f(s) and f_r(s) are close in some sense (e.g. \|f(s) - f_r(s)\|_\infty is less than a predefined tolerance [6]). A satisfactory model reduction algorithm considerably reduces the plant model to a size that is sufficiently small to facilitate the analysis and design of control systems, yet at the same time is adequately accurate, so that the salient features of the original system model are retained. To obtain a lower order model, approximation techniques such as Balanced Model Truncation [2,4,5], Optimal Hankel Norm Approximation [2,3], Coprime Factor Model Reduction [6], and Singular Perturbation Approximation [2,7] can be used. Most of the standard model reduction algorithms require the direct solution of Lyapunov equations, which is very expensive in computational terms [10]. However, alternative reduction methods have also been suggested in the literature, such as Krylov subspace methods [9,10], which provide a less computationally expensive approach to model reduction. The present paper suggests a further reduction in the computational effort required for reducing large scale models by introducing a principal component analysis step into the standard Krylov subspace approach. We start by introducing the standard Krylov subspace methods (based on the discussion in Refs. [1,8–10]) in the following section.

2. Defining Krylov subspace

From Eq. (3), let us approximate

    f_B(s) := (sI - A)^{-1} B.                   (7)

By expanding (sI - A)^{-1} around s = \infty, we get

    (sI - A)^{-1} = \frac{1}{s}\left(I - \frac{A}{s}\right)^{-1}
                  = \frac{1}{s}\left(I + \frac{A}{s} + \frac{A^2}{s^2} + \cdots + \frac{A^{n-1}}{s^{n-1}} + \frac{A^n}{s^n} + \cdots\right).

The same expansion can also be obtained from the Cayley–Hamilton theorem [11] as

    (sI - A)^{-1} = a_0(s) I_n + a_1(s) A + \cdots + a_{n-2}(s) A^{n-2} + a_{n-1}(s) A^{n-1};

substituting this value of (sI - A)^{-1} in (7), we get

    f_B(s) = [B \;\; AB \;\; \ldots \;\; A^{n-2}B \;\; A^{n-1}B]
             \begin{bmatrix} a_0(s) \\ a_1(s) \\ \vdots \\ a_{n-1}(s) \end{bmatrix}.        (8)

The scalar coefficients a_0(s), a_1(s), \ldots, a_{n-1}(s) can be evaluated only if the characteristic polynomial \det(sI - A) is evaluated. This is a very expensive calculation, as A is a very large matrix. To avoid it, we can state that

    f_B(s) = [B \;\; AB \;\; \ldots \;\; A^{n-2}B \;\; A^{n-1}B] F(s),        (9)

for some vector F(s). Let us define the span of these vectors as a Krylov controllability subspace of order n:

    \kappa_n(A, B) := \mathrm{span}[B \;\; AB \;\; A^2B \;\; \ldots \;\; A^{n-1}B].        (10)
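For small systems the Krylov matrix of Eq. (10) can be formed explicitly; a brief NumPy sketch (with invented random data) checking that for a generic, controllable pair (A, b) the subspace has full dimension n:

```python
import numpy as np

rng = np.random.default_rng(5)
n = 6
A = rng.standard_normal((n, n))
b = rng.standard_normal((n, 1))

# kappa_n(A, b) = span[b, Ab, A^2 b, ..., A^{n-1} b]  (cf. Eq. (10))
K = np.hstack([np.linalg.matrix_power(A, j) @ b for j in range(n)])
print(np.linalg.matrix_rank(K))  # n = 6 for a generic (controllable) pair
```

For large n this explicit construction is numerically useless (the columns become nearly parallel), which is exactly why the Arnoldi process of the next paragraphs is used instead.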


The above subspace can be used to approximate f_B(s), where the original subspace is truncated to the mth level such that m \ll n, f_{B,m}(s) \in \kappa_m(A, B), and F_m(s) is a function to be evaluated:

    f_{B,m}(s) = [B \;\; AB \;\; \ldots \;\; A^{m-2}B \;\; A^{m-1}B] F_m(s),
    \kappa_m(A, B) := \mathrm{span}[B \;\; AB \;\; A^2B \;\; \ldots \;\; A^{m-2}B \;\; A^{m-1}B].

The Arnoldi algorithm is an orthogonal projection method for calculating an orthonormal basis V_m for the Krylov subspace \kappa_m. The procedure was first introduced in 1951 as a means of reducing a dense matrix to Hessenberg form [12–15]. The basic outline of the Arnoldi algorithm is set out in Refs. [1,12]. The governing equations of the overall Arnoldi process for the Krylov controllability (Eq. (11)) and observability (Eq. (12)) subspaces are given below:

    B = V_m l_m,   \quad A V_m = V_m H_m + \tilde{V}_m \tilde{H}_m,        (11)

    C^T = W_m k_m^T,   \quad A^T W_m = W_m G_m^T + \tilde{W}_m \tilde{G}_m^T.        (12)

3. Model reduction algorithms
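The Arnoldi recurrence above can be sketched numerically. The following is a minimal single-vector Arnoldi iteration (assuming p = 1, i.e. a single input column b, with invented random test data), which verifies the governing relation A V_m = V_m H_m + \tilde{V}_m \tilde{H}_m of Eq. (11):

```python
import numpy as np

def arnoldi(A, b, m):
    """Run m steps of Arnoldi on (A, b).

    Returns V (n x m, orthonormal columns), H (m x m upper Hessenberg),
    and the residual pair (v_next, h_next) such that
        A @ V = V @ H + h_next * outer(v_next, e_m).
    """
    n = A.shape[0]
    V = np.zeros((n, m))
    H = np.zeros((m, m))
    V[:, 0] = b / np.linalg.norm(b)
    for j in range(m):
        w = A @ V[:, j]
        for i in range(j + 1):            # modified Gram-Schmidt orthogonalisation
            H[i, j] = V[:, i] @ w
            w = w - H[i, j] * V[:, i]
        h_next = np.linalg.norm(w)
        if j + 1 < m:
            H[j + 1, j] = h_next
            V[:, j + 1] = w / h_next
    return V, H, w / h_next, h_next

rng = np.random.default_rng(0)
A = rng.standard_normal((50, 50))
b = rng.standard_normal(50)
V, H, v_next, h_next = arnoldi(A, b, 8)

# Arnoldi relation: A V_m = V_m H_m + Vtilde_m Htilde_m  (cf. Eq. (11))
e_m = np.zeros(8); e_m[-1] = 1.0
print(np.allclose(A @ V, V @ H + h_next * np.outer(v_next, e_m)))  # True
```

In this single-input case \tilde{V}_m is the single next basis vector and \tilde{H}_m = h_{m+1,m} e_m^T, which is why the residual term collapses to a rank-one update.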

In the last section the general Krylov subspace reduction procedure was discussed. In this section, the Krylov method is applied to develop model reduction algorithms using controllability, observability, and oblique subspaces (based on Refs. [1,9,10]).

3.1. Krylov controllability/observability subspace method

For the development of the Krylov controllability subspace based model reduction algorithm, f_B(s) from Eq. (3) can be approximated by

    f_{B,m}(s) = V_m F_{B,m}(s).

The overall transfer function in terms of the Krylov controllability subspace becomes

    f_m(s) = C V_m F_{B,m}(s).                   (13)

The residual error can be calculated as

    R_m(s) = B - (sI - A) f_{B,m}(s) = B - (s V_m - A V_m) F_{B,m}(s).

Using Eq. (11) and H_m = V_m^T A V_m in the above expression, and simplifying further, we get

    R_m(s) = [V_m \;\; \tilde{V}_m]
             \begin{bmatrix} l_m - (sI - H_m) F_{B,m}(s) \\ \tilde{H}_m F_{B,m}(s) \end{bmatrix}.        (14)

Applying the Galerkin condition V_m^T R_m(s) = 0 to the residual error expression above changes it to

    [I \;\; 0] \begin{bmatrix} l_m - (sI - H_m) F_{B,m}(s) \\ \tilde{H}_m F_{B,m}(s) \end{bmatrix} = 0,

or

    F_{B,m}(s) = (sI - H_m)^{-1} l_m.

Substituting into (14), the residual error expression reduces to

    \|R_m(s)\|_\infty = \|\tilde{V}_m \tilde{H}_m F_{B,m}(s)\|_\infty = \|\tilde{H}_m F_{B,m}(s)\|_\infty
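The computable residual norm is attractive because it needs only the small Arnoldi quantities, not the full matrix A. A numerical check on random test data (a single-input sketch, not the paper's implementation): the directly evaluated residual ||B - (sI - A) f_{B,m}(s)|| must coincide with the Arnoldi-side formula, which in the single-input case collapses to h_{m+1,m} |F_{B,m}(s)_m|:

```python
import numpy as np

def arnoldi(A, b, m):
    """m steps of Arnoldi: returns basis V (n x m), Hessenberg H_m (m x m),
    and h_next, the single nonzero entry of Htilde_m in the single-input case."""
    n = A.shape[0]
    V = np.zeros((n, m))
    H = np.zeros((m, m))
    V[:, 0] = b / np.linalg.norm(b)
    for j in range(m):
        w = A @ V[:, j]
        for i in range(j + 1):
            H[i, j] = V[:, i] @ w
            w = w - H[i, j] * V[:, i]
        h_next = np.linalg.norm(w)
        if j + 1 < m:
            H[j + 1, j] = h_next
            V[:, j + 1] = w / h_next
    return V, H, h_next

rng = np.random.default_rng(2)
n, m = 80, 12
A = rng.standard_normal((n, n))
b = rng.standard_normal(n)
c = rng.standard_normal(n)

V, H, h_next = arnoldi(A, b, m)
l_m = np.zeros(m); l_m[0] = np.linalg.norm(b)   # b = V_m l_m

s = 0.5 + 1.0j
F = np.linalg.solve(s * np.eye(m) - H, l_m)     # F_{B,m}(s) = (sI - H_m)^{-1} l_m
f_m = c @ V @ F                                  # reduced transfer function value (Eq. (13))

# Directly evaluated residual vs. the computable Arnoldi-side formula
r_direct = np.linalg.norm(b - (s * np.eye(n) - A) @ (V @ F))
r_formula = h_next * abs(F[-1])
print(np.isclose(r_direct, r_formula))  # True
```

This identity is what lets step 7 of the algorithms below test convergence cheaply.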


and the mth order realization can be constructed as the triple (H_m, l_m, C V_m), i.e.

    f_m(s) = C V_m (sI - H_m)^{-1} l_m.

Similarly, the observability subspace can be used to construct the reduced order model:

    f_m(s) = k_m (sI - G_m)^{-1} W_m^T B.

The computable residual error for the observability subspace can be given as below (following the controllability method):

    \|R_m(s)\|_\infty = \|F_{C,m}(s) \tilde{G}_m \tilde{W}_m^T\|_\infty = \|F_{C,m}(s) \tilde{G}_m\|_\infty.

3.2. Model reduction via oblique projection [1]

The above-mentioned methods can be combined into one subspace known as the oblique subspace. The combination is implemented through the introduction of T_m = W_m^T V_m. The details of the implementation are explained below. From the controllability and observability methods, we have f(s) = C f_B(s) = f_C(s) B, and the associated residual errors are

    R_{B,m}(s) = B - (sI - A) f_{B,m}(s),   \quad   R_{C,m}(s) = C - f_{C,m}(s)(sI - A).        (15)

Approximate models are constructed that satisfy the following two Galerkin-type conditions [9,10]:

Controllability:
    R_{B,m}(s) \perp L_m(A^T, C^T), \; i.e. \; W_m^T [B - (sI - A) V_m h_m(s)] = 0 \quad \forall s.        (16a)

Observability:
    R_{C,m}(s) \perp \kappa_m(A, B), \; i.e. \; [C - g_m(s) W_m^T (sI - A)] V_m = 0 \quad \forall s.        (16b)

For convenience we define T_m = W_m^T V_m, which leads to the final reduced order models based on (16a) and (16b), respectively:

    H_{mm} = T_m^{-1} W_m^T A V_m = H_m + T_m^{-1} W_m^T \tilde{V}_m \tilde{H}_m,
    G_{mm} = W_m^T A V_m T_m^{-1} = G_m + \tilde{G}_m \tilde{W}_m^T V_m T_m^{-1}.

Suppose that m steps of the Arnoldi process have been performed and T_m = W_m^T V_m is non-singular. Then the Galerkin conditions in (16) are satisfied if and only if h_m(s) = (sI - H_{mm})^{-1} l_m and g_m(s) = k_m (sI - G_{mm})^{-1}. Under these conditions, the residual error infinity-norms are

    \|R_{B,m}(s)\|_\infty = \left\| \begin{bmatrix} T_m^{-1} W_m^T \tilde{V}_m \\ I \end{bmatrix} \tilde{H}_m h_m(s) \right\|_\infty,        (17)

    \|R_{C,m}(s)\|_\infty = \left\| g_m(s) \tilde{G}_m [\tilde{W}_m^T V_m T_m^{-1} \;\; I] \right\|_\infty.        (18)

The approximations

    f_{m,1}(s) = C V_m h_m(s) = C V_m (sI - H_{mm})^{-1} l_m
       \stackrel{s}{=} \begin{bmatrix} T_m^{-1} W_m^T A V_m & T_m^{-1} W_m^T B \\ C V_m & 0 \end{bmatrix}
       =: \begin{bmatrix} H_{mm} & l_m \\ k_m T_m & 0 \end{bmatrix}
       =: \begin{bmatrix} A_m & B_m \\ C_m & 0 \end{bmatrix},        (19)

    f_{m,2}(s) = g_m(s) W_m^T B = k_m (sI - G_{mm})^{-1} W_m^T B
       \stackrel{s}{=} \begin{bmatrix} W_m^T A V_m T_m^{-1} & W_m^T B \\ C V_m T_m^{-1} & 0 \end{bmatrix}
       =: \begin{bmatrix} G_{mm} & T_m l_m \\ k_m & 0 \end{bmatrix}
       =: \begin{bmatrix} \hat{A}_m & \hat{B}_m \\ \hat{C}_m & 0 \end{bmatrix}        (20)


are different realisations of the same transfer function. Based on the above discussion, an algorithm for model reduction was proposed in [1]; it is presented below for reference. This paper is an attempt to further reduce the computational cost of Algorithm 1 without compromising the performance of the reduced order system. The next section outlines a hybrid approach to realise a new algorithm for efficient large scale model reduction.

Algorithm 1 (Oblique Projection Method via Arnoldi Process).

(1) Input Data: Define the nth order state space data f(s) = (A, B, C, 0) and set the reduced order m.
(2) Start: specify tolerances \gamma > 0 and \epsilon > 0.
(3) Perform m steps of the Arnoldi process with (A, B) to produce H_m, V_m, \tilde{V}_m, \tilde{H}_m and l_m.
(4) Perform m steps of the Arnoldi process with (A^T, C^T) to produce G_m, \tilde{G}_m, W_m, \tilde{W}_m and k_m.
(5) Form the reduced order model from either (19) or (20).
(6) Reduce the model further to the rth level through balanced truncation.
(7) Test the errors in (17) and (18); if either (17) > \gamma or (18) > \epsilon, increase m and continue the Arnoldi process.
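As a sanity check on Eqs. (19) and (20), the sketch below (random SISO test data, not from the paper; the Arnoldi helper is a simplified basis-only version) builds both oblique realisations and verifies that they evaluate to the same transfer function value:

```python
import numpy as np

def arnoldi_basis(A, b, m):
    """Orthonormal basis for kappa_m(A, b) via Arnoldi (modified Gram-Schmidt)."""
    n = A.shape[0]
    V = np.zeros((n, m))
    V[:, 0] = b / np.linalg.norm(b)
    for j in range(m - 1):
        w = A @ V[:, j]
        for i in range(j + 1):
            w = w - (V[:, i] @ w) * V[:, i]
        V[:, j + 1] = w / np.linalg.norm(w)
    return V

rng = np.random.default_rng(3)
n, m = 60, 10
A = rng.standard_normal((n, n))
B = rng.standard_normal((n, 1))
C = rng.standard_normal((1, n))

V = arnoldi_basis(A, B.ravel(), m)        # basis for kappa_m(A, B)
W = arnoldi_basis(A.T, C.ravel(), m)      # basis for kappa_m(A^T, C^T)

T = W.T @ V                               # T_m = W_m^T V_m (assumed non-singular)
Ti = np.linalg.inv(T)

# Realization (19): A_m = H_mm, B_m = l_m, C_m = C V_m
H_mm = Ti @ W.T @ A @ V
l_m = Ti @ W.T @ B
f1 = lambda s: C @ V @ np.linalg.solve(s * np.eye(m) - H_mm, l_m)

# Realization (20): Ahat_m = G_mm, Bhat_m = W_m^T B, Chat_m = C V_m T_m^{-1}
G_mm = W.T @ A @ V @ Ti
f2 = lambda s: C @ V @ Ti @ np.linalg.solve(s * np.eye(m) - G_mm, W.T @ B)

s = 2.0 + 1.0j
print(np.allclose(f1(s), f2(s)))  # True: (19) and (20) realise the same transfer function
```

The agreement follows from the similarity G_{mm} = T_m H_{mm} T_m^{-1}, so the two realisations differ only by a state-space change of coordinates.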

4. Dimensionally reduced oblique projection method

The basic idea implemented here is to map the high-dimensional space into a space of lower dimension through the combined implementation of Principal Component Analysis (PCA) and the previously explained Krylov oblique subspace method (Algorithm 1). The standard algorithm implements the balanced truncation method in step 6 for further reduction, which is an expensive operation in terms of computational load. Balanced truncation is still important to ensure stability of the reduced order model when the original model is stable: instability in the reduced order model may arise from the Arnoldi based oblique projection method implemented in Algorithm 1 (steps 3 and 4). Therefore, one may not eliminate the balanced truncation step from Algorithm 1. However, a significant computational load can be avoided if step 6 of Algorithm 1 is applied to a relatively lower order model. This paper proposes a dimensional reduction step just before balanced truncation is applied. The suggested modification of Algorithm 1 further reduces the size of the input to step 6 and realises a low computational cost solution. The new algorithm modifies the transformations for generating the reduced model, due to the inclusion of the dimensional reduction steps in the standard algorithm. The derivation of the transformations for the new dimensionally reduced order model is explained below, along with the new hybrid reduction algorithm.

Let us start by redefining V_m as

    V_m = Z U,   where   Z = U^T V_m.

Retaining only M coefficients of Z, we get

    V_m = Z_R U.
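The dimensional reduction step can be illustrated with a standard PCA computation: form the covariance of the basis matrix, keep the leading M eigenvectors, and reconstruct from the truncated coefficients. This is an illustrative sketch on synthetic correlated data, not the paper's exact transformation matrices:

```python
import numpy as np

rng = np.random.default_rng(4)
n, m, M = 200, 40, 15          # ambient dim, Krylov order, retained PCA components

# Stand-in for an n x m basis matrix V_m whose columns are highly correlated,
# so a few principal components capture almost all of it (illustrative data).
factors = rng.standard_normal((n, M))
V_m = factors @ rng.standard_normal((M, m)) + 1e-3 * rng.standard_normal((n, m))

# PCA on the columns of V_m: eigendecomposition of the covariance matrix.
mean = V_m.mean(axis=1, keepdims=True)
R = (V_m - mean) @ (V_m - mean).T / m          # covariance matrix (n x n)
eigvals, E = np.linalg.eigh(R)                 # columns of E are eigenvectors
order = np.argsort(eigvals)[::-1]              # sort by decreasing variance
U = E[:, order[:M]]                            # top-M principal directions (n x M)

# Project onto the M principal directions and reconstruct.
Z = U.T @ (V_m - mean)                         # M x m coefficient matrix
V_rec = U @ Z + mean
print(np.linalg.norm(V_m - V_rec) / np.linalg.norm(V_m))  # small reconstruction error
```

The computational point of the hybrid scheme is that the subsequent balanced truncation then operates on the M-dimensional coefficients rather than on the full basis.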

Fig. 1. Hybrid scheme.


U can be found by solving the following equation:

    U^T R U = E,

where

1. R is the covariance matrix, given as R = \frac{1}{n} \sum (V_m - \bar{V}_m)(V_m - \bar{V}_m)^T.
2. E is the matrix comprising the eigenvectors of R, which can further be transformed into a diagonal matrix through an orthogonal matrix w as \Lambda = w^T E w.
3. U is the matrix comprising the eigenvalues of R, also written as \tilde{U} w, where \tilde{U} is the transformed solution of U.

Using the above equations and combining the Krylov subspace method based on oblique projection with them, the reduced order state space matrix for A is given as

Fig. 2. Error plots for oblique projection (forward error, DC gain error, and residual error for the controllability- and observability-based oblique methods; model orders 41–50).

Fig. 3. System response for original and reduced order systems (n = 100, m = 41): magnitude, phase, and error plots vs. frequency; Nyquist, impulse response, and step response plots.


    H_{mm} = T_m^{-1} W_m^T A V_m = T_m^{-1} W_m^T A Z_R U = T_m^{-1} W_m^T A \underbrace{Z_R U}_{V_r}.

Similar derivations can be applied to all the other matrices, such as B, C and D. The hybrid scheme is implemented as shown in Fig. 1.

Hybrid Algorithm 1A

1. Input Data: Define the nth order state space data f(s) = (A, B, C, 0) and set the reduced order m.
2. Start: specify tolerances \gamma > 0 and \epsilon > 0.
3. Perform m steps of the Arnoldi process with (A, B) to produce H_m, V_m, \tilde{V}_m, \tilde{H}_m and l_m.
4. Perform PCA on the transformation matrices to generate the dimensionally reduced H_m, V_m, \tilde{V}_m, \tilde{H}_m and l_m.
5. Perform steps 3 and 4 for (A^T, C^T) to produce G_m, \tilde{G}_m, W_m, \tilde{W}_m and k_m.
6. Form the reduced order model using the hybrid scheme.
7. Test the errors with the modified matrices (the equations of the errors remain essentially the same).

Fig. 4. Stability responses for n = 100 and m = 50 (magnitude, phase, and error plots vs. frequency; Nyquist, impulse response, and step response plots).

Fig. 5. Error plots for oblique projection methods (forward error, DC gain error, and residual error for the controllability- and observability-based oblique methods; model orders 38–54).


5. Implementation

The results shown in this paper deal only with the oblique projection method. Random models of order 100 and 200 for continuous, linear, time-invariant, single-input, single-output, stable systems were generated to validate and compare the performance of the afore-mentioned hybrid technique with the controllability and observability based oblique methods. The newly developed algorithm was validated through error bound and stability response analyses. To check the accuracy of the approximate transfer function, the following error types were evaluated.

• Infinity norm of forward error: the infinity norm of the difference of the original and reduced order transfer functions, given by \|E_m(s)\|_\infty = \|f(s) - f_m(s)\|_\infty.

Fig. 6. Stability responses for n = 200 and m = 39 (magnitude, phase, and error plots vs. frequency; Nyquist, impulse response, and step response plots).

Fig. 7. Stability responses for n = 200 and m = 52 (magnitude, phase, and error plots vs. frequency; Nyquist, impulse response, and step response plots).


Fig. 8. Computational load comparison in terms of MFlops (PCA vs. BT).

Fig. 9. Computational time comparison in terms of CPU time (PCA vs. BT).

• DC error: to conduct a DC error analysis of the reduced order model at low frequencies, insert s = 0 in the forward error expression above.
• Residual error: the residual error analysis is done by evaluating the computable expressions (17) and (18).

Figs. 2–9 represent the implementation results of the modified algorithm (depicted as Path A in Fig. 1). Fig. 2 shows the approximated forward error, DC error, and residual error when the hybrid scheme is implemented; the results are generated for models of sizes 41 to 50. The errors are not monotonically decreasing, so no advantage in terms of monotonicity is gained through the application of the hybrid scheme; the behaviour is thus similar to the standard oblique projection methods reported in [1]. The stability responses of the reduced order models obtained through the hybrid scheme show almost perfect matching with the original model, again consistent with the standard algorithm. The stability responses are shown for two different reduced order levels, 41 and 50 (Figs. 3 and 4, respectively). The flops consumed by balanced truncation (BT) as the secondary reduction method are 83% higher than for the proposed PCA based hybrid scheme, whereas the CPU time consumed by the hybrid scheme is 75% less. Figs. 7–9 represent results for n = 200; these are even better than the results shown in Figs. 2–4. One may argue that the balanced truncation method ensures the stability of the reduced order model, which the dimensionally reduced method may not guarantee. This comment is correct, but the hybrid algorithm can be further modified to include another step following the PCA application: balanced truncation at the lower level r, which is computationally less expensive than its application at level m.
This has been implemented for n = 200: if the model is reduced to m = 60 and BT is then applied directly to reduce it to r = 10, the computational time is 43% higher and the flops consumption 60% higher than for the approach that reduces n = 200 to r = 10 using the hybrid PCA application followed by BT based reduction (Path B in Fig. 1).

6. Conclusions

The results shown in this paper represent the computational savings brought about by the implementation of the hybrid dimensionally reduced Krylov subspace method. The final conclusions of this research are:


• The PCA based hybrid scheme may not remedy the instabilities generated by the oblique projection methods, but it can still generate huge computational advantages. The problem of instabilities can be tackled through the application of balanced truncation after PCA is implemented. This does not affect the overall computational saving that the hybrid algorithm achieves, although the margin of savings reduces.
• The non-monotonicity of the errors is still observed, and thus the possibility of determining error bounds is yet to be materialised.
• An important direction for future work is the implementation of the hybrid scheme within implicitly restarted algorithms, to check for the further computational savings that may be achieved.

References

[1] N. Ahmed, Implicit restart schemes for Krylov subspace model reduction methods, PhD thesis, University of London, Imperial College of Science, Technology and Medicine, London, 1999.
[2] M. Green, D.J.N. Limebeer, Linear Robust Control, Prentice-Hall, Englewood Cliffs, NJ, 1995.
[3] K. Glover, All optimal Hankel norm approximations of linear multivariable systems and their L∞ error bounds, Int. J. Control 39 (1984) 1115–1193.
[4] B.C. Moore, Principal component analysis in linear systems: controllability, observability and model reduction, IEEE Trans. Autom. Control AC-26 (1) (1981) 17–31.
[5] M.G. Safonov, R.Y. Chiang, A Schur method for balanced truncation model reduction, IEEE Trans. Autom. Control AC-34 (1989) 729–733.
[6] K. Zhou, J.C. Doyle, Essentials of Robust Control, Prentice-Hall, Upper Saddle River, NJ, 1998.
[7] K.V. Fernando, H. Nicholson, Singular perturbational model reduction of balanced systems, IEEE Trans. Autom. Control 27 (1982) 466–468.
[8] D.L. Boley, G.H. Golub, The Lanczos–Arnoldi algorithm and controllability, Syst. Control Lett. 4 (1984) 317–327.
[9] I.M. Jaimoukha, E.M. Kasenally, Model reduction of large scale systems via the Arnoldi process, in: Proceedings of the Cornelius Lanczos International Conference, SIAM, Raleigh, North Carolina, December 1993, pp. 384–386.
[10] I.M. Jaimoukha, E.M. Kasenally, Krylov subspace methods for solving large Lyapunov equations, SIAM J. Numer. Anal. 31 (1) (1994) 227–251.
[11] G.H. Golub, C.F. Van Loan, Matrix Computations, second ed., Johns Hopkins University Press, Baltimore, MD, 1990.
[12] Y. Saad, Iterative Methods for Sparse Linear Systems, PWS Publishing Company, Boston, MA, 1996.
[13] Y. Saad, Numerical Methods for Large Eigenvalue Problems, Manchester University Press, Manchester, UK, 1992.
[14] Y. Saad, Numerical solution of large Lyapunov equations, in: Signal Processing, Scattering, Operator Theory and Numerical Methods, Boston, MA, 1990, pp. 503–511.
[15] C.D. Villemagne, R.E. Skelton, Model reduction using a projection formulation, Int. J. Control 46 (1987) 2141–2169.