14th IFAC Symposium on System Identification, Newcastle, Australia, 2006

DIRECT CLOSED-LOOP IDENTIFICATION OF MULTI-INPUT SYSTEMS: VARIANCE ANALYSIS

L. Mišković∗, A. Karimi∗, D. Bonvin∗ and M. Gevers∗∗,1

∗ Laboratoire d'Automatique, École Polytechnique Fédérale de Lausanne (EPFL), CH-1015 Lausanne, Switzerland. {ljubisa.miskovic,dominique.bonvin,alireza.karimi}@epfl.ch

∗∗ Center for Systems Engineering and Applied Mechanics (CESAME), Université Catholique de Louvain, B-1348 Louvain-la-Neuve, Belgium. [email protected]

Abstract: An analysis of the variance of the parameters of a multi-input plant estimated in closed-loop operation is performed. More specifically, the effect of the simultaneous excitation of an additional input on the variance of the estimated parameters is investigated. The resulting expressions are valid for all conventional Prediction Error Models (PEM). It is shown that, regardless of the parametrization, the presence of an additional reference signal never impairs and, in most cases, improves the accuracy of the parameter estimates. The analytical results are illustrated by two simulation examples.

Keywords: Closed-loop identification, covariance matrices, multi-input systems, system excitation.

1. INTRODUCTION

The accuracy aspects of open-loop identification of multi-input systems have been considered recently by (Gevers et al., 2005). More specifically, the effect of an additional input signal on the variance of the polynomial coefficients in the case of FIR, ARX, ARMAX, OE and BJ models has been investigated. Necessary and sufficient conditions on the parametrization of MISO models under which the addition of an input decreases the covariance of the parameter estimates have been provided. It has been shown that, for SISO model structures that have common parameters in the plant and noise models, any additional input reduces the covariance of all parameters, including the parameters of the noise model and those associated with the other inputs. It has also been shown that the accuracy improvement resulting from an additional input extends beyond the case of common parameters in all transfer functions.

The present contribution is a continuation of that work but in the context of direct closed-loop identification. A Linear Time-Invariant (LTI) system with m inputs and n outputs is to be identified using data collected in closed-loop operation. It is assumed that there are no common parameters between the models associated with each of the outputs, and, furthermore, that the disturbances acting on the different outputs are not correlated. For systems fulfilling these two assumptions, a MISO model can be identified for each output separately and the resulting individual models combined into a final MIMO model (Dayal and MacGregor, 1997). Quite naturally, the following questions arise: Do the conditions on the parametrization of the MISO structures that apply to open-loop identification also hold for the case of direct closed-loop identification (with the difference that in closed-loop operation

1 The work of this author is partially supported by the Belgian Programme on Interuniversity Attraction Poles, initiated by the Belgian Federal Science Policy Office.


the external reference signals are excited instead of the inputs)? In what way does the correlation of the input signals, due to the presence of the feedback, affect the accuracy of the parameter estimates?

To answer these questions, a general model structure is introduced that encompasses all commonly-used parametric model structures. It is assumed that the system (including the noise model) is in the model set. Moreover, for simplicity of presentation it is assumed here that m = n = 2. Note, however, that the results in this paper are valid for arbitrary values of m and n. An analysis, which is asymptotic in data length but not in model order, of the variance of the estimated parameters in this structure is performed for two cases of excitation: (i) a single reference signal is used to excite the closed-loop system; (ii) both references are applied simultaneously. A similar asymptotic analysis has been performed in (Bombois et al., 2005), where the variances in both open and closed loop for the BJ models are compared for SISO systems.

The result of this analysis is that, in the case of closed-loop identification, the following two situations can be distinguished: (i) If all parameters of the noise model are present in the plant model, or if there is no noise model at all, then the accuracy of all parameter estimates is always improved by applying both references simultaneously. For the FIR and OE structures, this result is in contrast to the open-loop case, where existence of common parameters between the plant and noise models is required to improve the accuracy of all parameter estimates. (ii) If the noise model contains some parameters that are independent of the plant model, then simultaneous excitation of both reference signals may improve and can never worsen the quality of the parameter estimates.

The paper is organized as follows. Preliminaries concerning prediction error identification are given in Section 2. In Section 3, the expression describing the influence of the reference signals on the information matrix is derived. This expression is used for the computation of the variance of the parameter and transfer function estimates in Section 4. Section 5 illustrates the analytical results via two simulation examples. Finally, conclusions are given in Section 6.

2. PRELIMINARIES

The following unknown LTI 2 × 2 "true" plant is considered:

$$ \mathcal{S}:\; y(t) = G(q^{-1})\,u(t) + H(q^{-1})\,\eta(t) = \begin{bmatrix} G_{11} & G_{12} \\ G_{21} & G_{22} \end{bmatrix} u(t) + \begin{bmatrix} H_1 & 0 \\ 0 & H_2 \end{bmatrix} \eta(t) \qquad (1) $$

where G11, G12, G21 and G22 are strictly causal, finite-order, rational transfer functions that are not necessarily analytic outside the unit circle, and H1 and H2 are stable and inversely stable transfer functions. The backward-shift operator q⁻¹ will be omitted in the sequel whenever appropriate. The signal y(t) ∈ R² is the output of the true plant, u(t) ∈ R² the control signal, r(t) ∈ R² an external reference signal and η(t) ∈ R² a white noise input with variance σ²η = diag(σ²η1, σ²η2). The system S is controlled by the stabilizing controller K ∈ R²ˣ² as depicted in Fig. 1.

Fig. 1. Closed-loop configuration

The control signal u(t) can be expressed as a function of r(t) as follows:

$$ u(t) = U\,(r(t) - H\eta(t)) = \begin{bmatrix} U_{11} & U_{12} \\ U_{21} & U_{22} \end{bmatrix} (r(t) - H\eta(t)) \qquad (2) $$

where the input sensitivity function U is U = KS, with S = (I + GK)⁻¹ the output sensitivity function. Consider now the direct closed-loop identification of the subsystem S1 of the system S:

$$ \mathcal{S}_1:\; y_1(t) = G_{11} u_1(t) + G_{12} u_2(t) + H_1 \eta_1(t) \qquad (3) $$

using the following model structure:

$$ \mathcal{M} = \left\{ G_{11}(\alpha),\; G_{12}(\alpha,\beta),\; H_1(\alpha,\beta,\gamma),\; \theta = \begin{bmatrix} \alpha^T & \beta^T & \gamma^T \end{bmatrix}^T \in D_\theta \subset \mathbb{R}^{n_\theta} \right\} \qquad (4) $$

where G11(α), G12(α, β) and H1(α, β, γ) are rational transfer functions, θ ∈ Rⁿθ is the vector of model parameters, and Dθ is a subset of admissible values for θ. It is assumed that the true subsystem S1 can be described by this model structure for some θ0 = (α0ᵀ, β0ᵀ, γ0ᵀ)ᵀ ∈ Dθ. Note that this parametrization covers a wide range of model structures. For example, if one considers the ARMAX structure A y1(t) = B11 u1(t) + B12 u2(t) + C1 η1(t), then the subvector α contains the parameters of the polynomials A and B11, β contains the parameters of B12 and γ contains the parameters of C1. Also, in this case H1 = H1(α, γ).

The direct identification method gives consistent estimates of the open-loop system if the data is sufficiently informative with respect to the adopted model structure and if the true system, including the noise model, can be described within the chosen

parametrization (Ljung and Forssell, 1999). Here, sufficiently informative data means that the signals u(t) are persistently exciting of appropriate order. In closed loop, this is ensured e.g. by a persistently exciting reference signal. Using a set of input-output data of length N acquired in closed-loop operation, the estimate θ̂N is calculated via the prediction error criterion (Ljung, 1999):

$$ \hat\theta_N = \begin{pmatrix} \hat\alpha_N \\ \hat\beta_N \\ \hat\gamma_N \end{pmatrix} = \arg\min_{\theta \in D_\theta} \frac{1}{N} \sum_{t=1}^{N} \left[ \varepsilon(t,\theta) \right]^2 \qquad (5) $$

where the one-step-ahead prediction error ε(t, θ) for (3) is defined as:

$$ \varepsilon(t,\theta) = y_1(t) - \hat y_1(t|t-1,\theta) = H_1(\theta)^{-1} \left( y_1(t) - G_{11}(\theta)\,u_1(t) - G_{12}(\theta)\,u_2(t) \right) \qquad (6) $$

and the transfer functions are written generically as functions of the parameter vector θ.

Let us assume that the parameter estimates θ̂N converge to the true parameter vector θ0 as N tends to infinity. Then, the parameter error converges to a Gaussian random variable:

$$ \sqrt{N} \left( \hat\theta_N - \theta_0 \right) \xrightarrow{\;dist\;} \mathcal{N}(0, P_\theta) \qquad (7) $$

where the covariance matrix Pθ is given by:

$$ P_\theta = \sigma_{\eta_1}^2 \left[ \bar{E}\, \psi(t,\theta_0)\, \psi^T(t,\theta_0) \right]^{-1} \qquad (8) $$

with ψ(t, θ) ≜ ∂ε(t, θ)/∂θ. Typically, to compute approximate expressions for the covariance of the parameter vector estimates, the asymptotic covariance formulas (7)-(8) are used:

$$ \mathrm{cov}(\hat\theta_N) \approx \frac{P_\theta}{N} \approx \frac{\sigma_{\eta_1}^2}{N}\, M^{-1} \qquad (9) $$

M is called the information matrix. In the next section, an expression for M is derived that shows the dependence of this matrix on the external excitation signals. This expression, together with (9), will help analyze the dependence of the covariance of the parameter vector estimate θ̂N on r1(t) and r2(t).

3. EXPRESSION FOR THE INFORMATION MATRIX M

Combining (2), (3) and (6), the gradient of the prediction error with respect to the parameters at θ = θ0 can be expressed as follows:

$$ \psi(t,\theta_0) = H_1^{-1} \Big[ \big( g_{11}^{\theta} U_{11} + g_{12}^{\theta} U_{21} \big) r_1(t) + \big( g_{11}^{\theta} U_{12} + g_{12}^{\theta} U_{22} \big) r_2(t) + \big( h_1^{\theta} - g_{11}^{\theta} U_{11} H_1 - g_{12}^{\theta} U_{21} H_1 \big) \eta_1(t) - \big( g_{11}^{\theta} U_{12} H_2 + g_{12}^{\theta} U_{22} H_2 \big) \eta_2(t) \Big] \triangleq \Pi_1 r_1(t) + \Pi_2 r_2(t) + \Pi_3 \eta_1(t) + \Pi_4 \eta_2(t) \qquad (10) $$

where

$$ g_{11}^{\theta} = \left. \frac{\partial G_{11}(\theta)}{\partial \theta} \right|_{\theta=\theta_0};\quad g_{12}^{\theta} = \left. \frac{\partial G_{12}(\theta)}{\partial \theta} \right|_{\theta=\theta_0};\quad h_{1}^{\theta} = \left. \frac{\partial H_{1}(\theta)}{\partial \theta} \right|_{\theta=\theta_0} \qquad (11) $$

The quantities Π1, Π2, Π3 and Π4 are introduced in (10) for the sake of simplicity of notation. From (8)-(11), and using Parseval's theorem and the fact that r1(t), r2(t), η1(t) and η2(t) are not correlated, the information matrix can be rewritten as:

$$ M = \frac{1}{2\pi} \int_{-\pi}^{\pi} \left\{ \Pi_1 \Pi_1^{*}\, \Phi_{r_1} + \Pi_2 \Pi_2^{*}\, \Phi_{r_2} + \Pi_3 \Pi_3^{*}\, \sigma_{\eta_1}^2 + \Pi_4 \Pi_4^{*}\, \sigma_{\eta_2}^2 \right\} d\omega \triangleq M(r_1) + M(r_2) + M(\eta_1) + M(\eta_2) \qquad (12) $$

where (·)* is used to denote the complex conjugate.

Consider now the partition of the parameter vector θ in (4). The sensitivities of the transfer functions G11, G12 and H1 with respect to θ read:

$$ g_{11}^{\theta} = \begin{bmatrix} (g_{11}^{\alpha})^T & 0 & 0 \end{bmatrix}^T;\quad g_{12}^{\theta} = \begin{bmatrix} (g_{12}^{\alpha})^T & (g_{12}^{\beta})^T & 0 \end{bmatrix}^T;\quad h_{1}^{\theta} = \begin{bmatrix} (h_{1}^{\alpha})^T & (h_{1}^{\beta})^T & (h_{1}^{\gamma})^T \end{bmatrix}^T \qquad (13) $$

where the definition of the components of g11θ, g12θ and h1θ is analogous to that in (11). It follows from (10), (11) and (13) that the quantity Π1 reduces to:

$$ \Pi_1 = H_1^{-1} \begin{bmatrix} g_{11}^{\alpha} U_{11} + g_{12}^{\alpha} U_{21} \\ g_{12}^{\beta} U_{21} \\ 0 \end{bmatrix} \qquad (14) $$

Consequently, the contribution of r1(t) to M can formally be expressed as:

$$ M(r_1) = \begin{pmatrix} M_{11}(r_1) & M_{12}(r_1) & 0 \\ M_{21}(r_1) & M_{22}(r_1) & 0 \\ 0 & 0 & 0 \end{pmatrix} \qquad (15) $$

Similar calculations provide expressions for M(r2), M(η1) and M(η2), from which the information matrix M becomes:

$$ M = \begin{pmatrix} M_{11}(r,\eta) & M_{12}(r,\eta) & M_{13}(\eta_1) \\ M_{21}(r,\eta) & M_{22}(r,\eta) & M_{23}(\eta_1) \\ M_{31}(\eta_1) & M_{32}(\eta_1) & M_{33}(\eta_1) \end{pmatrix} \qquad (16) $$

If a submatrix of M carries the argument r and η, this means that this particular submatrix depends on the statistics of both r1(t) and r2(t), as well as η1(t) and η2(t). Otherwise, the submatrices of M carry as argument only the particular component; for example, M33(η1) depends only on the statistics of η1(t).

In the sequel, the effect of the presence/absence of the second external reference signal r2(t) on the variance of the elements of the parameter vector estimate is analyzed. Note that, for a given model structure, the presence/absence of a particular external reference

signal does not change the structure of the information matrix M due to the fact that, in closed-loop operation, both inputs u1(t) and u2(t) are excited by both reference signals.

4. EFFECT OF THE SECOND REFERENCE SIGNAL

Consider the form of the matrix M in (16). All the possible model structures that can be derived from the parametrization (4) can be divided in two groups:

A) The model structures that have no noise model or where the subvector γ of the vector θ is empty (there are no parameters in the noise model H1 that are independent of the plant model). This group includes the classical FIR, ARX and OE model structures.

B) The noise model contains some (not necessarily all) parameters that are independent of the plant model. This group includes the ARMAX and BJ model structures.

In order to study the effect of r1(t) and r2(t) on the accuracy of the parameter estimates of α, β and γ, we introduce:

$$ C \triangleq M^{-1} = \begin{pmatrix} C_{\alpha} & C_{\alpha\beta} & C_{\alpha\gamma} \\ C_{\beta\alpha} & C_{\beta} & C_{\beta\gamma} \\ C_{\gamma\alpha} & C_{\gamma\beta} & C_{\gamma} \end{pmatrix} \qquad (17) $$

Note that cov(α̂N) ≈ (σ²η1/N) Cα, cov(β̂N) ≈ (σ²η1/N) Cβ and cov(γ̂N) ≈ (σ²η1/N) Cγ. Furthermore, the variance of the identified plant models G11(θ̂N) and G12(θ̂N) and the identified noise model H1(θ̂N) can be calculated using Gauss' approximation formula (Ljung, 1999). For a large number of data N, and by inserting (13) for g11θ, g12θ and h1θ, one gets:

$$ \mathrm{var}\,G_{11}(e^{j\omega},\hat\theta_N) \approx \frac{\sigma_{\eta_1}^2}{N}\, (g_{11}^{\alpha})^{*} C_{\alpha}\, g_{11}^{\alpha} $$

$$ \mathrm{var}\,G_{12}(e^{j\omega},\hat\theta_N) \approx \frac{\sigma_{\eta_1}^2}{N} \left[ (g_{12}^{\alpha})^{*} C_{\alpha}\, g_{12}^{\alpha} + (g_{12}^{\beta})^{*} C_{\beta}\, g_{12}^{\beta} \right] $$

$$ \mathrm{var}\,H_{1}(e^{j\omega},\hat\theta_N) \approx \frac{\sigma_{\eta_1}^2}{N} \left[ (h_{1}^{\alpha})^{*} C_{\alpha}\, h_{1}^{\alpha} + (h_{1}^{\beta})^{*} C_{\beta}\, h_{1}^{\beta} + (h_{1}^{\gamma})^{*} C_{\gamma}\, h_{1}^{\gamma} \right] \qquad (18) $$

In the sequel, the analysis is performed separately for the two groups A and B, and thus the corresponding covariance matrices C and their elements will carry the appropriate subscripts "A" and "B", respectively. Furthermore, the block-diagonal elements Cα, Cβ, Cγ, and the matrices C and M will carry the superscript "(1)" when only reference signal r1(t) is applied and "(2)" when both reference signals are applied simultaneously.

4.1 Group A

When the vector γ is empty and both excitation signals r1(t) and r2(t) are present, the information matrix M in (16) reduces to

$$ M_A^{(2)} = \begin{pmatrix} M_{11}(r,\eta) & M_{12}(r,\eta) \\ M_{21}(r,\eta) & M_{22}(r,\eta) \end{pmatrix} \qquad (19) $$

When exciting r1(t) alone, the corresponding information matrix reads:

$$ M_A^{(1)} = \begin{pmatrix} M_{11}(r_1,\eta) & M_{12}(r_1,\eta) \\ M_{21}(r_1,\eta) & M_{22}(r_1,\eta) \end{pmatrix} \qquad (20) $$

The matrix M_A^(2) can be written as:

$$ M_A^{(2)} = M_A^{(1)} + \Delta\bar{M} \qquad (21) $$

with

$$ \Delta\bar{M} = \begin{pmatrix} M_{11}(r_2) & M_{12}(r_2) \\ M_{21}(r_2) & M_{22}(r_2) \end{pmatrix} \qquad (22) $$

The following result is an immediate consequence of the expression (21) and the fact that ΔM̄ > 0.

Theorem 4.1. Consider the closed-loop identification of the parameter vectors α and β of the model structure A ⊂ M. Let the excitation signals r1(t) and r2(t) be independent and persistently exciting of sufficient order. Then, the covariance matrices of the parameter estimates α̂ and β̂ decrease by addition of the second excitation r2(t), i.e.

$$ C_{\alpha,A}^{(2)} < C_{\alpha,A}^{(1)} \quad \text{and} \quad C_{\beta,A}^{(2)} < C_{\beta,A}^{(1)} \qquad (23) $$

Proof. The inequalities (23) are a direct consequence of the following expression:

$$ C_A^{(1)} - C_A^{(2)} = \left( M_A^{(1)} \right)^{-1} - \left( M_A^{(2)} \right)^{-1} = \left( M_A^{(1)}\, \Delta\bar{M}^{-1} M_A^{(1)} + M_A^{(1)} \right)^{-1} > 0. \qquad (24) $$

□

Comments

1) For a structure of group A, the simultaneous excitation of r1(t) and r2(t) reduces the covariance of the estimates of the parameter vectors α and β compared to the case when r1(t) alone is excited.

2) If the variance of r2(t) tends to infinity, M_A^(2) and ΔM̄ also tend to infinity and consequently C_A^(2) tends to zero. The intuition is that α and β become perfectly known when the power of r2(t), and therefore also the power of u1(t) and u2(t), tend to infinity.

3) The presence of r2(t) reduces the variance of all transfer function estimates. If the power of r2(t) grows unbounded, the variances of G11(θ̂N), G12(θ̂N) and H1(θ̂N) tend to zero.
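The mechanism behind Theorem 4.1 and (24) is purely linear-algebraic: adding a positive (semi)definite increment to the information matrix can only shrink the covariance C = M⁻¹, block by block. A minimal numerical sketch (the matrices below are random stand-ins, not derived from any plant):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 6   # hypothetical number of parameters in (alpha, beta)

# positive definite "information matrix" for r1 alone, as in (20)
X = rng.standard_normal((n, n))
M1 = X @ X.T + n * np.eye(n)

# positive semidefinite increment contributed by exciting r2, as in (21)-(22)
Y = rng.standard_normal((n, 2))        # rank-deficient on purpose
dM = Y @ Y.T

C1 = np.linalg.inv(M1)                 # covariance (up to sigma^2/N) with r1 only
C2 = np.linalg.inv(M1 + dM)            # covariance with both references

# as in (24): C1 - C2 is positive semidefinite, so no variance can increase
eigs = np.linalg.eigvalsh(C1 - C2)
print(eigs.min() >= -1e-10)
print(np.all(np.diag(C2) <= np.diag(C1) + 1e-12))
```

The same check with a strictly positive definite increment reproduces the strict inequality of (23).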

4.2 Group B

When only r1(t) is excited, it follows from (10)-(13) that the information matrix M_B^(1) has the following form:

$$ M_B^{(1)} = \begin{pmatrix} M_{11}(r_1,\eta) & M_{12}(r_1,\eta) & M_{13}(\eta_1) \\ M_{21}(r_1,\eta) & M_{22}(r_1,\eta) & M_{23}(\eta_1) \\ M_{31}(\eta_1) & M_{32}(\eta_1) & M_{33}(\eta_1) \end{pmatrix} \qquad (25) $$

When both r1(t) and r2(t) are present, the information matrix M_B^(2) is given by expression (16). M_B^(1) and M_B^(2) are related as follows:

$$ M_B^{(2)} = M_B^{(1)} + \Delta M \qquad (26) $$

with

$$ \Delta M = \begin{pmatrix} M_{11}(r_2) & M_{12}(r_2) & 0 \\ M_{21}(r_2) & M_{22}(r_2) & 0 \\ 0 & 0 & 0 \end{pmatrix} \qquad (27) $$

Next, the following result can be established.

Theorem 4.2. Consider the closed-loop identification of the parameter vectors α, β and γ of the model structure B ⊂ M. Let the excitation signals r1(t) and r2(t) be independent and persistently exciting of sufficient order. Then, the covariance matrices of the parameter estimates α̂ and β̂ cannot increase by addition of the second excitation r2(t), i.e.

$$ C_{\alpha,B}^{(2)} \le C_{\alpha,B}^{(1)} \quad \text{and} \quad C_{\beta,B}^{(2)} \le C_{\beta,B}^{(1)} \qquad (28) $$

In addition, the covariance matrices of γ̂ are strictly smaller for r2(t) ≠ 0 compared to r2(t) = 0, i.e.

$$ C_{\gamma,B}^{(2)} < C_{\gamma,B}^{(1)} \qquad (29) $$

Proof. First note that the matrix ΔM is positive semi-definite (observe the non-negative contribution of r2(t) to the elements of M in (12)). Consequently,

$$ C_B^{(1)} - C_B^{(2)} = C_B^{(2)}\, \Delta M\, C_B^{(1)} \ge 0. \qquad (30) $$

Now, the expression (28) follows from the fact that any principal submatrix of a positive semi-definite matrix is positive semi-definite. Also, it follows from (30) that C_γ,B^(2) ≤ C_γ,B^(1). However, this inequality can be strengthened as follows. When r1(t) alone is present, by straightforward calculation of the inverse of the (3,3) block-element of M_B^(1), one obtains:

$$ C_{\gamma,B}^{(1)} = \left[ M_{33}(\eta_1) - \begin{pmatrix} M_{31}(\eta_1) & M_{32}(\eta_1) \end{pmatrix} \begin{pmatrix} M_{11}(r_1,\eta) & M_{12}(r_1,\eta) \\ M_{21}(r_1,\eta) & M_{22}(r_1,\eta) \end{pmatrix}^{-1} \begin{pmatrix} M_{13}(\eta_1) \\ M_{23}(\eta_1) \end{pmatrix} \right]^{-1} \qquad (31) $$

Similarly, when both r1(t) and r2(t) are applied:

$$ C_{\gamma,B}^{(2)} = \left[ M_{33}(\eta_1) - \begin{pmatrix} M_{31}(\eta_1) & M_{32}(\eta_1) \end{pmatrix} \left( \begin{pmatrix} M_{11}(r_1,\eta) & M_{12}(r_1,\eta) \\ M_{21}(r_1,\eta) & M_{22}(r_1,\eta) \end{pmatrix} + \Delta\bar{M} \right)^{-1} \begin{pmatrix} M_{13}(\eta_1) \\ M_{23}(\eta_1) \end{pmatrix} \right]^{-1} \qquad (32) $$

where the matrix ΔM̄ > 0 is given in (22). By comparing expressions (31) and (32), the expression (29) follows immediately. □

Comments

1) For a structure of group B, the presence of a second reference signal r2(t) does not increase the covariance of the estimates of the parameter vectors α, β and reduces the covariance of the estimates of γ. This statement is valid also for model structures with independent parametrization of the plant and noise models, such as BJ.

2) If the energy of r2(t) grows unbounded, expressions (32) and (22) reveal that C_γ,B^(2) tends to M33(η1)⁻¹. At the same time, using (16), it is straightforward to show that C_α,B^(2) and C_β,B^(2) tend to zero. This can be explained as follows: when r2(t) goes to infinity, u1(t) and u2(t) also go to infinity, and the parameters α and β become perfectly known; then, the estimation of γ corresponds to the identification of the unknown parameters of the Moving Average (MA) model y1(t) = H1(q⁻¹)η1(t) (note that some parameters of H1 might already be known, as they are part of α and/or β).

3) The excitation r2(t) never impairs, and in most cases improves, the accuracy of all transfer function estimates: see (28), (29) and (18). When the power of r2(t) goes to infinity, the variances of G11(θ̂N) and G12(θ̂N) tend to zero.

4) Even when the plant and noise models are parametrized independently, there is a strong correlation between the parameter estimates due to closed-loop operation. A smaller variance of the plant parameter estimates implies a smaller variance of the estimates of the parameters associated with the noise model, and vice versa.

It follows from Theorems 4.1 and 4.2 that, regardless of the parametrization, the presence of the external signal r2(t) does not reduce the accuracy of the parameter vector estimates obtained via direct closed-loop identification. This conclusion holds for any controller K that guarantees informative experiments in closed loop. It follows from (12) that, for direct closed-loop identification and for both groups A and B, the contribution of the noise is never detrimental to the precision of the parameter estimates.

5. SIMULATION RESULTS

In order to illustrate the analytical results for both groups A and B, two academic examples are considered. In both simulation examples, the plants are controlled by the following 2 × 2 controller:

$$ K(q^{-1}) = \frac{0.04\,(1 - 0.3 q^{-1})}{1 - 0.4 q^{-1}} \begin{bmatrix} 1 & 0.1 \\ -0.1 & 1 \end{bmatrix} \qquad (33) $$
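As a rough illustration of how closed-loop data are generated with this controller, the sketch below simulates the loop of Fig. 1 with u(t) = K(q⁻¹)(r(t) − y(t)) for the FIR plant of Simulation 1 introduced below. White-noise references stand in for the PRBS signals used in the paper, and the seed and signal scales are arbitrary choices, not taken from the original experiments:

```python
import numpy as np

rng = np.random.default_rng(3)
N = 1023

# FIR plant of Simulation 1: y(t) = G1 u(t-1) + G2 u(t-2) + eta(t)
G1 = np.array([[10.0, 0.5], [0.4, 2.0]])   # coefficients of q^-1
G2 = np.array([[1.0, 4.0], [3.0, 0.25]])   # coefficients of q^-2

# controller (33): K(q^-1) = 0.04 (1 - 0.3 q^-1) / (1 - 0.4 q^-1) * M0
M0 = np.array([[1.0, 0.1], [-0.1, 1.0]])

r = rng.standard_normal((N, 2))            # white stand-ins for the references
eta = 4.0 * rng.standard_normal((N, 2))    # sigma_eta1 = sigma_eta2 = 4

u = np.zeros((N, 2))
y = np.zeros((N, 2))
e_prev = np.zeros(2)
for t in range(N):
    # plant is strictly causal: y(t) depends only on u(t-1), u(t-2)
    # (negative indices at t < 2 read still-unwritten zeros: zero initial state)
    y[t] = G1 @ u[t - 1] + G2 @ u[t - 2] + eta[t]
    # controller difference equation of (33):
    # u(t) = 0.4 u(t-1) + 0.04 M0 (e(t) - 0.3 e(t-1)),  e(t) = r(t) - y(t)
    e = r[t] - y[t]
    u[t] = 0.4 * u[t - 1] + 0.04 * (M0 @ (e - 0.3 * e_prev))
    e_prev = e

print(y.std(axis=0), u.std(axis=0))   # signals stay bounded: the loop is stable
```

The recursion for u(t) is exactly the difference equation of (33); the small controller gain keeps the loop gain below one, consistent with the claim that K stabilizes both plants.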


The controller is designed so as to stabilize both plants without any other performance consideration.


A Monte-Carlo simulation is performed to compare the case where the reference signal r1(t) alone is excited with the case where the two reference signals are applied simultaneously. The reference signals r1(t) and r2(t) are PRBS generated by a 10-bit shift register, with data length N = 1023 and standard deviations σr1 = 1 and σr2 = 10. The disturbance signals η1(t) and η2(t) are white noises with standard deviations ση1 = ση2 = 4. The signals r1(t), r2(t), η1(t) and η2(t) are mutually independent. In this way, the assumptions of Theorems 4.1 and 4.2 are satisfied.
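A PRBS of period 1023 as described above can be produced by a 10-bit linear-feedback shift register. The sketch below uses the primitive polynomial x¹⁰ + x⁷ + 1 and an arbitrary nonzero seed; the paper does not specify the feedback taps, so this particular choice is an assumption:

```python
import numpy as np

def prbs10(n, seed=0b1010101010):
    """+-1 PRBS from a 10-bit maximal-length LFSR (x^10 + x^7 + 1), period 1023."""
    state = seed & 0x3FF               # any nonzero 10-bit seed
    out = np.empty(n)
    for i in range(n):
        out[i] = 1.0 if (state >> 9) & 1 else -1.0
        fb = ((state >> 9) ^ (state >> 6)) & 1     # taps 10 and 7
        state = ((state << 1) | fb) & 0x3FF
    return out

# sigma_r1 = 1 and sigma_r2 = 10 as in the simulations (levels +-1 and +-10)
r1 = prbs10(1023)
r2 = 10.0 * prbs10(1023, seed=0b0011001100)

# a maximal-length sequence repeats with period 2^10 - 1 = 1023
s = prbs10(2046)
print(np.array_equal(s[:1023], s[1023:]), r1.std())
```

Note that different seeds only shift the same maximal-length sequence, so the two references above are only approximately uncorrelated; for truly independent references one would use different generator polynomials or independently generated binary signals.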


Simulation 1: Group A

The following FIR model is simulated:

$$ y_1(t) = B_{11} u_1(t) + B_{12} u_2(t) + \eta_1(t) $$
$$ y_2(t) = B_{21} u_1(t) + B_{22} u_2(t) + \eta_2(t) $$

with B11 = 10q⁻¹ + q⁻², B12 = 0.5q⁻¹ + 4q⁻², B21 = 0.4q⁻¹ + 3q⁻² and B22 = 2q⁻¹ + 0.25q⁻². The variance of the estimates of the parameter vector $\theta = (b_{11}^1,\, b_{11}^2,\, b_{12}^1,\, b_{12}^2)^T$ is computed for both cases of excitation. When r1(t) alone is excited, the asymptotic variances of the elements of θ computed by 1000 Monte-Carlo runs are:

$$ \mathrm{var}(\hat\theta_n^{(1)}) = (599.603 \;\; 585.486 \;\; 603.934 \;\; 612.293) $$

The asymptotic variances of θ computed when both r1(t) and r2(t) are excited simultaneously are:

$$ \mathrm{var}(\hat\theta_n^{(2)}) = (523.286 \;\; 461.777 \;\; 84.8456 \;\; 96.4518) $$

The variances are decreased by addition of the second excitation. The result here is due to the fact that the input signals u1(t) and u2(t) are correlated, which in turn affects the correlation between the parameter estimates. Note that, in the case of open-loop identification of FIR models, the asymptotic accuracy of the estimates of the coefficients of B11 is totally independent of the presence of u2(t).

Simulation 2: Group B

The following ARMAX structure is considered:

$$ A_1 y_1(t) = B_{11} u_1(t) + B_{12} u_2(t) + C_1 \eta_1(t) $$
$$ A_2 y_2(t) = B_{21} u_1(t) + B_{22} u_2(t) + C_2 \eta_2(t) $$

with A1 = 1 − 0.2q⁻¹, B11 = 10q⁻¹ + q⁻², B12 = 0.1q⁻¹ + 4q⁻², C1 = 1 − 1.6q⁻¹ + 0.64q⁻², A2 = 1 − 0.25q⁻¹, B21 = 0.2q⁻¹ + 3q⁻², B22 = 2q⁻¹ + 0.1q⁻² and C2 = 1 − 1.5q⁻¹ + 0.5625q⁻². We consider the parameter vector $\theta = (a_1,\, b_{11}^1,\, b_{11}^2,\, b_{12}^1,\, b_{12}^2,\, c_1^1,\, c_1^2)^T$ associated with the output y1(t). The Monte-Carlo simulations provide the following variances:

$$ \mathrm{var}(\hat\theta_n^{(1)}) = (4.9336 \;\; 675.5294 \;\; 1614.4743 \;\; 635.1392 \;\; 620.4109 \;\; 3.0875 \;\; 3.1263) $$

$$ \mathrm{var}(\hat\theta_n^{(2)}) = (1.9221 \;\; 518.046 \;\; 589.6793 \;\; 66.99271 \;\; 139.8771 \;\; 1.3696 \;\; 1.3517) $$

Observe that the presence of r2(t) improves the precision of all estimated coefficients. The corresponding variances of the transfer function estimates G11(q⁻¹, α̂n), G12(q⁻¹, α̂n, β̂n) and H1(q⁻¹, α̂n, β̂n, γ̂n) are computed at 500 frequency points for the two cases of excitation and compared in Fig. 2. As expected, the accuracy of the three transfer function estimates is improved.

Fig. 2. Variance of the transfer function estimates: G11(q⁻¹, α̂n) (left), G12(q⁻¹, α̂n, β̂n) (middle) and H1(q⁻¹, α̂n, β̂n, γ̂n) (right), for the ARMAX model with two reference inputs (solid line) and one input (dashed line).

6. CONCLUSIONS

A variance analysis for parameters identified by direct closed-loop identification is performed for two situations: (i) when only r1(t) is excited; (ii) when both r1(t) and r2(t) are excited simultaneously. It is shown that the presence of r2(t) cannot worsen the quality of the parameter estimates; in fact, the quality improves in most cases.

REFERENCES

Bombois, X., M. Gevers and G. Scorletti (2005). Open-loop versus closed-loop identification of Box-Jenkins models: A new variance analysis. In: CDC-ECC 2005, to appear, Sevilla, Spain.
Dayal, B. S. and J. F. MacGregor (1997). Multi-output process identification. Journal of Process Control 7(4), 269–282.
Gevers, M., L. Mišković, D. Bonvin and A. Karimi (2005). Identification of multi-input systems: Variance analysis and input design issues. Submitted to Automatica.
Ljung, L. (1999). System Identification: Theory for the User. 2nd ed. Prentice Hall, Upper Saddle River, NJ.
Ljung, L. and U. Forssell (1999). An alternative motivation for the indirect approach to closed-loop identification. IEEE Transactions on Automatic Control AC-44(11), 2206–2209.
