Variance-error quantification for identified poles and zeros


Automatica 45 (2009) 2512–2525


Jonas Mårtensson ∗, Håkan Hjalmarsson

ACCESS Linnaeus Center, School of Electrical Engineering, KTH – Royal Institute of Technology, S-100 44 Stockholm, Sweden

Article history:
Received 11 July 2007
Received in revised form 7 December 2008
Accepted 9 July 2009
Available online 10 September 2009

Keywords: Accuracy of identification; Asymptotic variance expressions

Abstract

This paper deals with quantification of noise induced errors in identified discrete-time models of causal linear time-invariant systems, where the model error is described by the asymptotic (in data length) variance of the estimated poles and zeros. The main conclusion is that there is a fundamental difference in the accuracy of the estimates depending on whether the zeros and poles lie inside or outside the unit circle. As the model order goes to infinity, the asymptotic variance approaches a finite limit for estimates of zeros and poles having magnitude larger than one, but for zeros and poles strictly inside the unit circle the asymptotic variance grows exponentially with the model order. We analyze how the variance of poles and zeros is affected by model order, model structure and input excitation. We treat general black-box model structures including ARMAX and Box–Jenkins models.

© 2009 Elsevier Ltd. All rights reserved.

1. Introduction

Model accuracy is an important issue in system identification applications. The experimental conditions, such as excitation signals, feedback mechanisms, disturbances and measurement noise, naturally affect the accuracy of the model, but the choice of model structure and model order can also have a strong influence. The model error can generally be divided into two parts: bias-error and variance-error, where the bias-error is due to un-modeled dynamics and the variance-error is caused by disturbances, which are modeled as stochastic processes (Ljung, 1999; Söderström & Stoica, 1989). The focus in this paper is on the variance-error, which will be the dominating part if the model structure is flexible enough to describe the true underlying dynamics. We will thus assume that the true system belongs to the model set.

The problem of quantifying the expected model error has received substantial research interest over the past decades. In particular, the variance of frequency function estimates has been studied extensively. In the mid-eighties, Ljung (1985) presented a variance expression which showed that, for high model orders, the asymptotic variance¹ of the frequency function estimate does

✩ The material in this paper was partially presented at the 44th IEEE Conference on Decision and Control and European Control Conference ECC 2005, 12th–15th December 2005 in Seville, Spain, and at the 16th IFAC World Congress on Automatic Control, 3rd–8th July 2005 in Prague, Czech Republic. This paper was recommended for publication in revised form by Associate Editor Brett Ninness under the direction of Editor Torsten Söderström.
∗ Corresponding author. Tel.: +46 8 7907434; fax: +46 8 7907329.
E-mail addresses: [email protected] (J. Mårtensson), [email protected] (H. Hjalmarsson).
1 Henceforth, asymptotic variance denotes the variance of an estimated quantity, normalized by the sample size, as the sample size tends to infinity.
doi:10.1016/j.automatica.2009.08.001

not depend on the model structure but only on the model order. Furthermore, it was shown that the asymptotic variance at a particular frequency only depends on the ratio of the input and noise spectra at that particular frequency. A refined asymptotic variance expression (still asymptotic in model order) for frequency function estimates with improved accuracy (for many model structures) was proposed in Ninness, Hjalmarsson, and Gustafsson (1999a,b), and expressions that are exact for finite model orders were derived in Ninness and Hjalmarsson (2004) and Xie and Ljung (2001, 2004). More recently, a variance expression for finite sample sizes and model orders has been presented in Hjalmarsson and Ninness (2006). The original result in Ljung (1985) can also be used for closed loop identification, and this was extended to some alternative closed loop identification methods in Gevers, Ljung, and Van den Hof (2001). The asymptotic variance for parameter estimates in a Box–Jenkins model was analyzed in Forssell and Ljung (1999) for a whole range of different identification methods. Closed loop asymptotic variance expressions that are exact for finite model orders are also presented in Ninness and Hjalmarsson (2005a,b).

There are numerous other properties of the system, besides the frequency function, that could be of interest to estimate. Recently, a geometric interpretation of the asymptotic variance has been developed (Hjalmarsson & Mårtensson, 2007) and, when the underlying system is causal, linear and time-invariant (LTI), these techniques have been used to derive more transparent expressions for the asymptotic variance for estimates of system quantities that can be represented by smooth functions of the estimated parameters (Mårtensson & Hjalmarsson, 2007).

In this paper the focus is on poles and zeros of causal LTI systems. Particularly interesting from a control perspective are unstable poles and non-minimum phase zeros, i.e. poles and zeros outside the unit circle, since they pose fundamental performance limitations on the closed loop system (Skogestad & Postlethwaite, 1996).


A series of results regarding the asymptotic variance of poles and zeros has been presented over the past years. Non-minimum phase zeros were treated for different model structures in Hjalmarsson and Lindqvist (2002), Lindqvist (2001) and Mårtensson and Hjalmarsson (2003), where it was found that the asymptotic variance approaches a finite limit as the model order goes to infinity. For ARX models these results were extended to closed loop identification in Mårtensson and Hjalmarsson (2005a), where similar results were also presented for poles with magnitude larger than one. Asymptotic variance expressions that are exact for finite model orders were derived for zeros with arbitrary location in Mårtensson and Hjalmarsson (2005b). It should be noted that the asymptotic variance can be a poor measure of the variance for finite sample size; see Vuerinckx, Pintelon, Schoukens, and Rolain (2001), where a more accurate method to construct finite sample size confidence bounds for estimated zeros and poles is also presented.

In this paper, the results on the asymptotic variance of poles and zeros referred to above are combined and generalized to a unified theory on the asymptotic variance of poles and zeros of estimated causal LTI systems. We put emphasis on presenting, analyzing and interpreting the results, but the derivations (proofs) of the variance expressions are also included to make the document self-contained. The results hold for prediction error identification.

Outline

This paper is organized in the following way. In Section 2 the problem set-up, the standing assumptions that will hold throughout the paper, and some notation are introduced. The main technical results are presented in Section 3. The asymptotic variance of estimated zeros and poles located in the open unit disc |z| < 1 is discussed in Section 4, which is followed by Section 5 covering poles with magnitude larger than one.
Section 6 covers zeros with magnitude at least one, and in Section 7 some conclusions are drawn based on the analysis of the variance expressions and some simulations are presented to investigate the accuracy of the expressions. Section 8 contains a brief summary of the findings and Appendix A contains results that are needed for the proofs of the paper.

Notation

The multiplicity of a root z o of a polynomial A(z) is defined as the integer k for which the limit lim_{z→z o} A(z)/(z − z o)^k = c ≠ 0 exists. Two polynomials are said to be coprime if they have no common roots. A rational transfer function Q = Qn/Qd, where Qn and Qd are coprime polynomials in z^{−1}, is said to be minimum phase if Qn(z) = 0 or Qd(z) = 0 implies |z| < 1. A zero z o of Q is said to be non-minimum phase (NMP) if |z o| > 1. Q is said to be stable if Qd(z) = 0 implies |z| < 1. A root zp of Qd(z) = 0 with |zp| > 1 is said to be an unstable pole of Q. For a transfer function Z(q) with a zero at z o, we will use the notation Z̃(q) = Z(q)/(1 − z o q^{−1}); it will be clear from the context which zero z o is used. For a rational transfer function G, Ĝ denotes a transfer function which satisfies |Ĝ(e^{jω})| = |G(e^{jω})| for all ω for which G(e^{jω}) is defined and which has all its zeros and poles in the closed unit disc |z| ≤ 1. That such a function exists follows from the spectral factorization theorem. We shall consider vector-valued complex functions as row vectors, and the inner product of two such functions f(z), g(z) : C → C^{1×m} is defined as ⟨f, g⟩ = (1/2π) ∫_{−π}^{π} f(e^{jω}) g^∗(e^{jω}) dω, where g^∗ denotes the complex conjugate transpose of g. When f and g are matrix-valued functions, we will still use the notation ⟨f, g⟩ to denote the integral (1/2π) ∫_{−π}^{π} f(e^{jω}) g^∗(e^{jω}) dω whenever the dimensions of f and g are compatible. The space L2 consists of all functions f : C → C^{1×m} such that ⟨f, f⟩ < ∞. A set of functions {Bk}_{k=1}^{n} is said to be orthonormal if ⟨Bj, Bk⟩ = δ_{k−j}, where δj is the Kronecker delta function. Suppose that all elements of a function η : C → C^{n×m} are in L2; then the subspace of L2 formed by the span of the rows of η is denoted the rowspace of η.

Fig. 1. Block diagram of SISO LTI system with output feedback.

2. Problem set-up and notation

We will consider a parametrized family of transfer functions²

G(q, θ) = B(q, θb) / (A(q, θa) F(q, θf)),
H(q, θ) = C(q, θc) / (A(q, θa) D(q, θd)),          (1)

where

A(q, θa) = 1 + a1 q^{−1} + · · · + a_{na} q^{−na},
B(q, θb) = b1 q^{−1} + · · · + b_{nb} q^{−nb},
C(q, θc) = 1 + c1 q^{−1} + · · · + c_{nc} q^{−nc},          (2)
D(q, θd) = 1 + d1 q^{−1} + · · · + d_{nd} q^{−nd},  and
F(q, θf) = 1 + f1 q^{−1} + · · · + f_{nf} q^{−nf}.

² q is the forward shift operator: q u_t = u_{t+1}.

The full parameter vector is θ = [θa^T, θb^T, θc^T, θd^T, θf^T]^T ∈ R^n, where θa = [a1, . . . , a_{na}]^T ∈ R^{na}, and where θb–θf are defined in the same way as θa. Superscript o, e.g. θa^o, will refer to a specific parameter vector θ^o, and we will use the short-hand notation

Go(q) ≜ G(q, θ^o) = Bo(q) / (Fo(q) Ao(q)),
Ho(q) ≜ H(q, θ^o) = Co(q) / (Do(q) Ao(q)).          (3)

Poles of the model G(q, θ) correspond to roots of the polynomials A and F, see (2), and poles of H(q, θ) correspond to roots of the polynomials A and D. Zeros of G(q, θ) correspond to roots of the polynomial B, and zeros of H(q, θ) correspond to roots of C. The roots of A(z, θa) are defined as the na roots (or zeros) of the polynomial z^{na} A(z, θa) and they will be denoted {z_{a,k}(θa)}_{k=1}^{na}. Since the particular ordering of the roots will play no role for our considerations, the subscript k will be omitted. Zeros corresponding to z^{na} Ao(z) will be denoted z_a^o = z_a(θa^o). The roots of the other polynomials B–F are analogously denoted with subscripts b–f. A root of any of the polynomials is denoted without subscript. Sometimes a root of one of the polynomials will be regarded as a function of the entire parameter vector and we then write, e.g., z_a(θ).

Consider the system depicted in Fig. 1, where yt = Go(q) ut + Ho(q) e_t^o, where Go(q) and Ho(q) are given by (3), where ut and yt represent the measured input and output, respectively, where e_t^o is zero mean white noise with variance λo, where wt is zero mean white


noise with unit variance, and where we assume that the feedback controller K stabilizes the system.

When this system is modeled as

yt = G(q, θ) ut + H(q, θ) et,

with G and H given by (1), and the parameter vector θ is estimated using prediction error identification with a least-squares cost function, the asymptotic covariance matrix for the parameter estimate θ̂N (N is the sample size, i.e. the number of input/output measurements) is given by

P = λo Π^{−1},   Π = (1/2π) ∫_{−π}^{π} Ψ(e^{jω}) Ψ^∗(e^{jω}) dω,          (4)

where in turn

Ψ(q) = [ (Go So/(Ho Ao)) R Γ_{na}(q)      (So/Ao) √λo Γ_{na}(q)
         −(Go So/(Ho Bo)) R Γ_{nb}(q)     (K Go So/Bo) √λo Γ_{nb}(q)
         0                                −(1/Co) √λo Γ_{nc}(q)
         0                                (1/Do) √λo Γ_{nd}(q)
         −(Go So/(Ho Fo)) R Γ_{nf}(q)     (K Go So/Fo) √λo Γ_{nf}(q) ],          (5)

where Γn(q) = [q^{−1}, . . . , q^{−n}]^T, and where

So ≜ 1/(1 + Go K)

is the sensitivity function. It can also be shown, using Gauss' approximation formula, that the asymptotic variance of an estimate of a root z o of one of the polynomials Ao–Fo of the true system is given by

AsVar z o ≜ Λ^∗ P Λ,          (6)

where

Λ ≜ dz(θ)/dθ |_{θ=θ^o}.

For exact conditions for the above asymptotic results to hold we refer to Chapters 8–9 in Ljung (1999). Since (6) implies the approximation

E|z(θ̂N) − z o|² ≈ AsVar z o / N          (7)

for the finite sample size variance of an estimate z(θ̂N) of a root of one of the polynomials Ao–Fo, AsVar z o provides information about the variance of root estimates for large sample sizes.

The objective of this paper is to analyze the right-hand side of (6), and we will make the following assumptions, which will hold throughout the paper.

Standing Assumptions 2.1. The polynomials Co, Do and Fo are minimum phase. The rational transfer function R is minimum phase. With K = Kn/Kd, where Kn and Kd are coprime polynomials in z^{−1}, it holds that

Acl = A F Kd + B Kn

has all its roots strictly inside the unit circle. The matrix Π, defined in (4), is non-singular and the noise variance is positive, λo > 0. Finally, all results are valid only for roots of multiplicity 1.

The conditions above are standard in a system identification context, cf. Ljung (1999). The minimum phase conditions on Co, Do and Fo guarantee that the one-step ahead predictor of the output is stable. Thus poles outside the open unit disc |z| < 1 are located in Ao. In prediction error identification, only the spectrum of the reference signal matters, and R should be interpreted as a spectral factor of this spectrum; hence the stability and minimum phase conditions on this quantity are not particularly restrictive, save that periodic signals are not covered. Furthermore, the data should be generated under stationary conditions, which requires the transfer functions from r and e to u and y to be stable. Since, as noted above, Ao contains all poles on or outside the unit circle, this is equivalent to the closed loop pole polynomial Acl being minimum phase. The non-singularity of Π is a joint condition on identifiability at θ^o and persistence of excitation. A sufficient and necessary condition for this assumption to hold is that the rows of Ψ are linearly independent. This in turn requires both pairs {Co, Do} and {Bo, Fo} to be coprime. We refer to Gevers, Bazanella, Bombois and Miskovic (2009) for interesting recent sufficient and necessary conditions for the asymptotic covariance matrix to be non-singular.

3. General results

In this section we present the main technical results of this paper. The first result, Theorem 3.1, will give us a general expression for the asymptotic variance of a pole or zero identified in open or closed loop, and that expression will be a starting point for deriving more explicit expressions.

3.1. Characterization in terms of basis functions

Theorem 3.1 (A General Result). The asymptotic variance of a root z o of any of the polynomials Ao, Bo, Co, Do or Fo is given by

AsVar z o = λo |L(z o)|² K^i(z o, z o),          (8)

where K^i is the ith diagonal element of

K(μ, z) ≜ Σ_{k=1}^{n} B_k^∗(μ) B_k(z),          (9)

where {Bk}_{k=1}^{n} is any orthonormal basis for the rowspace of Ψ (defined by (5)). Table 1 provides L and i for the different cases Ao–Fo. For roots of Ao, Bo and Fo, the result (8) holds only if R ≢ 0. Furthermore, when some parameters are known, the above result still holds, save that the rows of Ψ that correspond to the known parameters should be removed when computing the subspace used to generate K.

Table 1
Details for Theorem 3.1.

Polynomial    L(z)                               i
Ao            z Ho(z) / (G̃o(z) So(z) R(z)) Ď     1
Bo            z Ho(z) / (G̃o(z) R(z))             1
Co            z Ho(z) / H̃o(z) Ď                  2
Do            −z Ho(z) / H̃o(z) Ď                 2
Fo            z Ho(z) / (G̃o(z) So(z) R(z))       1

"Ď" — See Remark 3.2.

Proof. The proof is given in Appendix B. □
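Theorem 3.1 expresses the variance through an orthonormal basis for the rowspace of Ψ under the inner product defined in the Notation section. As a minimal numerical sanity check (a sketch, assuming a white unit-variance input and an FIR structure, so that the relevant rows reduce to the delayed monomials z^{−k}, cf. Example 3.3 below):

```python
import numpy as np

# The inner product from the Notation section,
# <f, g> = (1/2pi) * integral over [-pi, pi] of f(e^{jw}) conj(g(e^{jw})) dw,
# approximated by the exact quadrature on a uniform frequency grid.
M = 4096
w = 2 * np.pi * np.arange(M) / M

def inner(f, g):
    z = np.exp(1j * w)
    return np.mean(f(z) * np.conj(g(z)))

# For a white input and an FIR model, the rows of Psi are spanned by the
# monomials z^-k, k = 1..n, which are orthonormal: <z^-j, z^-k> = delta_{j-k}.
n = 5
G = np.array([[inner(lambda z, j=j: z ** (-j), lambda z, k=k: z ** (-k))
               for k in range(1, n + 1)]
              for j in range(1, n + 1)])
print(np.allclose(G, np.eye(n)))  # True: the delayed monomials form an orthonormal set
```

On the uniform grid the quadrature is exact for these monomials (sums over roots of unity), which is why the Gram matrix comes out as the identity to machine precision.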



Remark 3.2. In the cases marked by Ď in Tables 1 and 2 we have that L(z o) = 0 (for A this only holds in open loop operation). This does of course not mean that the asymptotic variance is zero. The reason is that K^i(z, z) has a pole at the same location that cancels the zero in L. To see this we use that K^i(z, z) can be written

K^i(z, z) = Ψ_i^∗(z) [ (1/2π) ∫_{−π}^{π} Ψ(e^{jω}) Ψ^∗(e^{jω}) dω ]^{−1} Ψ_i(z),          (10)

where Ψ_i is the ith column of Ψ, see Appendix B. Take now, e.g., the case Co, where L(z o) = 0 since Co(z o) = 0. In that case there is a factor 1/Co(z o) in Ψ_2(z_c^o) that cancels the factor Co(z o) in L(z o). □

We illustrate the use of Theorem 3.1 with an example.

Example 3.3. Consider the FIR system

yt = (1 − α e^{jβ} q^{−1})(1 − α e^{−jβ} q^{−1}) u_{t−1} + et,          (11)

where α > 0, where the input is white noise with unit variance, i.e. R = 1, and where et is white noise with variance λo = 1. We will use Theorem 3.1 to analyze the asymptotic variance of zero estimates associated with an FIR model of order n. In order to compute the asymptotic variance for the zero z o = α e^{jβ} we first need to determine L in (8). From the second row in Table 1 we obtain L = z Ho(z)/(G̃o(z) R(z)) for zeros of Bo. For an FIR model Ho ≡ 1, and furthermore G̃o(z) = Go(z)/(1 − z o z^{−1}) = z^{−1}(1 − α e^{−jβ} z^{−1}). Thus

L(α e^{jβ}) = α² e^{j2β} / (1 − e^{−j2β}).

From Table 1 it also follows that i = 1 in (8). We thus have to determine the (1, 1)-element of K, and for this we need an orthonormal basis for the predictor gradient Ψ. This can be done numerically by, e.g., Gram–Schmidt orthogonalization, and in Section 6.4.2 we will see that such a basis can sometimes be obtained without computations. In this example we notice that Ψ = [Γn 0], so an orthonormal basis is given by [z^{−k} 0], k = 1, . . . , n. Thus we obtain

AsVar z o = λo |L(z o)|² K¹(z o, z o)
          = (α⁴ / (2(1 − cos(2β)))) Σ_{k=1}^{n} |α e^{j2β}|^{−2k}
          = (α² / (2(1 − cos(2β)))) (1 − α^{−2n}) / (1 − α^{−2}).          (12)

Eq. (12) shows that the asymptotic variance grows exponentially with the model order n when α < 1, i.e. for zeros strictly inside the unit circle, whereas it converges to a finite limit when α > 1, i.e. for NMP-zeros. Fig. 2 shows the zero estimates for 100 different noise realizations when 1000 input–output samples are used to estimate (11). Comparing Fig. 2(a)–(b) shows that for α = 1/2 the sample variance indeed increases quite significantly when the number of parameters is increased from n = 3 to n = 5. On the other hand, Fig. 2(c)–(d) show that the increase is hardly visible when α = 3/2 for the same change in model order. Eq. (12) also shows that the asymptotic variance will be large when the zeros are close together, i.e. when β ≈ 0. That this is the case for estimated zeros is confirmed by comparing Fig. 2(c) and (e), which only differ in the angular position of the zeros.

The objective of this paper is to analyze the asymptotic variance (6), and even though Theorem 3.1 provides a characterization of this quantity for any of the polynomials Ao–Fo under any stationary experimental condition, the result (8) requires further elaboration in order to provide useful insights. What remains to be done is to characterize the function K, and this in turn requires a characterization of the rowspace of Ψ. We shall be able to provide an explicit formula for K only for some special cases, but we will be able to provide useful bounds on K under quite general conditions. In fact, (8) contains structural information that we can extract without having to compute K explicitly. We pursue this idea next.

3.2. Lower bounds in terms of basis functions

Lemma 3.4 (Model Structure Comparison). Consider two different model structures M1 and M2 defined as in (1)–(2), corresponding to predictor gradients Ψ¹ and Ψ² (see (5)), respectively, and suppose that the rowspace of Ψ¹ is contained in the rowspace of Ψ². Suppose that Go = Bo/(Ao Fo) and Ho = Co/(Ao Do) can be described by both model structures. Then the asymptotic variance for any root of Ao–Fo, when M1 is used, is no greater than the corresponding asymptotic variance when M2 is used.

Proof. The result follows directly from Theorem 3.1 and Lemma A.1. □

However, we can also combine Theorem 3.1 and Lemma 3.4 to obtain lower bounds on the asymptotic variance where K is replaced by a quantity that is easier to characterize.

Lemma 3.5 (General Lower Bound). The asymptotic variance of a root of any of the polynomials Ao, Bo, Co, Do or Fo is bounded from below according to

AsVar z o ≥ λo |L_lb(z o)|² K_lb(z o, z o),          (13)

where

K_lb(μ, z) ≜ Σ_k B_k^∗(μ) B_k(z).          (14)

Here {Bk} is any orthonormal basis for the rowspace of the vector specified in Table 2 for the different polynomials. Table 2 also defines the function L_lb. In the table, Ry and Ru are transfer functions with all zeros strictly inside the unit circle and no poles outside the unit circle, satisfying

|Ry(e^{jω})|² = |R(e^{jω})|² + λo |Ho(e^{jω})|² / |Go(e^{jω})|²,
|Ru(e^{jω})|² = |R(e^{jω})|² + λo |Ho(e^{jω}) K(e^{jω})|².          (15)

Proof. The proof is given in Appendix C. □

The usefulness of Lemma 3.5 is that the subspace which the basis functions {Bk} should span is much simpler than the corresponding space in Theorem 3.1, cf. Ψ (5) with the functions given in Table 2. In fact, we will be able to use results in Appendix A to provide explicit bounds and expressions for K_lb.

Corollary 3.6. Equality holds in (13) for FIR and OE models. This also holds for Box–Jenkins models when the condition

(K Go So / Bo) Γ_{nb+nf}  is orthogonal to  (1 / (Co Do)) Γ_{nc+nd}          (16)

is satisfied.

Proof. The proof is given in Appendix D. □

Remark 3.7. Notice that (16) holds when the system is operating in open loop, i.e. K ≡ 0.

Remark 3.8. The ratio between the two terms λo|Ho|²/|Go|² and |R|² that define Ry is the ratio of the parts of the output spectrum that are due to the noise and the reference signal. Similarly, the ratio of the two terms that define Ru is the ratio of the parts of the input spectrum that are due to the noise and the reference signal.

The following lemma provides bounds for Ry and Ru defined in (15).

Lemma 3.9 (Bounds on Ry and Ru). When Go does not have zeros on the unit circle, it holds that Ry, defined by (15) in Lemma 3.5, satisfies

1 + inf_ω λo |Ho(e^{jω})|² / |R(e^{jω}) Go(e^{jω})|² ≤ |Ry(z o) / R(z o)|² ≤ 1 + sup_ω λo |Ho(e^{jω})|² / |R(e^{jω}) Go(e^{jω})|²,          (17)

when R is not identically zero, and Ry = √λo (Ho/Go)^ when R ≡ 0.
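Ry (and analogously Ru) is specified in (15) only through its magnitude on the unit circle, so in practice it can be obtained by spectral factorization. A minimal sketch using the standard cepstral (homomorphic) method, with hypothetical choices R = 1, λo = 1, Ho = 1 and Go(q) = q^{−1}(1 − 0.5 q^{−1}) (my example, not from the paper):

```python
import numpy as np

M = 8192
w = 2 * np.pi * np.arange(M) / M
z = np.exp(1j * w)

# |Ry|^2 = |R|^2 + lambda_o |Ho/Go|^2 on the unit circle, cf. (15).
Go = z ** -1 * (1 - 0.5 * z ** -1)
S = 1.0 + 1.0 / np.abs(Go) ** 2

# Cepstral spectral factorization: fold the real cepstrum of log S onto
# its causal part and exponentiate, which yields the minimum phase factor.
c = np.fft.ifft(np.log(S)).real
u = np.zeros(M)
u[0] = c[0]
u[1:M // 2] = 2 * c[1:M // 2]
u[M // 2] = c[M // 2]
Ry = np.exp(0.5 * np.fft.fft(u))

print(np.max(np.abs(np.abs(Ry) ** 2 - S)))  # magnitude |Ry|^2 matches S
h = np.fft.ifft(Ry)                          # series coefficients of Ry
print(np.max(np.abs(h[M // 2:])))            # anticausal part ~ 0: minimum phase
```

The same construction applies to Ru. For FIR filters, `scipy.signal.minimum_phase` implements a closely related homomorphic construction.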

Fig. 2. Simulation results for Example 3.3. (a) n = 3, α = 1/2, β = π/4. (b) n = 5, α = 1/2, β = π/4. (c) n = 3, α = 3/2, β = π/4. (d) n = 5, α = 3/2, β = π/4. (e) n = 3, α = 3/2, β = π/10.
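The qualitative behaviour seen in the figure follows directly from (12); evaluating the formula for the plotted configurations (a sketch):

```python
import numpy as np

def asvar_zero(n, alpha, beta):
    """Asymptotic variance of the zero estimate according to (12)."""
    return (alpha ** 2 / (2 * (1 - np.cos(2 * beta)))
            * (1 - alpha ** (-2 * n)) / (1 - alpha ** (-2)))

for alpha in (0.5, 1.5):
    v3 = asvar_zero(3, alpha, np.pi / 4)
    v5 = asvar_zero(5, alpha, np.pi / 4)
    print(alpha, v5 / v3)
# alpha = 0.5: the variance grows by a factor ~16 from n = 3 to n = 5
# alpha = 1.5: the ratio is ~1.08, i.e. already close to the finite limit
```

The factor 1/(1 − cos 2β) likewise explains panel (e): moving the zeros from β = π/4 to β = π/10 inflates the variance for every n.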

Table 2
Details for Lemma 3.5.

Polynomial    L_lb(z)                             {Bk} spans rows of
Ao            z Ho(z) / (G̃o(z) So(z) Ry(z))       (Go So Ry / (Ho Ao)) Γ_{na}
Bo            z Ho(z) / (G̃o(z) Ru(z))             (Go So Ru / (Ho Bo Fo)) Γ_{nb+nf}
Co            z Ho(z) / H̃o(z) Ď                   (1 / (Co Do)) Γ_{nc+nd}
Do            z Ho(z) / H̃o(z) Ď                   (1 / (Co Do)) Γ_{nc+nd}
Fo            z Ho(z) / (G̃o(z) So(z) Ru(z))       (Go So Ru / (Ho Bo Fo)) Γ_{nb+nf}

"Ď" — See Remark 3.2.

Furthermore, when Ho and K are stable, Ru, also defined by (15) in Lemma 3.5, satisfies

1 + inf_ω λo |Ho(e^{jω}) K(e^{jω})|² / |R(e^{jω})|² ≤ |Ru(z o) / R(z o)|² ≤ 1 + sup_ω λo |Ho(e^{jω}) K(e^{jω})|² / |R(e^{jω})|²,          (18)

when R is not identically zero, and

Ru = √λo (Ho K)^          (19)

when R ≡ 0.

Proof. The proof is given in Appendix E. □

3.3. Explicit lower and upper bounds

We now provide bounds which indicate how the asymptotic variance of estimated roots depends on the model order.

Theorem 3.10. The asymptotic variance of a root of any of the polynomials Ao, Bo, Co, Do or Fo is bounded according to

λo |z o|² c1 (1 − |z o|^{−2m−2}) / (1 − |z o|^{−2}) ≤ AsVar z o ≤ c2 (1 − |z o|^{−2m̄−2}) / (1 − |z o|^{−2}),          (20)

where m and c1 are given in Table 3, and where c2 = c2(Go, Ho, K, R, λo) also depends on which polynomial z o belongs to (but not on any of the orders of the estimated polynomials), and where m̄ = m̄_abf ≜ na + nb + nf + mo, for some finite mo ≥ 0, for z o belonging to Ao, Bo and Fo, and where m̄ = m̄_cd ≜ na + nb + nc + nd + nf + mo, for some finite mo ≥ 0, for z o belonging to Co and Do.

Proof. The proof is given in Appendix F. □

Notice that Theorem 3.10 also applies to the case |z o| = 1, by setting (1 − |z o|^{−2m−2}) / (1 − |z o|^{−2}) = m + 1 in (20). This is relevant for Ao and Bo, which are allowed to have such roots, but not for the other polynomials (by Standing Assumptions 2.1).

3.4. Explicit characterization for Box–Jenkins models

Building on Corollary 3.6, we can provide explicit upper bounds for the asymptotic variance of the roots for FIR, output error and Box–Jenkins models under certain restrictions. In some cases it follows from Corollary 3.6 that these bounds are tight.
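As a preview of how such exact expressions behave, the right-hand side of (21) (stated in Theorem 3.11 below) can be evaluated for a hypothetical first-order noise model; all numbers here are illustrative, not from the paper:

```python
# Hypothetical noise model (my numbers): Co(z) = 1 - 0.5 z^-1 (root z0 = 0.5),
# Do(z) = 1 - 0.3 z^-1. For a root of Co, Z = Co~ * Do, and here Co~ = 1.
co = [1.0, -0.5]
do = [1.0, -0.3]
z0 = 0.5

def poly_z(coeffs, z):
    """Evaluate a polynomial in z^-1 at the point z."""
    return sum(c * z ** (-k) for k, c in enumerate(coeffs))

def asvar_eq21(nc, nd):
    # AsVar z0 = |Co(1/z0) Do(1/z0)|^2 |z0|^(-2(nc+nd)) / (|Z(z0)|^2 (|z0|^-2 - 1))
    Z_z0 = poly_z(do, z0)
    num = abs(poly_z(co, 1 / z0) * poly_z(do, 1 / z0)) ** 2 * abs(z0) ** (-2 * (nc + nd))
    den = abs(Z_z0) ** 2 * (abs(z0) ** (-2) - 1)
    return num / den

v = [asvar_eq21(1 + k, 1) for k in range(4)]
ratios = [v[k + 1] / v[k] for k in range(3)]
print(v[0], ratios)  # each extra estimated order multiplies the variance by |z0|^-2 = 4
```

The ratios make the exponential dependence on nc + nd concrete: every additional estimated noise-model parameter multiplies the asymptotic variance of this root by |z0|^{−2}.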

Table 3
Details for Theorem 3.10.

Polynomial    m          c1
Ao            na         (1/|Ão(z o)|²) min_ω |Ho(e^{jω}) Ao(e^{jω}) / (So(e^{jω}) Go(e^{jω}) Ry(e^{jω}))|²
Bo            nb + nf    (1/|B̃o(z o) Fo(z o)|²) min_ω |Ho(e^{jω}) Bo(e^{jω}) Fo(e^{jω}) / (So(e^{jω}) Go(e^{jω}) Ru(e^{jω}))|²
Co            nc + nd    (1/|C̃o(z o) Do(z o)|²) min_ω |1 / (Co(e^{jω}) Do(e^{jω}))|²
Do            nc + nd    (1/|Co(z o) D̃o(z o)|²) min_ω |1 / (Co(e^{jω}) Do(e^{jω}))|²
Fo            nb + nf    (1/|Bo(z o) F̃o(z o)|²) min_ω |Ho(e^{jω}) Bo(e^{jω}) Fo(e^{jω}) / (So(e^{jω}) Go(e^{jω}) Ru(e^{jω}))|²

Theorem 3.11 (FIR, OE and BJ Models). Assume that the model structure is FIR, OE or Box–Jenkins, where for the latter we assume that (16) holds. The asymptotic variances of the roots of the Co and Do polynomials are given by

AsVar z o = |Co(1/z o) Do(1/z o)|² |z o|^{−2(nc+nd)} / ( |Z(z o)|² (|z o|^{−2} − 1) ),          (21)

where Z = C̃o Do for a root of Co and Z = D̃o Co for a root of Do.

Furthermore, denote the denominator of Go So Ru / (Bo Fo Ho) (after cancellations) by A_Ď, and suppose in the following that the order n_Ď of this polynomial is less than nb + nf. Denote the order of the numerator of Go So Ru / (Bo Fo Ho) (after cancellations) by m. Then the asymptotic variance of a root z o ≠ 0 of the Bo or Fo polynomials, with |z o| ≠ 1, is bounded from above according to

AsVar z o ≤ ( λo |Ho(z o)|² / ((1 − |z o|^{−2}) |G̃o(z o) So(z o) Ru(z o)|²) ) × ( 1 − (|A_Ď(1/z o)|² / |A_Ď(z o)|²) |z o|^{−2(nb+nf+m)} ),          (22)

where Ru is defined in Lemma 3.5. For a root of Bo with magnitude 1, i.e. |z o| = 1, the asymptotic variance is bounded from above according to

AsVar z o ≤ ( λo |z o Ho(z o)|² / |G̃o(z o) So(z o) Ru(z o)|² ) × ( nb + nf + m − n_Ď + Σ_{k=1}^{n_Ď} (1 − |ξk|²) / |z o − ξk|² ),          (23)

where the {ξk}_{k=1}^{n_Ď} are the roots of A_Ď. When m = 0, i.e. when the numerator of Go So Ru / (Bo Fo Ho) is constant, equality holds in (22) and (23).

Proof. The proof is given in Appendix G. □

In the following sections we will interpret and elaborate on the results in this section.

4. Zeros and poles in the open unit disc |z| < 1

Due to the factor |z o|^{−2m} in the numerator of (20), Theorem 3.10 shows that the asymptotic variance for zeros and poles with magnitude less than one increases exponentially with the order of the corresponding polynomial, cf. Example 3.3. Thus, the accuracy can be poor even when only a moderate number of parameters is used for a certain polynomial. We alert the reader that the result holds regardless of whether the system is operating in open or closed loop. Also notice that the asymptotic variance for a root of Co (Do) can be large even if nc (nd) is small. This happens if nd (nc) is large, since m = nc + nd for the Co and Do polynomials (see Table 3). A similar conclusion holds for Bo and Fo.

For open loop operation, or whenever (16) holds, an exact formula for the asymptotic variance of the roots of the noise model polynomials Co and Do is given in Theorem 3.11 by (21). This theorem also gives an exact expression for roots of Fo under certain conditions. Notice that these formulae exhibit the exponential growth with the order discussed in the preceding paragraph. We conclude that accurate identification of zeros and poles strictly inside the unit circle is inherently difficult for model structures of high orders.

5. Unstable poles

In this section we will study the asymptotic variance of estimates of poles located strictly outside the unit circle. By assumption, all such poles are contained in Ao, see the discussion below Standing Assumptions 2.1. Since the theory covers stationary experimental conditions, closed loop operation is necessary for the results in this section to hold. Theorem 3.10 provides generally applicable lower and upper bounds for the asymptotic variance of an unstable pole. However, more transparent expressions can be obtained by considering large model orders. We have the following result.

Theorem 5.1 (Unstable Poles). Consider a root z o of Ao with |z o| > 1. The asymptotic variance AsVar z o increases monotonically to a finite limit as n = na + nb + nc + nd + nf tends to infinity. The limit is bounded from above according to

lim_{n→∞} AsVar z o ≤ λo |Co(z o) / (Ão(z o) Do(z o))|² |K(z o) / R(z o)|² · 1 / (1 − |z o|^{−2})          (24)

and, provided na → ∞, bounded from below by

λo |Co(z o) / (Ão(z o) Do(z o))|² |K(z o) / Ry(z o)|² · 1 / (1 − |z o|^{−2}) ≤ lim_{n→∞} AsVar z o,          (25)

where Ry is defined by (15) in Lemma 3.5.

Proof. The proof is given in Appendix H. □
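The contrast between the exponential growth in Section 4 and the finite limit in Theorem 5.1 is driven by the order-dependent factor in (20); a small numerical sketch:

```python
# The order dependent factor (1 - |z|^(-2m-2)) / (1 - |z|^(-2)) from (20),
# evaluated for a root inside (|z| = 0.8) and outside (|z| = 1.25) the unit circle.
def order_factor(z_abs, m):
    return (1 - z_abs ** (-2 * m - 2)) / (1 - z_abs ** (-2))

inside = [order_factor(0.8, m) for m in (5, 10, 20)]
outside = [order_factor(1.25, m) for m in (5, 10, 20)]
limit = 1 / (1 - 1.25 ** -2)

print(inside)          # explodes with m: root inside the unit circle
print(outside, limit)  # saturates at 1/(1 - |z|^-2): root outside the unit circle
```

For |z| < 1 the factor grows like |z|^{−2m}, while for |z| > 1 it increases monotonically to the finite limit 1/(1 − |z|^{−2}), exactly the dichotomy stated in the abstract.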



The upper bound in Theorem 5.1 shows that, contrary to poles strictly inside the unit circle (see Section 4), the asymptotic variance for unstable poles converges to a finite limit as the model order increases. From Theorem 5.1 we see that the location of the pole is critical for the asymptotic variance when the number of parameters in A is large. A pole close to the unit circle will have high asymptotic variance. Furthermore, we see that the gain of the controller at the pole is also important for the asymptotic variance. Notice that Lemma 3.9 gives that |Ry(z o)| > |R(z o)|, which shows that the lower bound in Theorem 5.1 is indeed lower than the upper bound in the same theorem. In the following subsections we will consider some special cases where Ry can be expressed explicitly and which will shed further light on the asymptotic variance of unstable poles.

5.1. High SNR at the output

Corollary 5.2. For a root z o of Ao with |z o| > 1, it holds

lim_{na→∞} AsVar z o / ( λo |Co(z o) / (Ão(z o) Do(z o))|² |K(z o) / R(z o)|² · 1 / (1 − |z o|^{−2}) ) = 1,          (26)

when

sup_ω λo |Ho(e^{jω})|² / |R(e^{jω}) Go(e^{jω})|² → 0.          (27)

Proof. In the limit (27) it follows from (17) that |R(z o)/Ry(z o)| → 1, and hence the lower bound (25) equals the upper bound (24) in the limit (27). □


Corollary 5.2 shows that at high SNR at the output, cf. Remark 3.8, the asymptotic variance of a pole of the system, located outside the unit disc, is approximately equal to the upper bound in (24).

5.2. Proportional signal and noise spectra at the output

When the two terms defining Ry, see Lemma 3.5, are proportional to each other, the lower bound in Theorem 5.1 can be expressed explicitly.

Corollary 5.3. Assume that $R = \sqrt{\mu}\,\widehat{H_o/G_o}$ for some μ > 0. Then for a root $z^o$ of Ao with $|z^o| > 1$ it holds

$$\frac{1}{1+\mu/\lambda_o}\left|\frac{\hat B_o(z^o)K(z^o)}{\tilde A_o(z^o)F_o(z^o)}\right|^2\frac{1}{1-|z^o|^{-2}} \le \lim_{n_a\to\infty}\operatorname{AsVar} z^o. \qquad(28)$$

Proof. With $R = \sqrt{\mu}\,\widehat{H_o/G_o}$, it follows that $R_y = \sqrt{\lambda_o+\mu}\,\widehat{H_o/G_o}$ and (25) in Theorem 5.1 gives the result. □

5.3. Costless identification

The case when there is no external excitation, i.e. R ≡ 0, has been coined costless in Bombois, Scorletti, Gevers, Van den Hof, and Hildebrand (2006) since identification can be performed in closed loop without perturbing the system from normal operating conditions.

Corollary 5.4. Assume R ≡ 0. Then for a root $z^o$ of Ao with $|z^o| > 1$, it holds

$$\left|\frac{\hat B_o(z^o)K(z^o)}{\tilde A_o(z^o)F_o(z^o)}\right|^2\frac{1}{1-|z^o|^{-2}} \le \lim_{n_a\to\infty}\operatorname{AsVar} z^o. \qquad(29)$$

Proof. Follows directly from Corollary 5.3 by setting μ = 0. □

In Corollary 5.4 we see again the critical role that the pole location and the controller gain $K(z^o)$ play for the asymptotic variance of a pole outside the unit disc.

6. Non-minimum phase zeros

Now we turn to the asymptotic variance of non-minimum phase zeros of the system. The results below cover both open and closed loop identification.

Theorem 6.1 (NMP-zeros). Consider a root $z^o$ with $|z^o| > 1$ of Bo. The asymptotic variance increases monotonically to a finite limit as n = na + nb + nc + nd + nf tends to infinity. The limit is bounded from above according to

$$\lim_{n\to\infty}\operatorname{AsVar} z^o \le \frac{\lambda_o|H_o(z^o)|^2}{|\tilde G_o(z^o)R(z^o)|^2}\,\frac{1}{1-|z^o|^{-2}} \qquad(30)$$

and, provided $n_b+n_f\to\infty$, bounded from below by

$$\frac{\lambda_o|H_o(z^o)|^2}{|\tilde G_o(z^o)R_u(z^o)|^2}\,\frac{1}{1-|z^o|^{-2}} \le \lim_{n\to\infty}\operatorname{AsVar} z^o. \qquad(31)$$

Equality holds in (31) for FIR and OE models as well as Box–Jenkins models subject to (16).

Proof. The proof is given in Appendix I. □

As for poles outside the unit disc, we see that the asymptotic variance of an NMP-zero converges to a finite limit as the model order increases. We see also that the location is important for the asymptotic variance of a zero: an NMP-zero close to the unit circle will have large asymptotic variance.

6.1. Open loop operation or high SNR at the input

Corollary 6.2. For a root $z^o$ of Bo with $|z^o| > 1$, it holds

$$\lim_{n_b+n_f\to\infty}\frac{\operatorname{AsVar} z^o}{\frac{\lambda_o|H_o(z^o)|^2}{|\tilde G_o(z^o)R(z^o)|^2}\,\frac{1}{1-|z^o|^{-2}}} = 1 \qquad(32)$$

when

$$\sup_\omega \frac{\lambda_o|H_o(e^{j\omega})K(e^{j\omega})|^2}{|R(e^{j\omega})|^2} \to 0. \qquad(33)$$

In particular, (32) holds in open loop operation, i.e. when K ≡ 0.

Proof. In the limit (33) it follows from (18) that $|R(z^o)/R_u(z^o)| \to 1$ and hence the lower bound (31) equals the upper bound (30) in the limit (33). □

From Corollary 6.2 we are able to conclude that the upper bound in Theorem 6.1 is attained when the system is operating in open loop. Furthermore, for closed loop operation at high SNR at the input, cf. Remark 3.8, and with a large number of parameters in B and/or F, the asymptotic variance becomes equal to the asymptotic variance in open loop operation.

6.2. Proportional signal and noise spectra at the input

When the two terms defining Ru, see Lemma 3.5, are proportional to each other, the lower bound in Theorem 6.1 can be expressed explicitly.

Corollary 6.3. Assume that $R = \sqrt{\mu}\,\widehat{H_oK}$ for some μ > 0. Then for a root $z^o$ of Bo with $|z^o| > 1$ it holds

$$\frac{1}{1+\mu/\lambda_o}\,\frac{1}{\left|\hat K(z^o)\frac{\tilde B_o(z^o)}{\hat A_o(z^o)F_o(z^o)}\right|^2}\,\frac{1}{1-|z^o|^{-2}} \le \lim_{n_b+n_f\to\infty}\operatorname{AsVar} z^o. \qquad(34)$$

Equality holds in (34) for FIR and OE models as well as Box–Jenkins models subject to (16).

Proof. With $R = \sqrt{\mu}\,\widehat{H_oK}$, it follows that $R_u = \sqrt{\lambda_o+\mu}\,\widehat{H_oK}$ and (31) in Theorem 6.1 gives the result. □

Corollary 6.3 provides some insight regarding the role played by the controller for the asymptotic variance of NMP-zeros. Since $\hat K(z^o)$ appears in the denominator in the lower bound (34), a small controller gain will result in a large asymptotic variance, exactly the opposite to what is the case for poles outside the unit disc, cf. Section 5.

6.3. Costless identification

Corollary 6.4. Assume R ≡ 0. Then for a root $z^o$ of Bo with $|z^o| > 1$ it holds

$$\frac{1}{\left|\hat K(z^o)\frac{\tilde B_o(z^o)}{\hat A_o(z^o)F_o(z^o)}\right|^2}\,\frac{1}{1-|z^o|^{-2}} \le \lim_{n_b+n_f\to\infty}\operatorname{AsVar} z^o. \qquad(35)$$

Equality holds in (35) for FIR and OE models as well as Box–Jenkins models subject to (16).

Proof. Follows directly from Corollary 6.3 by setting μ = 0. □

As in Corollary 6.3 we see that Corollary 6.4 embodies the message that a small controller gain at the NMP-zero of interest implies a large asymptotic variance.

6.4. Finite model order results

For FIR, OE and Box–Jenkins models we can use Theorem 3.11 to obtain results for finite model orders for NMP-zeros.

6.4.1. Upper bounds

Notice that $S_o(z^o) = 1$ for a zero of Bo so the upper bound in (22) is the same as the asymptotic limit in Corollary 6.2, save for the factor

$$\left(1-\frac{|A^\dagger(1/z^o)|^2}{|A^\dagger(z^o)|^2}\,|z^o|^{-2(n_b+n_f+m)}\right) \qquad(36)$$

which converges to 1 as $n_b+n_f\to\infty$. The upper bound (22) can be evaluated also for the remaining special cases of Ru used in the preceding subsections, i.e. proportional noise and signal spectra at the input and costless identification. The results are finite model order upper bounds that are equal to the asymptotic (in model order) lower bounds (34) and (35), respectively, but with (36) as added factor.

6.4.2. Exact results

Theorem 3.11 can also be used to obtain exact expressions for the asymptotic variance. For open loop operation we have the following result.

Theorem 6.5 (Open Loop/FIR, OE and BJ Models). Assume that the model structure is FIR, OE or Box–Jenkins and that K ≡ 0. Assume further that Do = 1 and that R has no zeros. Denote the polynomial $F_o^2C_o/R$ by $A^\dagger$ and suppose that $n_b+n_f$ is larger than the order of this polynomial. Then the asymptotic variance of a root $z^o \ne 0$ of Bo, with $|z^o| \ne 1$, is given by

$$\operatorname{AsVar} z^o = \frac{\lambda_o|H_o(z^o)|^2}{|\tilde G_o(z^o)R(z^o)|^2}\,\frac{1}{1-|z^o|^{-2}}\left(1-\frac{|A^\dagger(1/z^o)|^2}{|A^\dagger(z^o)|^2}\,|z^o|^{-2(n_b+n_f)}\right). \qquad(37)$$

When $|z^o| = 1$, the asymptotic variance of a root $z^o$ of Bo is given by

$$\operatorname{AsVar} z^o = \frac{\lambda_o|z^oH_o(z^o)|^2}{|\tilde G_o(z^o)R(z^o)|^2}\left(n_b+n_f-n^\dagger+\sum_{k=1}^{n^\dagger}\frac{1-|\xi_k|^2}{|z^o-\xi_k|^2}\right), \qquad(38)$$

where the $\{\xi_k\}_{k=1}^{n^\dagger}$ are the roots of $A^\dagger$ and $n^\dagger$ is its degree.

Proof. Follows directly from Theorem 3.11. □

Notice that the above theorem is valid not only for NMP-zeros but for any zeros. For closed loop operation we have the following result.

Theorem 6.6 (Closed Loop/FIR and OE Models). Assume that the model structure is FIR or OE and that the controller $K = K_n/K_d \ne 0$ is such that $K_n$ and $K_d$ are coprime stable polynomials such that $F_o = F_1K_n$ for some polynomial $F_1$. Suppose further that $R = \sqrt{\mu}\,K$ for some μ > 0 and that $n_b+n_f$ is at least the order of $F_o(F_oK_d+B_o)$. Then the asymptotic variance of a root $z^o \ne 0$ of Bo, with $|z^o| \ne 1$, is given by

$$\operatorname{AsVar} z^o = \frac{1}{1-|z^o|^{-2}}\,\frac{1}{\left(1+\frac{\mu}{\lambda_o}\right)|\tilde G(z^o)K(z^o)|^2}\left(1-\frac{|A^\dagger(1/z^o)|^2}{|A^\dagger(z^o)|^2}\,|z^o|^{-2(n_b+n_f)}\right), \qquad(39)$$

where $A^\dagger = F_o(F_1K_d+B_o)$. When $|z^o| = 1$, the asymptotic variance is given by

$$\operatorname{AsVar} z^o = \frac{|z^o|^2}{\left(1+\frac{\mu}{\lambda_o}\right)|\tilde G(z^o)K(z^o)|^2}\left(n_b+n_f-n^\dagger+\sum_{k=1}^{n^\dagger}\frac{1-|\xi_k|^2}{|z^o-\xi_k|^2}\right), \qquad(40)$$

where the $\{\xi_k\}_{k=1}^{n^\dagger}$ are the roots of $A^\dagger$.

Proof. Follows directly from Theorem 3.11. □

Theorem 6.6 is the NMP-zero counterpart to Theorem 4.1 in Ninness and Hjalmarsson (2005b) where the asymptotic variance for frequency function estimates is considered. The condition on R can be interpreted as that the feedback controller is u = K(r − y) (rather than u = r − Ky as we have assumed throughout the paper) where the reference r is white noise.

7. Interpretations of the results

In the previous sections we have presented a number of expressions and simplified bounds for the asymptotic variance of roots of the polynomials Ao–Fo. We will now summarize and interpret our major findings.

Roots inside versus outside the unit circle

An important result of this paper is that the asymptotic variances of zeros and poles strictly inside the unit circle behave fundamentally differently from the asymptotic variances of NMP-zeros and unstable poles when the model order is increased. Theorem 3.10 shows that the former grow exponentially while Theorems 5.1 and 6.1 show that the latter converge to finite limits. Theorems 6.5 and 6.6 show that for FIR, OE and Box–Jenkins models the convergence for NMP-zeros is exponentially fast, with rate $1/|z^o|$, under certain conditions. Thus roots closer to the unit circle imply slower convergence. Although this has not been proven, we believe this to be true in general for both poles outside the unit disc and NMP-zeros.
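The exponential convergence rate can be illustrated numerically. The sketch below (with a hypothetical $A^\dagger$ and NMP-zero of our own choosing, not from the paper) evaluates the finite-order factor appearing in (37) and shows the gap to the limit shrinking geometrically with ratio $|z^o|^{-2}$.

```python
# Sketch (hypothetical numbers): the finite-order factor in (37),
#   1 - |A†(1/z_o)|^2 / |A†(z_o)|^2 * |z_o|^{-2(nb+nf)},
# approaches 1 geometrically with ratio |z_o|^{-2} as nb + nf grows.
import numpy as np

z_o = 1.25                             # hypothetical NMP zero, |z_o| > 1
a_dag = np.array([1.0, -0.6, 0.08])    # hypothetical A†, roots (0.2, 0.4) inside unit circle

def poly_z(coeffs, z):
    """Evaluate a polynomial in z^{-1}: c0 + c1 z^{-1} + ... at the point z."""
    return sum(c * z ** (-k) for k, c in enumerate(coeffs))

ratio = abs(poly_z(a_dag, 1 / z_o)) ** 2 / abs(poly_z(a_dag, z_o)) ** 2
for n in (2, 5, 10, 20):
    factor = 1 - ratio * z_o ** (-2 * n)
    print(f"nb+nf={n:2d}  factor={factor:.6f}")   # tends to 1 from below
```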

Roots on the unit circle

Combining the upper and lower bounds in Theorem 3.10 gives that for the borderline case with a root on the unit circle, the asymptotic variance grows linearly with the model order. This is similar to what happens for frequency function estimates (Ljung, 1985; Ninness & Hjalmarsson, 2004).

Roots close to each other

From Theorem 3.1 it follows that the asymptotic variance of a root of one of the polynomials blows up when there is a nearby root of the same polynomial. The reason is that a root is very sensitive to errors in the parameters of the polynomial in this case. For example, for the second order polynomial $z^2+\theta_1z+\theta_2$ with roots $z_\pm(\theta) = \bigl(-\theta_1\pm\sqrt{\theta_1^2-4\theta_2}\bigr)/2$, it holds that

$$\frac{dz_\pm(\theta)}{d\theta} = \left[-\frac{1}{2}\pm\frac{\theta_1}{2\sqrt{\theta_1^2-4\theta_2}},\ \mp\frac{1}{\sqrt{\theta_1^2-4\theta_2}}\right]^{\mathrm T},$$

which tends to infinity as $\theta_1^2\to4\theta_2$, i.e. when the polynomial has a double root at $-\theta_1/2$. In fact the standard asymptotic analysis, which is based on the first order Gauss approximation formula, breaks down for roots of multiplicity higher than 1. This does not imply that the variance of root estimates becomes infinite for, e.g., double roots; instead the convergence rate becomes slower than the 1/N rate given in (7). We will not pursue this here but simply conclude that the result above indicates that there are significant problems associated with estimating roots of multiplicity higher than one.
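The blow-up of this gradient near a double root can be checked numerically; the sketch below (our own numbers) evaluates the closed-form derivative for two roots a distance `gap` apart and shows it growing like 1/`gap`.

```python
# Sketch (illustrative numbers): numerical blow-up of the root gradient of
# z^2 + theta1 z + theta2 as the two roots approach each other.
import numpy as np

def root_sensitivity(theta1, theta2):
    """Max-magnitude entry of dz/dtheta for the root z_+ of z^2 + theta1 z + theta2."""
    disc = theta1 ** 2 - 4 * theta2             # discriminant; -> 0 at a double root
    d_dt1 = -0.5 + theta1 / (2 * np.sqrt(disc + 0j))
    d_dt2 = -1 / np.sqrt(disc + 0j)
    return max(abs(d_dt1), abs(d_dt2))

for gap in (0.5, 0.1, 0.01):
    r1, r2 = 0.5, 0.5 + gap                     # two real roots, distance `gap` apart
    t1, t2 = -(r1 + r2), r1 * r2                # z^2 + t1 z + t2 = (z - r1)(z - r2)
    print(f"root gap {gap:5.2f}: |dz/dtheta| = {root_sensitivity(t1, t2):.1f}")
```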

Table 4. Variance of estimated non-minimum phase zeros from Monte Carlo simulations in open and closed loops.

NMP-zeros

The universal upper bound

$$\lim_{n\to\infty}\operatorname{AsVar} z^o \le \frac{\lambda_o|H_o(z^o)|^2}{|\tilde G_o(z^o)R(z^o)|^2}\,\frac{1}{1-|z^o|^{-2}} \qquad(41)$$

from Theorem 6.1 is useful when the reference signal is non-zero. Notice that the bound is model structure independent. It is also tight in open loop operation and for high signal to noise ratios at the input (see Corollary 6.2) in closed loop operation, and above we have argued that the convergence rate with respect to the model order is likely to be exponentially fast (at least we have shown this for some special cases). Thus the right-hand side of (41) provides in many cases a quite reasonable approximation of the asymptotic variance for an NMP-zero.

The reason why (41) becomes tight when the signal to noise ratio at the input increases can be understood from the fact that the sensitivity function of the closed loop becomes unity when evaluated at a zero of the system. Thus the feedback does not contribute to the excitation at a system zero, and when the signal to noise ratio at the input is high, so that the noise fed back to the input does little to excite the system, the open loop case is recovered.

The above observation has several implications. The first concerns how open loop identification compares to closed loop identification. It is well known that for Box–Jenkins models closed loop identification with a reference signal having spectral factor R, so that the part of the input due to the reference signal is SoR, is never worse than open loop identification with a reference signal having spectral factor SoR (Agüero & Goodwin, 2007; Bombois, Gevers, & Scorletti, 2005). For NMP-zeros, it follows from (41) that this holds for high model orders for any model structure also when R is the spectral factor of the input in open loop identification.

Secondly, the universality of (41) also implies that, for high model orders, indirect and direct identification result in similar asymptotic variance of NMP-zeros. Recall that in indirect identification r is used as input rather than u (Ljung, 1999).
Thirdly, notice that very different input spectra will result in similar asymptotic accuracy. We illustrate this with an example.

Example 7.1. Consider the first order ARX system

$$(1-0.9q^{-1})y_t = (q^{-1}+1.1q^{-2})u_t + e_t, \qquad(42)$$

where the input is given by

$$u_t = r_t - \frac{1-0.5q^{-1}}{1+0.3q^{-1}}\,y_t$$

and where $r_t$ and $e_t$ are independent white noise, both with unit variance. This shall be compared with the open loop case when the input is given by $u_t = r_t$. The input spectra are plotted in Fig. 3 and in this example they are very different in open and closed loop (for the closed loop case both the total input spectrum $\Phi_u$ and $\Phi_u^r$, which is the part of the spectrum that originates from the reference signal $r_t$, are plotted). Despite this big difference, the variance of the estimated zero does not change, at least not for high model orders.

The system (42) is simulated in both open and closed loops. From the closed loop simulations we use both direct and indirect identification. For the open loop case and for the direct identification in the closed loop case, an ARX model is estimated from 1000 samples of $u_t$ and $y_t$. For the indirect identification we estimate an ARMAX model from 1000 samples of $r_t$ and $y_t$. The variance of the estimated zero, from 5000 Monte Carlo simulations, is presented in Table 4. The model order $n_o$ represents the order of the true system, i.e., $n_a = 1$ and $n_b = 2$ in the ARX model and $n_a = 3$, $n_b = 3$ and $n_c = 1$ for the ARMAX model. When the model order is increased, $n_o + k$ means that $n_a$ and $n_b$ have been increased by k, while $n_c$ is unchanged. The values in the table are the variances from the simulations, multiplied by N = 1000. The asymptotic value 5.8 is the upper bound given by (41). □

n          Open loop    Closed loop direct id.    Closed loop indirect id.
no            2.3              0.21                      4.7
no + 2        4.0              1.8                       6.6
no + 5        6.9              4.5                       7.3
no + 10       6.9              7.2                       6.2
no + 15       6.0              6.5                       5.5
Asympt.       5.8              5.8                       5.8
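The open-loop column of this experiment can be reproduced in miniature. The sketch below uses our own reduced Monte Carlo settings (200 runs instead of 5000, fixed seed, model order no + 10) and a simple nearest-root matching rule, so the number it prints should only roughly match the 6.9 reported in Table 4.

```python
# Reduced-size Monte Carlo sketch of the open-loop case in Example 7.1
# (our choices: 200 runs, fixed seed, ARX order "no + 10"; the root-matching
# rule below is ours, not the paper's).
import numpy as np

rng = np.random.default_rng(0)
N, runs, na, nb = 1000, 200, 11, 12             # ARX orders na = 1 + 10, nb = 2 + 10

def simulate():
    """One realization of (1 - 0.9 q^-1) y = (q^-1 + 1.1 q^-2) u + e, u white."""
    u, e = rng.standard_normal(N), rng.standard_normal(N)
    y = np.zeros(N)
    for t in range(N):
        y[t] = e[t]
        if t >= 1:
            y[t] += 0.9 * y[t - 1] + u[t - 1]
        if t >= 2:
            y[t] += 1.1 * u[t - 2]
    return u, y

zeros = []
for _ in range(runs):
    u, y = simulate()
    rows = range(max(na, nb), N)
    Phi = np.array([np.concatenate((-y[t - na:t][::-1], u[t - nb:t][::-1]))
                    for t in rows])              # ARX regressor [-y_past, u_past]
    theta = np.linalg.lstsq(Phi, y[max(na, nb):], rcond=None)[0]
    b = theta[na:]                               # B(q) coefficients b1 ... b_nb
    r = np.roots(b)                              # zeros of the estimated B polynomial
    zeros.append(r[np.argmin(np.abs(r + 1.1))])  # root closest to the true zero -1.1
print("N * Var:", N * np.var(np.real(zeros)))    # Table 4 reports about 6.9 here
```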

Fig. 3. Input spectrum for Example 7.1. The input spectra are very different in open and closed loops. This does not affect the accuracy of the zero estimates.

What we see is that the direct identification in closed loop performs best for low model orders, but already from no + 5 and up, all three cases give about the same variance, just a little higher than the asymptotic value. Next we make some further comments on the accuracy of estimated non-minimum phase zeros, based on (41).

• Usually there is a tradeoff between bias and variance in the sense that a low order model results in a biased estimate with low(er) variance and a high order model gives an un-biased estimate with high(er) variance. When it comes to NMP-zeros, this increase in the variance is typically small and, hence, high order models can be used without the risk of losing accuracy, unless the zero is close to the unit circle.

• In many identification scenarios the input filter R(q) can be chosen by the user. The variance is proportional to $1/|R(z_b^o)|^2$ so that factor should be small. If R(q) has a pole at $z_b^o$ this factor is zero, but that corresponds to an unstable, and hence infeasible, input filter. An interesting choice is an autoregressive filter with just one pole at $1/z_b^o$. Such an input filter is optimal in the sense that it has the lowest input power among all LTI filters that achieve the same variance, see Mårtensson, Jansson, and Hjalmarsson (2005).

• Consider open loop identification and suppose for a moment that $\tilde G_o(q)$ is known and that only the part $1-z^oz^{-1}$ has to be estimated. In this scenario, $\tilde G_o(q) = 1$ and the input filter R should be the original input filter in series with the original $\tilde G_o(q)$, since the latter part of the system shapes the input to the system $1-z^oz^{-1}$, which is to be identified. Notice now that the input filter R(q) and the factor $\tilde G_o(q)$ enter the bound (41) as a product, i.e. the bound is the same if the product between the input filter and the system that is identified, excluding the NMP-zero part, is the same. Thus in the above scenario, the bound remains the same as when also $\tilde G_o(q)$ is identified. Furthermore, we have from Corollary 6.2 that, as $n_b+n_f\to\infty$, the bound (41) is attained. Combining these observations we have that for high model orders and open loop identification it is insignificant whether $\tilde G_o(q)$ is known or estimated, i.e. knowing $\tilde G_o(q)$ and only estimating $1-z^oz^{-1}$ helps little for improving the accuracy.

Unstable poles

The universal upper bound

$$\lim_{n\to\infty}\operatorname{AsVar} z^o \le \lambda_o\left|\frac{C_o(z^o)}{\tilde A_o(z^o)D_o(z^o)}\,\frac{K(z^o)}{R(z^o)}\right|^2\frac{1}{1-|z^o|^{-2}} \qquad(43)$$

from Theorem 5.1 is useful when the reference signal is non-zero. Notice that the bound is model structure independent. It is also tight for high signal to noise ratios at the output (see Corollary 5.2) and above we have argued that the convergence rate with respect to the model order is likely to be exponentially fast (at least we have shown this for some special cases). Thus the right-hand side of (43) provides in many cases a quite reasonable approximation of the asymptotic variance for a pole located outside the unit disc.

The upper bound (43) and the corresponding lower bound (25) indicate that the controller should be designed so that the gain at the pole is small in order to ensure a small asymptotic variance. In this context, we point out that if the feedback configuration is changed to $u_t = K(q)(r_t-y_t)$ instead of $u_t = r_t-K(q)y_t$ as above, the controller dependence in (43) and in other expressions for the asymptotic variance of unstable poles will disappear. This is easy to see since this change in controller structure corresponds to replacing R by KR.

Costless identification

For costless identification, i.e. when the external reference is not used to excite the system, the lower bounds in Corollaries 5.4 and 6.4 show that the controller gain should be small at unstable poles but large at NMP-zeros in order for the asymptotic variance to be small. Thus there is a conflict when designing suitable experiments when the objective is to identify an NMP-zero and an unstable pole that are close to each other.

8. Summary

In this paper we derive explicit expressions for the asymptotic variance of estimated poles and zeros of dynamical systems. One of the main conclusions is that the asymptotic variance of non-minimum phase zeros and unstable poles is only slightly affected by the model order, while the variance of minimum phase zeros and stable poles is very sensitive and grows exponentially with the model order. Another important observation is that nearby roots have a very detrimental effect on the asymptotic variance. The asymptotic variance expressions give structural insight into how the variance is affected by, e.g., model order, model structure and input excitation.

Acknowledgments

The authors would like to express their gratitude to the anonymous reviewers for their many insightful remarks and for spurring us to a major revision which, we believe, significantly improved the quality of the paper.

Appendix A. Technical results for reproducing kernels

Let $\{B_k\}_{k=1}^n$ be an orthonormal basis for an n-dimensional subspace $S_n \subset L_2$ and let $K(\mu,z)$ denote the reproducing kernel for $S_n$, here defined as

$$K(\mu,z) \triangleq \sum_{k=1}^n B_k^*(\mu)B_k(z). \qquad(\text{A.1})$$

The function $K(\mu,z)$ is called a reproducing kernel since it holds that

$$\langle f(\cdot),K(\mu,\cdot)\rangle = f(\mu),\quad \forall f\in S_n;\qquad \langle f(\cdot),K(\mu,\cdot)\rangle = \operatorname{Proj}_{S_n}\{f\}(\mu),\quad \forall f\in L_2, \qquad(\text{A.2})$$

where $\operatorname{Proj}_{S_n}\{f\}$ denotes the orthogonal projection of $f\in L_2$ onto $S_n$, see e.g., Heuberger, Wahlberg, and Van den Hof (2005). The reproducing kernel is unique and can also be expressed as

$$K(\mu,z) = \Psi^*(\mu)\langle\Psi,\Psi\rangle^{-1}\Psi(z) \qquad(\text{A.3})$$

for any $\Psi$ whose rowspace spans $S_n$ (Heuberger et al., 2005). In the following lemmas we present some properties of the reproducing kernels of different subspaces which are used in the proofs of the paper.

Lemma A.1. Consider two subspaces $S_n$ and $\tilde S_m$ with reproducing kernels $K(\mu,z)$ and $\tilde K(\mu,z)$ and let $S_n \subset \tilde S_m$. Then it holds that

$$K(z,z) \le \tilde K(z,z). \qquad(\text{A.4})$$

Proof. Let $\{B_k\}_{k=1}^n$ and $\{\tilde B_k\}_{k=1}^m$ be orthonormal bases for $S_n$ and $\tilde S_m$, respectively, with $B_k = \tilde B_k$, $k = 1,\dots,n$. Then it is clear that

$$\tilde K(z,z)-K(z,z) = \sum_{k=n+1}^m\tilde B_k^*(z)\tilde B_k(z) \ge 0,$$

which concludes the proof. □

Lemma A.2. Let the scalar-valued subspaces $S_n$ and $\tilde S_m$ have reproducing kernels $K(\mu,z)$ and $\tilde K(\mu,z)$. Suppose that there exists a constant $\delta$, $0 \le \delta < 1$, such that for every function $f\in S_n$ there exists a function $g\in\tilde S_m$ that fulfills

$$|f(z)-g(z)| < \delta\,|f(z)|,\quad \forall z\in Z,$$

where $Z$ is some region including the unit circle. Then it holds that

$$K(z,z) \le \left(\frac{1+\delta}{1-\delta}\right)^2\tilde K(z,z),\quad \forall z\in Z.$$

Proof. Since $K(\mu,z)$ itself, as a function of $z$, belongs to $S_n$, there is by assumption a family of functions $g_\mu\in\tilde S_m$ such that

$$|K(\mu,z)-g_\mu(z)| < \delta\,|K(\mu,z)|,\quad \forall z\in Z.$$

Thus, we can write $K(\mu,z) = g_\mu(z)+\Delta_\mu(z)$, where

$$|\Delta_\mu(z)| < \delta\,|K(\mu,z)|,\quad \forall z\in Z,$$

which also implies that $\|\Delta_\mu(\cdot)\| < \delta\,\|K(\mu,\cdot)\|$ since $Z$ includes the unit circle. Hence,

$$|K(\mu,z)| \le |g_\mu(z)|+\delta\,|K(\mu,z)|,$$

and by using the Cauchy–Schwarz inequality and the fact that $\langle K(\mu,\cdot),K(z,\cdot)\rangle = K(\mu,z)$ we get

$$|K(\mu,z)| \le \frac{1}{1-\delta}|g_\mu(z)| = \frac{1}{1-\delta}\bigl|\langle g_\mu(\cdot),\tilde K(z,\cdot)\rangle\bigr| \le \frac{1}{1-\delta}\|g_\mu(\cdot)\|\,\|\tilde K(z,\cdot)\| = \frac{1}{1-\delta}\|g_\mu(\cdot)\|\sqrt{\tilde K(z,z)} = \frac{1}{1-\delta}\|K(\mu,\cdot)-\Delta_\mu(\cdot)\|\sqrt{\tilde K(z,z)} \le \frac{1}{1-\delta}\bigl(\|K(\mu,\cdot)\|+\|\Delta_\mu(\cdot)\|\bigr)\sqrt{\tilde K(z,z)} \le \frac{1}{1-\delta}\bigl(\|K(\mu,\cdot)\|+\delta\|K(\mu,\cdot)\|\bigr)\sqrt{\tilde K(z,z)} = \frac{1+\delta}{1-\delta}\sqrt{K(\mu,\mu)}\sqrt{\tilde K(z,z)}.$$


For µ = z it holds that K (z , z ) = |K (z , z )| and then squaring the inequality above gives the result. 
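The identity (A.3) and the eigenvalue bounds of Lemma A.5 below can be checked numerically; in this sketch the filter F, the dimension n and the quadrature grid are our own illustrative choices, not the paper's.

```python
# Numerical check of the kernel formula (A.3) and the bounds of Lemma A.5 for
# S_n = span{F(z) z^-1, ..., F(z) z^-n}. F, n and the FFT grid are illustrative.
import numpy as np

n, z0, M = 8, 1.5, 4096
F = lambda z: 1.0 / (1.0 - 0.4 / z)            # hypothetical stable, minimum-phase F

w = 2.0 * np.pi * np.arange(M) / M
E = np.exp(1j * w)                              # unit-circle grid e^{jw}
Psi = np.array([F(E) * E ** (-k) for k in range(1, n + 1)])
G = Psi @ Psi.conj().T / M                      # Gram matrix <Psi, Psi> by quadrature

psi = np.array([F(z0) * z0 ** (-k) for k in range(1, n + 1)])
K = float(np.real(psi.conj() @ np.linalg.solve(G, psi)))   # K(z0, z0) via (A.3)

g2 = sum(abs(z0) ** (-2 * k) for k in range(1, n + 1))     # Gamma_n^*(z0) Gamma_n(z0)
F2 = np.abs(F(E)) ** 2
lower = abs(F(z0)) ** 2 * g2 / F2.max()         # lower bound of Lemma A.5
upper = abs(F(z0)) ** 2 * g2 / F2.min()         # upper bound of Lemma A.5
print(f"K={K:.4f}  bounds=[{lower:.4f}, {upper:.4f}]")
```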

Lemma A.3. Consider the subspace

$$S_n \triangleq \operatorname{Span}\left\{\frac{z^{-1}}{M(z)},\dots,\frac{z^{-n}}{M(z)}\right\}, \qquad(\text{A.5})$$

where

$$M(z) \triangleq \prod_{i=1}^m(1-\xi_iz^{-1}),\quad |\xi_i| < 1, \qquad(\text{A.6})$$

and $n \ge m$. The reproducing kernel of the space $S_n$ is given by

$$K(z,z) = \begin{cases}\dfrac{|z|^{-2}}{1-|z|^{-2}}\left(1-\dfrac{|M(1/z)|^2}{|M(z)|^2}|z|^{-2n}\right), & z\ne0\ \text{and}\ |z|\ne1,\\[2mm] n-m+\displaystyle\sum_{k=1}^m\frac{1-|\xi_k|^2}{|z-\xi_k|^2}, & |z|=1.\end{cases} \qquad(\text{A.7})$$

Furthermore, the reproducing kernel for the space spanned by $z^{-k}$, $k = 1,2,\dots$ can be expressed as

$$K(z,z) = \frac{|z|^{-2}}{1-|z|^{-2}},\quad \text{for } |z| > 1. \qquad(\text{A.8})$$

Proof. For $z$: $z\ne0$, $|z|\ne1$, the reproducing kernel for $S_n$ is given by

$$K(\mu,z) = \frac{1-\overline{\phi_n(\mu)}\phi_n(z)}{z\bar\mu-1},\qquad \phi_k(z)\triangleq\prod_{j=1}^k\frac{1-\bar\xi_jz}{z-\xi_j}, \qquad(\text{A.9})$$

see Ninness, Hjalmarsson, and Gustafsson (1998), and it can also be expressed in terms of the function $M(z)$ by noting that

$$\phi_n(z) = \frac{M(z^{-1})}{M(z)}\,z^{-n}. \qquad(\text{A.10})$$

Combining (A.9) and (A.10) gives the result (A.7) when $|z|\ne1$. The result when $|z|=1$ follows directly from Ninness and Gustafsson (1997). For (A.8) we notice that $B_k(z) = z^{-k}$, $k = 1,2,\dots$ form an orthonormal basis and hence

$$K(z,z) = \sum_{k=1}^\infty|z|^{-2k} = \frac{|z|^{-2}}{1-|z|^{-2}},\quad |z| > 1.\qquad\square$$

Lemma A.4. Consider the subspace

$$S_n \triangleq \operatorname{Span}\left\{\frac{Q(z)}{M(z)}z^{-1},\dots,\frac{Q(z)}{M(z)}z^{-n}\right\}, \qquad(\text{A.11})$$

where $M(z)$ is given by (A.6) (with $m \le n$) and

$$Q(z) \triangleq q_0+q_1z^{-1}+\dots+q_{n_q}z^{-n_q} \qquad(\text{A.12})$$

has all zeros inside the unit circle. Let $K_n$ be the reproducing kernel of $S_n$. Then $K_n(z,z)$ is bounded by

$$K_n(z,z) \le \frac{|z|^{-2}}{1-|z|^{-2}}\left(1-\frac{|M(1/z)|^2}{|M(z)|^2}|z|^{-2(n+n_q)}\right) \qquad(\text{A.13})$$

when $z\ne0$ is such that $|z|\ne1$, and when $|z|=1$ by

$$K_n(z,z) \le n+n_q-m+\sum_{k=1}^m\frac{1-|\xi_k|^2}{|z-\xi_k|^2}. \qquad(\text{A.14})$$

Proof. Define $\tilde S_p$ as

$$\tilde S_p \triangleq \operatorname{Span}\left\{\frac{z^{-1}}{M(z)},\dots,\frac{z^{-p}}{M(z)}\right\}. \qquad(\text{A.15})$$

If we let $p = n+n_q$ then $S_n \subset \tilde S_p$ and Lemma A.1 gives that $K_n(z,z) \le \tilde K_p(z,z)$ where $\tilde K_p$ is the reproducing kernel of $\tilde S_p$. Lemma A.3 gives the right-hand sides of (A.13) and (A.14) as expressions for $\tilde K_p(z,z)$. □

Lemma A.5. Consider the subspace

$$S_n \triangleq \operatorname{Span}\{F(z)z^{-1},\dots,F(z)z^{-n}\}, \qquad(\text{A.16})$$

where $F$ is a rational stable transfer function. Let $K_n$ be the reproducing kernel of $S_n$. Then $K_n(z,z)$ is bounded according to

$$\frac{|F(z)|^2}{\sup_\omega|F(e^{j\omega})|^2}\,\Gamma_n^*(z)\Gamma_n(z) \le K_n(z,z) \le \frac{|F(z)|^2}{\inf_\omega|F(e^{j\omega})|^2}\,\Gamma_n^*(z)\Gamma_n(z), \qquad(\text{A.17})$$

where $\Gamma_n^*(z)\Gamma_n(z) = \sum_{k=1}^n|z|^{-2k} = \dfrac{|z|^{-2}-|z|^{-2n-2}}{1-|z|^{-2}}$.

Proof. Observe that the eigenvalues of

$$\frac{1}{2\pi}\int_{-\pi}^\pi\bigl|F(e^{j\omega})\bigr|^2\,\Gamma_n(e^{j\omega})\Gamma_n^*(e^{j\omega})\,d\omega \qquad(\text{A.18})$$

are bounded from below by $\inf_\omega|F(e^{j\omega})|^2$ and from above by $\sup_\omega|F(e^{j\omega})|^2$, see Grenander and Szegö (1958). Hence the eigenvalues of the inverse of (A.18) are lower bounded by $1/\sup_\omega|F(e^{j\omega})|^2$ and hence

$$K(z,z) = |F(z)|^2\,\Gamma_n^*(z)\left(\frac{1}{2\pi}\int_{-\pi}^\pi\bigl|F(e^{j\omega})\bigr|^2\,\Gamma_n(e^{j\omega})\Gamma_n^*(e^{j\omega})\,d\omega\right)^{-1}\Gamma_n(z) \ge \frac{|F(z)|^2}{\sup_\omega|F(e^{j\omega})|^2}\,\Gamma_n^*(z)\Gamma_n(z),$$

which gives the lower bound. The upper bound follows using the lower bound for the eigenvalues. □

Lemma A.6. Let $S_n$ and $K_n$ be as in Lemma A.4 and let $|z| > 1$. When the dimension $n$ goes to infinity, the reproducing kernel is given by

$$\lim_{n\to\infty}K_n(z,z) = \frac{|z|^{-2}}{1-|z|^{-2}}. \qquad(\text{A.19})$$

Proof. Consider the two spaces $S_n$ and $\tilde S_p$ given by (A.11) and (A.15) and let $K_n(\mu,z)$ and $\tilde K_p(\mu,z)$ be the associated reproducing kernels. For any value $p$ of the dimension of the subspace $\tilde S_p$ there is a number $r(p)$ such that there exist parameters $\alpha_k$ with

$$\left|1-Q(z)\sum_{k=0}^{r(p)}\alpha_kz^{-k}\right| < \frac{1}{p},\quad \forall z: |z|\ge1,$$

i.e. on and outside the unit circle the function $1/Q(z)$ can be approximated arbitrarily well with an FIR filter by choosing a sufficiently large number $r(p)$. This is a consequence of Mergelyan's theorem, see e.g. Rudin (1987). Let this FIR filter be denoted $S(z) \triangleq \sum_{k=0}^{r(p)}\alpha_kz^{-k}$. A function $f\in\tilde S_p$ can be written as

$$f(z) = \frac{1}{M(z)}\sum_{k=1}^p\beta_kz^{-k}$$

for some arbitrary parameters $\beta_k$. Now notice that the function $Q(z)S(z)f(z)$ belongs to the subspace $S_n$ if we take $n = n(p) = p+r(p)$. Thus, for any function $f\in\tilde S_p$, there is a $g = QSf\in S_n$ such that

$$|f(z)-g(z)| = |f(z)-Q(z)S(z)f(z)| = |1-Q(z)S(z)|\,|f(z)| \le \frac{1}{p}|f(z)|,\quad \forall z: |z|\ge1.$$

Lemma A.2 can now be applied and gives

$$\left(\frac{1-1/p}{1+1/p}\right)^2\tilde K_p(z,z) \le K_{n(p)}(z,z).$$

The sequence $K_n(z,z)$, $n = 1,2,\dots$ is monotonically increasing and for $|z| > 1$ Lemma A.4 gives the finite upper bound $K_n(z,z) \le \frac{|z|^{-2}}{1-|z|^{-2}}$, and hence it follows that $K_n(z,z)$ has a limit. The subsequence $K_{n(p)}(z,z)$, $p = 1,2,\dots$ then converges to the same limit and we can conclude that

$$\lim_{p\to\infty}\tilde K_p(z,z) \le \lim_{n\to\infty}K_n(z,z).$$

Lemma A.3 gives that the lower bound approaches

$$\lim_{p\to\infty}\tilde K_p(z,z) = \frac{|z|^{-2}}{1-|z|^{-2}},$$

which is the same as the upper bound and that proves the result. □

Appendix B. Proof of Theorem 3.1

The starting point is the variance expression (6) which also can be written as

$$\operatorname{AsVar} z^o = \lambda_o\Lambda^*(\theta^o)\langle\Psi,\Psi\rangle^{-1}\Lambda(\theta^o), \qquad(\text{B.1})$$

where $\Psi$ is given by (5) and where $\Lambda(\theta)$ is the gradient of the pole/zero with respect to the parameters. This gradient is given by the next lemma.

Lemma B.1. Let $z_a(\theta_a)$ be a zero of $A(q,\theta_a)$ of multiplicity 1. Then

$$\left.\frac{dz_a(\theta_a)}{d\theta_a}\right|_{\theta_a=\theta_a^o} = -\frac{z_a^o}{\tilde A_o(z_a^o)}\,\Gamma_{n_a}(z_a^o), \qquad(\text{B.2})$$

where $\Gamma_n(q) = [q^{-1},\dots,q^{-n}]^{\mathrm T}$. For zeros of the other polynomials $B$, $C$, $D$ and $F$ we get analogous expressions.

Proof. The proof can be found in, e.g., Oppenheim and Schafer (1989). □

Now we return to the expression (B.1) and consider a zero of $B(q)$, which is the most straightforward case. The result for the other cases will be commented on at the end of the proof. Consider the function $J(\theta) = z_b(\theta_b)$, which does not depend on $\theta_a$, $\theta_c$, $\theta_d$ and $\theta_f$. Thus, from Lemma B.1,

$$\Lambda(\theta^o) = \begin{bmatrix}0_{n_a\times1}\\ -\dfrac{z_b^o}{\tilde B_o(z_b^o)}\Gamma_{n_b}(z_b^o)\\ 0_{n_c\times1}\\ 0_{n_d\times1}\\ 0_{n_f\times1}\end{bmatrix}.$$

It is now readily verified that

$$\Psi_1(z_b^o)\cdot\frac{G_o(z_b^o)S_o(z_b^o)R(z_b^o)}{H_o(z_b^o)B_o(z_b^o)} = \begin{bmatrix}-\Gamma_{n_b}(z_b^o)\\ 0\end{bmatrix} \qquad(\text{B.3})$$

and hence $\Psi_1(z_b^o)L(z_b^o) = \Lambda(\theta^o)$ with $L(z)$ given in Table 1, since $S_o(z_b^o) = 1$ regardless of the feedback. Thus, we can write (B.1) as

$$\operatorname{AsVar} z^o = \lambda_o|L(z_b^o)|^2\,\Psi_1^*(z_b^o)\langle\Psi,\Psi\rangle^{-1}\Psi_1(z_b^o).$$

Now, using (A.3), we recognize $\Psi_1^*(\mu)\langle\Psi,\Psi\rangle^{-1}\Psi_1(z)$ as the reproducing kernel of the rowspace of $\Psi$, which can be written as (A.1), i.e. (9). This now gives the result (8) in Theorem 3.1.

The other cases $A$, $C$, $D$ and $F$ follow in the same way, except for one detail which is explained here; for brevity only the case $A$ is considered, but the observation holds also for the other cases. It should first be noted that $\Psi$ is singular at $z_a^o$. This singularity is canceled by the factor $A_o$ in the function $L$, see Table 1, and we still have that $\Psi_1(z_a^o)L(z_a^o) = \Lambda(\theta^o)$. This pole/zero cancellation between $L$ and $K$ must be made for the expression (8) to make sense. □

Appendix C. Proof of Lemma 3.5

First we note that existence of $R_y$ and $R_u$ satisfying (15) is ensured by the spectral factorization theorem. Poles on the unit circle can be extracted before the spectral factorization and reinserted afterwards and are thus left unchanged. Furthermore, notice that by Standing Assumptions 2.1, $R$ has all zeros strictly inside the unit circle. Thus the right-hand sides of the equations in (15) are non-zero on the unit circle and thus $R_y$ and $R_u$ can be taken to have all their zeros strictly inside the unit circle.

Consider now a zero of $B_o$. Lemma 3.4 gives that the asymptotic variance when only $\theta_b$ and $\theta_f$ are used is no greater than the asymptotic variance when all parameters are used. Considering the case when only $\theta_b$ and $\theta_f$ are used will hence give a lower bound. For this case, it is straightforward to verify that $\Psi(e^{j\omega})\Psi^*(e^{j\omega}) = \Phi(e^{j\omega})\Phi^*(e^{j\omega})$ where

$$\Phi \triangleq -\frac{G_oS_oR_u}{H_o}\left[\frac{1}{B_o}\Gamma_{n_b}^{\mathrm T}\quad \frac{1}{F_o}\Gamma_{n_f}^{\mathrm T}\right]^{\mathrm T}$$

and that

$$\Phi(z_b^o)\cdot\frac{H_o(z_b^o)B_o(z_b^o)}{G_o(z_b^o)S_o(z_b^o)R_u(z_b^o)} = \begin{bmatrix}-\Gamma_{n_b}(z_b^o)\\ 0\end{bmatrix}.$$

Thus, if we consider a zero of $B$ we can, analogously to the proof of Theorem 3.1, write (B.1) as

$$\operatorname{AsVar} z^o = \lambda_o|L(z_b^o)|^2\,\Phi^*(z_b^o)\langle\Phi,\Phi\rangle^{-1}\Phi(z_b^o), \qquad(\text{C.1})$$

with $L(z) = zH_o(z)/(\tilde G_o(z)S_o(z)R_u(z))$. Since $S_o(z_b^o) = 1$ we can remove $S_o$ from $L$. Now $\Phi^*(\mu)\langle\Phi,\Phi\rangle^{-1}\Phi(z)$ is the reproducing kernel for the rowspace of $\Phi$. However, since $B_o$ and $F_o$ are coprime this is the same subspace as the one generated by $\frac{G_oS_oR_u}{B_oF_oH_o}\Gamma_{n_b+n_f}$, see Ninness and Hjalmarsson (2004). This proves (13) for zeros of $B_o$. The proofs for zeros of the other polynomials are analogous. □

Appendix D. Proof of Corollary 3.6

That we have equality in (13) when only $\theta_b$ and $\theta_f$ are used, i.e. for FIR and OE models, follows directly from the proof of Lemma 3.5. We now turn to the Box–Jenkins case when (16) holds. Then

$$\Psi = \begin{bmatrix}-\dfrac{G_oS_o}{H_oB_o}R\Gamma_{n_b}(q) & \dfrac{KG_oS_o}{B_o}\sqrt{\lambda_o}\,\Gamma_{n_b}(q)\\[1mm] 0 & -\dfrac{1}{C_o}\sqrt{\lambda_o}\,\Gamma_{n_c}(q)\\[1mm] 0 & \dfrac{1}{D_o}\sqrt{\lambda_o}\,\Gamma_{n_d}(q)\\[1mm] -\dfrac{G_oS_o}{H_oF_o}R\Gamma_{n_f}(q) & -\dfrac{KG_oS_o}{F_o}\sqrt{\lambda_o}\,\Gamma_{n_f}(q)\end{bmatrix}.$$

By changing the rows of $\Psi$ so that it corresponds to reordering the parameters as $\theta = [\theta_b^{\mathrm T},\theta_f^{\mathrm T},\theta_c^{\mathrm T},\theta_d^{\mathrm T}]^{\mathrm T}$, $\Psi$ can be partitioned as $\Psi = [\Psi_{b,f}^{\mathrm T}\ \Psi_{c,d}^{\mathrm T}]^{\mathrm T}$ where

$$\Psi_{b,f} = \begin{bmatrix}-\dfrac{G_oS_o}{H_oB_o}R\Gamma_{n_b}(q) & \dfrac{KG_oS_o}{B_o}\sqrt{\lambda_o}\,\Gamma_{n_b}(q)\\[1mm] -\dfrac{G_oS_o}{H_oF_o}R\Gamma_{n_f}(q) & -\dfrac{KG_oS_o}{F_o}\sqrt{\lambda_o}\,\Gamma_{n_f}(q)\end{bmatrix},\qquad \Psi_{c,d} = \begin{bmatrix}0 & -\dfrac{1}{C_o}\sqrt{\lambda_o}\,\Gamma_{n_c}(q)\\[1mm] 0 & \dfrac{1}{D_o}\sqrt{\lambda_o}\,\Gamma_{n_d}(q)\end{bmatrix}.$$

Notice that since $B_o$ and $F_o$ are coprime, the elements of the second column of $\Psi_{b,f}$ span the same subspace as $\frac{KG_oS_o}{B_oF_o}\Gamma_{n_b+n_f}$. Similarly, since $C_o$ and $D_o$ are coprime, the elements of the second column of $\Psi_{c,d}$ span the same subspace as $\frac{1}{C_oD_o}\Gamma_{n_c+n_d}$. Thus the assumption (16) implies that $\Psi_{b,f}$ and $\Psi_{c,d}$ are orthogonal, i.e. $\langle\Psi_{b,f},\Psi_{c,d}\rangle = 0$, and

$$\langle\Psi,\Psi\rangle = \begin{bmatrix}\langle\Psi_{b,f},\Psi_{b,f}\rangle & 0\\ 0 & \langle\Psi_{c,d},\Psi_{c,d}\rangle\end{bmatrix}, \qquad(\text{D.1})$$

where $\langle\Psi_{b,f},\Psi_{b,f}\rangle$ equals $\langle\Psi,\Psi\rangle$ for an OE model. This means that the asymptotic variance for zeros of $B_o$ and $F_o$ is identical to the corresponding asymptotic variance when an OE model is used, see the derivation above. Also from (D.1) it follows that the asymptotic variance for zeros of $C_o$ and $D_o$ has the same structure as the asymptotic variance for zeros of $B_o$ and $F_o$ for an OE model, but where the subspace that the basis functions should span is the span of the second column of $\Psi_{c,d}$, which is identical to the span of $\frac{1}{C_oD_o}\Gamma_{n_c+n_d}$ since $C_o$ and $D_o$ are assumed to be coprime. □

Appendix E. Proof of Lemma 3.9

Notice that $R$, $R_y$ and $R_u$ have no zeros on the unit circle. Furthermore, the assumptions in the lemma imply that $R_y$ and $R_u$ do not have poles on the unit circle either, and by Standing Assumptions 2.1 the same applies to $R$. Thus, starting with the bounds in (18), $R_u/R$ is a rational function

$$\frac{R_u(z)}{R(z)} = \frac{\displaystyle\sum_{k=0}^{m_1}\alpha_kz^{-k}}{1+\displaystyle\sum_{k=1}^{m_2}\beta_kz^{-k}}$$

with poles and zeros strictly inside the unit circle. Thus $f(z) = R(z^{-1})/R_u(z^{-1})$ is analytic and continuous in $|z| < 1+\delta$ for some $\delta > 0$. Thus, by the maximum modulus theorem, we have

$$|f(z)| \le \sup_{|s|=1}|f(s)|,\quad |z| < 1,$$

i.e.

$$\frac{|R(z)|}{|R_u(z)|} \le \sup_{|s|=1}\frac{|R(s)|}{|R_u(s)|},\quad |z| > 1,$$

or

$$\frac{|R_u(z)|}{|R(z)|} \ge \inf_{|s|=1}\frac{|R_u(s)|}{|R(s)|},\quad |z| > 1.$$

Now, by definition of $R_u$,

$$\frac{|R_u(e^{j\omega})|^2}{|R(e^{j\omega})|^2} = 1+\frac{\lambda_o|H_o(e^{j\omega})K(e^{j\omega})|^2}{|R(e^{j\omega})|^2}$$

and the lower bound in (18) follows. The proof of the upper bound is similar, with $f(z) = R_u(z^{-1})/R(z^{-1})$. The result (19) is trivially obtained from the definition of $R_u$ by setting $R \equiv 0$. The bounds in (17) are shown in the same way as (18). □

Appendix F. Proof of Theorem 3.10

The lower bound follows directly by applying the lower bound provided by Lemma A.5 to $K_{lb}$ in the right-hand side in (13) in Lemma 3.5. For the upper bound, we first observe that the asymptotic variance is given by (8). We will now provide an overbound for the reproducing kernel $K$. To this end, notice that the span of $\Psi$ is contained in the span of

$$\begin{bmatrix}\Psi_1 & 0\\ 0 & \Psi_2\end{bmatrix}.$$

Furthermore, it can be seen that the rows of $\Psi_1$ are spanned by $\frac{1}{X}\Gamma_{\bar m_{abf}}$ for a suitable choice of $\bar m_{abf}$, where $X$ is the least common denominator of all the transfer functions in $\Psi_1$. By a similar argument, the rows of $\Psi_2$ are spanned by $\frac{1}{Z}\Gamma_{\bar m_{cd}}$ for some minimum phase polynomial $Z$. Thus we have that the rows of $\Psi$ are spanned by

$$\Phi = \begin{bmatrix}\frac{1}{X}\Gamma_{\bar m_{abf}} & 0\\ 0 & \frac{1}{Z}\Gamma_{\bar m_{cd}}\end{bmatrix}.$$

Lemma A.1 gives that $K$ in Theorem 3.1 is upper bounded by the reproducing kernel corresponding to the span of $\Phi$. Due to the block-diagonal structure of $\Phi$, this reproducing kernel is diagonal where the (1, 1) element is the reproducing kernel for $\frac{1}{X}\Gamma_{\bar m_{abf}}$ and

the (2, 2) element is the reproducing kernel for Z1 Γm¯ cd . The upper bound in Lemma A.5 now provides the desired upper bounds for K 1 (z o , z o ) and K 2 (z o , z o ). Appendix G. Proof of Theorem 3.11 Consider first zeros of Bo and Fo . From Corollary 3.6 we then have equality in (13) where Klb in (13) is the reproducing kernel for the subspace spanned by BGoFSoHRu Γnb +nf . The order condition on o o o AĎ allows us to use Lemma A.4 and hence the upper bounds in (A.13) and (A.14) give upper bounds for Klb for the cases z o 6= 1 and z o = 1, respectively, which in turn together with the equality in (13) gives the inequalities (22) and (23) (that (23) is valid only for zeros of Bo on the unit circle is only due to the fact that Fo is restricted to be minimum phase by assumption, which in turn is due to that otherwise the predictor is not stable). Equality when the numerator of BGoFSoHRu is constant is obtained by using o o o Lemma A.3 instead of Lemma A.4 on Klb . For zeros of Co or Do , Corollary 3.6 gives that equality holds in (13) where Klb in (13) is the reproducing kernel for the subspace spanned by C 1D Γnc +nd . o o This space is of the form (A.5) and, thus, Lemma A.3 gives the desired result. Appendix H. Proof of Theorem 5.1 Theorem 3.1 is applicable so the asymptotic variance is given by (8). Lemma A.1 applied to K (z o , z o ) gives that AsVar z o increases monotonically with n. However, since each element of Ψ belongs to the space spanned by {z −k }∞ k=1 , the reproducing kernel for the rowspace of Ψ is bounded from above  −   by the  reproducing kernel for the space spanned by z k 0 , 0 z −k , k = 1, 2, . . ., which, using (A.8) in Lemma A.3, is given by

|z |−2  1 − |z |−2   

0

 0

  | z | −2  1 − | z | −2

for |z | > 1. This gives that AsVar z o is bounded from above by the right-hand side of (8) with L given by Table 1 and with K (z o , z o ) |z |−2

replaced by 1−|z |−2 , regardless of n. Evaluating L for Ao gives (24). This bound together with the monotonicity of AsVar z o implies that limn→∞ AsVar z o must exist.


For the lower bound (25) we invoke Lemma 3.5 and then notice that Lemma A.6 is applicable to $K_{lb}(z^o,z^o)$ under the conditions in the theorem. Evaluation of $L_{lb}$ for $A_o$ using Table 2 now gives (25).

Appendix I. Proof of Theorem 6.1

The proof parallels the proof of Theorem 5.1 (see Appendix H), but in the proof of (30) $L$ is evaluated for $B_o$ instead. Similarly, in the proof of (31), $L_{lb}$ is evaluated for $B_o$ using Table 2. That equality holds in (31) for OE and FIR models, as well as for Box–Jenkins models subject to (16), follows from Corollary 3.6. This concludes the proof.

References

Agüero, J., & Goodwin, G. (2007). Choosing between open- and closed-loop experiments in linear system identification. IEEE Transactions on Automatic Control, 52(8), 1475–1480.
Bombois, X., Gevers, M., & Scorletti, G. (2005). Open-loop versus closed-loop identification of Box–Jenkins models: A new variance analysis. In 44th IEEE conference on decision and control and the European control conference 2005 (pp. 3117–3122).
Bombois, X., Scorletti, G., Gevers, M., Van den Hof, P., & Hildebrand, R. (2006). Least costly identification experiment for control. Automatica, 42(10), 1651–1662.
Forssell, U., & Ljung, L. (1999). Closed-loop identification revisited. Automatica, 35(7), 1215–1241.
Gevers, M., Bazanella, A. S., Bombois, X., & Miskovic, L. (2009). Identification and the information matrix: How to get just sufficiently rich? IEEE Transactions on Automatic Control (in press). To appear December 2009.
Gevers, M., Ljung, L., & Van den Hof, P. (2001). Asymptotic variance expressions for closed-loop identification. Automatica, 37(5), 781–786.
Grenander, U., & Szegö, G. (1958). Toeplitz forms and their applications. Berkeley, CA: University of California Press.
Heuberger, P., Wahlberg, B., & Van den Hof, P. (Eds.). (2005). Modeling and identification with rational orthogonal basis functions. Springer Verlag.
Hjalmarsson, H., & Lindqvist, K. (2002). Identification of performance limitations in control using ARX-models. In Proceedings of the 15th IFAC world congress.
Hjalmarsson, H., & Mårtensson, J. (2007). A geometric approach to variance analysis in system identification: Theory and nonlinear systems. In Proceedings of the 46th IEEE conference on decision and control (pp. 5092–5097).
Hjalmarsson, H., & Ninness, B. (2006). Least-squares estimation of a class of frequency functions: A finite sample variance expression. Automatica, 42(4), 589–600.
Lindqvist, K. (2001). On experiment design in identification of smooth linear systems. Licentiate thesis. Royal Institute of Technology, Stockholm, Sweden.
Ljung, L. (1985). Asymptotic variance expressions for identified black-box transfer function models. IEEE Transactions on Automatic Control, 30(9), 834–844.
Ljung, L. (1999). System identification: Theory for the user (second ed.). Prentice Hall.
Mårtensson, J., & Hjalmarsson, H. (2003). Identification of performance limitations in control using general SISO-models. In Proceedings of the 13th IFAC symposium on system identification (pp. 519–524).
Mårtensson, J., & Hjalmarsson, H. (2005a). Closed loop identification of unstable poles and non-minimum phase zeros. In Proceedings of the 16th IFAC world congress on automatic control.
Mårtensson, J., & Hjalmarsson, H. (2005b). Exact quantification of the variance of estimated zeros. In Proceedings of the 44th IEEE conference on decision and control and European control conference (pp. 4275–4280).
Mårtensson, J., & Hjalmarsson, H. (2007). A geometric approach to variance analysis in system identification: Linear time-invariant systems. In Proceedings of the 46th IEEE conference on decision and control (pp. 4269–4274).
Mårtensson, J., Jansson, H., & Hjalmarsson, H. (2005). Input design for identification of zeros. In Proceedings of the 16th IFAC world congress on automatic control.
Ninness, B., & Gustafsson, F. (1997). A unifying construction of orthonormal bases for system identification. IEEE Transactions on Automatic Control, 42(4), 515–521.
Ninness, B., & Hjalmarsson, H. (2004). Variance error quantifications that are exact for finite model order. IEEE Transactions on Automatic Control, 49(8), 1275–1291.
Ninness, B., & Hjalmarsson, H. (2005a). Analysis of the variability of joint input–output estimation methods. Automatica, 41(7), 1123–1132.
Ninness, B., & Hjalmarsson, H. (2005b). On the frequency domain accuracy of closed-loop estimates. Automatica, 41(7), 1109–1122.
Ninness, B., Hjalmarsson, H., & Gustafsson, F. (1998). Generalized Fourier and Toeplitz results for rational orthonormal bases. SIAM Journal on Control and Optimization, 37(2), 429–460.
Ninness, B., Hjalmarsson, H., & Gustafsson, F. (1999a). Asymptotic variance expressions for output error model structures. In 14th IFAC world congress, Vol. H (pp. 367–372).
Ninness, B., Hjalmarsson, H., & Gustafsson, F. (1999b). The fundamental role of general orthonormal bases in system identification. IEEE Transactions on Automatic Control, 44(7), 1384–1406.
Oppenheim, A., & Schafer, R. (1989). Discrete-time signal processing. Englewood Cliffs, NJ: Prentice-Hall.
Rudin, W. (1987). Real and complex analysis (third ed.). McGraw-Hill.
Skogestad, S., & Postlethwaite, I. (1996). Multivariable feedback control: Analysis and design. Chichester: John Wiley.
Söderström, T., & Stoica, P. (1989). System identification. Prentice Hall International Series in Systems and Control Engineering. Prentice Hall.
Vuerinckx, R., Pintelon, R., Schoukens, J., & Rolain, Y. (2001). Obtaining accurate confidence regions for the estimated zeros and poles in system identification problems. IEEE Transactions on Automatic Control, 46(4), 656–659.
Xie, L., & Ljung, L. (2001). Asymptotic variance expressions for estimated frequency functions. IEEE Transactions on Automatic Control, 46(12), 1887–1899.
Xie, L., & Ljung, L. (2004). Variance expressions for spectra estimated using autoregressions. Journal of Econometrics, 118(1–2), 247–256.

Jonas Mårtensson received his M.Sc. degree in Vehicle Engineering and his Ph.D. degree in Automatic Control from KTH Royal Institute of Technology, Stockholm, Sweden, in 2002 and 2007 respectively. Since 2007 he has been a researcher at the School of Electrical Engineering at KTH. His research interests include system identification, presently focusing on modeling and control of internal combustion engines.

Håkan Hjalmarsson was born in 1962. He received the M.S. degree in Electrical Engineering in 1988, and the Licentiate degree and the Ph.D. degree in Automatic Control in 1990 and 1993, respectively, all from Linköping University, Sweden. He has held visiting research positions at California Institute of Technology, Louvain University and the University of Newcastle, Australia. He has served as an Associate Editor for Automatica (1996–2001) and IEEE Transactions on Automatic Control (2005–2007), and has been Guest Editor for European Journal of Control and Control Engineering Practice. He is Professor at the School of Electrical Engineering, KTH, Stockholm, Sweden. He is Chair of the IFAC Technical Committee on Modeling, Identification and Signal Processing. In 2001 he received the KTH award for outstanding contribution to undergraduate education. His research interests include system identification, signal processing, control and estimation in communication networks, and automated tuning of controllers.