14th IFAC Symposium on System Identification, Newcastle, Australia, 2006

A NEW SUBSPACE IDENTIFICATION METHOD FOR CLOSED-LOOP SYSTEMS

Koichi Onodera*, Genichi Emoto**, and S. Joe Qin***

* Mitsubishi Chemical Engineering Corporation, 3-10 Ushiodori, Kurashiki, Okayama 712-8054, Japan; e-mail: [email protected]
** e-mail: [email protected]
*** Department of Chemical Engineering, The University of Texas at Austin, Austin, TX 78712; e-mail: [email protected]

Abstract: In this paper we present a new subspace identification method applicable to closed-loop data. First, the Kalman predictor Markov parameters are obtained within a high-order ARX modeling framework. These parameters are then used to build the Hankel matrix required for subspace identification. To estimate the observer matrices, the eigensystem realization algorithm (ERA) with weightings related to canonical correlation analysis (CCA) is applied to this Hankel matrix; the system matrices are easily derived from the estimated observer matrices. We then demonstrate the effectiveness of the proposed algorithm on simulated and industrial closed-loop data.

Keywords: subspace identification; Kalman predictor Markov parameters; eigensystem realization algorithm (ERA); canonical correlation analysis (CCA); observer/Kalman filter identification (OKID)

1. INTRODUCTION

Subspace identification methods (SIM) have the advantages of numerical stability, simplicity, and a state-space form that is easy to apply to multivariable systems. However, several drawbacks of SIM have been pointed out (cf. (Qin and Ljung, 2003)). The typical drawback is that it is difficult for SIM to handle closed-loop data, and the estimation accuracy of SIM is generally not as good as that of the prediction error methods (PEM) (Ljung, 1999). Because of those drawbacks and the theoretical complexity of SIM, the adoption of SIM in industry has been slow. On the other hand, since ARX models, especially of high order, provide numerical simplicity and asymptotically consistent estimates even on closed-loop data, identification algorithms based on ARX modeling have been readily accepted in several industrial packages.

Recently, the difficulty of handling closed-loop data in SIM has been overcome (Qin and Ljung, 2003; Jansson, 2003). An interesting point of the SSARX method of (Jansson, 2003) is its use of observer Markov parameters obtained through so-called ARX modeling. This development makes the complex SIM algorithm easier to understand and to apply to multivariable systems. Another feature of the SSARX method


is that, from the canonical correlation analysis (CCA) point of view, singular value decomposition (SVD) with CCA weighting is used to estimate the state vector. The estimated state vector is then used to calculate the system matrices by the least-squares method.

In the literature, there is another algorithm that utilizes ARX modeling: the observer/Kalman filter identification method (Juang et al., 1993), called OKID. This algorithm first obtains the observer Markov parameters from a high-order ARX model fitted to input-output data. From the observer Markov parameters, a Hankel matrix consisting of the system Markov parameters is built. Finally, the system matrices of the state-space form are identified using the Eigensystem Realization Algorithm (ERA) (Juang and Pappa, 1985). An appealing property of ERA is that the system matrices are estimated directly from the Hankel matrix, with a balanced realization in mind. Unlike conventional subspace identification algorithms, OKID does not estimate the state vector explicitly, and it does not belong to the SIM family.

In this paper, we present a new subspace identification algorithm, called SOKID in the following. The method is inspired by OKID (Juang et al., 1993) and SSARX (Jansson, 2003). The feature of SOKID is that we first estimate the Kalman predictor Markov parameters and then construct an extended state-space form over a future horizon using these estimates, as the SSARX method does. However, we do not recover the system Markov parameters as OKID does; instead, we apply ERA with CCA weighting to the Hankel matrix built from the estimated predictor Markov parameters, so that the observer matrices are realized directly with an optimal weighting. Recovering the system matrices from the observer matrices is then straightforward.

2. ANALYSIS OF SUBSPACE MODEL FORMULATION

We consider a linear time-invariant system described by the innovation model formulation

    x(t+1) = A x(t) + B u(t) + K e(t)
    y(t)   = C x(t) + D u(t) + e(t)                                  (1)

where y(t) ∈ R^{ny} and u(t) ∈ R^{nu} are the system output and input signals, respectively, x(t) ∈ R^{n} is the state vector, and n is the system order. e(t) ∈ R^{ny} is the zero-mean white innovation process. The matrices A ∈ R^{n×n}, B ∈ R^{n×nu}, C ∈ R^{ny×n}, D ∈ R^{ny×nu}, K ∈ R^{n×ny} and the covariance matrix of the innovations are unknown and have to be estimated from the input and output data. Since we analyze the system in discrete time, it is natural to set D to zero, so from now on we discuss the following model:

    x(t+1) = A x(t) + B u(t) + K e(t)
    y(t)   = C x(t) + e(t)                                           (2)

We can rewrite (2) in predictor form as

    x(t+1) = Ā x(t) + B̄ z(t)
    y(t)   = C x(t) + e(t)                                           (3)

where

    Ā = A − KC                                                       (4)
    B̄ = [B, K]                                                      (5)
    z(t) = [u(t)^T, y(t)^T]^T                                        (6)

Ā can be regarded as asymptotically stable because K is the steady-state Kalman gain. From equation (3), it is easy to show that

    x(t) = Ā^p x(t−p) + Σ_{k=0}^{p−1} Ā^k B̄ z(t−k−1)                (7)

Since Ā is assumed to be asymptotically stable, for some sufficiently large integer p the first term on the right-hand side of equation (7) can be neglected (Ā^p ≈ 0), so that the state vector x(t) can be estimated from the past inputs, past outputs, and predictor Markov parameters as follows:

    x̂(t) = Σ_{k=0}^{p−1} Ā^k B̄ z(t−k−1)                             (8)

Replacing the state in (2) by the estimated state, the output becomes

    y(t) ≈ C x̂(t) + e(t) = C Σ_{k=0}^{p−1} Ā^k B̄ z(t−k−1) + e(t)    (9)

For the input and output time series data, the above equation yields

    y_t ≈ C L̄_z Z_p + e_t                                           (10)

where

    L̄_z = [B̄  Ā B̄  ⋯  Ā^{p−1} B̄]                                  (11)
    y_t = [y(t)  y(t+1)  ⋯  y(t+N−1)]
    z_t = [z(t)  z(t+1)  ⋯  z(t+N−1)]
    Z_p = [z_{t−1}^T  z_{t−2}^T  ⋯  z_{t−p}^T]^T                     (12)

Similar formulations are made for u_t and e_t as for y_t. The predictor Markov parameter sequence C L̄_z in equation (10) has the following least-squares estimate:

    C L̄_z = R̂_yp [R̂_pp]^{−1}                                        (13)

where we have defined the sample correlation matrices between two signals:


    R̂_yp = (1/N) y_t Z_p^T                                          (14)
    R̂_pp = (1/N) Z_p Z_p^T                                          (15)

The estimated predictor Markov parameters can be written as

    C Ā^k B̄ = [C Ā^k B,  C Ā^k K]                                   (16)

As we see from the above form, equation (13) gives an ARX model of order p.

3. SUBSPACE OKID ALGORITHM

As mentioned before, the idea of the new method relies on OKID (Juang et al., 1993) and SSARX (Jansson, 2003), and we present the SOKID method by borrowing ideas from both. Moreover, we introduce ERA (eigensystem realization algorithm) (Juang and Pappa, 1985) with CCA (canonical correlation analysis) weighting, which is applied to the Hankel matrix consisting of the estimated predictor Markov parameters. After the predictor matrices Ā, B̄, and C are estimated directly, the matrices A, B, C, and K are extracted easily from them.

In the OKID algorithm, as the next step, the system Markov parameters are calculated from the estimated predictor Markov parameters (13). Then a Hankel matrix consisting of the system Markov parameters is built, and finally the system matrices are estimated by applying ERA to this Hankel matrix.

Let us expand equation (3) into the following state-space form over the future horizon from t to t+f−1:

    Y_f = Γ̄_f X(t) + Φ̄ Z_{f−1} + E_f                                (17)

where

    Γ̄_f = [C;  C Ā;  … ;  C Ā^{f−1}]                                 (18)
    X(t) = [x(t)  x(t+1)  ⋯  x(t+N−1)]                               (19)
    Φ̄ = [0 0 ⋯ 0;  C B̄ 0 ⋯ 0;  C Ā B̄  C B̄ ⋯ 0;  ⋮ ⋱ ;  C Ā^{f−2} B̄  C Ā^{f−3} B̄ ⋯ C B̄]   (20)
    Y_f = [y_t^T  y_{t+1}^T  ⋯  y_{t+f−1}^T]^T                       (21)
    E_f = [e_t^T  e_{t+1}^T  ⋯  e_{t+f−1}^T]^T                       (22)
    Z_{f−1} = [z_t^T  z_{t+1}^T  ⋯  z_{t+f−2}^T]^T                   (23)

The estimated predictor Markov parameters in equation (13) are used in the above matrix Φ̄. The state sequence X(t) can be written from equation (8), using the controllability matrix L̄_z and the past data Z_p, as

    X(t) ≈ L̄_z Z_p                                                  (24)

Using this relationship, equation (17) can be written as

    Y_f ≈ Γ̄_f L̄_z Z_p + Φ̄ Z_{f−1} + E_f                            (25)

Equation (25) can also be interpreted as an extension of equation (10). Here we define the following matrix:

    H̄_k = Γ̄_f Ā^k [B̄  Ā B̄  ⋯  Ā^{p−1} B̄] = Γ̄_f Ā^k L̄_z          (26)

where H̄(k) is the f·ny × p(nu+ny) Hankel matrix composed of the estimated predictor Markov parameters. With k = 0 in equation (26), we find that

    H̄_0 = Γ̄_f L̄_z                                                  (27)

We rearrange equation (25) so that CCA can be applied to the future outputs with the effects of the future inputs removed:

    Ỹ_f = Y_f − Φ̄ Z_{f−1} ≈ Γ̄_f L̄_z Z_p + E_f = H̄_0 Z_p + E_f      (28)

By the least-squares method, H̄_0 can be written as

    H̄_0 = R̂_fp [R̂_pp]^{−1}                                          (29)

where

    R̂_fp = (1/(N−f)) Ỹ_f Z_p^T                                      (30)

Furthermore, using the sample correlation matrices,

    H̄_0 = [R̂_ff]^{1/2} [R̂_ff]^{−1/2} R̂_fp [R̂_pp]^{−1/2} [R̂_pp]^{−1/2} = W_1^{−1} G W_2   (31)

where

    W_1 = R̂_ff^{−1/2},   W_2 = R̂_pp^{−1/2}                          (32)
    G = W_1 R̂_fp W_2                                                 (33)

Equation (33) is the canonical correlation matrix between the two signals Ỹ_f and Z_p; applying singular value decomposition to G yields the CCA. W_1 and W_2 are normalization matrices and can be regarded as the CCA weighting as well. Compute the singular value decomposition truncated at the system order n,

    G = U S V^T ≈ U_n S_n V_n^T                                      (34)

where U and V are orthonormal matrices of left and right singular vectors, respectively, and S is a diagonal matrix of singular values in nonincreasing order along the diagonal. U_n and V_n are composed of the first n columns of U and V, respectively, and S_n is the diagonal matrix of the n largest singular values. Then H̄_0 can be written as

    H̄_0 = W_1^{−1} U_n S_n V_n^T W_2                                 (35)

We refer to (35) as the ERA with CCA weighting. Denoting by H̄_0^† the Moore-Penrose pseudo-inverse of H̄_0, we have

    H̄_0 H̄_0^† H̄_0 = Γ̄_f L̄_z H̄_0^† Γ̄_f L̄_z = Γ̄_f L̄_z = H̄_0      (36)

Using U_n^T U_n = I_n and V_n^T V_n = I_n, substituting equation (35) into equation (36) yields

    H̄_0^† = W_2^{−1} V_n S_n^{−1} U_n^T W_1                          (37)

One possible choice of Γ̄_f and L̄_z is the balanced one, such that Γ̄_f = W_1^{−1} U_n S_n^{1/2} and L̄_z = S_n^{1/2} V_n^T W_2. We then have the following relation:

    L̄_z H̄_0^† Γ̄_f = I_n                                             (38)

Let O_i denote a null matrix of order i and I_i an identity matrix of order i. Since nu is the number of inputs and ny the number of outputs, we define the following matrices:

    E_nu^T = [I_nu  O_nu  ⋯  O_nu]                                   (39)
    E_ny^T = [I_ny  O_ny  ⋯  O_ny]                                   (40)

ERA with the CCA weighting is then derived as the following formulation:

    C Ā^{k−1} B̄ = E_ny^T H̄(k−1) E_nu
                = E_ny^T Γ̄_f Ā^{k−1} L̄_z E_nu
                = E_ny^T Γ̄_f L̄_z H̄_0^† Γ̄_f Ā^{k−1} L̄_z H̄_0^† Γ̄_f L̄_z E_nu    (from (38))
                = E_ny^T W_1^{−1} U_n S_n^{1/2} [S_n^{−1/2} U_n^T W_1 H̄_1 W_2^{−1} V_n S_n^{−1/2}]^{k−1} S_n^{1/2} V_n^T W_2 E_nu

From this formulation, the observer matrices can be obtained as follows:

    Ā = S_n^{−1/2} U_n^T W_1 H̄_1 W_2^{−1} V_n S_n^{−1/2}            (41)
    B̄ = S_n^{1/2} V_n^T W_2 E_nu                                     (42)
    C = E_ny^T W_1^{−1} U_n S_n^{1/2}                                (43)

On the other hand, we have to calculate H̄_1 in order to estimate Ā. From definition (26), H̄_1 becomes

    H̄_1 = Γ̄_f Ā [B̄  Ā B̄  ⋯  Ā^{p−1} B̄]                            (44)
        = Γ̄_f [Ā B̄  Ā^2 B̄  ⋯  Ā^p B̄]                              (45)

If the Hankel matrix H̄_1 is built from equation (45), one may take the second to pth block columns of H̄_0, assign them to the first to (p−1)th block columns of H̄_1, and set the pth block column of H̄_1 to zero. This operation is implicitly equivalent to using the state vector (see Appendix A).

To recover the system matrices of order n, we use the following relations, in MATLAB notation:

    B = B̄(1:n, 1:nu)                                                 (46)
    K = B̄(1:n, nu+1:nu+ny)                                           (47)
    C  from (43)                                                     (48)
    A = Ā + KC                                                       (49)

We find that Φ̄ consists of consistent estimates if we choose a sufficiently large integer p, which is the order of the ARX model. Since this feature also holds for closed-loop data, it is clear that SOKID can handle closed-loop data as well.

4. SIMULATION RESULTS

A simulation example, presented by (Verhaegen, 1993) and used as a benchmark problem in (Van Overschee and De Moor, 1996), is used here to test the effectiveness of the proposed SOKID method. The model is given by equation (2) with the following numerical values:

    A = [4.40 1 0 0 0; −8.09 0 1 0 0; 7.83 0 0 1 0; −4.00 0 0 0 1; 0.86 0 0 0 0]
    B = [0.00098; 0.01299; 0.01859; 0.0033; −0.00002]
    C = [1 0 0 0 0],   K = [2.3; −6.64; 7.515; −4.0146; 0.86336],   D = 0

The state-space model of the feedback controller has the following values:

    Ac = [2.65 −3.11 1.75 −0.39; 1 0 0 0; 0 1 0 0; 0 0 1 0],   Bc = [1; 0; 0; 0]
    Cc = [−0.4135 0.8629 −0.7625 0.2521],   Dc = 0.61

We generate 100 data sets, each with 1200 data points, by Monte-Carlo simulation under the above feedback control system. The same conditions as in (Van Overschee and De Moor, 1996) are considered here. The reference signal r is a Gaussian white noise sequence with variance 1; for the innovation sequence e, Gaussian white noise with variance 1/9 is used. To decide the ARX lag, the final output error (FOE) criterion (Zhu, 2001) is used first. In each simulation, the past and future horizons are set to p = f = 20, except for OKID where f = 1, since OKID does not belong to the SIM framework. This means high-order ARX modeling, as the system order is n = 5.
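The setup above can be sketched end to end: simulate the closed-loop benchmark, fit the high-order ARX model of equation (13), build the predictor-Markov Hankel matrices, and run ERA to recover the system matrices. The sketch below is a minimal illustration, not the paper's exact algorithm: the controller interconnection u = Cc xc + Dc (r − y) is our assumption, and identity weighting is used in place of the CCA weights W_1 and W_2.

```python
import numpy as np

rng = np.random.default_rng(0)

# Benchmark plant (Verhaegen, 1993) in innovation form (2), D = 0.
A = np.array([[ 4.40, 1.0, 0.0, 0.0, 0.0],
              [-8.09, 0.0, 1.0, 0.0, 0.0],
              [ 7.83, 0.0, 0.0, 1.0, 0.0],
              [-4.00, 0.0, 0.0, 0.0, 1.0],
              [ 0.86, 0.0, 0.0, 0.0, 0.0]])
B = np.array([[0.00098], [0.01299], [0.01859], [0.0033], [-0.00002]])
C = np.array([[1.0, 0.0, 0.0, 0.0, 0.0]])
K = np.array([[2.3], [-6.64], [7.515], [-4.0146], [0.86336]])

# Controller state space from Section 4; the interconnection
# u = Cc xc + Dc (r - y), xc+ = Ac xc + Bc (r - y) is our assumption.
Ac = np.array([[2.65, -3.11, 1.75, -0.39],
               [1.0, 0.0, 0.0, 0.0],
               [0.0, 1.0, 0.0, 0.0],
               [0.0, 0.0, 1.0, 0.0]])
Bc = np.array([[1.0], [0.0], [0.0], [0.0]])
Cc = np.array([[-0.4135, 0.8629, -0.7625, 0.2521]])
Dc = 0.61

# Closed-loop simulation: r ~ N(0, 1), e ~ N(0, 1/9), N = 1200 points.
N, nu, ny = 1200, 1, 1
x, xc = np.zeros((5, 1)), np.zeros((4, 1))
u_log, y_log = np.zeros(N), np.zeros(N)
for t in range(N):
    e = rng.normal(0.0, 1.0 / 3.0)
    r = rng.normal(0.0, 1.0)
    y = (C @ x).item() + e
    v = r - y
    u = (Cc @ xc).item() + Dc * v
    x = A @ x + B * u + K * e
    xc = Ac @ xc + Bc * v
    u_log[t], y_log[t] = u, y

# High-order ARX regression of y(t) on z(t-1), ..., z(t-p) -- eq. (13).
p = f = 20
nz = nu + ny
Z = np.vstack([np.vstack([u_log[p - k - 1:N - k - 1],
                          y_log[p - k - 1:N - k - 1]]) for k in range(p)])
Y = y_log[p:N]
theta = np.linalg.solve(Z @ Z.T, Z @ Y)      # estimate of C*Lz, length p*nz
markov = theta.reshape(p, nz)                # block k = [C Abar^k B, C Abar^k K]

# Hankel matrices H0, H1 from the predictor Markov parameters,
# with blocks of index >= p set to zero (Abar^p ~ 0, cf. Appendix A).
def hankel(shift):
    H = np.zeros((f, p * nz))
    for i in range(f):
        for j in range(p):
            if i + j + shift < p:
                H[i, j * nz:(j + 1) * nz] = markov[i + j + shift]
    return H

H0, H1 = hankel(0), hankel(1)

# ERA step; identity weighting stands in for the CCA weights W1, W2.
n = 5
U, s, Vt = np.linalg.svd(H0)
Un, Vn = U[:, :n], Vt[:n].T
Sr = np.diag(np.sqrt(s[:n]))                 # Sn^{1/2}
Sri = np.diag(1.0 / np.sqrt(s[:n]))          # Sn^{-1/2}
Gf = Un @ Sr                                 # observability factor
Lz = Sr @ Vn.T                               # controllability factor
Abar = Sri @ Un.T @ H1 @ Vn @ Sri            # cf. eq. (41)
Bbar = Lz[:, :nz]                            # [B, K] in the realized basis
C_est = Gf[:ny, :]
B_est, K_est = Bbar[:, :nu], Bbar[:, nu:]
A_est = Abar + K_est @ C_est                 # eq. (49)
print(np.sort(np.abs(np.linalg.eigvals(A_est))))  # estimated pole magnitudes
```

With the CCA weights inserted as in equations (31)-(43), the realization becomes the balanced one used by SOKID.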

Fig. 1 shows scatter plots of the eigenvalues of the estimated A matrices. The results of SOKID and SSARX are almost identical, and both give unbiased estimates. Some of the poles estimated by OKID are unbiased while others are biased, and the variances of the OKID estimates are larger than those of SOKID. It is clear that Matlab-N4SID with CVA weighting gives biased estimates, even though the variances of the estimated eigenvalues are small.

[Fig. 1. Scatter plots of estimated poles. The upper-left and upper-right plots show the poles of SOKID and SSARX, respectively; the lower-left and lower-right plots show OKID and Matlab-N4SID with CVA weighting. The past and future horizons are p = f = 20, except f = 1 for OKID, and the system order is n = 5.]

5. INDUSTRIAL EXAMPLE

The next example, from an actual plant, is shown in Fig. 2 (Lakshminarayanan et al., 2001). Referring the reader to (Lakshminarayanan et al., 2001) for a detailed explanation of the process and the PID tuning results, we concentrate here on the identification study. The objective was to improve the control performance of the first reactor (R1) inlet temperature (CV), which was oscillating at unacceptable levels. Step testing without controller feedback was done by manipulating the R1 inlet temperature controller (TC1) output (MV) and the R2 inlet controller output. A process model was obtained from the open-loop data, including the R1 and R2 outlet temperatures and the R2 inlet controller output signal as disturbances DV1, DV2, and DV3, respectively. PID parameters were designed using the process model, and the resulting PID tuning improved the TC1 performance dramatically.

[Fig. 2. Reactor system in an actual plant: fresh feed to reactor R1 and then R2, ending in the product stream; temperature controllers TC1 (MV, CV) and TC2, temperature indicators TI1 and TI2; disturbances DV1-DV3.]

Here, let us consider the process model obtained from the open-loop data as the benchmark model, shown with a bold continuous line in Fig. 3. Furthermore, closed-loop operating data, collected while operators were changing the TC1 set-point, consists of 2500 points at a sampling interval of 1 minute. Using the closed-loop data and the FOE criterion, closed-loop models are obtained automatically with suitable lag and order. In the upper plot of Fig. 3, the continuous line shows the SOKID model obtained from the closed-loop data, the dash-dot line SSARX, the dotted line OKID, and the dashed line N4SID with CVA weighting. The lower plot of Fig. 3 shows the absolute residuals between the benchmark model and those closed-loop models. The root-mean-square values of the residuals over the frequency domain are 0.22 for SOKID, 0.34 for SSARX, 0.29 for OKID, and 0.31 for N4SID with CVA weighting. SOKID is thus closest to the benchmark model identified from the open-loop data. All closed-loop models have low gain, so a PID controller with high gain could result; it might be better to design the PID controller with the confidence intervals of the closed-loop model in mind.
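The frequency-domain comparison just described can be reproduced once state-space models are in hand. The sketch below computes the magnitude response |C (e^{jω} I − A)^{−1} B + D| on a frequency grid and the RMS residual between two models; the two first-order models are purely hypothetical stand-ins, since the industrial models themselves are not given in the paper.

```python
import numpy as np

def freq_mag(A, B, C, D, w):
    """Magnitude |C (e^{jw} I - A)^{-1} B + D| of a discrete-time SISO model."""
    n = A.shape[0]
    return np.array([abs((C @ np.linalg.solve(np.exp(1j * wk) * np.eye(n) - A, B)
                          + D).item()) for wk in w])

# Hypothetical first-order models standing in for the benchmark model and a
# closed-loop estimate; the industrial models are not public.
A1, B1, C1, D1 = np.array([[0.90]]), np.array([[1.0]]), np.array([[0.10]]), 0.0
A2, B2, C2, D2 = np.array([[0.88]]), np.array([[1.0]]), np.array([[0.10]]), 0.0

# Frequency grid in rad/min (1-minute sampling, so rad/sample = rad/min).
w = np.logspace(-3.5, np.log10(np.pi), 200)
res = freq_mag(A1, B1, C1, D1, w) - freq_mag(A2, B2, C2, D2, w)
rms = np.sqrt(np.mean(res ** 2))   # residual RMS over the frequency grid
print(rms)
```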

6. CONCLUSION

In this paper, we have proposed a new subspace identification method (SOKID). The idea of the method relies on OKID and SSARX, and the algorithm extends OKID into the SIM framework. A typical feature of SOKID is that it explicitly estimates neither a state vector nor an observability matrix, as some conventional subspace identification methods do. The procedure of neglecting the predictor matrix Ā^p, as in equation (8), is important for closed-loop identification and is equivalent to building a high-order ARX model. We have introduced ERA with CCA weighting and applied it to the Hankel matrix composed of the estimated predictor Markov parameters, so that the observer matrices are calculated directly; recovering the system matrices requires no complex calculation. The first simulation example shows that the accuracy of SOKID is similar to that of SSARX, and the second example shows that SOKID is closest to the benchmark model, which gave an improvement of the controller performance in the actual plant. Finally, SOKID can handle data collected in both open and closed loop. Although dealing with actual plant data still seems very difficult for other algorithms, the growing number of useful methods for closed-loop identification should help in building models in the field.

[Fig. 3. The top plot shows the frequency responses (magnitude); the bottom plot shows the absolute residuals between the benchmark model and the closed-loop models, over 0.000314-3.14 rad/min. The bold continuous line in the top plot is the benchmark model; the continuous line shows the SOKID model, the dash-dot line SSARX, the dotted line OKID, and the dashed line N4SID with CVA weighting.]

REFERENCES

Jansson, Magnus (2003). Subspace identification and ARX modeling. 13th IFAC Symposium on System Identification.
Juang, Jer-Nan (1994). Applied System Identification. Prentice Hall PTR.
Juang, Jer-Nan and Richard S. Pappa (1985). An eigensystem realization algorithm for modal parameter identification and model reduction. Journal of Guidance, Control, and Dynamics 8(5), 620-627.
Juang, Jer-Nan, Minh Phan, Lucas G. Horta and Richard W. Longman (1993). Identification of observer/Kalman filter Markov parameters: theory and experiments. Journal of Guidance, Control, and Dynamics 16(2), 320-329.
Lakshminarayanan, S., G. Emoto, K. Onodera, K. Akamatsu, S. Amano and S. Ebara (2001). Industrial applications of system identification and control of processes with recycles. 6th IFAC Symposium on Dynamics and Control of Process Systems.
Ljung, Lennart (1999). System Identification: Theory for the User, second edition. Prentice Hall PTR.
Qin, S. Joe and Lennart Ljung (2003). Closed-loop subspace identification with innovation estimation. 13th IFAC Symposium on System Identification.
Van Overschee, Peter and Bart De Moor (1996). Closed-Loop Subspace System Identification. ESAT-SISTA/TR 1996-52I, Katholieke Universiteit Leuven.
Verhaegen, Michel (1993). Application of a subspace model identification technique to identify LTI systems operating in closed-loop. Automatica 29(4), 1027-1040.
Zhu, Yucai (2001). Multivariable System Identification for Process Control. Pergamon, an imprint of Elsevier Science.

APPENDIX A

We clarify the relationship between calculating the Hankel matrix H̄_1 and obtaining the state vector X(t). Multiplying both sides of the predictor-model state equation on the left by Γ̄_f gives

    Γ̄_f x(t+1) = Γ̄_f Ā x(t) + Γ̄_f B̄ z(t)

For the time series data, the above equation yields

    Γ̄_f X(t+1) = Γ̄_f Ā X(t) + Γ̄_f B̄ z_t

Using X̂(t+1) = L̄_z Z_{p+1} and X̂(t) = L̄_z Z_p,

    Γ̄_f L̄_z Z_{p+1} = Γ̄_f Ā L̄_z Z_p + Γ̄_f [B̄, 0, ⋯, 0] Z_{p+1}

Since H̄_1 = Γ̄_f Ā L̄_z, rearranging the above equation gives

    Γ̄_f [0  Ā B̄  ⋯  Ā^{p−1} B̄] Z_{p+1} = H̄_1 Z_p
    Γ̄_f [Ā B̄  ⋯  Ā^{p−1} B̄  0] Z_p = H̄_1 Z_p

Hence,

    H̄_1 = Γ̄_f [Ā B̄  ⋯  Ā^{p−1} B̄  0]

It can be found that setting Ā^p B̄ to zero in equation (45) is implicitly equivalent to estimating the state vector.
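The block-shift construction above can be checked numerically. The sketch below builds H̄_0 = Γ̄_f L̄_z for a randomly generated stable predictor (an arbitrary example, not the paper's system), shifts its block columns left by one and zeros the last block, and compares the result with the exact H̄_1 = Γ̄_f Ā L̄_z; the two differ only in the last block column, Γ̄_f Ā^p B̄.

```python
import numpy as np

rng = np.random.default_rng(1)
n, nz, p, f = 3, 2, 15, 5

# An arbitrary stable predictor (Abar, Bbar, C); not the paper's system.
Abar = 0.5 * rng.standard_normal((n, n))
rho = np.max(np.abs(np.linalg.eigvals(Abar)))
Abar /= max(1.0, rho / 0.6)                   # force spectral radius <= 0.6
Bbar = rng.standard_normal((n, nz))
C = rng.standard_normal((1, n))

# Extended observability and controllability factors, eqs. (18) and (11).
Gf = np.vstack([C @ np.linalg.matrix_power(Abar, i) for i in range(f)])
Lz = np.hstack([np.linalg.matrix_power(Abar, k) @ Bbar for k in range(p)])
H0 = Gf @ Lz                                  # eq. (27)
H1 = Gf @ Abar @ Lz                           # exact H1, eq. (44)

# Appendix A trick: shift H0 one block column left, zero the last block.
H1_shift = np.zeros_like(H0)
H1_shift[:, :(p - 1) * nz] = H0[:, nz:]

# The two agree except in the last block column, which equals Gf Abar^p Bbar.
print(np.max(np.abs(H1 - H1_shift)))
```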