
Automatica, Vol. 20, No. 5, pp. 671-679, 1984. Printed in Great Britain. © 1984 International Federation of Automatic Control. Pergamon Press Ltd.

A Self-tuning Regulator Based on Optimal Output Feedback Theory*

P. M. MÄKILÄ†

*Received 6 September 1983; revised 4 April 1984. The original version of this paper was not presented at any IFAC meeting. This paper was recommended for publication in revised form by associate editor G. Goodwin under the direction of editor B. Anderson.
†Åbo Akademi (Swedish University of Åbo), Department of Chemical Engineering, SF-20500 Åbo 50, Finland. Present address: Pulp and Paper Research Institute of Canada, Systems and Control Division, 570 St. Johns Boulevard, Pointe Claire, P.Q., Canada H9R 3J9.

Self-tuning regulators can be developed using optimal output feedback theory for a variety of applications, including autotuning of PID-regulators and adaptive decentralized control.

Key Words--Adaptive control; convergence of numerical methods; discrete time systems; multivariable control systems; optimal control; self-tuning regulators; state-space methods; stochastic control.

Abstract--Self-tuning control of stochastic systems is considered. The underlying control problem is a parametric linear quadratic problem for fixed structure controllers. An explicit self-tuning regulator is described based on optimal output feedback theory. The proposed self-tuner is a generalization of state-space LQG self-tuners to parametric LQ problems. Two interesting application areas of parametric LQ self-tuners are autotuning of parameters of low-order regulators, such as PID-regulators, and adaptive decentralized control.

1. INTRODUCTION

Self-tuning regulators (Åström and co-workers, 1977; Åström, 1981) have played an important role in adaptive control theory. Furthermore, several successful industrial feasibility studies and applications of self-tuning regulators have been reported (Åström and co-workers, 1977; Dumont and Belanger, 1978; Westerlund, Toivonen and Nyman, 1980). Self-tuning regulators (STRs) with regulator design based on minimizing quadratic criteria include the minimum variance STR (Åström and Wittenmark, 1973), the generalized minimum variance (single-step optimal) STR (Clarke and Gawthrop, 1975), and STRs based on LQG theory (Åström and co-workers, 1977).

In this paper the design of adaptive regulators is considered using a parametric LQ approach. The regulator design is based on the idea of minimizing a steady-state quadratic criterion with respect to the regulator parameters. The steady-state quadratic criterion is well justified, for instance, in many industrial process control problems (Åström and co-workers, 1977; Westerlund, 1981; Mäkilä, Westerlund and Toivonen, 1984). Following Goodwin and Ramadge (1979) and Ljung and Trulsson (1981), the design of adaptive regulators is discussed in this paper when the order, or structure, of the regulator is specified independently of the estimated process model. This flexibility offers advantages, for instance, in autotuning of simple regulators, such as PID-regulators, and in adaptive decentralized control (Goodwin and Ramadge, 1979). Furthermore, the problem of zero cancellation, which restricts the use of some adaptive schemes to systems with stable zeros only (Landau, M'Saad and Ortega, 1983), can be avoided by this approach.

Goodwin and Ramadge (1979) have described a self-tuning regulator based on the parametric LQ approach. In Goodwin and Ramadge (1979) an explicit criterion minimization scheme was also proposed for the design of the adaptive regulator. Another explicit criterion minimization scheme has been given in Ljung and Trulsson (1981), who also presented some convergence results for their adaptive algorithm. In this paper an alternative approach is described for designing parametric LQ self-tuners, based on recent developments in numerical methods for solving optimal output feedback problems (Halyo and Broussard, 1981; Mäkilä, 1983; Mäkilä, Westerlund and Toivonen, 1984). An explicit parametric LQ self-tuner is developed using the linear descent mapping method (Mäkilä, 1983) to solve the optimal output feedback problem (Anderson and Moore, 1971). The proposed self-tuning regulator can be interpreted as a generalization of state-space self-tuning regulators based on LQG theory to parametric LQ problems. Recently, self-tuners based on LQG theory have also attracted increasing interest in applications (Hallager and Jorgensen, 1983).


The structure of the paper is as follows. In Section 2 the parametric LQ problem is described for the case of known parameters. The linear descent mapping (LDM) method for solving the optimal output feedback problem is discussed. It is shown that the sequence of feedback gains generated by a specific LDM algorithm converges to a stationary point of the loss function for any initial stabilizing feedback gains under mild assumptions. In Section 3 an explicit self-tuning regulator is described for parametric LQ problems, based on applying an LDM algorithm in a computationally attractive way. In Section 4 a 2-input-2-output simulation example is considered.

2. THE PARAMETRIC LQ PROBLEM

In this section the regulator design problem is described for a system with known parameters. Consider a linear discrete time stochastic system

x(t + 1) = Ax(t) + Bu(t) + w(t)    (1a)
z(t) = Dx(t) + n(t)    (1b)

where x is an n-dimensional process state vector, u is a p-dimensional input vector, z is an r-dimensional vector, {w(t)} is a sequence of coloured noise with zero mean and rational spectral density [i.e. w(t) is the 'environmental' noise], and {n(t)} is a white noise sequence with zero mean. Furthermore, let w(t) be expressed as

w(t) = G(q^-1)v(t)    (1c)
G(q^-1) = G_0 + G_1 q^-1 + ... + G_nG q^-nG,  G_0 = I    (1d)

where {v(t)} is a sequence of white noise with zero mean value, and q^-1 is the backward shift operator: q^-1 y(t) = y(t - 1), etc. Introduce the following definitions and conditions:

E n(t)n(t)^T = R_n    (2a)
E v(t)v(t)^T = R_v    (2b)
E x(t)n(t)^T = 0    (2c)
E v(t)x(t - i)^T = 0,  i = 0, 1, ...    (2d)
E n(t)w(t ± j)^T = 0,  j = 0, 1, ...    (2e)

The control law is restricted to be a linear transformation of the vector z(t)

u(t) = F z(t).    (3)

Introduce the set of stabilizing feedback gain matrices

S_F = {F | ρ(A + BFD) < 1}    (4)

where ρ(·) denotes the spectral radius of a square matrix. It is assumed that S_F ≠ ∅, i.e. that the triple (A, B, D) is stabilizable by the control law (3). Define the following stationary covariance matrices, obtained when the system (1) is controlled with the time-invariant regulator (3) and F ∈ S_F:

P_x = E x(t)x(t)^T    (5a)
P_u = E u(t)u(t)^T    (5b)
P_i = E x(t)w(t + i)^T,  i = 0, 1, ...    (5c)
P_w,i = E w(t)w(t + i)^T,  i = 0, 1, ...    (5d)

Introduce the quadratic loss function

J(F) = lim_{N→∞} E (1/N) Σ_{t=0}^{N-1} [x(t)^T Q x(t) + u(t)^T R u(t)]    (6)

where Q and R are symmetric positive semidefinite matrices. The loss function (6) is to be evaluated for the stationary case. Then J(F) can be written as

J(F) = E[x(t)^T Q x(t) + u(t)^T R u(t)]    (7)

J(F) = tr Q P_x + tr R P_u.    (8)

The stationary state and input covariance matrices, P_x and P_u, are given by

P_x = (A + BFD)P_x(A + BFD)^T + BFR_nF^TB^T + P_w,0 + (A + BFD)P_0 + [(A + BFD)P_0]^T    (9a)
P_u = FDP_xD^TF^T + FR_nF^T    (9b)

where P_w,0 is obtained from

P_w,0 = Σ_{k=0}^{nG} G_k R_v G_k^T    (9c)

and P_i is given recursively by

P_i = (A + BFD)P_{i+1} + P_w,{i+1},  i = 0, 1, ..., nG - 1    (9d)
P_w,{i+1} = Σ_{k=i+1}^{nG} G_{k-i-1} R_v G_k^T,  i = 0, 1, ..., nG - 1    (9e)
P_i = 0,  P_w,{i+1} = 0  for i > nG.    (9f)

Consider now the problem of minimizing the loss function (7) in the set of stabilizing feedback gains, i.e. consider the problem

min_{F ∈ S_F} J(F).    (10)

The control problem is thus a parametric LQ problem. It can also be interpreted as an optimal output feedback problem (Anderson and Moore, 1971; O'Reilly, 1980; Mäkilä, Westerlund and Toivonen, 1984).
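For the special case of white process noise (nG = 0, so that P_0 = 0 and P_w,0 = R_v), the loss (8) can be evaluated directly from (9a)-(9b). The following minimal Python sketch illustrates this; the second-order system data at the bottom are illustrative placeholders, not values from the paper.

```python
# Minimal sketch: evaluating J(F) = tr(Q Px) + tr(R Pu), cf. (8)-(9), assuming nG = 0.
import numpy as np
from scipy.linalg import solve_discrete_lyapunov

def loss_J(F, A, B, D, Q, R, Rn, Rv):
    Acl = A + B @ F @ D                          # closed-loop matrix under (3)
    if max(abs(np.linalg.eigvals(Acl))) >= 1.0:  # stabilizing check, cf. (4)
        return np.inf
    W = B @ F @ Rn @ F.T @ B.T + Rv              # driving covariance in (9a), nG = 0
    Px = solve_discrete_lyapunov(Acl, W)         # Px = Acl Px Acl^T + W
    Pu = F @ D @ Px @ D.T @ F.T + F @ Rn @ F.T   # (9b)
    return np.trace(Q @ Px) + np.trace(R @ Pu)

# Illustrative placeholder data (not from the paper):
A = np.array([[0.9, 0.1], [0.0, 0.8]])
B = np.array([[1.0], [0.5]])
D = np.array([[1.0, 0.0]])                       # z(t) = first state component + noise
Q, R = np.eye(2), np.array([[0.1]])
Rn, Rv = np.array([[0.01]]), 0.1 * np.eye(2)
print(loss_J(np.array([[-0.4]]), A, B, D, Q, R, Rn, Rv))
```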

Remark
The regulator structure (3) is flexible (O'Reilly, 1980). For instance, regulators in the transfer function form given below can be considered:

u(t) + H_1 u(t - 1) + ... + H_nH u(t - nH) = K_0 y(t) + ... + K_nK y(t - nK)    (11)

where y is an m-dimensional output vector given by

y(t) = Cx(t) + e(t)    (12)

and e(t) is white noise with zero mean. Regulator (11) can be written in form (3) by defining

z(t) = [y(t)^T, ..., y(t - nK)^T, u(t - 1)^T, ..., u(t - nH)^T]^T
F = [K_0, ..., K_nK, -H_1, ..., -H_nH]

and augmenting the state vector x accordingly. Furthermore, linear regulators in the dynamic form

u(t) = F_1 y(t) + F_2 s(t)    (13a)
s(t + 1) = F_3 s(t) + F_4 y(t)    (13b)

can be written in form (3) by defining the augmented system x̄(t) = [x(t)^T, s(t)^T]^T, ū(t) = [u(t)^T, s(t + 1)^T]^T, z(t) = [y(t)^T, s(t)^T]^T,

x̄(t + 1) = [A 0; 0 0] x̄(t) + [B 0; 0 I] ū(t) + [w(t); 0]    (14a)
ū(t) = [F_1 F_2; F_4 F_3] z(t)    (14b)
z(t) = [C 0; 0 I] x̄(t) + [e(t); 0].    (14c)
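As an illustration of the augmentation described in this remark, the sketch below builds augmented (A, B, D) matrices and the gain F for the special case nK = nH = 1 of regulator (11). The state ordering [x; y(t-1); u(t-1)] is an assumption made here for illustration; the e(t) contributions belong to the augmented noise terms, which the sketch does not construct.

```python
# Sketch of the state augmentation behind regulator (11) with nK = nH = 1,
# i.e. u(t) = K0 y(t) + K1 y(t-1) - H1 u(t-1).
import numpy as np

def augment_low_order(A, B, C, K0, K1, H1):
    n, p = B.shape
    m = C.shape[0]
    # augmented state x_bar(t) = [x(t); y(t-1); u(t-1)]
    A_bar = np.block([
        [A,                np.zeros((n, m)), np.zeros((n, p))],
        [C,                np.zeros((m, m)), np.zeros((m, p))],  # y(t-1) <- C x(t)
        [np.zeros((p, n)), np.zeros((p, m)), np.zeros((p, p))],
    ])
    B_bar = np.vstack([B, np.zeros((m, p)), np.eye(p)])          # u(t-1) <- u(t)
    # z(t) = [y(t); y(t-1); u(t-1)] = D_bar x_bar(t) + noise terms
    D_bar = np.block([
        [C,                np.zeros((m, m)), np.zeros((m, p))],
        [np.zeros((m, n)), np.eye(m),        np.zeros((m, p))],
        [np.zeros((p, n)), np.zeros((p, m)), np.eye(p)],
    ])
    F = np.hstack([K0, K1, -H1])                                 # u(t) = F z(t), cf. (3)
    return A_bar, B_bar, D_bar, F
```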

Control problem solution
The loss function J(F), equation (8), is a rational function of the feedback gain F in S_F and is thus Fréchet differentiable to any order, i.e. J(·) is C^∞ in S_F. The gradient of the loss function J(F) with respect to the feedback gain matrix F is given by

∂J/∂F = 2(B^T L B + R)F(D P_x D^T + R_n) + 2B^T L(A P_x D^T + P_0 D^T) + 2H^(0)    (15)

where H^(0) is obtained recursively from

H^(k) = B^T [(A + BFD)^{k+1}]^T L P_{k+1} D^T + H^(k+1)    (16a)
H^(nG) = 0    (16b)

where H^(k) = [H_ij^(k)], H_ij^(k) = tr L(A + BFD)^{k+1} ∂P_{k+1}/∂F_ij, k = 0, 1, ..., nG - 1, and L is a symmetric positive semidefinite solution to the discrete Lyapunov equation

L = (A + BFD)^T L(A + BFD) + Q + D^T F^T R F D.    (17)

Consider now the solution of the control problem (10). Introduce the level set

π(c) = {F ∈ S_F | J(F) ≤ c}    (18)

where c is a non-negative real number. Assume that π(c) is closed and bounded, i.e. compact, and nonempty for some c ≥ 0. Then, by the well-known theorem of Weierstrass, J(·) attains an absolute minimum on π(c). Existence conditions for an absolute minimum in terms of the system matrices (1) and (2) have been given for the optimal output feedback problem in O'Reilly (1980) and Halyo and Broussard (1981). A necessary optimality condition for F* ∈ S_F to be a minimum of J(F) in S_F is given by

[∂J/∂F]_{F*} = 0.    (19)

The local condition (19) may have several solutions in some cases (see e.g. Mäkilä, 1982a). The effective numerical solution for the optimal feedback gains is computationally a nontrivial task (Söderström, 1978; O'Reilly, 1980; Srinivasa and Rajagopalan, 1979). In this paper the linear mapping algorithm (Anderson and Moore, 1971; Halyo and Broussard, 1981; Mäkilä, 1983) is utilized with some modifications to guarantee convergence to a stationary point of the loss function, i.e. to a solution of condition (19). Note that it is sometimes convenient to fix some elements in the feedback gain matrix F, e.g. to zero to obtain a block-diagonal structure, c.f. decentralized control. In the sequel all elements in F are considered free for tuning; the general case is analogous (Mäkilä, 1982a; Mäkilä, Westerlund and Toivonen, 1984).

The linear descent mapping method
Consider a sequence of feedback gains, {F_k}, such that F_k ∈ S_F (it is assumed that S_F ≠ ∅), generated according to

F_{k+1} = F_k + a_k S_k    (20)

where a_k is a steplength parameter, and S_k is given as a solution to the linear matrix equation, c.f. the necessary optimality conditions (15) and (19),

(B^T L_k B + R)(F_k + S_k)(D P_x,k D^T + R_n) + B^T L_k(A P_x,k D^T + P_0,k D^T) + H_k^(0) = 0    (21)

where L_k ≡ L(F_k), P_x,k ≡ P_x(F_k) and H_k^(0) ≡ H^(0)(F_k). Assume that (B^T L_k B + R) and (D P_x,k D^T + R_n) are positive definite. Then S_k can be written as

S_k ≡ S(F_k) = -(1/2)(B^T L_k B + R)^{-1} [∂J/∂F]_k (D P_x,k D^T + R_n)^{-1}.    (22)
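The sketch below gathers (8), (9a)-(9b), (15), (17) and (20)-(22) into one LDM update for the simplified case nG = 0 (so that H^(0) = 0 and P_0 = 0), including a backtracking steplength that enforces the descent test (26) given below. The constants gamma, beta and alpha_bar are generic tuning values, not values from the paper.

```python
# Minimal sketch of one LDM update, assuming nG = 0 (white process noise).
import numpy as np
from scipy.linalg import solve_discrete_lyapunov

def loss_and_grad(F, A, B, D, Q, R, Rn, Rv):
    """J(F), dJ/dF, L and Px per (8), (15), (17), (9a)-(9b) with nG = 0."""
    Acl = A + B @ F @ D
    if max(abs(np.linalg.eigvals(Acl))) >= 1.0:
        return np.inf, None, None, None                              # F not in S_F, cf. (4)
    Px = solve_discrete_lyapunov(Acl, B @ F @ Rn @ F.T @ B.T + Rv)   # (9a)
    L = solve_discrete_lyapunov(Acl.T, Q + D.T @ F.T @ R @ F @ D)    # (17)
    Pu = F @ D @ Px @ D.T @ F.T + F @ Rn @ F.T                       # (9b)
    J = np.trace(Q @ Px) + np.trace(R @ Pu)                          # (8)
    dJ = (2 * (B.T @ L @ B + R) @ F @ (D @ Px @ D.T + Rn)
          + 2 * B.T @ L @ A @ Px @ D.T)                              # (15), H^(0) = 0
    return J, dJ, L, Px

def ldm_step(F, A, B, D, Q, R, Rn, Rv, gamma=0.1, beta=0.5, alpha_bar=1.0):
    """One update (20), with S_k from (22) and a backtracking descent test, cf. (26)."""
    J0, dJ, L, Px = loss_and_grad(F, A, B, D, Q, R, Rn, Rv)
    S = -0.5 * np.linalg.solve(B.T @ L @ B + R,
                               dJ @ np.linalg.inv(D @ Px @ D.T + Rn))  # (22)
    a = alpha_bar
    while True:
        Jnew, _, _, _ = loss_and_grad(F + a * S, A, B, D, Q, R, Rn, Rv)
        if np.isfinite(Jnew) and J0 - Jnew >= -gamma * a * np.trace(dJ.T @ S):
            return F + a * S                             # F + aS stabilizing and (26) holds
        a *= beta                                        # otherwise backtrack
```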


The increment S_k can be given a nice interpretation as the solution to the following quadratic program at F_k:

min_{dF} q_J(dF)    (23)

where

q_J(dF) = tr [∂J/∂F]_k^T dF + tr (B^T L_k B + R) dF (D P_x,k D^T + R_n) dF^T.    (24)

In Halyo and Broussard (1981) the quadratic form q_J(dF) is obtained as an approximation to an exact, 'almost' quadratic, expression for the loss increment J(F) - J(F_k). It is seen that S_k is a descent direction of the loss function J(F) at F_k, i.e.

tr [∂J/∂F]_k^T S_k < 0  for  [∂J/∂F]_k ≠ 0.    (25)

Then for each 0 ≤ γ < 1 there exists a_k > 0 such that F_k + a_k S_k ∈ S_F, and

J(F_k) - J(F_k + a_k S_k) ≥ -γ a_k tr [∂J/∂F]_k^T S_k.    (26)

The linear descent mapping (LDM) algorithm given in Mäkilä (1983) utilizes the Goldstein steplength rule (Goldstein, 1965; Fletcher, 1980) in a partial line search scheme for the steplength parameter a_k. Next the convergence properties of a simpler LDM algorithm using condition (26) are discussed. For this purpose a useful lemma is stated.

Lemma. Consider the function h(X) = tr X^T U X V, where X is an m_x × n_x matrix with real entries, and U, V > 0, i.e. U and V are positive definite matrices with appropriate dimensions. Then (a) h(X) > 0 for all X ≠ 0, and (b) for any given ε > 0 there exists δ > 0 such that tr X^T U X V < δ implies ||X|| ≡ √(tr X^T X) < ε.

The proof is rather straightforward and is omitted for brevity. Part (a) can be shown, e.g. by using the Cholesky factors of U and V, and part (b) can be shown by developing X in an orthonormalized basis of R^{m_x × n_x} constructed from the eigenmatrices E of the eigenvalue problem U E V = λE.

Convergence of {J(F_k)}, {F_k}. Assume that S_F ≠ ∅. Let F_0 ∈ S_F. Assume that π[J(F_0)] is bounded and closed. Let the sequence {F_k} be defined by

F_{k+1} = F_k + a_k S_k,  k ≥ 0    (27a)

S_k = -(1/2)(B^T L_k B + R + Γ_k)^{-1} [∂J/∂F]_k (D P_x,k D^T + R_n + Λ_k)^{-1}    (27b)

where {Γ_k} and {Λ_k} are sequences of symmetric matrices such that (B^T L_k B + R + Γ_k) and (D P_x,k D^T + R_n + Λ_k) are positive definite matrices with condition numbers bounded above for all k. Let m_k be the smallest non-negative integer such that F_k + ᾱ_k β^{m_k} S_k ∈ S_F, and

J(F_k) - J(F_k + ᾱ_k β^{m_k} S_k) ≥ -γ ᾱ_k β^{m_k} tr [∂J/∂F]_k^T S_k    (27c)

where ᾱ_k ≥ α for all k, for some α > 0. Let a_k = ᾱ_k β^{m_k}, 0 < γ < 1, 0 < β < 1. Then

(a) lim_{k→∞} J(F_k) = J(F*) for some F* ∈ π[J(F_0)].    (28)

(b) [∂J/∂F]_k = 0 for some k, or tr [∂J/∂F]_k^T S_k → 0 as k → ∞.    (29)

Let ∂J/∂F have a finite number of zeros on π[J(F_0)]. Then

(c) lim_{k→∞} F_k = F*,  [∂J/∂F]_{F*} = 0.    (30)

Proof. The sequence {J(F_k)} is monotonically decreasing and bounded below, and thus converges to a point J(F*) such that F* ∈ π[J(F_0)], for J(·) is continuous on S_F. To show (29), note that given ε > 0 there exists δ > 0 such that if -tr [∂J/∂F]_k^T S_k < δ then ||S_k|| < ε, for F_k ∈ π[J(F_0)], due to the matrix lemma given above. Assume now that (29) is not true. Then -tr [∂J/∂F]_k^T S_k does not converge to 0. Thus there exist a subsequence {F_k_j} and a number δ > 0 such that -tr [∂J/∂F]_{k_j}^T S_{k_j} > δ. Assume that a_{k_j} ≥ a > 0. If no such a > 0 exists, then there exist a subsequence F_{k_j} → F* ∈ π[J(F_0)] and a sequence a_{k_j} → 0 such that either F_{k_j} + a_{k_j} S_{k_j} ∉ S_F or

J(F_{k_j}) - J(F_{k_j} + a_{k_j} S_{k_j}) < -γ a_{k_j} tr [∂J/∂F]_{k_j}^T S_{k_j}.

Note that by assumption π[J(F_0)] is a compact nonempty subset of the open set S_F in R^{p×r}, and thus there exists ζ > 0 such that for any F ∈ π[J(F_0)], ||F - F'|| < ζ implies that F' ∈ S_F. Then, due to the boundedness of S(F), or S_k given by equation (27b), in π[J(F_0)], there exists a non-negative integer k̄ such that for all k_j ≥ k̄: F_{k_j} + a_{k_j} S_{k_j} ∈ S_F. Then it follows that

lim_{k_j→∞} [J(F_{k_j}) - J(F_{k_j} + a_{k_j} S_{k_j})] / [-a_{k_j} tr [∂J/∂F]_{k_j}^T S_{k_j}] = 1 > γ

which is a contradiction. Thus there exists a number a > 0 such that a_{k_j} ≥ a. But then

J(F_{k_0}) - lim_{k_i→∞} J(F_{k_i}) = J(F_{k_0}) - J(F*) ≥ γaδ lim_{i→∞} i = ∞

contradicting the property of J(F) being bounded below. Thus -tr [∂J/∂F]_k^T S_k must converge to zero, completing the proof of (29). Equation (30) follows directly from Theorem 1b in Goldstein (1965).

Note that for the linear mapping method, equations (20) and (21) with a_k ≡ 1, it is well known (O'Reilly, 1980; Halyo and Broussard, 1981; Mäkilä, 1983) that the convergence properties (28)-(30) are not guaranteed. In fact, the set of stabilizing feedback gains S_F is then not always even an invariant set of the method, i.e. F_0 ∈ S_F does not imply F_k ∈ S_F for all k > 0. In Halyo and Broussard (1981) it was shown that there exists a number a > 0 such that if 0 < a_k < a, then property (29) is valid under somewhat more restrictive conditions than given here. Halyo and Broussard (1981) suggested the use of the descent property J(F_k) - J(F_k + a_k S_k) ≥ 0 in an LDM algorithm. However, it is not known whether property (29) is then always valid. The LDM algorithm discussed in this paper is similar to the algorithm given in Mäkilä (1983), with a simpler steplength scheme appropriate for adaptive control applications. The algorithm has good global convergence properties and can be generalized to arbitrarily fixed controller structures, c.f. adaptive decentralized control. Properties (28)-(30) of the LDM method are important when discussing the convergence properties of the parametric LQ self-tuner to be described in the next section.

3. A SELF-TUNING REGULATOR

In this section a self-tuning regulator (STR) is developed based on the linear descent mapping (LDM) algorithm for solving the parametric LQ problem, equation (10), or equivalently for solving an optimal output feedback problem. Denote the estimated process model at time t by M_t. Let S_{M_t} be the set of stabilizing feedback gains F, for a specific control law of the form (3), for the model M_t. Assume now that F_{t-1} ∈ S_{M_{t-1}}. The STR is now constructed so that the following two conditions are fulfilled:

(i) Model update condition: F_{t-1} ∈ S_{M_{t-1}} ∩ S_{M_t}.
(ii) Feedback gain update condition: F_t ∈ S_{M_t}.

Conditions (i) and (ii) guarantee that the LDM method is well-defined at each time step. Condition (i) restricts the size of model parameter changes so that the stabilizing set S_{M_t} of the successor model 'remembers' part of S_{M_{t-1}}. Condition (ii) is monitored in the LDM method directly. These conditions are also motivated by robustness and transient performance considerations of the self-tuning regulator. Note that because the LDM method is based on the descent property (27c), it is preferable, e.g. for computational reasons, to update the feedback gains so that condition (27c) is satisfied at each time-step rather than to attempt a complete solution of a parametric LQ problem at each time-step. Thus the computational simplicity of adaptive regulators based on explicit minimization of the loss function (Goodwin and Ramadge, 1979; Ljung and Trulsson, 1981) is obtained, while at the same time the structure of parametric LQ problems can be utilized efficiently in the regulator parameter updating.

A parametric LQ self-tuner

Initialization. Choose F_0 ∈ S_{M_0}. Set t = 0.

Step 1. Update the parameters in the ARMAX model

A(q^-1)y(t) = B(q^-1)u(t - L - 1) + C(q^-1)e(t)    (31)

where y is the m-dimensional output vector, u is the p-dimensional input vector, and L is a time lag. Recursive identification methods that can be used for on-line parameter estimation for the model (31) are e.g. the recursive extended least squares method (RELS) and the recursive maximum likelihood method (RML) (see, e.g. Åström and co-workers, 1977). The estimator then has the structure of a deflected gradient method, i.e. if M_{t-1} is the vector of estimated model parameters at time t - 1, then

M_t = M_{t-1} - μ_t H_t grad_{M_{t-1}} [g(e(t))]    (32)

where H_t is a positive definite matrix, g(·) is a criterion function for parameter estimation, e(t) is the model prediction error, and μ_t ∈ (0, 1] is a steplength parameter. Initially set μ_t = 1. Represent model (31) in the form (1)

x(t + 1) = A x(t) + B u(t) + w(t)    (33a)
z(t) = D x(t) + n(t)    (33b)

so that the desired regulator is given as

u(t) = F z(t).    (34)

If F_{t-1} ∉ S_{M_t}, then find the smallest positive integer n_t such that μ_t = ν^{n_t} in (32) gives F_{t-1} ∈ S_{M_t}, i.e. find M_t to satisfy the model update condition (i), 0 < ν < 1.

Step 2. Compute

S_t = -(1/2)[B^T L(F_{t-1})B + R + Γ_t]^{-1} [∂J/∂F]_{F_{t-1}} [D P_x(F_{t-1})D^T + R_n + Λ_t]^{-1}    (35)

using the model parameters M_t. Let

F_t = F_{t-1} + a_t S_t    (36)

where the steplength parameter a_t = ᾱ_t β^{m_t}, and m_t is the smallest non-negative integer such that F_{t-1} + ᾱ_t β^{m_t} S_t ∈ S_{M_t}, and

J(F_{t-1}) - J(F_{t-1} + ᾱ_t β^{m_t} S_t) ≥ -γ ᾱ_t β^{m_t} tr [∂J/∂F]_{F_{t-1}}^T S_t    (37)

where ᾱ_t ∈ [α, 1], 0 < α < 1, 0 < γ < 1, 0 < β < 1.

Step 3. Compute the new input

u(t) = F_t z(t) + â_t κ(t)    (38)

where κ(t) is an input perturbation, or excitation, signal, e.g. a PRBS-signal, and â_t is an amplitude parameter. Repeat from Step 1. End of algorithm.
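A skeleton of one pass through Steps 1-3 might look as follows. The callables rels_update and realize_state_space are hypothetical stand-ins for the RELS update (32) and the realization (33); ldm_step refers to the sketch given after equation (22). The sketch only illustrates the control flow, in particular the μ_t = ν^{n_t} backtracking that enforces the model update condition (i).

```python
# Skeleton sketch of one iteration of the parametric LQ self-tuner (Steps 1-3).
import numpy as np

def str_iteration(M_prev, F_prev, y_t, u_prev, z_t, Q, R,
                  rels_update, realize_state_space, ldm_step,
                  nu=0.35, prbs_amp=0.05, rng=None):
    rng = rng if rng is not None else np.random.default_rng()
    # Step 1: model update (32), backtracking mu_t = nu**n_t until F_{t-1}
    # is stabilizing for the new model M_t, cf. condition (i) and (4).
    mu = 1.0
    while True:
        M_t = rels_update(M_prev, y_t, u_prev, mu)        # ARMAX update, cf. (31)-(32)
        A, B, D, Rn, Rv = realize_state_space(M_t)        # model M_t in the form (33)
        if max(abs(np.linalg.eigvals(A + B @ F_prev @ D))) < 1.0:
            break
        mu *= nu
    # Step 2: LDM update of the feedback gain, cf. (35)-(37).
    F_t = ldm_step(F_prev, A, B, D, Q, R, Rn, Rv)
    # Step 3: new input with a random binary (PRBS-like) excitation, cf. (38).
    kappa_t = prbs_amp * np.sign(rng.standard_normal(F_t.shape[0]))
    u_t = F_t @ z_t + kappa_t
    return M_t, F_t, u_t
```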

Note that, in practice, Step 3 would be modified to

u(t) = sat[F_t z(t) + â_t κ(t), c^-, c^+]    (39)

where c^+ > c^-, and

sat_i(x, c^-, c^+) = x_i, if c_i^- ≤ x_i ≤ c_i^+;  c_i^-, if x_i < c_i^-;  c_i^+, if x_i > c_i^+.

Equation (39) is an ad hoc way to introduce input signal limits to the self-tuner; see, e.g. Mäkilä (1982b) for a discussion on the inclusion of linear input constraints in the design of some STRs.

In Step 2 of the proposed self-tuning algorithm an estimate of the covariance matrix of the residuals {e(t)} of the model (31) is required. Then, for instance, the following filter can be utilized

R_e(t) = ω_t R_e(t - 1) + (1 - ω_t){θ_t e(t)e(t)^T + (1 - θ_t) diag[e(t)e(t)^T]}    (40)

where R_e(t) is the estimate of the covariance matrix of the residuals at time t, 0 ≤ ω_t ≤ 1, e.g. ω_t = sat[(t - 1)/t, ω^-, ω^+], 0 < ω^- ≤ ω^+ ≤ 1, 0 < θ_t ≤ 1, lim θ_t = 1. Here e(t) is the model prediction error: e(t) = y(t) - ŷ(t|t - 1), where ŷ(t|t - 1) is the prediction of y(t) based on the model M_{t-1}, and diag(·) denotes the diagonal of a matrix.

Note that for notational simplicity the recursive identification of the model (31) has been described as a gradient method (32). However, this is not essential for the construction of the parametric LQ self-tuner, and other types of recursive identification methods could also be used. In fact some of the well-known recursive identification methods (such as RELS and recursive instrumental variable methods) can be interpreted as 'approximative' gradient methods (Ljung and Söderström, 1983). Furthermore, in Step 1 of the proposed self-tuning algorithm the actual choice of the representation of the model (31) in the form (1) has been left to the user. The representation could then be minimal or nonminimal, c.f. the remark in Section 2. It is also straightforward to generalize the LDM algorithm to the case when the noise terms w(t) and n(t) in (33) are correlated, i.e. E w(t)n(t)^T ≠ 0 (c.f. Mäkilä, 1983). Thus the restriction placed by condition (2e) on the choice of the representation (33) is in no way essential. This is important if the parametric LQ self-tuner is based on direct recursive identification of a state space model, which is easiest to perform using the well-known state-space innovations model (Ljung and Söderström, 1983) with correlated noise terms.

In the proposed parametric LQ STR algorithm the regulator parameters are updated using the LDM algorithm for parametric LQ problems. The computational complexity of this explicit STR algorithm corresponds to that of self-tuners based on LQG theory. Convergence and stability properties of explicit STRs are known to be related to convergence of the estimated process model parameters to the true parameters of a linear plant (Ljung and Trulsson, 1981; Landau, M'Saad and Ortega, 1983), or in case of model mismatch to a stabilizable model, and thus it seems that the input signal, c.f. Step 3 of the proposed STR algorithm, must satisfy some persistently exciting condition. If the plant is open-loop unstable, convergence to stabilizing time-invariant regulator parameter settings requires the plant to be stabilizable by the chosen regulator structure.

4. EXAMPLE

An example is now discussed to illustrate some properties of the proposed parametric LQ self-tuner. In Westerlund (1981) a parametric LQ control problem formulation was used to design a fixed parameter regulator for digital quality control for an industrial cement kiln. The following multivariable model of the cement kiln was determined by an identification experiment (Westerlund, 1981)

y(t + 1) = A_1 y(t) + B_0 u(t) + C_1 e(t) + e(t + 1)    (41)

where

A_1 = [0.914  0.0800; -0.126  0.917],  B_0 = [2.091  -0.0744; -0.211  -0.0156]

C_1 = [·  0; 0  0.715],  R_e = [0.0644  0.000257; 0.000257  0.0214]

(the (1,1) entry of C_1 is not legible in the source) and R_e is the covariance matrix for the Gaussian white noise vector e(t). The outputs are the combustion gas temperature of the first preheater (y_1) and the kiln drive power (y_2), and the inputs are the kiln exhaust fan speed (u_1) and the raw material feed rate into the kiln (u_2). The sampling time is 5 min.

Now let P_y = E y(t)y(t)^T, i.e. P_y is the steady-state covariance matrix of the output y(t). In Westerlund (1981) the control criterion was taken to minimize the loss function

J̄ = tr Q_y P_y    (42)

where Q_y is the identity matrix. A variance-constrained LQG formulation (Mäkilä, Westerlund and Toivonen, 1984) was used in Westerlund (1981) to account for the presence of engineering constraints on the permissible input signal variations. The corresponding LQG control problem was found to have the loss function (Westerlund, 1981)

J_L = tr Q_y P_y + tr Q_u P_u    (43)

where Q_u = diag(· , 0.0220) (the first diagonal entry is not legible in the source). The diagonal elements in Q_u are the Lagrange multipliers of the input variance inequality constraints. In Westerlund (1981) it was found that the four-parameter regulator

u(t) = F y(t)    (44)

when optimally tuned, gave almost the same control quality as the optimal full-order regulator minimizing (43).

Consider now self-tuning control of the system (41). The regulator design is based on minimizing the loss function (43) for the regulator structure (44) using the self-tuning algorithm proposed in Section 3. The estimated model is chosen to have the same structure as the system (41). Let x(t) ≡ y(t), z(t) ≡ y(t), and w(t) = C_1 e(t) + e(t + 1). With these definitions it is easy to write the parametric LQ problem in the form (33). The RELS method was used in the parameter estimation with the parameter adjustment steplength restriction scheme as given in Step 1 of the parametric LQ self-tuner, with ν = 0.35. In the self-tuner Γ_t and Λ_t, in Step 2, equation (35), were chosen as

Γ_t = 0, if [B^T L(F_{t-1})B + R] > 0
Γ_t = ε_1 tr[B^T L(F_{t-1})B + R]·I + ε_2·I, otherwise    (45)

where ε_1 = 10^-5 and ε_2 = 10^-20, and analogously for Λ_t. The values of ε_1 and ε_2 are not critical. Furthermore, in the self-tuner γ = 0.1, β = 0.35 and ᾱ_t = 1 were used in (37). Note that it is preferable to use the steplength test (37) in the self-tuner only when ||(∂J/∂F)_{F_{t-1}}|| is larger than some test value, due to truncation errors caused by finite precision arithmetic. In the simulations the initial values of the parameters were A_1 = B_0 = C_1 = F = 0. In the RELS method the normalized parameter estimation error covariance matrix P_t was started at P_0 = 10·I, and the exponential forgetting factor λ_t was chosen as λ_t = λ_{t-1}(200 - λ_{t-1})/199, with λ_0 = 0.95. An estimate of the covariance matrix R_e of the residuals was obtained using (40) with ω_t = t/(t + 1), and θ_t = θ_{t-1}(50 - θ_{t-1})/49 with θ_0 = 0.5. PRBS-signals, with amplitudes [0.0075, 0.15]^T, were added to the input signals according to equation (39). The absolute values of the input signals were limited to ≤ [0.2, 3.0]^T. During the 20 first simulation steps the input signals were pure PRBS-signals.

Figures 1-4 show the results of a 2000-step long simulation. For comparison the simulation was repeated with the fixed parameter regulator

u(t) = [-0.1779  0.1258; 2.086  1.912] y(t)    (46)

which was obtained by minimizing the loss function (43) with the LDM method for the regulator structure (44). The sample averages for the output and input variances were as follows (after 2000 steps):

              Self-tuner   Fixed regulator (46)   Regulator (47)
σ²(y_1)       0.1103       0.0930                 0.1009
σ²(y_2)       0.1797       0.1635                 0.1685
σ²(u_1)       0.003945     0.003846               0.004039
σ²(u_2)       1.693        1.4126                 1.826

where the last two columns were obtained by repeating the simulation with the fixed parameter regulators (46) and

u(t) = [-0.1513  0.1499; 2.236  2.354] y(t)    (47)

as well. The regulator settings (47) are those of the STR algorithm after 2000 steps. The results indicate that the self-tuning algorithm can give near-optimal control performance. Especially if the STR algorithm is used only for autotuning of the regulator parameters in (44), relatively high PRBS-signal amplitudes can be tolerated in the inputs, facilitating the convergence of the regulator parameters close to their optimal values in a relatively short tuning period.
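To make the mapping of the example into the form (33) concrete, the sketch below evaluates the loss (43) for a static output-feedback gain on the kiln model (41), using x(t) = y(t), D = I, R_n = 0 and the coloured-noise terms (9c)-(9e) with nG = 1 (G_0 = I, G_1 = C_1, R_v = R_e). Since the first entries of C_1 and Q_u are not legible in the source, they are passed in as arguments; the values used for them in the usage example are placeholders, not data from the paper.

```python
# Sketch: steady-state loss (43) for a static gain F on the kiln model (41).
import numpy as np
from scipy.linalg import solve_discrete_lyapunov

A1 = np.array([[0.914, 0.0800], [-0.126, 0.917]])
B0 = np.array([[2.091, -0.0744], [-0.211, -0.0156]])
Re = np.array([[0.0644, 0.000257], [0.000257, 0.0214]])

def kiln_loss(F, C1, Qy, Qu):
    """J_L = tr(Qy Py) + tr(Qu Pu), cf. (43) and (8)-(9) with nG = 1."""
    Acl = A1 + B0 @ F                          # D = I, so A + BFD = A1 + B0 F
    if max(abs(np.linalg.eigvals(Acl))) >= 1.0:
        return np.inf                          # F not stabilizing, cf. (4)
    Pw0 = Re + C1 @ Re @ C1.T                  # (9c): G0 Rv G0^T + G1 Rv G1^T
    P0 = Re @ C1.T                             # (9d)-(9e): P0 = Pw,1 = G0 Rv G1^T
    W = Pw0 + Acl @ P0 + (Acl @ P0).T          # driving term of (9a), Rn = 0
    Py = solve_discrete_lyapunov(Acl, W)       # Py = Px, since y(t) = x(t)
    Pu = F @ Py @ F.T                          # (9b) with D = I, Rn = 0
    return np.trace(Qy @ Py) + np.trace(Qu @ Pu)

# Usage with the fixed regulator (46); the first entries of C1 and Qu below
# are placeholders, not values from the paper.
F46 = np.array([[-0.1779, 0.1258], [2.086, 1.912]])
C1 = np.array([[0.0, 0.0], [0.0, 0.715]])
print(kiln_loss(F46, C1, Qy=np.eye(2), Qu=np.diag([0.1, 0.0220])))
```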

[FIG. 1. The outputs and inputs for the 2000-step long simulation with the parametric LQ self-tuner for the example of Section 4.]

[FIG. 2. Parameter estimates of the process model in the STR simulation in the example of Section 4.]

[FIG. 3. Regulator parameters in the STR simulation in the example of Section 4.]

[FIG. 4. The accumulated loss J_acc = Σ_{t=0}^{2000} y(t)^T (Q_y + F_t^T Q_u F_t) y(t) during the 2000-step long simulation of the system in the example of Section 4 for (a) the STR algorithm and (b) the fixed parameter regulator (46).]

5. CONCLUSIONS

An explicit self-tuning regulator has been described based on optimal output feedback theory. The parametric LQ self-tuner utilizes the computationally attractive linear descent mapping method for solving parametric LQ problems. The proposed self-tuner is a generalization of state-space self-tuners to parametric LQ problems. The regulator structure is specified separately from that of the process model. This may offer advantages, for instance in autotuning of simple regulators, such as PID-regulators. Thus if the estimated model parameters converge to the true parameters of a linear plant of any order, the proposed self-tuner can converge to a minimizer of the steady-state loss function for the chosen regulator structure. The parametric LQ self-tuner can be generalized to arbitrarily fixed linear regulator structures using an appropriate form of the linear descent mapping method (Mäkilä, 1982a; Mäkilä, Westerlund and Toivonen, 1984). This makes the STR algorithm useful also in adaptive decentralized control.


Acknowledgements--This work was done while the author was a Senior Fulbright Scholar at the Department of Chemical Engineering, University of California, Berkeley. Financial support from the Academy of Finland is also deeply appreciated.

REFERENCES

Anderson, B. D. O. and J. B. Moore (1971). Linear Optimal Control. Prentice-Hall, New Jersey.
Åström, K. J. (1981). Theory and applications of adaptive control. Proceedings of the 8th Triennial World Congress IFAC, Kyoto.
Åström, K. J., U. Borisson, L. Ljung and B. Wittenmark (1977). Theory and applications of self-tuning regulators. Automatica, 13, 457-476.
Åström, K. J. and B. Wittenmark (1973). On self-tuning regulators. Automatica, 9, 185-199.
Clarke, D. W. and P. J. Gawthrop (1975). Self-tuning controller. Proc. IEE, 122, 929-934.
Dumont, G. A. and P. R. Belanger (1978). Self-tuning control of a titanium dioxide kiln. IEEE Trans. Aut. Control, AC-23, 532-538.
Fletcher, R. (1980). Practical Methods of Optimization I. John Wiley, New York.
Goldstein, A. A. (1965). On steepest descent. J. SIAM Control, 3, 147-151.
Goodwin, G. C. and P. J. Ramadge (1979). Design of restricted complexity adaptive regulators. IEEE Trans. Aut. Control, AC-24, 584-588.
Hallager, L. and S. Bay Jorgensen (1983). Adaptive control of chemical engineering processes. Preprints of the IFAC Workshop on Adaptive Systems in Control and Signal Processing, San Francisco.
Halyo, N. and J. R. Broussard (1981). A convergent algorithm for the stochastic infinite-time discrete optimal output feedback problem. Joint Automatic Control Conference, Charlottesville.
Landau, I. D., M. M'Saad and R. Ortega (1983). Adaptive controllers for discrete-time systems with arbitrary zeros. A survey. Preprints of the IFAC Workshop on Adaptive Systems in Control and Signal Processing, San Francisco.
Ljung, L. and T. Söderström (1983). Theory and Practice of Recursive Identification. MIT Press, Massachusetts.
Ljung, L. and E. Trulsson (1981). Adaptive control based on explicit criterion minimization. Proceedings of the 8th Triennial World Congress IFAC, Kyoto.
Mäkilä, P. M. (1982a). Constrained linear quadratic gaussian control for process application. Ph.D. Thesis, Åbo Akademi, Åbo, Finland.
Mäkilä, P. M. (1982b). Self-tuning control with linear input constraints. Optimal Control Appl. Meth., 3, 337-353.
Mäkilä, P. M. (1983). Linear quadratic design of structure-constrained controllers. Proceedings of the 2nd American Control Conference, San Francisco.
Mäkilä, P. M., T. Westerlund and H. Toivonen (1984). Constrained linear quadratic gaussian control with process applications. Automatica (in press).
O'Reilly, J. (1980). Optimal low-order feedback controllers for linear discrete-time systems. In Control and Dynamic Systems 16. Academic Press, New York.
Srinivasa, Y. G. and T. Rajagopalan (1979). Algorithms for the computation of optimal output feedback gains. Proceedings of the 18th IEEE Conference on Decision and Control, Fort Lauderdale.
Söderström, T. (1978). On some algorithms for design of optimal constrained regulators. IEEE Trans. Aut. Control, AC-23, 1100-1101.
Westerlund, T. (1981). A digital quality control system for an industrial dry process rotary cement kiln. IEEE Trans. Aut. Control, AC-26, 885-890.
Westerlund, T., H. Toivonen and K.-E. Nyman (1980). Stochastic modeling and self-tuning control of a continuous cement raw material mixing system. Modeling, Identification and Control, 1, 17-37.