Existence of minimum upcrossing controllers

Pergamon. PII: S0005-1098(96)00256-7. Automatica, Vol. 33, No. 5, pp. 871-879, 1997. © 1997 Elsevier Science Ltd. All rights reserved. Printed in Great Britain. 0005-1098/97 $17.00 + 0.00
Existence of Minimum Upcrossing Controllers*

A. HANSSON† and P. HAGANDER‡

An optimal stochastic control problem that minimizes the probability that a signal upcrosses a level is solved by rewriting it as a one-parametric optimization problem over a set of LQG control problem solutions.

Key Words - Stochastic control; linear optimal control; LQG control; level-crossings; discrete-time systems; multivariable control systems; stability; state-space methods.

Abstract - An optimal stochastic control problem that minimizes the probability that a signal upcrosses a level is solved by rewriting it as a one-parametric optimization problem over a set of LQG control problem solutions. Finding the optimal controller can be interpreted as finding an optimal costing transfer function. The existence of the optimal controller is investigated in a constructive way, and it is shown that it is equivalent to the existence of a controller with sufficiently small closed-loop variance of the controlled signal. © 1997 Elsevier Science Ltd.

1. INTRODUCTION

In many control problems the primary goal is to keep the controlled signal near a certain reference value. Sometimes it is also of interest to consider a secondary goal of preventing the controlled signal from upcrossing a level, where the upcrossing would cause some undesirable event such as, for example, emergency shutdown or instability. The distance between the level and the reference value is normally not small, since otherwise the upcrossing intensity would be intolerably high. However, there may be other control objectives that make it undesirable or impossible to choose the distance large. An example of a problem of this kind can be found in Borisson and Syding (1976), where the power of an ore crusher should be kept as high as possible without exceeding a certain level, in order that the overload protection does not cause shutdown. Another example is moisture control of a paper machine, where it is desired to keep the moisture content as high as possible without causing wet streaks (Åström, 1970, pp. 188-209). Yet another example is power control of wind power plants, where the supervisory system initiates emergency shutdown if the generated power exceeds 140% of rated power (Mattsson, 1984). Other examples can be found in Hansson (1995) and the references therein.

The proposed controller, the minimum upcrossing (MU) controller, is obtained by minimizing the mean number of upcrossings of the critical level during a sample interval. In Hansson (1993a) the problem was solved in the continuous-time case in terms of necessary conditions; here the discrete-time case is treated in terms of both sufficient and necessary conditions. Necessary conditions for the discrete-time case have been described previously (Hansson, 1994). Hansson (1994, 1995) has used the MU controller to approximate the so-called minimum-risk criterion and the mean time between failures criterion. Hansson (1993b) has also given sufficient conditions for the MU controller in the special case when the controlled process is a scalar ARMAX process. There it is shown that the optimal controller can be found by solving a set of minimum variance control problems parameterized by a scalar, and finding the optimal controller can be interpreted as finding an optimal costing transfer function for the system output. For the more general multivariable process models treated in Hansson (1994) the solution is found by solving a set of LQG problems parameterized by a scalar, and the solution can be interpreted as finding optimal weightings in an LQG problem. The existence of the optimal controller is more difficult in the general case. This paper extends the existence results from the scalar ARMAX case to the case with several measurement signals. It also covers the case with a direct term from the control signal to the controlled signal.

* Received 11 January 1993; revised 24 February 1995, 29 January 1996; received in final form 19 November 1996. An earlier version of this paper was presented at the IFAC Symposium on Robust Control Design held in Rio de Janeiro, Brazil, 1994. This paper was recommended for publication in revised form by Associate Editor René K. Boel under the direction of Editor Tamer Başar. Corresponding author Dr Anders Hansson. Tel. +1-415-7233024; Fax +1-415-7238473; E-mail [email protected].
† Information Systems Laboratory, Durand 101A, Stanford University, Stanford, CA 94305-4055, U.S.A.
‡ Department of Automatic Control, Lund Institute of Technology, P.O. Box 118, S-221 00 Lund, Sweden.


Engineering motivations for the MU controller are given in Hansson (1995). Therein, the performance of the MU controller is compared with that of the minimum-variance controller. Road normal force control in automotive suspensions is also investigated.

Only the case of a linear process controlled by a linear controller will be treated here. With Gaussian disturbances acting on the system, the closed-loop process is also Gaussian. It is very likely that a nonlinear controller would do better. However, the non-Gaussian nature of the closed-loop process would make the analysis much harder. Preliminary results are given in Andersson and Hansson (1994).

The remainder of this paper is organized as follows. In Section 2 the control problem is formulated. It is an optimal stochastic control problem. In Section 3 the problem presented in Section 2 is solved, i.e. a solution procedure together with the necessary and sufficient conditions for existence of a solution are given. It will be seen that finding the optimal controller can be interpreted as finding an optimal costing transfer function for an LQG problem, and that the existence of the optimal controller is equivalent to the existence of a controller with sufficiently small closed-loop variance. Finally, in Section 4 the results are summarized.

2. CONTROL PROBLEM

Let the Gaussian sequence z be defined by

x(k+1) = A x(k) + B u(k) + E e(k),
z(k) = C_1 x(k) + D u(k) + G e(k),      (1)
y(k) = C_2 x(k) + F e(k),

where x(k) ∈ R^n is the state, u(k) ∈ R is the scalar control signal, z(k) ∈ R is the scalar signal to be controlled, y(k) ∈ R^p is the measurement signal, e(k) ∈ R^l is a sequence of independent zero-mean Gaussian random variables with covariance I, and A, B, C_1, C_2, D, E, F, and G are matrices of compatible dimensions. It will be assumed that (A, B) is stabilizable, that (C_2, A) is detectable, that

rank [zI − A  −B; C_1  D] = n + 1,  |z| = 1,      (2)

and that

rank [zI − A  −E; C_2  F] = n + 1,  |z| = 1.      (3)

Let the control signal be given by

η(k+1) = A_H η(k) + B_H y(k),
u(k) = −C_H η(k) − D_H y(k) + D_r r(k),      (4)

where r(k) ∈ R is the reference value, η is the finite-dimensional state of the controller, and A_H, B_H, C_H, D_H, and D_r are matrices of compatible dimensions. As only constant reference values will be considered, it is no loss of generality to assume that r(k) = 0 by a change of coordinates. This implies that the mean of z in stationarity is zero if a stabilizing controller is used. Denote by σ_z² the variance of z(k) in stationarity. Introduce the notation H = {A_H, B_H, C_H, D_H} for the controller defined by (4). Further, let

D = {H as defined in (4)},
D_s = {H ∈ D | H stabilizes (1)},
D_z = {H ∈ D_s | σ_z ≤ z_0},

where z_0 ∈ R_+ is the critical level that should not be upcrossed. Stability is defined such that the eigenvalues of the closed-loop system matrix have absolute values strictly less than one. Introduce the following performance index:

μ(H; z_0) = lim_{k→∞} P{z(k) ≤ z_0 ∩ z(k+1) > z_0},      (5)

where P denotes the probability measure induced by the random process e. The quantity μ will in the sequel be called the upcrossing probability, and it is equal to the mean number of upcrossings during a sample interval (see e.g. Cramér and Leadbetter, 1967, p. 281). The use of lim_{k→∞} will in the sequel be suppressed for ease of notation. The solution to

min_{H ∈ D_z} μ(H; z_0)      (6)

will be called the MU controller. There is no loss of generality to assume that x(0) = 0 and η(0) = 0. This follows from the fact that μ is evaluated in stationarity, and hence the initial values do not influence μ for any H ∈ D_z ⊆ D_s. The restriction to D_z will exclude the degenerate solution σ_z = ∞ for minimizing μ. Also, assume that there exists no controller in D_z such that σ_z = 0. Had this been the case, then this controller would trivially minimize μ, and the minimal value would be zero. The following lemma gives an expression for the upcrossing probability μ in (5) in terms of a double integral.

Lemma 1. For any H ∈ D_s it holds that

μ = ∫_0^∞ (1/σ_β) φ(y/σ_β) ∫_{x_l(y)}^{x_u(y)} φ(x) dx dy,

where

φ(x) = (1/√(2π)) exp(−x²/2),
x_u(y) = (2z_0 + y)/σ_α,
x_l(y) = (2z_0 − y)/σ_α,

and where σ_α² and σ_β² are the stationary variances of the independent variables

α(k) = z(k) + z(k−1),  β(k) = z(k) − z(k−1).      (7)

Proof. First notice that σ_z > 0 implies that σ_α² > 0 and σ_β² > 0. Assume the contrary. Start with σ_α² = 0. Then

0 = σ_α² = E{(z(k) + z(k−1))²} = 2σ_z² + 2E{z(k)z(k−1)},

where E denotes expectation with respect to the probability measure induced by the random process e. This implies that the correlation coefficient between z(k) and z(k−1) is equal to −1. This contradicts that z is a stable process. The proof goes along the lines of Hansson (1994, Proof of Theorem 1 and footnote on p. 1491). If σ_β² = 0, then the correlation coefficient between z(k) and z(k−1) is equal to 1, which also contradicts that z is a stable process. Hence the expressions in the lemma involving the inverses of σ_α and σ_β are well defined. Furthermore, by (7) and since α and β are independent, it holds that

μ = P{|α − 2z_0| < β},

from which the result follows by a change of variables. □
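Lemma 1 reduces the upcrossing probability to a double integral over the independent variables α and β. A minimal numerical sketch, with illustrative values for σ_α, σ_β, and z_0 that are not taken from the paper, evaluating the inner integral in closed form and cross-checking against Monte Carlo sampling of μ = P{|α − 2z_0| < β}:

```python
import numpy as np
from scipy.integrate import quad
from scipy.stats import norm

def upcrossing_probability(sigma_a, sigma_b, z0):
    """Lemma 1: mu = P{|alpha - 2 z0| < beta} for independent zero-mean
    Gaussians alpha, beta; the inner integral is a difference of normal cdfs."""
    def integrand(y):
        return (norm.pdf(y / sigma_b) / sigma_b
                * (norm.cdf((2 * z0 + y) / sigma_a)
                   - norm.cdf((2 * z0 - y) / sigma_a)))
    mu, _ = quad(integrand, 0.0, np.inf)
    return mu

# Monte Carlo cross-check with illustrative (made-up) variances.
rng = np.random.default_rng(0)
sigma_a, sigma_b, z0 = 2.0, 1.0, 1.0
alpha = sigma_a * rng.standard_normal(1_000_000)
beta = sigma_b * rng.standard_normal(1_000_000)
mu_mc = np.mean(np.abs(alpha - 2 * z0) < beta)
mu_quad = upcrossing_probability(sigma_a, sigma_b, z0)
```

The agreement of the two estimates illustrates the change of variables in the proof; in the paper, μ is subsequently minimized over the admissible variance pairs.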

Thus μ is easily calculated using some numerical routine. Further, μ only depends on the variances of α and β and the critical level z_0. This dependence will be investigated in the following lemma.

Lemma 2. Let

Y(r) = {(σ_α², σ_β²) ∈ R² | σ_z ≤ r, σ_α > 0, σ_β > 0},

where r > 0. Then for any H ∈ D_s the upcrossing probability μ in (5) has strictly positive partial derivatives with respect to both σ_α and σ_β on Y(r), if and only if r ≤ z_0.

Proof. See Hansson (1994). □

According to Lemma 1 the criterion can be minimized by varying μ(H; z_0) with respect to the admissible variances of α and β, and Lemma 2 indicates that the optimal controller can be found among all the controllers that make the variances of α and β jointly minimal. This will be elaborated further in the following section.

3. REGULATOR DESIGN

In this section the problem of minimizing the upcrossing probability is rephrased as a minimization over a set parameterized by a scalar. It is also shown how this set can be obtained by solving LQG control problems. Furthermore, the existence of the optimal controller is investigated. It is shown that the existence is equivalent to the existence of a controller with sufficiently small closed-loop variance. Introduce

J(H, ρ) = E{(1 − ρ)α²(k) + ρβ²(k)},      (8)

H ∈ D_s, ρ ∈ [0, 1], and consider the optimization problem

min_{H ∈ D_s} J(H, ρ),  ρ ∈ [0, 1].      (9)

Note that J is to be evaluated in stationarity. It will be seen that the minimization of μ in (5) over D_z can be done by first solving the optimization problem (9) for all ρ ∈ [0, 1] and then minimizing μ over the solutions obtained in the first minimization, i.e. over V_J ∩ D_z, where

V_J = {H*(ρ) ∈ D_s | H*(ρ) = argmin_{H ∈ D_s} J(H, ρ), ρ ∈ [0, 1]},
W_J = {(σ_α²(H), σ_β²(H)) ∈ R² | H ∈ V_J},
W_z = {(σ_α², σ_β²) ∈ R² | σ_z ≤ z_0, σ_α > 0, σ_β > 0},

and where σ_α² and σ_β² are the variances of α and β. More precisely, the following theorem holds.

Theorem 1. With Lemma 1 one may define the function μ_1(σ_α(H), σ_β(H); z_0) = μ(H; z_0). Let

V_μ = {H* ∈ D_z | H* = argmin_{H ∈ D_z} μ_1(σ_α(H), σ_β(H); z_0)},
W_μ = {(σ_α²(H), σ_β²(H)) | H ∈ V_μ}.

Then under the assumptions of Section 2 it holds that

V_μ ⊆ V_J ∩ D_z

and

inf_{H ∈ D_z} μ(H; z_0) = inf_{H ∈ V_J ∩ D_z} μ(H; z_0),

where the left-hand sides are nonempty if, and only if, W_z is nonempty.

Remark 1. Note that another way to formulate the result is to say that, for H* ∈ V_μ,

μ_1(σ_α(H*), σ_β(H*); z_0) = min_{(σ_α², σ_β²) ∈ W_J ∩ W_z} μ_1(σ_α, σ_β; z_0).
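Theorem 1 turns the design into a one-parameter search: for each ρ an inner LQG problem yields a variance pair, and μ_1 is then minimized over the resulting curve. A sketch of the outer search, where a made-up smooth map from ρ to the variance pair stands in for the LQG step (the map is hypothetical; only the search structure is from the paper):

```python
import numpy as np
from scipy.integrate import quad
from scipy.stats import norm

def mu1(sigma_a, sigma_b, z0):
    # Upcrossing probability of Lemma 1 as a function of the two deviations.
    f = lambda y: (norm.pdf(y / sigma_b) / sigma_b
                   * (norm.cdf((2 * z0 + y) / sigma_a)
                      - norm.cdf((2 * z0 - y) / sigma_a)))
    return quad(f, 0.0, np.inf)[0]

def variances(rho):
    # Hypothetical trade-off curve: sigma_alpha nondecreasing and
    # sigma_beta nonincreasing in rho, the monotonicity the paper derives
    # for the true LQG solutions.
    return np.sqrt(1.0 + 2.0 * rho ** 2), np.sqrt(0.5 + 2.0 * (1.0 - rho) ** 2)

z0 = 2.0
rhos = np.linspace(0.0, 1.0, 101)
mus = [mu1(*variances(r), z0) for r in rhos]
rho_star = rhos[int(np.argmin(mus))]
```

A finer search (or a scalar minimizer) could replace the grid; the point is that only one scalar parameter remains after the LQG step.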

The minimization of μ_1 over W_J ∩ W_z will be shown to be a one-parameter optimization problem over ρ ∈ [0, 1].

Remark 2. It will be seen that the elements of V_J can be obtained by solving LQG problems.

Remark 3. Note that an alternative formulation of the existence result is that there exists an MU controller if, and only if, there exists a controller with a closed-loop standard deviation σ_z ≤ z_0. Such a controller, if it exists, is obtained by minimizing σ_z.

The remainder of this paper is concerned with proving the above theorem and showing how to compute the controllers that minimize J. The difficult part of the proof, Lemmas 5-7, is to establish the continuity of σ_α²(H_ρ) and σ_β²(H_ρ) at ρ = 0, 1 when these variances are bounded. This is then used to show that the set W_J ∩ W_z is connected, closed, and bounded, from which the proof can be derived fairly easily. This way of proof gives as a byproduct some useful properties of W_J. First, however, the minimization of J will be shown to be equivalent to solving LQG problems. Sufficient and necessary conditions for when these LQG problems have solutions will be given. It will also be seen that the solutions, whenever they exist, are unique with respect to the transfer function relating the measurement signal to the control signal. This will be done in Lemma 3 and Theorem 2.

Lemma 3. Let a(ρ) = √(1−ρ) + √ρ and b(ρ) = √(1−ρ) − √ρ, and introduce the filtered output

z̄(k) = ((a(ρ)q + b(ρ))/q) z(k),

where the operator q is defined via q z(k−1) = z(k). Then for any H ∈ D_s it holds that the loss function J in (8) can be written as

J = E{z̄²(k)}.

Proof. It holds that

J = E{(√(1−ρ) α(k))² + (√ρ β(k))²} = E{(√(1−ρ) α(k) + √ρ β(k))²} = E{z̄²(k)},

where the second equality follows from noting that E{αβ} = 0 in stationarity, and the third from √(1−ρ)α(k) + √ρβ(k) = a(ρ)z(k) + b(ρ)z(k−1) = z̄(k). □

Remark 4. The case ρ = 0.5, with J = E{z²(k) + z²(k−1)}, corresponds to minimum σ_z control.

Remark 5. Note that the problem of minimizing the upcrossing probability can be interpreted as finding an optimal costing transfer function for the signal z(k) to be controlled. Costing transfer functions have been proposed (e.g. Clarke and Gawthrop, 1979) to generalize minimum variance control.

A state-space realization for z̄ is, for example, given by the augmented system

x̄(k+1) = Ā x̄(k) + B̄ u(k) + Ē e(k),
z̄(k) = C̄_1 x̄(k) + D̄ u(k) + Ḡ e(k),      (10)
y(k) = C̄_2 x̄(k) + F̄ e(k),

where x̄ᵀ(k) = (xᵀ(k)  ζ(k)), with ζ(k) = b(ρ)z(k−1), and where

Ā = [A  0; bC_1  0],  B̄ = [B; bD],  Ē = [E; bG],
C̄_1 = (aC_1  1),  C̄_2 = (C_2  0),  D̄ = aD,  F̄ = F,  Ḡ = aG.

The equations for solving LQG problems are summarized below. Let S, L, L_1, and L_2 be solutions of

S = (Ā − B̄L)ᵀ S (Ā − B̄L) + (C̄_1 − D̄L)ᵀ(C̄_1 − D̄L),
G_S = D̄ᵀD̄ + B̄ᵀSB̄,      (11)
L = G_S⁻¹(B̄ᵀSĀ + D̄ᵀC̄_1),

and let P, K, K_1, K_2, and K_3 be solutions of

P = (Ā − KC̄_2) P (Ā − KC̄_2)ᵀ + (Ē − KF̄)(Ē − KF̄)ᵀ,
H_P = F̄F̄ᵀ + C̄_2 P C̄_2ᵀ,      (12)
K = (ĀPC̄_2ᵀ + ĒF̄ᵀ)H_P⁻¹,

where S and P are the maximal real symmetric positive semidefinite solutions of the Riccati equations. Introduce the control signal

u(k) = −L x̂(k) − D_c ỹ(k),      (13)
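Lemma 3's identity J = E{z̄²(k)} can be checked exactly from second moments; a small sketch for a stationary AR(1) model of z, an illustrative process that is not from the paper:

```python
import numpy as np

# z(k) = phi z(k-1) + e(k), var e = 1: stationary variance and lag-1 covariance.
phi = 0.6
var_z = 1.0 / (1.0 - phi ** 2)
r1 = phi * var_z                      # E{z(k) z(k-1)}

for rho in np.linspace(0.0, 1.0, 11):
    a = np.sqrt(1.0 - rho) + np.sqrt(rho)
    b = np.sqrt(1.0 - rho) - np.sqrt(rho)
    var_alpha = 2 * var_z + 2 * r1    # alpha(k) = z(k) + z(k-1)
    var_beta = 2 * var_z - 2 * r1     # beta(k)  = z(k) - z(k-1)
    J = (1.0 - rho) * var_alpha + rho * var_beta
    var_zbar = (a ** 2 + b ** 2) * var_z + 2 * a * b * r1  # zbar = a z(k) + b z(k-1)
    assert np.isclose(J, var_zbar)    # Lemma 3: J = E{zbar^2}
```

The check works for any stationary z because a² + b² = 2 and 2ab = 2(1 − 2ρ), which is exactly the weighting (1 − ρ, ρ) of α and β.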

where x̂(k) is given by the Kalman filter

x̂(k+1) = Ā x̂(k) + B̄ u(k) + K ỹ(k),

and where ỹ(k) = y(k) − C̄_2 x̂(k) is the innovation sequence. Equivalently, the controller can be expressed as

u(k) = −[C_c(qI − A_c)⁻¹B_c + D_c] y(k),      (14)

where

A_c = Ā − B̄L − KC̄_2 + B̄D_cC̄_2,
B_c = K − B̄D_c,
C_c = L − D_cC̄_2,
D_c = LK_1 + L_2K_2.

Theorem 2. Assume that the conditions given in Section 2 hold. If ρ ∈ (0, 1), then the controller defined by (14) is a solution to (9). If ρ = 0, 1, then there exists a solution to (9) if, and only if, rank (zI − A_cl  B_cl) < n + 1 for z = −1, 1, respectively. In case of existence the optimal controller is given by, for example, a minimal realization of {A_c, B_c, C_c, D_c}. Furthermore, the transfer function of the optimal controller is unique for all ρ ∈ [0, 1].

Proof. See Appendix. □

Remark 6. Note that there always exist maximal solutions to the Riccati equations (see e.g. Hagander and Hansson, 1995). Furthermore, it holds that the controller defined above always makes the performance index attain its infimal value. The only problem is that it may not be stabilizing for ρ = 0, 1. This is due to the fact that −1, 1, respectively, will then be an eigenvalue of A_cl.

Remark 7. The condition for existence of a solution for ρ = 0, 1 is equivalent to the unstable eigenvalue of A_cl at −1, 1, respectively, being uncontrollable from B_cl, because the same eigenvalue is then uncontrollable in the controller realization {A_c, B_c, C_c, D_c}.

Remark 8. Note that for ρ = 0, 1 the LQG controller may be used in the form {A_c, B_c, C_c, D_c} rather than as a minimal realization. Hence the uncontrollable modes do not have to be removed when computing variances, as long as the initial covariance is zero. Remember that x(0) = 0 and η(0) = 0 were assumed in Section 2. This implies that the initial covariance is zero.

The closed loop is governed by

x̂(k+1) = A_cl x̂(k) + B_cl ỹ(k),      (15)

where A_cl = Ā − B̄L and B_cl = K − B̄D_c. The closed-loop behavior of α and β is governed by

α(k+1) = C_z(A_cl + I)x̂(k) + (C_zB_cl + G_z)ỹ(k) + G_z ỹ(k+1),
β(k+1) = C_z(A_cl − I)x̂(k) + (C_zB_cl − G_z)ỹ(k) + G_z ỹ(k+1),      (16)

where C_z = (C_1  0) − DL and G_z = G − DD_cF.

Now, the observability in α and β of the unstable mode of A_cl for ρ = 0, 1 will be investigated. To this end the equations given above are not suitable. Instead the following equations for α and β are used:

α(k) = (C_α − DL)x̂(k) + (C_α − DD_cC̄_2)x̃(k) + (G − DD_cF)e(k),
β(k) = (C_β − DL)x̂(k) + (C_β − DD_cC̄_2)x̃(k) + (G − DD_cF)e(k),      (17)

where x̃(k) = x̄(k) − x̂(k), C_α = (C_1  1/b), and C_β = (C_1  −1/b), ρ ≠ 0.5.

Lemma 4. Under the conditions stated in Section 2 it holds that the unstable eigenvalue of A_cl at −1 for ρ = 0 is observable in β but not in α. Furthermore, it holds that the unstable eigenvalue of A_cl at 1 for ρ = 1 is observable in α but not in β.

Proof. See Appendix. □

Lemma 5. Let H_1 and H_2 be two controllers that solve (9) for ρ_1 and ρ_2, respectively, where 0 ≤ ρ_1 < ρ_2 ≤ 1. Let the corresponding variances of α and β be σ_α²(H_1), σ_α²(H_2), σ_β²(H_1), and σ_β²(H_2). It then holds that

σ_α²(H_2) ≥ σ_α²(H_1)  and  σ_β²(H_1) ≥ σ_β²(H_2).

It also follows that

J(H_1, ρ_1) ≤ J(H_2, ρ_2) + (ρ_1 − ρ_2)[σ_β²(H_2) − σ_α²(H_2)],
J(H_2, ρ_2) ≤ J(H_1, ρ_1) + (ρ_2 − ρ_1)[σ_β²(H_1) − σ_α²(H_1)].
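The gains L and K in (11) and (12) solve standard discrete-time Riccati equations, so off-the-shelf solvers apply. A sketch with SciPy on an illustrative system (all matrices made up; the cross terms induced by the paper's D̄ and Ḡ couplings are omitted, and a small control weight R is added as a regularizing assumption):

```python
import numpy as np
from scipy.linalg import solve_discrete_are

# Illustrative system matrices (not from the paper).
A = np.array([[0.9, 0.2], [0.0, 0.7]])
B = np.array([[0.0], [1.0]])
C1 = np.array([[1.0, 0.0]])   # controlled output
C2 = np.array([[1.0, 0.0]])   # measurement
E = 0.1 * np.eye(2)           # process-noise input
F = np.array([[0.2]])         # measurement-noise input

# Control Riccati: S = A'SA - A'SB (R + B'SB)^{-1} B'SA + C1'C1.
Q, R = C1.T @ C1, np.array([[0.01]])
S = solve_discrete_are(A, B, Q, R)
L = np.linalg.solve(R + B.T @ S @ B, B.T @ S @ A)   # state-feedback gain

# Filter Riccati (dual problem): predictor error covariance and Kalman gain.
P = solve_discrete_are(A.T, C2.T, E @ E.T, F @ F.T)
K = (A @ P @ C2.T) @ np.linalg.inv(C2 @ P @ C2.T + F @ F.T)
```

Both closed-loop matrices A − BL and A − KC2 are then stable, which is exactly the property the paper's existence analysis probes at the boundary cases ρ = 0, 1.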

Proof. Introduce a_i = σ_α²(H_i) and b_i = σ_β²(H_i), i = 1, 2. Then it holds that

J(H_1, ρ_1) = (1 − ρ_1)a_1 + ρ_1b_1 ≤ (1 − ρ_1)a_2 + ρ_1b_2 = J(H_2, ρ_1),
J(H_2, ρ_2) = (1 − ρ_2)a_2 + ρ_2b_2 ≤ (1 − ρ_2)a_1 + ρ_2b_1 = J(H_1, ρ_2),

i.e.

(1 − ρ_1)(a_2 − a_1) + ρ_1(b_2 − b_1) ≥ 0,
(1 − ρ_2)(a_1 − a_2) + ρ_2(b_1 − b_2) ≥ 0,

so that (ρ_2 − ρ_1)[(a_2 − a_1) + (b_1 − b_2)] ≥ 0. Furthermore, the first inequality gives (1 − ρ_1)(a_2 − a_1) ≥ ρ_1(b_1 − b_2) and the second gives ρ_2(b_1 − b_2) ≥ (1 − ρ_2)(a_2 − a_1). Multiplying the former by ρ_2 and inserting the latter yields

ρ_2(1 − ρ_1)(a_2 − a_1) ≥ ρ_1ρ_2(b_1 − b_2) ≥ ρ_1(1 − ρ_2)(a_2 − a_1),

so that (ρ_2 − ρ_1)(a_2 − a_1) ≥ 0, i.e. a_2 ≥ a_1. Analogously,

(1 − ρ_1)ρ_2(b_1 − b_2) ≥ (1 − ρ_1)(1 − ρ_2)(a_2 − a_1) ≥ (1 − ρ_2)ρ_1(b_1 − b_2),

so that (ρ_2 − ρ_1)(b_1 − b_2) ≥ 0, i.e. b_1 ≥ b_2. This proves the first two inequalities. The remaining ones are immediate, since J(H_1, ρ_1) ≤ J(H_2, ρ_1) = J(H_2, ρ_2) + (ρ_1 − ρ_2)[σ_β²(H_2) − σ_α²(H_2)], and analogously for the other one. □

Lemma 6. Under the conditions stated in Section 2 it holds that the solutions S of (11) and P of (12) are continuous functions of ρ on [0, 1].

Proof. See, for example, Stoorvogel and Saberi (1995). □

Lemma 7. Under the conditions stated in Section 2 the set W_J ∩ W_z is connected, bounded, and closed.

Proof. It is sufficient to show that the variances of α and β for the optimal LQG controllers are continuous functions of ρ on (0, 1) and that they are continuous at 0 and 1 in the case the corresponding variances are both finite. Furthermore, it has to be shown that there exist optimal LQG controllers on the set where the continuity has been shown to hold. From Lemma 6 it is known that S and P are continuous functions of ρ on [0, 1]. Since G_S and H_P are uniformly positive definite on [0, 1], it holds that L, L_1, and L_2 of (11) and K, K_1, K_2, and K_3 of (12) are also continuous functions of ρ on [0, 1]. This implies that P̄ = lim_{k→∞} E{x̂(k)x̂ᵀ(k)} is a continuous function of ρ on (0, 1). This follows from the fact that A_cl, B_cl, and H_P are continuous functions of ρ on [0, 1], and that by (15) P̄ satisfies

P̄ = A_cl P̄ A_clᵀ + B_cl H_P B_clᵀ,      (18)

which has a continuous solution on (0, 1). Note that P̄ may not be continuous at ρ = 0, 1, because A_cl is not stable for ρ = 0, 1. It now follows that the variances of α and β are continuous functions of ρ on (0, 1), as by (17) these variances are affine in P and P̄. It now only remains to show the continuity at ρ = 0, 1. Remember Lemma 5. This implies that the limits lim_{ρ→0,1} σ_i²(ρ), i = α, β, exist, possibly unbounded, and it only remains to show that the values in the limit are equal to the limiting values. The remaining part of the proof will only focus on proving that σ_α²(0) = lim_{ρ→0} σ_α²(ρ) and σ_β²(0) = lim_{ρ→0} σ_β²(ρ). The other two equalities are proven analogously.

Let U_ρ = (U_s  U_u) be a transformation, defined for ρ ∈ [0, ε] for some ε > 0, such that A_cl U_ρ = U_ρ J_ρ, where J_ρ = diag(J_s, J_u), with J_u = −1 for ρ = 0. Such a transformation exists because A_cl has only one eigenvalue on the unit circle for ρ = 0. Furthermore, the transformation U_ρ is a continuous function of ρ (e.g. Golub and van Loan, 1983, Theorem 7.2-4), as A_cl is continuous. Let V_ρ = (V_sᵀ  V_uᵀ)ᵀ be the inverse of U_ρ, which is also continuous. Define B_s = V_s B_cl and B_u = V_u B_cl. With P̃ = V_ρ P̄ V_ρᵀ, it then follows from (18) that

P̃_s = J_s P̃_s J_sᵀ + B_s H_P B_sᵀ,
P̃_su = J_s P̃_su J_u + B_s H_P B_uᵀ,      (19)
P̃_u = J_u² P̃_u + B_u H_P B_uᵀ,

for the elements of P̃. The first and second equations have continuous solutions P̃_s and P̃_su on [0, ε], because J_s is stable. The third equation is more tricky, and will be dealt with later on. However, note that P̃_u = 0 for ρ = 0 if B_u = 0, as B_u = V_u B_cl gives in (15) that V_u x̂(k+1) = J_u V_u x̂(k) = 0, by x̂(0) = 0. Let Y = C_β − DL, and define (Y_s  Y_u) = (Y U_s  Y U_u). Note that there exists ε′ ∈ (0, ε] such that Y_u ≠ 0 and continuous for ρ ∈ [0, ε′]. This follows from the fact that U_ρ is continuous and that Y_u ≠ 0 for ρ = 0 by Lemma 4. Note also that (17) can be written as

σ_β² = Y P̄ Yᵀ + (C_β − DD_cC̄_2) P (C_β − DD_cC̄_2)ᵀ + (G − DD_cF)(G − DD_cF)ᵀ,

or, equivalently,

σ_β²(ρ) = Y_u(ρ) P̃_u(ρ) Y_uᵀ(ρ) + k(ρ),      (20)

for some continuous k(ρ).
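Stationary covariances such as P̄ in (18) solve a discrete Lyapunov equation, from which the variances of α and β follow as in (17). A minimal sketch with illustrative closed-loop matrices, taking z = Cx with no direct noise term (unlike the paper's G and F couplings):

```python
import numpy as np
from scipy.linalg import solve_discrete_lyapunov

# Stable closed loop x(k+1) = Acl x(k) + w(k), z(k) = C x(k); numbers made up.
Acl = np.array([[0.8, 0.3], [0.0, 0.5]])
W = np.diag([0.1, 0.2])                  # covariance of w(k)
C = np.array([[1.0, 0.5]])

Sigma = solve_discrete_lyapunov(Acl, W)  # Sigma = Acl Sigma Acl' + W
var_z = float(C @ Sigma @ C.T)
r1 = float(C @ Acl @ Sigma @ C.T)        # lag-1 covariance E{z(k+1) z(k)}
var_alpha = 2 * var_z + 2 * r1           # alpha(k) = z(k) + z(k-1)
var_beta = 2 * var_z - 2 * r1            # beta(k)  = z(k) - z(k-1)
```

Note that var_alpha + var_beta = 4 var_z, the identity used in the proof of Theorem 1 below.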

Now, start with the case for which the unstable eigenvalue of A_cl at −1 for ρ = 0 is controllable from B_cl, i.e. when B_u(0) ≠ 0. Then there exists ε″ ∈ (0, ε′] such that B_u(ρ) ≠ 0 for ρ ∈ [0, ε″]. Hence lim_{ρ→0} P̃_u(ρ) = ∞ by (19), which implies lim_{ρ→0} σ_β²(ρ) = ∞ by (20). Note that the continuity of σ_α²(ρ) and σ_β²(ρ) at ρ = 0 does not have to be investigated for this case, because σ_z² = (σ_α² + σ_β²)/4 ≤ z_0² for elements of W_J ∩ W_z. Finally, note that there exist optimal controllers for ρ ∈ (0, 1), by Theorem 2.

Now consider the case when the unstable eigenvalue of A_cl at −1 for ρ = 0 is uncontrollable from B_cl, i.e. when B_u(0) = 0. Remember that P̃_u(0) = 0. Suppose that P̃_u is not continuous at ρ = 0. Then (20) shows that σ_β²(ρ) > σ_β²(0) for some ρ > 0, which contradicts Lemma 5. This, together with the fact that P̃_s, P̃_su, and U_ρ are continuous functions of ρ, implies that P̄ is continuous at ρ = 0, and hence that σ_β² is continuous at ρ = 0. The continuity of P̄ now implies that σ_α² is also continuous at ρ = 0 by (17). Finally, note that there exist optimal controllers for ρ ∈ [0, 1), by Theorem 2. □

Proof of Theorem 1. Consider the case when W_z is empty. Then it is trivial that W_μ is empty, by the definition of the optimization problem, and hence the theorem is trivial. Now, consider the case when W_z is nonempty. Then V_J ∩ D_z is nonempty, because the controller that minimizes J for ρ = 0.5 is a controller minimizing the variance of z. That such a controller exists follows from the assumptions stated in Section 2. Let [σ_α(H*), σ_β(H*)] be an element of W_J ∩ W_z that minimizes μ_1 on this set, which exists by Lemma 7. Denote the corresponding minimal value by μ_1*. It is now sufficient to show that for any H in D_z for which (a, b) = [σ_α²(H), σ_β²(H)] is not in W_J ∩ W_z it holds that μ_1(a, b) > μ_1*. This is implied by the existence of H_ρ in D_s for which (a_ρ, b_ρ) = [σ_α²(H_ρ), σ_β²(H_ρ)] is in W_J ∩ W_z and such that a_ρ ≤ a and b_ρ ≤ b with at least one strict inequality, because for such an H_ρ it holds that μ_1(a, b) > μ_1(a_ρ, b_ρ) ≥ μ_1*, where the first inequality follows from Lemma 2 and the second by the definition of μ_1*. Hence it only remains to show the existence of an H_ρ with the above properties.

Note that W_J ∩ W_z = {(a_ρ, b_ρ) | ρ ∈ [ρ_min, ρ_max]}, where ρ_min = 0 and ρ_max = 1 if W_J ∩ W_z = W_J, and where ρ_min > 0 or ρ_max < 1 if W_J ∩ W_z ≠ W_J. Also note that

a_{ρ_max} + b_{ρ_max} = 4z_0²,  ρ_max < 1,
a_{ρ_min} + b_{ρ_min} = 4z_0²,  ρ_min > 0.      (21)

Furthermore, the existence of an H_ρ with the desired properties is equivalent to the existence of a ρ ∈ [ρ_min, ρ_max] such that a − a_ρ ≥ 0 and b − b_ρ ≥ 0 with at least one strict inequality. Now assume that such a ρ does not exist. This implies, by the fact that W_J ∩ W_z is connected, that a − a_ρ ≤ 0 or b − b_ρ ≤ 0 for all ρ ∈ [ρ_min, ρ_max]. Furthermore, it holds for any ρ ∈ [ρ_min, ρ_max], by the optimality and Lemma 2, that

(1 − ρ)a_ρ + ρb_ρ < (1 − ρ)a + ρb,

which can be rewritten as

(1 − ρ)(a − a_ρ) + ρ(b − b_ρ) > 0.      (22)

Start with the case when a − a_ρ ≤ 0. If ρ_min = 0, then there is an immediate contradiction to (22). If ρ_min > 0, then by (21) it holds that a + b ≤ a_{ρ_min} + b_{ρ_min}, because a + b ≤ 4z_0² for any H in D_z. This implies that

(1 − ρ_min)(a − a_{ρ_min}) + ρ_min(b − b_{ρ_min}) ≤ (1 − 2ρ_min)(a − a_{ρ_min}) ≤ 0,

where the second inequality is implied by ρ_min ≤ 0.5, which follows from the fact that the minimum σ_z controller for z has closed-loop variances (a_{0.5}, b_{0.5}), which is an element of W_J ∩ W_z. This is a contradiction. The case when b − b_ρ ≤ 0 is proven analogously by using ρ_max. □

Remark 9. Note that the minimization of μ can be thought of as finding an optimal value ρ ∈ [ρ_min, ρ_max] for the LQG problem.

Remark 10. Note that an alternative formulation of the existence result is that there exists an MU controller if, and only if, there exists a minimum σ_z controller, obtained for ρ = 0.5, with a closed-loop standard deviation satisfying σ_z ≤ z_0.

4. CONCLUSIONS

The existence of the MU controller has been investigated. This controller minimizes the probability for the controlled signal to upcross a level given a certain reference value. There are many examples of control problems for which this approach is appealing, i.e. problems for which there exists a level such that a failure in the controlled system occurs when the controlled signal upcrosses the level. One important class of such problems is processes equipped with supervision, where upcrossings of alarm levels may initiate emergency shutdown, causing loss of production.

The problem of minimizing the upcrossing probability over the set of stabilizing linear time-invariant controllers has been rephrased as a minimization over LQG problem solutions parameterized by a scalar, and thus the complexity is only one order of magnitude larger than for an ordinary LQG problem. It has also been seen that finding the optimal controller can be interpreted as finding an optimal costing transfer function for the system output. The key to the method is the reformulation using the independent variables α and β, which makes it possible to quantify, by Lemma 1, the upcrossing probability in terms of the variances of α and β. The set of closed-loop variances of α and β obtained by solving the set of LQG problems has been characterized. This makes it possible to give a necessary and sufficient condition for the existence of the minimum upcrossing controller, which is equivalent to the existence of a controller with sufficiently small closed-loop variance of the controlled signal. This point, first solved for the scalar ARMAX case in Hansson (1993b), has here been generalized to the more general state-space case with several measurement signals.

REFERENCES

Andersson, L. and Hansson, A. (1994) Extreme value control of a double integrator. In Proc. 33rd IEEE Conf. on Decision and Control, Orlando, FL, pp. 2163-2164.
Åström, K. J. (1970) Introduction to Stochastic Control Theory. Academic Press, New York.
Borisson, U. and Syding, R. (1976) Self-tuning control of an ore crusher. Automatica 12, 1-7.
Clarke, D. W. and Gawthrop, B. A. (1979) Self-tuning control. Proc. IEE 126, 633-640.
Cramér, H. and Leadbetter, M. (1967) Stationary and Related Stochastic Processes. Wiley, New York.
Golub, G. H. and van Loan, C. F. (1983) Matrix Computations. Johns Hopkins University Press, Baltimore.
Hagander, P. and Hansson, A. (1995) Existence of discrete-time LQG-controllers. Syst. Control Lett. 26, 231-238.
Hansson, A. (1993a) Control of level-crossings in stationary Gaussian random processes. IEEE Trans. Autom. Control AC-38, 318-321.
Hansson, A. (1993b) Minimum upcrossing control of ARMAX-processes. In Preprints 12th IFAC World Congress.
Hansson, A. (1994) Control of mean time between failures. Int. J. Control 59, 1485-1504.
Hansson, A. (1995) Stochastic control of critical processes. Technical Report TFRT-1044-SE, Department of Automatic Control, Lund Institute of Technology.
Mattsson, S. (1984) Modelling and control of large horizontal axis wind power plants. Technical Report TFRT-1026, Department of Automatic Control, Lund Institute of Technology.
Stoorvogel, A. A. and Saberi, A. (1995) Continuity properties of solutions to H2 and H∞ Riccati equations. Syst. Control Lett. 27, 209-222.

APPENDIX-PROOFS

Proof of Theorem 2. By Lemma 3 the problem in (9) is an LQG problem. This will have a unique solution in terms of the transfer function from y to u if (Ā, B̄) is stabilizable, (C̄_2, Ā) is detectable,

rank [zI − Ā  −B̄; C̄_1  D̄] = n + 2,  |z| = 1,      (A.1)

and

rank [zI − Ā  −Ē; C̄_2  F̄] = n + 2,  |z| = 1

(see e.g. Hagander and Hansson, 1995). The matrices for the PBH rank tests for stabilizability and detectability are (zI − Ā  B̄) and [zI − Ā; C̄_2], which have rank n + 1 for |z| ≥ 1, because (zI − A  B) and [zI − A; C_2] have rank n for |z| ≥ 1, by the stabilizability and detectability of (A, B) and (C_2, A), respectively. This proves that (Ā, B̄) is stabilizable and that (C̄_2, Ā) is detectable. Furthermore, elementary row and column operations, denoted by ~, show that

[zI − Ā  −B̄; C̄_1  D̄] ~ [zI − A  −B  0; C_1  D  0; 0  0  Σ_ρ(z)],

where Σ_ρ(z) = (a(ρ)z + b(ρ))/z, and that the second rank condition reduces in the same way to (3). Thus, if ρ ∈ (0, 1), the rank conditions are fulfilled by assumptions (2) and (3), and the conditions for existence of a unique solution are fulfilled. When ρ = 0, 1, Σ_ρ(z) has a zero at z = −1, 1, respectively. Then there exists (see e.g. Hagander and Hansson, 1995) an optimal controller if, and only if, rank (zI − A_cl  B_cl) < n + 1 for z = −1, 1, respectively. □

Proof of Lemma 4. Only the observability in α will be investigated, because the observability in β can be investigated analogously. From (15) and (17) the matrix for the PBH rank test for observability in α is

[zI − A_cl  −B_clC̄_2; 0  zI − A_K; C_α − DL  C_α − DD_cC̄_2],

where A_K = Ā − KC̄_2. Due to the fact that the unstable eigenvalue is in A_cl and never in A_K, it is sufficient to consider the rank of

[zI − A_cl; C_α − DL],      (A.2)

i.e. the observability in C_α − DL of the unstable eigenvalue of A_cl. Furthermore, elementary row and column operations relate (A.2), with A_cl = Ā − B̄L and C_α = (C_1  1/b), to the matrix in (2). By (2) it now follows that the matrix in (A.2) has full rank for z = 1, i.e. the unstable eigenvalue of A_cl at 1 for ρ = 1 is observable in C_α − DL. For ρ = 0 it holds that b = 1 and that −1 is an eigenvalue of A_cl. Hence the matrix in (A.2) loses rank for z = −1, which implies that there is unobservability in C_α − DL. □
For p = 0 it holds that b = 1 and that -1 is an eigenvalue of A,. Hence the matrix above loses rank for z = -1, which implies that there is unobservability in C, - DL. Cl