
Automatica, Vol. 16, pp. 405-408. Pergamon Press Ltd. 1980. Printed in Great Britain.
© International Federation of Automatic Control

Brief Paper

Experiment Design for Maximum-Power Model Validation*

TORSTEN BOHLIN† and LUDOMIR REWO‡

Key words: Identification; parameter estimation; model validation; experiment design; optimal inputs; discrete-time systems; multivariable systems.

Abstract: The paper considers the problem of input-signal selection to maximize the power of an Asymptotic Locally Most Powerful (ALMP) test. It is shown that the input signal which maximizes the test power simultaneously yields maximum accuracy of identification if the disturbances are Gaussian. For linear multivariable discrete-time systems described by transfer functions, both the input-signal optimality criterion and its gradient are derived. This allows input-signal optimization by means of a gradient hill-climbing method. The theory is illustrated by the optimal experiment design for a zero-order system with additive coloured disturbances.

1. Introduction

THE IMPORTANCE of experiment design for system identification has been recognized for a long time. There is a vast literature on input-signal selection from the point of view of attainable identification accuracy (Mehra, 1974b). This accuracy may be expressed in terms of the estimator covariance matrix R. The covariance matrix is usually difficult to derive, and the Cramer-Rao lower bound may be used instead, according to the inequality

\operatorname{cov}\{(\hat\theta - \theta_0)(\hat\theta - \theta_0)^T\} \ge M^{-1}   (1)

where \hat\theta is an estimate of the true parameter value \theta_0 and M is the Fisher information matrix given by§

M = E\left[\frac{\partial \log p(y\,|\,\theta)}{\partial \theta}\right]^T \left[\frac{\partial \log p(y\,|\,\theta)}{\partial \theta}\right].   (2)

Scalar functions of M such as \operatorname{tr} M or \det M are commonly used as experiment-design optimality criteria. In the existing literature the input-signal characterization is discussed in both the time and the frequency domain for a broad class of linear, continuous- and discrete-time systems. Continuous-time, single-input single-output (SISO) systems are considered in Levadi (1966); Mehra (1974a); Payne et al. (1975); van den Bos (1973). Some analytical results as well as algorithms for numerical optimization of the input signal are presented. Discrete-time SISO systems are discussed in Aoki and Staley (1970); Goodwin et al. (1973); Goodwin and Payne (1973). Algorithms for minimization of the simple or weighted trace of the information matrix subject to an input-power or input-amplitude constraint are presented. Ng et al. (1977) show that the optimal experiment subject to an output power constraint should be carried out in a closed loop. Upadhyaya and Sorenson (1977) and Lopez-Toledo and Athans (1975) discuss the problem of input-signal selection from the point of view of the output sensitivity index. Mehra (1974b) compares randomized and nonrandomized experiments for systems described by state-space models, and gives an algorithm for randomized experiment-design optimization. A profound discussion of experiment design is presented in the textbook by Goodwin and Payne (1977). The authors suggest experiment optimization also for model-structure discrimination; the optimal input signal maximizes the power of a likelihood-ratio test.

In the present paper the problem of input-signal selection for the testing of dynamic models is considered. It is assumed that a model M_0 of a system is known and the hypothesis H_0: M = M_0 is to be verified against an alternative hypothesis H_1: M = M_1. The discriminative power of the test depends on the data available for testing, i.e. the sequences u(t), y(t), t = 1,...,T. This allows the statement of an optimization problem. Despite the apparent differences, the problem is closely related to the problem of Cramer-Rao lower bound minimization.

Section 2 of the paper describes the specific maximum-power test used; the test is due to Bohlin (1978). It is shown in Section 3 that maximization of the test power is equivalent to Fisher-information-matrix maximization if the disturbances affecting the system are Gaussian. Input-signal selection for a discrete-time multi-input multi-output (MIMO) transfer-function model is discussed in Section 4; the test power is given in terms of u(t), t = 1,...,T. Section 5 contains a simple example: a zero-order SISO system with coloured noise is considered, and the hypothesis H_0: M = M_0 is tested against the alternative hypothesis H_1: M = M_1, where M_1 is a first-order model.
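As an illustration of the \det M design criterion, the sketch below computes the Fisher information matrix for a simple two-parameter FIR model under Gaussian noise and compares two unit-power candidate inputs. The model, both inputs and all names are assumptions made for this sketch, not taken from the paper.

```python
import numpy as np

# Hypothetical illustration of the det M design criterion: for the scalar
# FIR model y(t) = theta1*u(t-1) + theta2*u(t-2) + e(t) with Gaussian
# noise e(t) ~ N(0, sigma2), the Fisher information matrix is
#   M = (1/sigma2) * sum_t phi(t) phi(t)^T,  phi(t) = [u(t-1), u(t-2)]^T.

def fisher_information(u, sigma2=1.0):
    """Fisher information matrix of the 2-tap FIR model for input u."""
    T = len(u)
    M = np.zeros((2, 2))
    for t in range(2, T):
        phi = np.array([u[t - 1], u[t - 2]])
        M += np.outer(phi, phi) / sigma2
    return M

rng = np.random.default_rng(0)
T = 200
u_white = rng.standard_normal(T)
u_white /= np.sqrt(np.mean(u_white**2))             # normalize to unit power
u_sine = np.sqrt(2.0) * np.sin(0.8 * np.arange(T))  # unit average power

det_white = np.linalg.det(fisher_information(u_white))
det_sine = np.linalg.det(fisher_information(u_sine))
# A larger det M means a tighter Cramer-Rao bound (1) in the det sense.
```

Comparing \det M across admissible inputs in this way is the elementary form of the design problem treated below.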

2. Design for locally most powerful validation

This section states the problem of experiment design for maximum-power validation by the Asymptotic Locally Most Powerful (ALMP) test (Bohlin, 1978). Assume that a probabilistic model M_0 of the form

M_0: \quad y(t) = P(y_{t-1}, u_{t-1}, \theta) + e(t)   (3)

is to be validated, where y_t = \operatorname{col}\{y(i), i = 1,...,t\} and u_t = \operatorname{col}\{u(i), i = 1,...,t\} are the output and input vectors available at time t, and \theta is the vector of system parameters. The sequence e(t) satisfies the conditions

E\{e(t)\} = 0, \qquad E\{e(t)\,e^T(s)\} = S(t-1)\,\delta_{ts}.   (4)

The problem is to decide whether the model M_0 (3), (4) accords with the data u_T, y_T given for its validation. We thus state the null hypothesis H_0: M = M_0.

*Received 18 July 1979; revised 19 February 1980. The original version of this paper was not presented at any IFAC Meeting. This paper was recommended for publication in revised form by associate editor H. Sorenson.
†Royal Institute of Technology, Department of Automatic Control, S-100 44 Stockholm, Sweden.
‡Systems Research Institute, ul. Newelska 6, Warsaw, Poland.
§The derivative of a scalar function with respect to a vector is regarded as a row vector.
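To make the validation setting concrete, the following simplified sketch tests a null model against a one-parameter enlargement using a score-type statistic compared with a chi-square threshold. It is only an illustration in the spirit of the ALMP setting, not Bohlin's exact statistic; the model, the extra regressor and the parameter values are assumptions made for the sketch.

```python
import numpy as np

# Simplified, hypothetical score-type check: the null model M0 is
# y(t) = b0*u(t-1) + e(t) with known b0, and the alternative adds an
# extra lag u(t-2).  The residuals under H0 are projected onto the
# gradient of the enlarged model, giving a statistic that is
# approximately chi2(1)-distributed under H0 (unit noise variance).

def score_statistic(y, u, b0):
    """Normalized score of the H0 residuals against the extra regressor u(t-2)."""
    T = len(y)
    num, den = 0.0, 0.0
    for t in range(2, T):
        r = y[t] - b0 * u[t - 1]   # residual under H0
        g = u[t - 2]               # gradient w.r.t. the extra parameter
        num += g * r
        den += g * g
    return num**2 / den

rng = np.random.default_rng(1)
T, b0 = 2000, 1.0
u = rng.standard_normal(T)
y0 = b0 * np.roll(u, 1) + rng.standard_normal(T)  # data generated by M0
y1 = y0 + 0.3 * np.roll(u, 2)                     # data with an extra lag term

chi2_95_1dof = 3.841  # 95% quantile of chi-square with 1 degree of freedom
eta_null = score_statistic(y0, u, b0)
eta_alt = score_statistic(y1, u, b0)
```

With data from the enlarged model the statistic far exceeds the threshold, while data from M0 typically stay below it.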

Let the alternative hypothesis be H_1: M = M_1(\theta) for some \theta \ne 0, where

M_1(\theta): \quad y(t) = \tilde P(y_{t-1}, u_{t-1}, \theta) + e(t)

and \tilde P(y_{t-1}, u_{t-1}, 0) = P(y_{t-1}, u_{t-1}, \hat\theta). Then the following test has maximum discriminating power for the worst case, namely that the model and the system generating the data differ only little (Bohlin, 1978). Reject H_0 if

\eta > \chi^2_\varepsilon(\nu)   (5)

where

\eta = \left[\frac{1}{\sqrt{T}}\sum_{t=1}^{T} W^T(t-1)\,w(t)\right]^T \left[\frac{1}{T}\sum_{t=1}^{T} W^T(t-1)\,W(t-1)\right]^{-1} \left[\frac{1}{\sqrt{T}}\sum_{t=1}^{T} W^T(t-1)\,w(t)\right]

w(t) = S^{-1}(t-1)\,[\,y(t) - \tilde P(y_{t-1}, u_{t-1}, 0)\,]

W(t) = S^{-1}(t-1)\,\operatorname{grad}_\theta \tilde P(y_{t-1}, u_{t-1}, 0)

and \chi^2_\varepsilon(\nu) is the \chi^2-variable for \nu degrees of freedom and risk level \varepsilon. If \hat\theta does not depend on the data y_T, u_T then \nu = \dim\theta. Notice that the alternative model \tilde P may be more complex, e.g. of higher order, than P. In that case \hat n components of \theta have normally been estimated from the same data as used for validation, and \nu = \dim\theta - \hat n. Under the alternative hypothesis the statistic satisfies

\frac{1}{T}\,\eta \rightarrow (\theta - 0)^T\, Q\,(\theta - 0)   (6)

where

Q = \frac{1}{T}\sum_{t=1}^{T} \operatorname{grad}_\theta \tilde P^T\, S^{-1}\, \operatorname{grad}_\theta \tilde P   (7)

and the arguments in \tilde P and S have been dropped for simplicity. If the a priori distribution of the modelling error is uniform, the power of the test is an ascending function of \det Q. The matrix Q depends on the data through \tilde P(y_{t-1}, u_{t-1}, 0). Thus we can state the problem of experiment design as

\max_{u_T \in U} \det Q(y_T, u_T)   (8)

where U defines the set of admissible input signals. The set is usually given as an inequality

f(u_T, T) \le C   (9)

where both f(\cdot,\cdot) and C can be vectors and C is a constant. The most common forms of (9) are the input power and amplitude constraints.

3. Relation to the maximum accuracy experiment design

We shall now show that under the assumption of Gaussian innovations e(t) the problem of input-signal optimization for model validation is equivalent to the problem of input selection for maximum-accuracy parameter estimation. Consider the model (3)-(4). We assume for the moment that e(t) is a sequence of independent, normally distributed random variables with zero means and known covariance matrices S(t-1). Then we can readily obtain the likelihood function

p(y(t)\,|\,y_{t-1}, u_{t-1}, \theta) = c \cdot \exp\{-\tfrac{1}{2}\,[\,y(t) - P\,]^T S^{-1}(t-1)\,[\,y(t) - P\,]\}   (10)

where the factor c does not depend on the parameters \theta, and the Fisher information matrix (2) becomes

M = \sum_{t=1}^{T} \operatorname{grad}_\theta \tilde P^T\, S^{-1}(t-1)\, \operatorname{grad}_\theta \tilde P.   (11)

Comparing (11) and (7) gives

Q = \frac{1}{T}\, M.   (12)

Since the factor 1/T does not influence the choice of u(t), maximization of \det Q is equivalent to maximization of \det M. Thus the input signal that ensures the maximum test power simultaneously yields maximum accuracy of identification in the sense of \det M, where M^{-1} is the Cramer-Rao lower bound for the covariance matrix of the estimator. In practice, if the model M_0 tested against the data u_T, y_T is rejected, we can use the same data for fitting an alternative model, and then achieve maximum accuracy of estimation. The identity (12) states clearly that under the assumption of Gaussian disturbances the optimization of the input signal for model validation is equivalent to optimization of the signal using the Fisher information matrix. The existing literature on experiment design refers only to the case of Gaussian disturbances; thus all algorithms for Fisher-information-matrix maximization can be directly applied to design experiments for model validation. If such an algorithm is used for non-Gaussian disturbances, the input signal obtained is still optimal for model validation. It is not optimal for parameter estimation, however, because formula (11) then does not yield the Fisher information matrix.

4. Experiment design for a MIMO transfer-function model

We shall restrict our considerations to multiple-input multiple-output transfer-function models with additive coloured noise. Such a model still covers a broad class of linear, time-invariant systems; a completely controllable and observable state-space model can also be transformed to a transfer-function model. We shall express the matrix Q (7) in terms of the input signal u_T and derive the gradient of \det Q, which allows application of gradient hill-climbing methods. Consider a MIMO transfer-function model

y(t) = G(z^{-1})\,u(t) + F(z^{-1})\,e(t)   (13)

where z^{-1} is the backward-shift operator, both y(t) and e(t) are N-vectors and u(t) is an M-vector. The elements of G(z^{-1})_{N \times M} and F(z^{-1})_{N \times N} are rational functions in z^{-1}. The random variable e(t) satisfies the conditions

E\{e(t)\} = 0   (14)

E\{e(t)\,e^T(s)\} = \begin{cases} \Omega(t), & t = s \\ 0, & t \ne s. \end{cases}   (15)

The model can be written in the form (3), viz.

y(t) = [\,I - F^{-1}(z^{-1})\,]\,y(t) + F^{-1}(z^{-1})\,G(z^{-1})\,u(t) + e(t).   (16)

The matrix Q (7) is given by

Q = \frac{1}{T}\sum_{t=1}^{T} \operatorname{grad}_\theta \tilde P^T\, \Omega^{-1}\, \operatorname{grad}_\theta \tilde P   (17)

where

\tilde P = \tilde P(y_{t-1}, u_{t-1}, \theta) = [\,I - F^{-1}(z^{-1})\,]\,y(t) + F^{-1}(z^{-1})\,G(z^{-1})\,u(t)   (18)

and \theta denotes the parameters in F(z^{-1}) and G(z^{-1}).
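The predictor form of the transfer-function model can be exercised numerically. The sketch below simulates a hypothetical SISO instance with G(z^{-1}) = b z^{-1} and F(z^{-1}) = 1/(1 + c z^{-1}), so that F^{-1}(z^{-1}) = 1 + c z^{-1}, and checks that the one-step predictor recovers the innovations e(t). The parameter values and function names are illustrative assumptions.

```python
import numpy as np

# Hypothetical SISO instance of the transfer-function model:
#   y(t) = b*u(t-1) + v(t),   v(t) = -c*v(t-1) + e(t)   (coloured noise)
# The one-step predictor in the form (16) becomes
#   P(t) = -c*y(t-1) + b*u(t-1) + b*c*u(t-2),
# and with the true parameters y(t) - P(t) reproduces e(t) exactly.

def simulate(u, e, b, c):
    """Simulate y(t) = b*u(t-1) + v(t) with v(t) = -c*v(t-1) + e(t)."""
    T = len(u)
    y = np.zeros(T)
    v = 0.0
    for t in range(T):
        v = -c * v + e[t]
        y[t] = (b * u[t - 1] if t >= 1 else 0.0) + v
    return y

def predictor_residuals(y, u, b, c):
    """Prediction errors y(t) - P(t) from the predictor form of the model."""
    T = len(y)
    r = np.zeros(T)
    for t in range(2, T):
        p = -c * y[t - 1] + b * u[t - 1] + b * c * u[t - 2]
        r[t] = y[t] - p
    return r[2:]

rng = np.random.default_rng(2)
T, b, c = 500, 1.0, 0.5
u = rng.standard_normal(T)
e = rng.standard_normal(T)
y = simulate(u, e, b, c)
res = predictor_residuals(y, u, b, c)
# With the true parameters the residuals equal the innovations e(t), t >= 2.
```

This residual recovery is the mechanism behind the test statistic: departures of the residuals from white innovations drive the test power.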

We shall now partition the parameter vector \theta, viz. \theta = \operatorname{col}[\delta, \gamma], where \delta denotes the parameter vector in G(z^{-1}) and \gamma denotes the parameter vector in F(z^{-1}). Thus we assume that F(z^{-1}) and G(z^{-1}) are parametrized separately. If u(t) and e(t) are uncorrelated, the matrix Q can be written in the block-diagonal form

Q = \begin{bmatrix} S & 0 \\ 0 & V \end{bmatrix}   (19)

where the elements of S are given by

s_{ij} = \frac{1}{T}\sum_{t=1}^{T} x_i^T(t)\, \Omega^{-1}\, x_j(t)   (20)

and

x_i(t) = F^{-1}(z^{-1})\, \frac{\partial G(z^{-1})}{\partial \delta_i}\, u(t)   (21)

and the elements of V do not depend on u(t). Since \det Q = \det S \det V and V does not depend on the input signal u(t), it suffices to maximize \det S. Thus we have reduced the original problem of maximizing \det Q to the problem of maximizing \det S:

\max_{u_T \in U} \det S   (22)

where S depends on u_T = \operatorname{col}\{u(t), t = 1,...,T\} through the formulas (20) and (21). One can generate different admissible signals u_T and compare their goodness by means of \det S. To make the optimization of u_T easier we shall also derive the gradient of \det S. This may be done using either the co-state equations or the impulse-response technique; we shall use the latter approach. Differentiate \det S with respect to u_k(\tau), where u_k(\tau) is the kth element of the vector u(t) at time t = \tau:

\frac{\partial \det S}{\partial u_k(\tau)} = \det S \, \operatorname{tr}\left(S^{-1}\,\frac{\partial S}{\partial u_k(\tau)}\right).   (23)

Now we need the derivative \partial s_{ij}/\partial u_k(\tau). Differentiating s_{ij} (20) with respect to the whole vector u(\tau) one obtains

\frac{\partial s_{ij}}{\partial u(\tau)} = \frac{1}{T}\sum_{t=\tau}^{T}\left[\,x_j^T(t)\,\Omega^{-1}\,\frac{\partial x_i(t)}{\partial u(\tau)} + x_i^T(t)\,\Omega^{-1}\,\frac{\partial x_j(t)}{\partial u(\tau)}\,\right]   (24)

and

\frac{\partial x_i(t)}{\partial u(\tau)} = \frac{\partial}{\partial u(\tau)}\left[F^{-1}(z^{-1})\,\frac{\partial G(z^{-1})}{\partial \delta_i}\,u(t)\right] = K_i(t-\tau)   (25)

where K_i(\tau) is the impulse-response function of the system described by the transfer function F^{-1}(z^{-1})\,\partial G(z^{-1})/\partial \delta_i and K_i(t) = 0 if t < 0. The derivative \partial s_{ij}/\partial u(\tau) is a row vector and \partial s_{ij}/\partial u_k(\tau) is its kth element:

\frac{\partial s_{ij}}{\partial u(\tau)} = \left[\frac{\partial s_{ij}}{\partial u_1(\tau)}, \ldots, \frac{\partial s_{ij}}{\partial u_M(\tau)}\right].   (26)

Note that the lower bound of summation in (24) is \tau, since all terms for t = 1,...,(\tau - 1) vanish. Formulas (20)-(26) make it possible to compute both the performance index \det S and its gradient.

5. An example

Suppose that the model M_0

M_0: \quad y(t) = \hat b\, z^{-1} u(t) + \frac{1}{1 + \hat c\, z^{-1}}\, e(t), \qquad E\{e(t)\} = 0, \quad E\{e^2(t)\} = \sigma^2   (27)

has been identified. The hypothesis H_0: M = M_0 is to be tested against the alternative hypothesis H_1: M = M_1, where

M_1: \quad y(t) = \frac{b\, z^{-1}}{1 + a\, z^{-1}}\, u(t) + \frac{1}{1 + c\, z^{-1}}\, e(t), \qquad E\{e(t)\} = 0.   (28)

The problem is to find an optimal input signal u(t), t = 1,...,T for that purpose. The input signal is to maximize the power of the test described in Section 2. The signal can also be interpreted as the maximum-accuracy identification signal for the model (28) when the a priori knowledge about the parameters is b = \hat b, c = \hat c, a = 0. A computer program for the optimization of u(t) subject to the input power constraint

\frac{1}{T}\sum_{t=1}^{T} u^2(t) = 1

was prepared and run. The globally optimal signal found is sinusoidal, except for some slight distortion of the sinusoid observed at both ends of the interval 1,...,T. The distortion is due to the finite sample length T and the zero initial conditions. The optimal frequency \omega depends on the parameter of the disturbance filter; e.g. for \hat c = 0.5, \omega = 1.09 rad. To avoid the well-known problems with local minima, a multistart optimization technique with random start points was used. Similar results may be obtained analytically under the assumption of infinite T; the optimal frequency maximizing \det S may then be found as

\omega = \arccos \frac{-1 - \hat c^2 + \sqrt{(1 + \hat c^2)^2 + 32\,\hat c^2}}{8\,\hat c}.   (29)

The corresponding performance index for the infinite-length sinusoidal input signal is shown in Fig. 1, along with the performance index achieved by means of a purely random power-constrained input signal characterized by R(0) = 1/T and R(t) = 0 for t \ne 0.

[Fig. 1. Performance indices for power-constrained input signals: optimal and white random; b = 1, \sigma^2 = 1.]

The case of input-signal optimization subject to the amplitude constraint

-1 \le u(t) \le 1, \qquad t = 1,...,T

was also considered. It was found that the globally optimal long-sample input signal had the form

u(t) = \operatorname{sign}[\,\sin(\omega t + \phi)\,], \qquad t = 1,...,T.

The finite-length optimal input signal actually differs from the above 'sinusoidal' signal for several t at the beginning and at the end of the interval 1,...,T. Figure 2 shows the performance index for the optimal infinite-length binary input signal along with the performance index achieved with a random white input signal. If the disturbances affecting the system are slow, the method presented in the paper yields a considerable gain in test power. However, the computation required to optimize the input signal in the case of many parameters may be cumbersome.
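The optimal sinusoid frequency of the example can be checked numerically. Assuming the nominal point a = 0, b = b_hat, c = c_hat and unit noise variance, the sensitivity signals are obtained by filtering the input, and det S is evaluated on a frequency grid; this is a sketch under those assumptions, not the authors' original program.

```python
import numpy as np

# At the nominal point a = 0, b = b_hat, c = c_hat the sensitivity signals
# of the u-dependent part of the alternative model are (by differentiating
# the transfer functions and applying the inverse noise filter 1 + c*z^-1):
#   x_b(t) = u(t-1) + c_hat*u(t-2)             (sensitivity w.r.t. b)
#   x_a(t) = -b_hat*(u(t-2) + c_hat*u(t-3))    (sensitivity w.r.t. a)
# With unit noise variance, det S is evaluated for unit-power sinusoids
# u(t) = sqrt(2)*sin(w*t) on a grid of frequencies w.

def det_S(w, c_hat, b_hat=1.0, T=4000):
    t = np.arange(T)
    u = np.sqrt(2.0) * np.sin(w * t)                     # unit average power
    x_b = np.roll(u, 1) + c_hat * np.roll(u, 2)
    x_a = -b_hat * (np.roll(u, 2) + c_hat * np.roll(u, 3))
    X = np.vstack([x_b[3:], x_a[3:]])                    # drop wrapped samples
    S = (X @ X.T) / X.shape[1]
    return np.linalg.det(S)

c_hat = 0.5
grid = np.linspace(0.05, 3.1, 600)
w_opt = grid[np.argmax([det_S(w, c_hat) for w in grid])]

# Closed-form optimum for infinite T, a function of c_hat only:
A = 1.0 + c_hat**2
w_analytic = np.arccos((-A + np.sqrt(A**2 + 32.0 * c_hat**2)) / (8.0 * c_hat))
```

The grid search agrees with the closed-form frequency to within the grid resolution, and for c_hat = 0.5 the closed form gives roughly 1.09 rad, matching the value reported in the example.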

For a system with M inputs one has to solve an MT-dimensional optimization problem to obtain the best input signal. Moreover, to calculate the optimality criterion for a given u_T, 2NM difference equations must be solved. Therefore the input-signal optimization is applicable in those identification problems where the cost of computation is negligible in comparison with the cost of the experiment.

[Fig. 2. Performance indices for amplitude-constrained input signals: optimal and white random; b = 1, \sigma^2 = 1.]

6. Conclusions

Experiment optimization for maximum-power validation of models has been considered for the case of validation with the Asymptotic Locally Most Powerful test. The power of the test depends on the data available for validation, in particular on the input signal applied. The performance index and its derivative with respect to the input signal can be computed, and the signal can thus be obtained by applying a constrained gradient hill-climbing routine. If the disturbances are Gaussian, the optimal input signal yields not only the maximum test power but also the maximum accuracy of parameter estimation. Hence, if a model under test is rejected, the same data may also be used for model fitting, and the estimation accuracy is maximum. The method was applied to a simulated zero-order model with additive coloured noise. Two forms of constraints were considered, viz. the input power and amplitude constraints. In both cases the optimal input signal is periodic. It yields a considerable improvement of the test power, compared to a white-noise input, if the disturbances affecting the system are slow. The conclusion may not, however, hold in the case of many parameters.

References
Aoki, M. and R. M. Staley (1970). On input signal synthesis in parameter identification. Automatica 6, 431-440.
Bohlin, T. (1978). Maximum-power validation of models without higher-order fitting. Automatica 14, 137-146.
van den Bos, A. (1973). Selection of periodic test signals for estimation of linear system dynamics. IFAC Symposium on Identification and System Parameter Estimation, The Hague/Delft, Paper TT-3.
Goodwin, G. C., J. C. Murdoch and R. L. Payne (1973). Optimal test signal for linear SISO system identification. Int. J. Control 17, 45-55.
Goodwin, G. C. and R. L. Payne (1973). Design and characterization of optimal test signals for linear single input single output parameter estimation. IFAC Symposium on Identification and System Parameter Estimation, The Hague/Delft, Paper TT-1.
Goodwin, G. C. and R. L. Payne (1977). Dynamic System Identification: Experiment Design and Data Analysis. Academic Press, New York.
Levadi, V. S. (1966). Design of input signals for parameter estimation. IEEE Trans. Aut. Control AC-11, 205-211.
Lopez-Toledo, A. A. and M. Athans (1975). Optimal policies for identification of stochastic linear systems. IEEE Trans. Aut. Control AC-20, 754-765.
Mehra, R. K. (1974a). Optimal inputs for linear system identification. IEEE Trans. Aut. Control AC-19, 192-200.
Mehra, R. K. (1974b). Optimal input signals for parameter estimation in dynamic systems. Survey and new results. IEEE Trans. Aut. Control AC-19, 753-768.
Ng, T. S., G. C. Goodwin and T. Söderström (1977). Optimal experiment design for linear systems with input-output constraints. Automatica 13, 571-577.
Payne, R. L., G. C. Goodwin and M. B. Zarrop (1975). Frequency domain approach for designing sampling rates for system identification. Automatica 11, 189-191.
Upadhyaya, B. R. and H. W. Sorenson (1977). Synthesis for linear stochastic signals in identification problems. Automatica 13, 615-622.
