Automatica, Vol. 16, pp. 527-534. Pergamon Press Ltd. 1980. Printed in Great Britain
0005-1098/80/0901-0527 $02.00/0
© International Federation of Automatic Control
Correlation Methods*

K. R. GODFREY†

Key Words--Computational methods; correlation theory; correlation methods; dynamic response; identification; random processes.
Abstract--The paper discusses the theory of correlation methods, emphasising the use of crosscorrelation to determine weighting functions of linear systems. The corresponding frequency domain expressions are derived. Correlation methods have been applied widely in engineering and several applications, and the problems associated with them, are discussed.
1. INTRODUCTION

CORRELATION techniques are used in a wide range of applications, often in conjunction with probability density functions, which provide information on the distribution of amplitudes of a random signal. Applications range from the analysis of electrophysiological signals to flow measurement and (structural) fatigue analysis. Since correlation techniques are based on signals described by a stochastic process, a brief discussion is given in the next section to define stochastic processes and to introduce the nomenclature used in the sections which follow.

2. STOCHASTIC PROCESSES

The term 'stochastic process' refers to a quantity which evolves with time under the action of some chance mechanism. Other terms frequently used to denote the same quantity include 'random process', 'random function' and 'random signal'. A stochastic process may be visualised as a function of two variables, t and ω. The argument t usually denotes time (and will be taken as such in this paper) but could equally represent any continuous variable such as distance, temperature etc., while the argument ω may be taken loosely to represent some random event governed by probability laws.

To characterise the dynamic structure of a stochastic process, it is necessary to consider the joint probability distributions of the values which it takes at different time instants. For a set of time instants (t_1, t_2, ..., t_n), let X_1 = x(t_1), X_2 = x(t_2) etc. The dynamic characteristics of the stochastic process may be defined by the nth order probability distribution function

F(X_1, X_2, ..., X_n) = Prob[x_1 ≤ X_1, x_2 ≤ X_2, ..., x_n ≤ X_n].   (1)

For a complete description of x(t), knowledge of this function is required for all values of t_1, t_2, ... and for all values of n. The means and covariances of stochastic processes provide much of the usefully-available knowledge of the probability structure of the processes. Most applications of stochastic process theory make extensive use of the mean or expected value E[x(t)] of x(t), which we will denote by μ_x(t). In this paper we will be concerned in particular with the autocorrelation function:

R_xx(t_1, t_2) = E[x(t_1) · x(t_2)]   (2)

or the closely-related autocovariance function:

C_xx(t_1, t_2) = E{[x(t_1) - μ_x(t_1)] · [x(t_2) - μ_x(t_2)]}.   (3)

Note that

R_xx(t_1, t_2) = C_xx(t_1, t_2) + μ_x(t_1) · μ_x(t_2).   (4)

Crosscorrelation and crosscovariance functions between two different stochastic processes x(t) and y(t) are similarly defined:

R_xy(t_1, t_2) = E[x(t_1) · y(t_2)]   (5)

C_xy(t_1, t_2) = E{[x(t_1) - μ_x(t_1)] · [y(t_2) - μ_y(t_2)]}.   (6)

*Received 10 March 1980. The original version of this paper was presented at the 5th IFAC Symposium on Identification and System Parameter Estimation which was held in Darmstadt, Federal Republic of Germany during September 1979. The published Proceedings of this IFAC Meeting may be ordered from: Pergamon Press Limited, Headington Hill Hall, Oxford OX3 0BW, England. This paper was recommended for publication in revised form by the Automatica Editorial Board.
†Inter-University Institute of Engineering Control, University of Warwick, Coventry CV4 7AL, England.
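The relation between correlation and covariance in equation (4) can be illustrated numerically. The sketch below (not part of the original paper) estimates R_xx(t_1, t_2), C_xx(t_1, t_2) and the mean product by ensemble averaging over many realisations of an assumed toy process x(t) = a·sin(t) + b with random amplitude a and random offset b; the process is purely illustrative.

```python
import numpy as np

# Illustrative numerical check of equation (4):
#   R_xx(t1, t2) = C_xx(t1, t2) + mu_x(t1) * mu_x(t2)
# using ensemble averages over many realisations of an assumed toy process.
rng = np.random.default_rng(0)
n_real = 100_000
t1, t2 = 0.3, 1.1

a = rng.normal(1.0, 0.5, n_real)            # random amplitude per realisation
b = rng.normal(0.5, 0.2, n_real)            # random offset, so mu_x is non-zero
x_t1 = a * np.sin(t1) + b                   # ensemble of values x(t1)
x_t2 = a * np.sin(t2) + b                   # ensemble of values x(t2)

R = np.mean(x_t1 * x_t2)                                  # R_xx(t1, t2)
C = np.mean((x_t1 - x_t1.mean()) * (x_t2 - x_t2.mean()))  # C_xx(t1, t2)
mu_prod = x_t1.mean() * x_t2.mean()                       # mu_x(t1) * mu_x(t2)

assert abs(R - (C + mu_prod)) < 1e-9        # equation (4) holds in the sample
```

The identity holds exactly for the sample estimates (it is an algebraic consequence of expanding the covariance), so the check succeeds to floating-point precision.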
A stochastic process is said to be strictly stationary if its probability characteristics remain invariant under a shift in the time origin. Such a requirement can only rarely be assumed completely, but it is still possible to simplify the analysis of stochastic processes satisfying a much weaker criterion of stationarity to a degree which is just as useful as if they were strictly stationary. In particular, the concept of wide-sense stationarity (sometimes called weak stationarity) only requires that the mean value of the process is a constant. The correlation function then depends only on the time difference |t_2 - t_1|; setting this equal to τ,

R_xx(τ) = E[x(t) · x(t+τ)].   (7)

Similarly, the crosscorrelation function between two weakly stationary stochastic processes x(t) and y(t) is given by:

R_xy(τ) = E[x(t) · y(t+τ)].   (8)

For virtually all stationary random functions of practical interest, it is found that the mathematical expectation operator used above is equivalent, under fairly general conditions, to an average performed on any particular realisation of the stochastic process over an infinite time interval; in other words, the ensemble averages and time averages are equivalent. This is known as the ergodic hypothesis. Thus, for a stationary, ergodic process x(t),

E[x(t)] = μ_x = lim_{T→∞} (1/2T) ∫_{-T}^{T} x(t) dt   (9)

R_xx(τ) = E[x(t) · x(t+τ)] = lim_{T→∞} (1/2T) ∫_{-T}^{T} x(t) · x(t+τ) dt   (10)

C_xx(τ) = E{[x(t) - μ_x][x(t+τ) - μ_x]}   (11)

(similarly for crosscorrelation and crosscovariance). If x(t) is a periodic function with period T_0, the infinite time averages of equations (9), (10) and (11) are equivalent to averages over a time kT_0, where k is an integer (≥ 1). For example, for such a function,

R_xx(τ) = (1/kT_0) ∫_0^{kT_0} x(t) · x(t+τ) dt.   (12)

3. PROPERTIES OF AUTOCORRELATION FUNCTIONS

Using equations (9), (10) and (11), the following properties are evident:

3.1. R_xx(0) = E[x²(t)], i.e. the mean squared value of x(t). Similarly, C_xx(0) = E{[x(t) - μ_x]²} = σ_x², i.e. the variance of x(t).

3.2. R_xx(τ) = R_xx(-τ) (for a stationary process).

3.3. R_xx(0) ≥ |R_xx(τ)|. This follows since

E[{x(t) - x(t+τ)}²] ≥ 0
∴ E[x²(t)] + E[x²(t+τ)] ≥ 2E[x(t) · x(t+τ)]
∴ 2R_xx(0) ≥ 2R_xx(τ).

By examining E[{x(t) + x(t+τ)}²] in a similar manner, we can show that R_xx(0) ≥ |R_xx(τ)|. It is important to note that properties (3.2) and (3.3) do not apply (in general) to crosscorrelation functions. In particular, property (3.2) becomes R_xy(τ) = R_yx(-τ).

3.4. If z(t) = x(t) + y(t), then

R_zz(τ) = R_xx(τ) + R_xy(τ) + R_yx(τ) + R_yy(τ),

which reduces to R_zz(τ) = R_xx(τ) + R_yy(τ) if x(t) and y(t) are uncorrelated.
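Properties 3.1 and 3.3 can be verified numerically from a single long record, relying on the ergodic hypothesis. The sketch below (not from the paper) uses first-order filtered noise as an assumed example process and the time-average estimator of equation (10) in its digital form.

```python
import numpy as np

# Sketch: time-average autocorrelation estimate for one long realisation of
# an ergodic process (assumed example: first-order filtered white noise).
rng = np.random.default_rng(1)
n = 200_000
w = rng.normal(size=n)
x = np.empty(n)
x[0] = w[0]
for k in range(1, n):
    x[k] = 0.95 * x[k - 1] + w[k]           # simple stationary, ergodic process

def autocorr(x, max_lag):
    """R_xx(k) estimated as (1/N) * sum of x_{r-k} * x_r over the record."""
    n = len(x)
    return np.array([np.dot(x[:n - k], x[k:]) / n for k in range(max_lag + 1)])

R = autocorr(x, 50)

assert np.isclose(R[0], np.mean(x ** 2))    # property 3.1: R_xx(0) = E[x^2(t)]
assert np.all(R[0] >= np.abs(R[1:]))        # property 3.3: R_xx(0) >= |R_xx(k)|
```

Property 3.3 holds exactly for this estimator (by the Cauchy-Schwarz inequality), not merely approximately, which is why no tolerance is needed in the second assertion.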
4. ANALYTICAL COMPUTATION OF SOME AUTOCORRELATION FUNCTIONS

4.1. A discrete-interval random binary signal
Consider the discrete-interval random binary signal shown in Fig. 1. The signal assumes values ±V for time intervals of duration λ, changing with probability 0.5 at regularly spaced 'event points' 0, λ, 2λ, .... Clearly μ_x = 0, while from property 3.1, R_xx(0) = V².

FIG. 1. Discrete-interval random binary signal.

Consider the product x(t) · x(t+τ). For τ > λ, an event point has definitely occurred in the time interval from t to t+τ, so that x(t) and x(t+τ) are independent (from the above definition of the signal). Thus

E[x(t) · x(t+τ)] = E[x(t)] · E[x(t+τ)] = 0.

For τ < λ, the probability of an event point in the time interval from t to t+τ is τ/λ, in which case E[x(t) · x(t+τ)] = 0; the probability of no event point in the same time interval is 1 - (τ/λ), in which case E[x(t) · x(t+τ)] = V². Thus for τ < λ,

R_xx(τ) = V²(1 - τ/λ).

From property 3.2, R_xx(-τ) = R_xx(τ), and the complete autocorrelation function is as shown in Fig. 2.

FIG. 2. Autocorrelation function of a discrete-interval random binary signal.

4.2. A random binary signal
Consider now the random binary signal shown in Fig. 3, in which the signal assumes only the values ±V, but transition from one level to the other can occur at any time, with an average number of zero crossings of ν per unit time. As before, R_xx(0) = V².

FIG. 3. Random binary signal.

For a given τ, the product x(t) · x(t+τ) will be +V² if there have been 0, 2, 4, ... zero crossings and -V² if there have been 1, 3, 5, ... zero crossings. Since the zero crossings are at random intervals of time, the probability of n zero crossings in a time interval τ is

P(n) = ((ντ)^n / n!) exp(-ντ).

Hence

E[x(t) · x(t+τ)] = V² [Σ_{n even} ((ντ)^n / n!) exp(-ντ) - Σ_{n odd} ((ντ)^n / n!) exp(-ντ)]
                 = V² [1 - ντ/1! + (ντ)²/2! - (ντ)³/3! + ...] exp(-ντ)
                 = V² exp(-2ντ).

This is illustrated in Fig. 4. Note that for large values of τ, x(t) and x(t+τ) become virtually unrelated, so E[x(t) · x(t+τ)] approaches E[x(t)] · E[x(t+τ)] = 0. For any non-periodic signal, R_xx(τ) → μ_x² as τ → ∞.

FIG. 4. Autocorrelation function of a random binary signal.

4.3. A sine wave
If x(t) = V sin(ωt + θ), then using equation (10),

R_xx(τ) = lim_{T→∞} (1/2T) ∫_{-T}^{T} V² sin(ωt + θ) · sin(ωt + ωτ + θ) dt
        = lim_{T→∞} (1/2T) ∫_{-T}^{T} (V²/2) [cos ωτ - cos(2ωt + ωτ + 2θ)] dt
        = ½V² cos ωτ.

Note that the autocorrelation function has the same period as the original signal, and that the autocorrelation function does not die away as τ increases. (Exactly the same expression is obtained if equation (12) is used with T_0 = 2π/ω.) This property gives rise to one of the main uses of autocorrelation: the detection of a periodic component in noise. If z(t) = x(t) + y(t), where x(t) is a periodic signal and y(t) is a non-periodic signal uncorrelated with x(t), then from property 3.4,
R_zz(τ) = R_xx(τ) + R_yy(τ)

and similarly C_zz(τ) = C_xx(τ) + C_yy(τ). The autocovariance, C_yy(τ), of y(t) dies away to zero, leaving only the autocovariance C_xx(τ) as τ becomes large. The noise component can, of course, be reduced by filtering if the bandwidth of the noise is well separated from the frequency of the periodic component, but if the frequencies are similar, filtering cannot be used, whereas correlation still proves effective.

5. DETERMINATION OF CORRELATION FUNCTIONS OF DATA

At this point, let us recall the expressions to be used in finding the autocorrelation function of a single variable x(t) and the crosscorrelation function between two variables x(t) and y(t):

R_xx(τ) = lim_{T→∞} (1/2T) ∫_{-T}^{T} x(t) · x(t+τ) dt   (13)

R_xy(τ) = lim_{T→∞} (1/2T) ∫_{-T}^{T} x(t) · y(t+τ) dt   (14)

The processes involved in this computation are (a) time shift by an amount τ; (b) multiplication; and (c) averaging the product over a very long (theoretically infinite) time. In practice, the computation can be carried out using a digital computer, an analogue computer or a special-purpose correlator. The corresponding covariance functions can be found by removing the mean level (levels), by software if a digital computer is used and by means of the 'a.c. setting' if special-purpose equipment is used. The latter is still relatively expensive at the time of writing, but microprocessor-based correlators are currently being developed; the speed limitations of commercially-available microprocessors are a severe limitation on the maximum speed of operation of such a correlator (Henry and Al Chalabi, 1979).

Two modifications to the above expressions are necessary in practice. Firstly, the first signal is delayed (rather than advancing the second signal), so that equations (13) and (14) become:

R_xx(τ) = lim_{T→∞} (1/2T) ∫_{-T}^{T} x(t-τ) · x(t) dt   (15)

R_xy(τ) = lim_{T→∞} (1/2T) ∫_{-T}^{T} x(t-τ) · y(t) dt   (16)

Secondly, it is necessary to compute the correlation function over a finite time, T, so that approximate correlation functions are obtained:

R̄_xx(τ) = (1/T) ∫_0^T x(t-τ) · x(t) dt   (17)

Similarly

R̄_xy(τ) = (1/T) ∫_0^T x(t-τ) · y(t) dt   (18)

In some cases, only a limited length of data is available, so that there is loss at the end as the time shift τ is increased from zero. Equations (17) and (18) are then further modified to:

R̄_xx(τ) = (1/(T-τ)) ∫_τ^T x(t-τ) · x(t) dt   (19)

R̄_xy(τ) = (1/(T-τ)) ∫_τ^T x(t-τ) · y(t) dt   (20)

Clearly, the maximum usable value of τ in this restricted case is very much less than the length (T) of record. If the correlation functions are computed digitally, the expressions corresponding to equations (17) to (20) are, for unlimited lengths of data,

R̄_xx(k) = (1/N) Σ_{r=0}^{N-1} x_{r-k} · x_r   (21)

R̄_xy(k) = (1/N) Σ_{r=0}^{N-1} x_{r-k} · y_r   (22)

and for limited lengths of data,

R̄_xx(k) = (1/(N-k)) Σ_{r=k}^{N-1} x_{r-k} · x_r   (23)

R̄_xy(k) = (1/(N-k)) Σ_{r=k}^{N-1} x_{r-k} · y_r   (24)

If both x(t) and y(t) are periodic functions with period T_0, then expressions (17), (18), (21) and (22) are exact correlation functions provided the correlation period = kT_0, where k is an integer ≥ 1. For non-periodic functions, the variance of an estimate of the crosscorrelation function R̄_xy(τ) obtained by taking measurements over a finite time T rather than an infinite time is derived by Bendat and Piersol (1966) on the very restrictive assumption of Gaussian signals with zero mean value:

Var[R̄_xy(τ)] = (1/T) ∫_{-∞}^{∞} [R_xx(u) · R_yy(u) + R_xy(u+τ) · R_xy(τ-u)] du.   (25)

Putting y = x gives:

Var[R̄_xx(τ)] = (1/T) ∫_{-∞}^{∞} [R_xx²(u) + R_xx(u+τ) · R_xx(τ-u)] du.   (26)

Note that for τ = 0,

Var[R̄_xx(0)] = (2/T) ∫_{-∞}^{∞} R_xx²(u) du.   (27)
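The digital estimators of equations (21) and (23) are easily checked against the analytical result of Section 4.1. The sketch below (not from the paper) generates a discrete-interval random binary signal sampled m times per interval λ, so that the triangular autocorrelation is resolved; the numerical values are illustrative.

```python
import numpy as np

# Numerical check of the digital correlation estimators against the
# triangular autocorrelation of the discrete-interval random binary signal
# (Section 4.1). Sampling m times per interval resolves the triangle.
rng = np.random.default_rng(2)
V = 2.0
m = 10                                       # samples per interval lambda
levels = V * rng.choice([-1.0, 1.0], size=50_000)
x = np.repeat(levels, m)                     # hold each level for one interval

def r_hat(x, k, unbiased=False):
    """Equation (21) (divide by N) or equation (23) (divide by N - k),
    computed over the available overlap of the record."""
    n = len(x)
    return np.dot(x[:n - k], x[k:]) / ((n - k) if unbiased else n)

for k in range(m + 1):
    theory = V * V * max(0.0, 1.0 - k / m)   # R_xx = V^2 (1 - tau/lambda)
    assert abs(r_hat(x, k, unbiased=True) - theory) < 0.1
```

With 500,000 samples the estimates agree with the triangle to within a few hundredths, comfortably inside the 0.1 tolerance; for small lags the biased and unbiased estimators are practically indistinguishable, as equation (25) would suggest.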
6. PSEUDO-RANDOM BINARY SIGNALS

One of the most interesting and useful periodic signals for system identification work is the pseudo-random binary signal (p.r.b.s.), which has the following properties:

(i) The signal has two levels, ±V, and may switch from one level to the other only at certain intervals of time t = 0, λ, 2λ, ....
(ii) Whether or not the signal changes level at any particular interval is pre-determined. The p.r.b.s. is thus deterministic and experiments are repeatable.
(iii) The p.r.b.s. is periodic with period T_0 = Nλ, where N is an odd integer.
(iv) In any one period, there are ½(N + 1) intervals at one level and ½(N - 1) intervals at the other level.
(v) The autocorrelation function is as shown in Fig. 5.

FIG. 5. Autocorrelation function of a pseudo-random binary signal (p.r.b.s.).

(vi) The most commonly-used p.r.b.s.'s are based on maximum-length sequences (m-sequences), for which N = 2^n - 1, where n is an integer. These may be generated using an n-stage feedback shift register, with feedback to the first stage consisting of the modulo-2 sum of the logic level of the last stage and one or more of the other stages. Very few feedback connections in fact yield a maximum-length sequence for any particular n; Table 1 lists some appropriate connections for 2 ≤ n ≤ 11. A shift-register circuit for generating a p.r.b.s. of length 63 is shown in Fig. 6.

TABLE 1. SUITABLE FEEDBACK CONNECTIONS FOR GENERATION OF MAXIMUM-LENGTH SEQUENCES WITH PERIOD UP TO 2047 DIGITS

Number of shift     Feedback to first stage:           Period of sequence
register stages, n  modulo-2 sum of output of stages   N (= 2^n - 1)
 2                  1 and 2                               3
 3                  2 and 3                               7
 4                  3 and 4                              15
 5                  3 and 5                              31
 6                  5 and 6                              63
 7                  4 and 7                             127
 8                  2, 3, 4 and 8                       255
 9                  5 and 9                             511
10                  7 and 10                           1023
11                  9 and 11                           2047

For n ≥ 12, about half the possible lengths of sequence may be generated using two stages of feedback, the other lengths requiring four stages of feedback. Suitable connections for n ≥ 12 will be found in Peterson (1961).

The binary logic levels are taken as usual as 1 and 0, and modulo-2 addition is the EXCLUSIVE-OR operation:

1 ⊕ 1 = 0 ⊕ 0 = 0
1 ⊕ 0 = 0 ⊕ 1 = 1

Transformation from logic levels to voltage levels ±V is made by either 1 → +V, 0 → -V or 1 → -V, 0 → +V.

(vii) For the sake of completeness, it is worth noting that there are many other kinds of p.r.b.s. in addition to those based on m-sequences. The latter are fairly widely spaced in possible lengths (for fixed λ), by factors of approximately two, and it is possible to fill in the gaps using, for example, quadratic residue codes, which exist for N = 4k - 1, k an integer and N prime. These cannot conveniently be generated using hardware, however, so that all commercially-available p.r.b.s. generators are based on m-sequences. For a full discussion of all the various classes of p.r.b.s., see Everett (1966).
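The shift-register construction of (vi) can be sketched in software. The code below (an illustration, not from the paper) uses the Table 1 connection for n = 6 (feedback to the first stage = modulo-2 sum of stages 5 and 6) to generate one 63-digit period, mapped to ±1 levels.

```python
# Sketch of an n-stage feedback shift register generating a maximum-length
# sequence, using the Table 1 connection for n = 6 (stages 5 and 6).
def m_sequence(n=6, taps=(5, 6)):
    """One period (N = 2**n - 1 digits) of an m-sequence as +/-1 levels."""
    state = [1] * n                          # any non-zero initial state will do
    out = []
    for _ in range(2 ** n - 1):
        out.append(1 if state[-1] else -1)   # logic 1 -> +V, logic 0 -> -V (V = 1)
        feedback = state[taps[0] - 1] ^ state[taps[1] - 1]   # EXCLUSIVE-OR
        state = [feedback] + state[:-1]      # shift along; feedback into stage 1
    return out

seq = m_sequence()
assert len(seq) == 63
assert seq.count(1) == 32 and seq.count(-1) == 31   # property (iv)
```

Because the sequence is maximal, its periodic autocorrelation is two-valued: N·V² at zero shift and -V² at every other shift, which is the Fig. 5 shape of property (v).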
7. DETERMINATION OF SYSTEM WEIGHTING FUNCTION USING P.R.B.S.

Consider the system shown in Fig. 7. A perturbation signal i(t) is applied to a linear system with impulse response h(t), and the response of the system to i(t) is y(t). Before y(t) can be measured, it is corrupted with noise n(t) (which may include both noise introduced by the measuring device and noise due to other sources), so that the observable response signal is

z(t) = y(t) + n(t).   (28)

FIG. 6. Shift register circuit for generating a 63-digit maximum-length sequence.

FIG. 7. Block diagram of a system.

For a stationary linear system, the impulse response h(t) is the same as the weighting function, which gives the present output in terms of past values of the input as a convolution integral:

y(t) = ∫_0^∞ h(t_1) · i(t-t_1) dt_1   (29)

For all practical systems, there is some finite upper limit to this integral, called the system settling time (T_s), which means that inputs which occurred longer than T_s in the past have no effect on the present output. Thus

y(t) = ∫_0^{T_s} h(t_1) · i(t-t_1) dt_1   (30)

Suppose that it is required to estimate h(t) from observations of i(t) and z(t):

z(t) = ∫_0^{T_s} h(t_1) · i(t-t_1) dt_1 + n(t)   (31)

A problem arises in some cases that the noise signal n(t) 'swamps' the system response signal y(t). (This is by no means always the case, and quite often, reasonable estimates of system dynamics may be obtained from the response of the system to a step or impulse perturbation.) Where noise is significant, one method of estimating the system dynamics is to perturb the system with a p.r.b.s. and use the equivalent convolution integral relationship between the input-output crosscorrelation function and the input autocorrelation function:

R_iz(τ) = ∫_0^{T_s} h(t_1) · R_ii(τ-t_1) dt_1 + R_in(τ),   (32)

the correlation being computed over an integer number of periods of the p.r.b.s. The settling time T_s must be less than the period (T_0) of the p.r.b.s., and one system settling time must elapse after the p.r.b.s. is applied before correlation is commenced. If i(t) and n(t) are uncorrelated, R_in(τ) → 0 as the correlation time is increased. For finite correlation time, the effect of R_in(τ) is to appear on the system input-output crosscorrelation function as a 'noise' which diminishes in amplitude as the correlation time is increased.

The convolution integral is illustrated in Fig. 8 for a p.r.b.s. input. Note that if the clock pulse interval λ of the p.r.b.s. is sufficiently small,

R_ii(τ-t_1) ≈ ((N+1)/N) V²λ · δ(τ-t_1)   (33)

where δ(τ-t_1) is the Dirac delta function. (It is assumed that N is sufficiently large that the d.c. offset of the autocorrelation function can be ignored.) Equation (32) then simplifies to

R_iz(τ) ≈ ((N+1)/N) V²λ · h(τ),   (34)

so that the weighting function is directly proportional to the input-output crosscorrelation function.

FIG. 8. Convolution of a weighting function with the autocorrelation function of a p.r.b.s.
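The whole procedure of this section can be sketched end-to-end in simulation. In the code below (illustrative only; the system, noise level and numerical values are assumed, not from the paper), a p.r.b.s. built from the Table 1 connection for n = 7 perturbs a first-order system h(t) = (1/T)exp(-t/T); crosscorrelation over an integer number of periods, started after one period, is then scaled by equation (34) to recover the weighting function.

```python
import numpy as np

# Sketch of weighting-function estimation via equations (32)-(34), for an
# assumed first-order system perturbed by an m-sequence p.r.b.s.
def m_sequence(n, taps):
    state = [1] * n
    out = []
    for _ in range(2 ** n - 1):
        out.append(1 if state[-1] else -1)
        fb = state[taps[0] - 1] ^ state[taps[1] - 1]
        state = [fb] + state[:-1]
    return np.array(out, dtype=float)

V, lam, T = 1.0, 0.05, 0.5                  # amplitude, clock interval, time constant
N = 127                                     # n = 7, taps 4 and 7 (Table 1)
i_sig = np.tile(V * m_sequence(7, (4, 7)), 8)   # 8 periods, one sample per digit

t = np.arange(N) * lam
h = (1.0 / T) * np.exp(-t / T)              # true weighting function (settles < N*lam)
y = lam * np.convolve(i_sig, h)[:len(i_sig)]    # discrete form of equation (30)
rng = np.random.default_rng(3)
z = y + 0.05 * rng.normal(size=len(y))      # observed output, equation (28)

# Correlate over an integer number of periods, after one settling time.
start, n_corr = N, 6 * N
R_iz = np.array([np.dot(i_sig[start - k:start - k + n_corr],
                        z[start:start + n_corr]) / n_corr for k in range(N)])

h_est = R_iz * N / ((N + 1) * V * V * lam)  # invert equation (34)

assert int(np.argmax(h_est)) == 0           # estimated h peaks at tau = 0
assert np.corrcoef(h_est[:60], h[:60])[0, 1] > 0.98
```

The small constant bias visible in such an estimate is the d.c. offset of the p.r.b.s. autocorrelation, neglected in equation (33); it shrinks as N is increased.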
8. SPECTRAL ANALYSIS

For a stationary random signal x(t), the Fourier transform of the autocorrelation function, defined by

Φ_xx(ω) = ∫_{-∞}^{∞} R_xx(τ) exp(-jωτ) dτ   (35)

is known as the Power Spectral Density of the signal. Using the inverse Fourier Transform,

R_xx(τ) = (1/2π) ∫_{-∞}^{∞} Φ_xx(ω) exp(jωτ) dω.   (36)

Recalling that R_xx(0) = E[x²(t)] and working in terms of cyclical frequency f instead of radian frequency ω,

E[x²(t)] = ∫_{-∞}^{∞} Φ_xx(2πf) df   (37)

and it is this property which justifies the physical interpretation of the quantity Φ_xx as an energy density function. If the units of x(t) are volts, then the units of Φ_xx(ω) are (volts)² per cycle per second.

For the discrete-interval random binary signal of Section 4.1,

Φ_xx(ω) = 2 ∫_0^λ V²(1 - τ/λ) cos ωτ dτ = V²λ [sin(ωλ/2)/(ωλ/2)]²   (38)

which is illustrated in Fig. 9. Note the somewhat unexpected result that the power spectral density is zero at the event frequency f = 1/λ.

FIG. 9. Power spectrum of a discrete-interval random binary signal.

The cross-spectral density function between two signals x(t) and y(t) is similarly defined:

Φ_xy(jω) = ∫_{-∞}^{∞} R_xy(τ) exp(-jωτ) dτ   (39)

The physical significance of Φ_xy(jω) may be illustrated as follows. Let x(t) and y(t) be passed (separately) through two identical narrow-band filters, of gain unity for ω_0 < ω < ω_0 + δω (where δω is small) and zero for all other frequencies. Multiply together the outputs of the two filters. The average value of the product, divided by δω, gives the real part of the cross-spectral density at frequency ω_0. The imaginary part is obtained similarly, except that the signal y(t) is phase shifted by 90° before multiplication. The notation Φ_xy(jω) is used to highlight that the cross-spectral density function is (generally) complex, unlike the real quantities Φ_xx(ω) and Φ_yy(ω).

The frequency domain relationship for the linear system of Fig. 7 is obtained by Fourier transformation of equation (32):

Φ_iz(jω) = H(jω) · Φ_ii(ω) + Φ_in(jω)   (40)

where

H(jω) = ∫_0^∞ h(t) exp(-jωt) dt   (41)

is the transfer function of the system. If the noise n(t) and the input i(t) are uncorrelated, then their cross-power spectrum is zero, so that

H(jω) = Φ_iz(jω)/Φ_ii(ω).   (42)

For further details of frequency domain methods, see Wellstead (1978) and Rake (1979).
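Equation (38) is straightforward to verify numerically: since the triangular R_xx of Section 4.1 is even and vanishes outside ±λ, its transform reduces to a finite cosine integral. The sketch below (illustrative values, not from the paper) compares that integral with the closed form, including the zero at the event frequency.

```python
import numpy as np

# Numerical check of equation (38): the transform of the triangular
# autocorrelation equals V^2 * lam * [sin(w*lam/2)/(w*lam/2)]^2, and is
# zero at the event frequency f = 1/lam (w = 2*pi/lam).
V, lam = 2.0, 0.1
tau = np.linspace(-lam, lam, 200_001)
R = V * V * (1.0 - np.abs(tau) / lam)        # triangular R_xx (Section 4.1)

def phi(w):
    # equation (35); R_xx is even and zero at +/-lam, so a plain sum of
    # R*cos over the fine grid is an accurate quadrature
    return np.sum(R * np.cos(w * tau)) * (tau[1] - tau[0])

for w in (1.0, 5.0, 20.0, 60.0):
    expected = V * V * lam * (np.sin(w * lam / 2) / (w * lam / 2)) ** 2
    assert abs(phi(w) - expected) < 1e-6

assert abs(phi(2 * np.pi / lam)) < 1e-7      # zero at the event frequency
```

The last assertion reproduces the "somewhat unexpected" null in Fig. 9: at ω = 2π/λ the argument ωλ/2 equals π, so the sin² factor vanishes.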
9. APPLICATIONS OF CORRELATION

As previously mentioned, correlation techniques have a wide range of applications. A typical range of applications was presented at an Institution of Electrical Engineers Conference (1977). An interesting recent application is to the analysis of surfaces (Whitehouse and Phillips, 1978), in which, for a wide range of surfaces, it is found possible to predict all the tribological parameters that could be measured from a profile on the basis of only two points on the autocorrelation function and the root mean square of the surface profile.

The use of correlation techniques to determine system weighting functions has been discussed by Godfrey (1969a); the paper contains a discussion of the effects of drift and nonlinearities on the measured crosscorrelation function. The particular problems of applying pseudo-random binary signals to a full-scale industrial process (the distillate splitter column of an oil refinery) are described in Godfrey (1969b).

The difficulties of interpreting the crosscorrelation function in terms of a (linear) system weighting function when there are nonlinearities present are considerable. One type of nonlinearity which occurs frequently is when the dynamics of the process are different according to whether the input variable is being increased or decreased (Godfrey and Moore, 1974). When perturbing with a pseudo-random binary signal, extra peaks occur in the crosscorrelation function, and these can easily be mistaken for the original dynamics being 'echoed' around the system, unless it is appreciated that this type of nonlinearity is present. The use of correlation techniques to determine higher-order kernels in a Volterra series representation of a nonlinear system (the weighting function being the first-order kernel) has been discussed by Barker and Davy (1978).

The use of normal operating records to determine a system weighting function is tempting because no special perturbation has to be applied. Provided that the input to the system of interest is continuously exciting (spanning the frequency range of the system), the method should work, at least in theory. In practice, there are pitfalls, one of which is that there may be hidden feedback from the output to the input. In an example on a steelworks blast furnace (Godfrey and Brown, 1979), reasonable results were obtained for the relationship between ore/coke ratio and percentage silicon in the cast iron, but the forward relationship between blast temperature and percentage silicon was distorted by the reverse relationship through the (manual) control system. Similar results have been reported by Rake (1970). The perils of trying to determine forward path dynamics in the presence of feedback have been discussed by Wellstead (1977). It is strongly recommended that an independent external test signal should always be applied in identification experiments. The philosophy of first applying a step (or impulse) to see if satisfactory results can be obtained directly and, if not, to use a p.r.b.s. input and crosscorrelation, is a sensible one.

10. CONCLUSIONS
Correlation methods are used widely in engineering applications. Crosscorrelation is a useful approach for determining weighting functions of noisy linear systems, provided that proper precautions are taken to recognise and include the effects of nonlinearities and to provide appropriate inputs which continually excite the system over its frequency range of interest.

REFERENCES

Barker, H. A. and R. H. Davy (1978). Measurement of second-order Volterra kernels using pseudorandom ternary signals. Int. J. Control 27, 277-291.
Bendat, J. S. and A. G. Piersol (1966). Measurement and Analysis of Random Data. John Wiley, New York.
Everett, D. (1966). Periodic digital sequences with pseudonoise properties. G.E.C. J. Sci. Technol. 33, 115-126.
Godfrey, K. R. (1969a). The theory of the correlation method of dynamic analysis and its application to industrial processes and nuclear power plant. Measure. Control 2, T65-T72.
Godfrey, K. R. (1969b). Dynamic analysis of an oil refinery unit under normal operating conditions. Proc. IEE 116, 879-888.
Godfrey, K. R. and D. J. Moore (1974). Identification of processes having direction-dependent dynamic responses, with gas-turbine engine applications. Automatica 10, 469-481.
Godfrey, K. R. and R. F. Brown (1979). Practical aspects of the identification of process dynamics. Trans. Inst. Measure. Control 1, 85-95.
Henry, R. M. and L. Al Chalabi (1979). Microprocessor applications to velocity measurement by crosscorrelation. 8th IMEKO World Congress, Moscow, U.S.S.R.
Institution of Electrical Engineers, London (1977). Random Signal Analysis. Conference Publication No. 159. (Available from I.E.E., Savoy Place, London WC2R 0BL, England).
Peterson, W. W. (1961). Error Correcting Codes. M.I.T. Technical Press, Cambridge.
Rake, H. (1970). Correlation analysis of blast-furnace operating data. 2nd IFAC Symposium on Identification and Process Parameter Estimation, Prague, Paper 11.1.
Rake, H. (1979). Step response and frequency response methods. 5th IFAC Symposium on Identification and System Parameter Estimation, Tutorial paper No. 2; also Automatica 16.
Wellstead, P. E. (1977). Reference signals for closed-loop identification. Int. J. Control 26, 945-962.
Wellstead, P. E. (1978). Spectral analysis and applications. University of Manchester Institute of Science and Technology, Control Systems Centre, Report No. 411. (Report available from Secretary, Control Systems Centre, U.M.I.S.T., Manchester M60 1QD, England).
Whitehouse, D. J. and M. J. Phillips (1978). Discrete properties of random surfaces. Phil. Trans. R. Soc. Lond. A 290, 267-298.