Parametric identification of transient electromagnetic systems




WAVE MOTION 5 (1983) 369-384 NORTH-HOLLAND


D.G. DUDLEY, Department of Electrical Engineering, The University of Arizona, Tucson, AZ 85721, U.S.A.

Received 8 April 1982, Revised 2 November 1982

Data into and out of a transient electromagnetic system are considered in the framework of modern system identification. System solutions that take the form of a complex exponential series are discussed. Since the identification of the parameters in the series is non-linear, emphasis shifts to the identification of the parameters in the difference equation whose solution is the exponential series. The subject is cast in the formalism of system identification with generalizations to more complex systems. Two examples are given, one involving an actual electromagnetic experiment.

1. Background

Identification of a system involves the extraction of information concerning the system from input-output data obtained in an experiment. The techniques of system identification are rooted in early studies in economics [1]. The ideas matured in the early 1960's in control theory, particularly in proceedings of the International Federation of Automatic Control (IFAC), beginning with the first IFAC Congress in Moscow (1960). References to this early history can be found in the preface to the survey book by Eykhoff [2] and in the survey paper by Astrom and Eykhoff [3], which contains 222 references. Parametric identification is a subclass of system identification in which the information concerning the system is obtained in the form of a parametric set. Such a set, for example, might be the poles and zeros describing the system in the frequency domain. Recently the ideas in parametric identification have been formalized by Ljung in a survey paper on system identification [4] and in a joint paper with Glover [5] on parametric compared with non-parametric methods. In addition, Ljung has contributed to the understanding of bias

in the parameter estimates through investigation of stochastic noise models [6]. Parametric identification of a transient electromagnetic system is a concept which only recently has begun to surface in electromagnetic journals. The beginnings of the subject were motivated by workers concerned with transient electromagnetic scattering theory, where the excitation is the electromagnetic pulse (EMP) produced in certain nuclear processes. A milestone paper is that of Marin [7]. Although Marin was not concerned with system identification, he pointed out that the electromagnetic scattering from a large class of conducting scatterers is a meromorphic function of frequency. This discovery meant that transient scattering for this class could be represented as the sum of complex exponentials. Fitting data with a complex exponential series dates back nearly two centuries to Prony [8], whose method was resurrected by a number of electromagnetic workers in the mid 1970's [9, 10, 11]. Subsequently, Dudley [12] showed that Prony's method is a special case of pole-zero parametric identification. This view of modeling of transient electromagnetic scattering in the established framework of system identification introduces signal processing and




system modeling techniques that contain established procedures for the treatment of data. In this paper, the transient electromagnetic system is considered in the framework of modern system identification. The subject is introduced through consideration of a simple, classic transient electromagnetic scattering problem whose solution appears in the form of a complex exponential series. Next, the complex exponential series is discussed with particular attention to the identification of the parameters in the series. This discussion leads to a consideration of the differential and difference equations whose solutions are complex exponential series. Particular attention is given to identification of the coefficients in the difference equation. This subject is next cast in the formalism developed by Ljung [4], a formalism which allows the presentation of a number of different models applicable to the electromagnetic case. Two examples are then included. The first is a moving average model with real data taken on the Lawrence Livermore National Laboratory electromagnetic transient range. The second is the dynamic adjustment model with computer-simulated data. The paper ends with a discussion and some recommendations.

2. Introduction

The basic ideas involved in parametric identification of a transient electromagnetic system will be introduced in a simple electromagnetic boundary value problem. In preparation for the specific problem, consider Maxwell's equations in the time domain for the special case where the excitation is an x-directed current spatially confined to the plane z = h and temporally an impulse, viz:

    J = x̂ δ(z − h) δ(t)                                                        (1)

where x̂ is a unit vector in the x-direction. Further assume that the problem geometry is independent of x and y so that

    ∂/∂x = 0,   ∂/∂y = 0.                                                       (2)

In this case, Maxwell's equations (MKS units) reduce to the following:

    −∂ℰ_x/∂z = μ ∂ℋ_y/∂t,                                                       (3)

    −∂ℋ_y/∂z = δ(z − h) δ(t) + σ ℰ_x + ε ∂ℰ_x/∂t,                               (4)

with all other field components equal to zero. In (3) and (4), the fields are functions of z and t, μ is the permeability, ε is the permittivity, and σ is the conductivity. If it is assumed that (σ, μ, ε) are constants throughout the region of interest, (3) and (4) are linear equations. Define the Fourier transform of the E-field by

    E_x(z, ω) = ∫_{−∞}^{∞} ℰ_x(z, t) e^{−iωt} dt                                (5)

and similarly for the H-field. Application of the Fourier transform to (3) and (4) yields

    dE_x/dz = −iωμ H_y,                                                         (6)

    −dH_y/dz = δ(z − h) + iωε′ E_x,                                             (7)

where ε′ is the complex permittivity defined by

    ε′ = ε + σ/(iω).                                                            (8)

Equations (6) and (7) are easily manipulated to give

    d²E_x/dz² + k′² E_x = iωμ δ(z − h),                                         (9)

    H_y = −(1/(iωμ)) dE_x/dz,                                                   (10)

where k′ is the complex wave number given by

    k′ = ω(με′)^{1/2}.                                                          (11)

Consider a two-region problem (Fig. 1). R0 is the half space z > 0 containing a sheet current source at z = h. In R0, it is assumed that the medium is free space, so that σ = 0 and μ and ε take their free space values μ0 and ε0, respectively. R1 is a slab of thickness d wherein the medium is typified by σ, μ0, ε. The slab is terminated at z = −d by a perfect conductor. In R0, (9) and (10) become

    d²E_x0/dz² + k0² E_x0 = iωμ0 δ(z − h),                                      (12)

    H_y0 = −(1/(iωμ0)) dE_x0/dz,                                                (13)

where

    k0 = ω(μ0 ε0)^{1/2} = ω/c                                                   (14)

and where c is the speed of light. In R1, (9) and (10) become

    d²E_x1/dz² + k1² E_x1 = 0,                                                  (15)

    H_y1 = −(1/(iωμ0)) dE_x1/dz,                                                (16)

where

    k1 = ω[μ0 ε (1 + σ/(iωε))]^{1/2} = (ω/c)[εr (1 + σ/(iωε))]^{1/2}            (17)

and where εr is the dielectric constant in R1.

Fig. 1. Example electromagnetic problem (two region): current sheet at z = h in the free-space half space R0 (μ0, ε0); slab R1 (σ, μ0, ε) of thickness d backed by a perfect conductor at z = −d.

The intermediate steps to the solutions of the differential equations in (12) and (15) follow standard methods and are omitted. The result for the E-field in the two regions is as follows:

    E_x0 = −iωμ0 (sin k0z / k0) e^{−ik0h} + e^{−ik0z} E_I,   (0 < z < h),       (18)

    E_x1 = E_I sin k1(z + d) / sin k1d,   (−d < z < 0),                         (19)

where E_I is the E-field at the interface z = 0 and

    L E_I = −iωμ0 e^{−ik0h}                                                     (20)

where

    L = ik0 + k1 ctn k1d.                                                       (21)

The last step in the problem solution is to multiply by the spectrum of the input pulse and then take the inverse Fourier transform. If f(t) is the input pulse and F(ω) is its Fourier transform, the E-field in R0 in the time domain is given formally by

    ℰ_x0(z, t) = (1/2π) ∫_{−∞}^{∞} F(ω) E_x0(z, ω) e^{iωt} dω                   (22)

and similarly for the other field components. Consider specifically the expression for the E-field in R0, given by (18). Substitution of (18) into (22) gives

    ℰ_x0(z, t) = −(η0/2π) ∫_{−∞}^{∞} F(ω) [ i sin(k0z) e^{−ik0h} + e^{−ik0(z+h)} / (1 − i(k1/k0) ctn k1d) ] e^{iωt} dω    (23)

where η0 is the intrinsic impedance of free space. In (23), the integral can be solved by the calculus of residues. Since emphasis in the following development will be on the geometry-related characteristics in the solution, it will be assumed that the forcing function contains no singularities in the finite ω-plane. As a function of frequency the integrand can be analytically continued

throughout the complex ω-plane, with the result that it is analytic in the lower half plane and meromorphic in the upper half plane. This is a result whose generalization to other problems has far-reaching consequences in electromagnetic identification. The meromorphic character arises entirely from the interface field E_I, a fact that has been established in the operator equation (20). In this problem, the operator L, given in (21), is algebraic and therefore immediately invertible. It is easy to establish that L⁻¹ has poles only in the upper half plane. Indeed, for the case where σ = 0, the poles are simple and can be found specifically at the locations

    ω_n = (c/(√εr d)) [ (2n + 1)π/2 + (i/2) ln((√εr + 1)/(√εr − 1)) ],   n = 0, ±1, ±2, ….    (24)

For this case, utilization of the residues at these simple poles gives, in addition to a direct contribution from the source term, the following form for the solution to (23):

    Σ_{n=−∞}^{∞} R_n e^{s_n (t − (z + h)/c)}                                    (25)

where R_n, n = 0, ±1, …, are the residues and

    s_n = iω_n.                                                                 (26)

Although evaluation of the residues has not been done specifically here, it is clear that the form of the solution is a complex exponential series. For the case of non-vanishing σ, the exponential series form does not change so long as the poles remain simple. That this is the case except for certain specific values of σ will not be shown here, but is available in [13]. In this simple example, it has been shown that the transient electromagnetic scattering is in the form of a complex exponential series. Generalization of these results to classes of electromagnetic problems is not a trivial exercise. The difficulty is that in higher dimensions the operator L is not algebraic. Indeed, it is an integral operator whose inversion, considered as a function of frequency, is a formidable task. McLeod [14] has obtained an integral operator for the scalar Helmholtz equation in two dimensions where the scatterer is a perfectly conducting obstacle of finite extent. He shows that the inverse operator is meromorphic except for the expected branch cut in log(kr) always present in two dimensions. As mentioned earlier, Marin [7] has considered the three-dimensional electromagnetic case for a finite perfect conducting scatterer. Extensions to lossy media do not appear to be easy.
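As a numerical illustration of the resonance locations, the short sketch below evaluates ω_n as reconstructed in (24) and checks that each value satisfies the σ = 0 characteristic equation 1 − i(k1/k0) ctn k1d = 0 coming from the denominator of (23). The slab parameters (εr = 4, d = 0.1 m) are hypothetical and chosen only for illustration; this is a consistency check on the formula, not a result from the paper.

```python
import numpy as np

c = 2.998e8      # speed of light (m/s)
er = 4.0         # hypothetical slab dielectric constant
d = 0.1          # hypothetical slab thickness (m)
sr = np.sqrt(er)

# Pole locations as in (24), sigma = 0 case
n = np.arange(-3, 4)
omega_n = (c / (sr * d)) * ((2 * n + 1) * np.pi / 2
                            + 0.5j * np.log((sr + 1.0) / (sr - 1.0)))

# Characteristic equation 1 - i (k1/k0) ctn(k1 d) = 0, with k1/k0 = sqrt(er) and k1 d = sr*omega*d/c
theta = sr * omega_n * d / c
residual = 1.0 - 1j * sr * np.cos(theta) / np.sin(theta)

print(np.max(np.abs(residual)))     # essentially zero: each omega_n satisfies the resonance condition
print(np.all(omega_n.imag > 0))     # True: all poles lie in the upper half omega-plane, as stated in the text
```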

3. Complex exponential series

Because of the expectation that many examples of electromagnetic data can be modeled by a complex exponential series, the series has received considerable attention in the electromagnetic literature. Let h(t) be data obtained in an electromagnetic process. Assume that the data can be represented by

    h(t) = Σ_{n=1}^{N} R_n e^{s_n t} + e(t)                                     (27)

where e(t) is an error term arising in three ways. First, the exponential model may not be perfect; for example, the infinite conductivity assumption discussed above can never be satisfied exactly. Second, h(t) is not the output from the model, but rather from the electromagnetic process, and therefore must contain measurement error. Third, the order N of the exponential series is finite and is therefore an approximation to the infinite series discussed in (25). A system identification problem associated with (27) can now be stated as follows: Given data h(t), identify the residues R_n and the poles s_n such that the error is minimized in some sense. This problem is not an easy one. Householder [15] has considered it after discretization of the time variable and minimization of the error in the Euclidean-norm sense. The result is a set of


non-linear equations. The identification can be linearized, however, by the following strategy. Observe that the complex exponential series in (27) is the solution to an Nth order differential equation with zero initial conditions, so that, with error e(t),

    Σ_{n=0}^{N} a_n dⁿy(t)/dtⁿ = Σ_{n=0}^{M} b_n dⁿu(t)/dtⁿ + e(t).             (28)

Indeed, take the Laplace transform of (28) to give

    Y(s) Σ_{n=0}^{N} a_n sⁿ = U(s) Σ_{n=0}^{M} b_n sⁿ + E(s)                    (29)

where y(t), u(t), e(t) and Y(s), U(s), E(s) are Laplace transform pairs, respectively. Define the system transfer function H(s) by [16]

    H(s) = Y(s)/U(s)                                                            (30)

so that

    H(s) = [ Σ_{n=0}^{M} b_n sⁿ ] / [ Σ_{n=0}^{N} a_n sⁿ ] + Ẽ(s)               (31)

where Ẽ(s) is a modified error term produced in the cross-multiplication operation by which (29) becomes (31). The denominator polynomial in (31) can be factored to give

    H(s) = [ Σ_{n=0}^{M} b_n sⁿ ] / [ Π_{n=1}^{N} (s − s_n) ] + Ẽ(s)            (32)

where it has been assumed that the roots are simple. A partial fraction expansion yields

    H(s) = Σ_{n=1}^{N} R_n/(s − s_n) + Ẽ(s).                                    (33)

The inverse Laplace transform now produces eq. (27). What has been accomplished by an identification based on (28) rather than (27) can be stated as follows. In (27), the coefficients s_n to be identified appear non-linearly. In (28), the coefficients a_n and b_n all appear linearly.

Because of the utility of high speed digital computers, it is advantageous to consider discretized versions of the mathematics described above. Let

    t = nT,   n = 0, 1, …                                                       (34)

where T is the sampling interval. Symbolically, let

    h_n = h(nT).                                                                (35)

The discretized version of (27) can now be rewritten:

    h_k = Σ_{n=1}^{N} R_n e^{s_n kT} + e_k.                                     (36)

In this case, (36) can be shown to be the solution of an Nth order difference equation given by

    Σ_{n=0}^{N} a_n y_{n+k} = Σ_{n=0}^{M} b_n u_{n+k} + e_k                     (37)

where use has been made of the Z-transform and the following definition of the system transfer function:

    H(z) = Y(z)/U(z)                                                            (38)

where y_k, u_k, e_k and Y(z), U(z), E(z) are Z-transform pairs, respectively. The details in obtaining (36) from (37) are contained in [12] and are omitted. Most of the attention in electromagnetic parameter analysis has centered on the identification of coefficients in difference equations such as (37). This problem will now be considered in the framework of system identification.
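As a numerical illustration of the relation between (36) and (37), the sketch below generates samples of a two-term complex exponential series and shows that, once the input has passed, the samples satisfy the homogeneous difference equation whose coefficients a_n come from the polynomial with roots e^{s_nT}. The poles, residues, and sampling interval are hypothetical values chosen only for illustration.

```python
import numpy as np

# Hypothetical conjugate pole pair s_n and residues R_n, and a sampling interval T
T = 0.1
s = np.array([-0.2 + 3.0j, -0.2 - 3.0j])
R = np.array([0.5 - 0.1j, 0.5 + 0.1j])

k = np.arange(200)
h = (R[None, :] * np.exp(np.outer(k, s) * T)).sum(axis=1).real   # samples h_k of the series (36), no noise

# Difference-equation coefficients: the characteristic roots are z_n = exp(s_n T),
# so the a_n in (37) are the coefficients of prod_n (z - z_n).
a = np.poly(np.exp(s * T)).real      # [a_0, a_1, a_2], monic, highest power first
N = len(s)

# Verify sum_m a[m] * h[k + N - m] = 0 for all k (impulse-type input, k past the source)
check = sum(a[m] * h[N - m:len(h) - m] for m in range(N + 1))
print(np.max(np.abs(check)))         # essentially zero: h_k obeys the homogeneous part of (37)
```

The linearity advantage noted above is visible here: the a_n enter the check linearly, whereas the s_n enter the series itself through the non-linear exponential terms.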

4. System identification

The objective of system identification is the determination of a model that describes input-output data obtained from a system. A rudimentary idea of this concept can be obtained by considering a single-input, single-output system (Fig. 2). The input u(t) is applied to both the model and the system. Noise is added to the system output y0(t) to produce the observable output y(t). This output is compared to the output of the model ŷ(t) through a differencer that produces an error signal e(t). Finally, the model is somehow adjusted to make the error e(t) small in some sense.

Fig. 2. Single-input, single-output system identification.

Although the ideas inherent in the example in Fig. 2 seem straightforward, the user of the identification techniques faces many decisions and more than a few problems. First he must decide what models are candidates for the model to which he will compare his system. These candidate models comprise a 'model set'. After selection of a model set, the user must then determine the experimental conditions under which the experiment will be carried out. Some of the conditions he can control and some will be beyond his control. He next must produce a model from the model set and finally he must evaluate its worth.

In this paper consideration is limited to parametric models in the time domain. This restriction eliminates a large body of work contained under the heading of 'spectral theory'. Readers interested in spectral theory are referred to [6] for a comparison of time and frequency techniques and to Jenkins and Watts [17] for a thorough discussion of spectral methods. Non-parametric methods of system identification are also discussed by Wellstead [18]. Ljung [4] has considered the parametric identification problem for multiple-input, multiple-output, possibly non-linear processes. He later specializes to the single-input, single-output linear case, the case that will now be considered herein.

Consider a linear, causal system in which data is taken in discrete time. Let [u(0), u(1), …, u(t)] be the input sequence and [y(0), y(1), …, y(t)] be the corresponding output sequence. Define the time-stepping operator q⁻¹ by

    q⁻¹ f(t) = f(t − 1).                                                        (39)

Define polynomial operators A, B, C, D, and F based on this time-stepping operator. Typically,

    A(q⁻¹) = Σ_{n=0} a_n q⁻ⁿ,                                                   (40)

with the other polynomial operators defined similarly. Using the above notation, consider Ljung's linear model

    A(q⁻¹) y(t) = [B(q⁻¹)/F(q⁻¹)] u(t) + [C(q⁻¹)/D(q⁻¹)] e(t)                   (41)

where the sequence [e(t)] is a white noise sequence, that is, a sequence with a constant power spectral density. Because of the prominence of 'predictor' models in the literature, note that Ljung's model contains linear predictors as a subset. Indeed, (41) can be easily manipulated to give

    y(t) = [1 − D(q⁻¹)A(q⁻¹)/C(q⁻¹)] y(t) + [D(q⁻¹)B(q⁻¹)/(C(q⁻¹)F(q⁻¹))] u(t) + e(t).    (42)

This result can be interpreted as a rather complicated digital filter whose function is to filter the data with the polynomials A, B, C, D, F to produce 'residuals' e(t) with white characteristics. If the residual characteristics are not white, the result is usually bias in the estimation of the parameters. For a detailed discussion of this issue, the reader is referred to Dudley [12] for some examples and to Ljung [6] for an extensive theoretical analysis. This subject will be taken up later in this paper with an example.

The beauty of Ljung's model in equation (41) is that it contains most of the well-known single-input, single-output models as special cases. Some of these will now be discussed. The autoregressive (AR) model is obtained in (41) by setting B = 0, C = D = 1 to give

    A y(t) = e(t)                                                               (43)

where e(t) is a white noise sequence. This model has been used with great success in speech processing [19].
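A minimal sketch of identifying the a_n in the AR model (43) by least squares is given below. The second-order A polynomial and the synthetic white-noise drive are hypothetical, chosen only to make the mechanics concrete; the construction is the ordinary regression of y(t) on its past values, not any particular published algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic AR data: A(q^-1) y(t) = e(t) with A = 1 + a1 q^-1 + a2 q^-2 (hypothetical coefficients)
a_true = np.array([1.0, -1.5, 0.7])
e = rng.normal(size=2000)                      # white noise drive
y = np.zeros_like(e)
for t in range(len(e)):
    y[t] = e[t] - a_true[1] * (y[t - 1] if t >= 1 else 0.0) \
                - a_true[2] * (y[t - 2] if t >= 2 else 0.0)

# Least-squares identification: regress y(t) on -y(t-1) and -y(t-2)
Y = y[2:]
Phi = np.column_stack([-y[1:-1], -y[:-2]])     # regressors -y(t-1), -y(t-2)
a_hat, *_ = np.linalg.lstsq(Phi, Y, rcond=None)
print(a_hat)                                   # close to [-1.5, 0.7]: the estimated a1, a2
```

Because the drive is genuinely white here, the estimates converge to the unbiased values as the record length grows, which is the asymptotic property discussed next.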

The moving average (MA) model is obtained by setting A = F = C = D = 1 to give

    y(t) = B u(t) + e(t)                                                        (44)

where again e(t) is a white noise sequence. The MA model is also known as a transfer function identifier. An example from an electromagnetic transient range will be considered later in this paper.

The pole-zero (PZ) model is obtained by setting F = C = D = 1 to give

    A y(t) = B u(t) + e(t)                                                      (45)

where e(t) is a white noise sequence. A special case of (45) occurs when the input u(t) is the delta function. Dudley [12] has shown that this special case results in Prony's method and that the poles contained in the A-operator can be identified in an AR model independent of the zeros in the B-operator, which is the classic Prony result.

There is a large amount of confusion in the electromagnetic literature concerning error terms in Prony's method. A consideration of Ljung's model in predictor form (42) clarifies many of the difficulties. Most asymptotic analysis assumes that the error sequence e(t) is a white noise sequence. In this case, with certain restrictions [6], the parameters operating on u(t) and y(t) converge, as the number of data points approaches infinity, to their unbiased values. As a simple example of this result, the unbiased convergence for the AR model in (43) has been known since 1943 [1]. Note that the AR model is driven by white noise as a source. This is a much different situation from the PZ model, for example, where the source is u(t), and the error term e(t) is not a driver but a result of noise arising in the measurements and errors caused by model inadequacy. Thus, the chances of e(t) being white or near-white are small indeed. As has been pointed out [12], Prony's method is an 'ignorant' predictor because the method includes no provision for testing the noise and then dealing with non-whiteness. Indeed, the PZ model with no provision for noise analysis is more accurately written as follows:

    A y(t) = B u(t) + v(t)                                                      (46)

where v(t) is the noise sequence. A method of dealing with the non-white noise sequence v(t) in (46) was proposed by Clarke [20] and has become known as the dynamic adjustment (DA) model. Basically, the method assumes that the non-white noise sequence v(t) can be modeled as an AR process. The DA model is

    A y(t) = B u(t) + e(t)/D.                                                   (47)

Note that comparison with (46) produces

    D v(t) = e(t)                                                               (48)

which is an autoregressive fit to the noise. The basic problem is to design a filter D(q⁻¹) to convert v(t) into a white noise sequence. Note that (47) can be alternatively written

    A D y(t) = B D u(t) + e(t)                                                  (49)

where use has been made of the commutativity of the stepping operators. If a filter D can be found, it can be used in (49) to filter the input and output data to produce

    A ỹ(t) = B ũ(t) + e(t)                                                      (50)

where

    ỹ(t) = D y(t),                                                              (51)

    ũ(t) = D u(t),                                                              (52)

and where asymptotic convergence is now assured by the whiteness of e(t). The problem with this approach is that the original noise sequence is never available. It can only be estimated during an identification of A and B. An iterative technique based on estimates of v(t) has been suggested [2] and an example will be considered later in this paper.

Once a particular model has been chosen, a strategy must be adopted whereby the model parameters are adjusted so that the model 'best fits' the data. In (42), for example, the prediction

error must be made small in some sense. Ljung and Glover [5] introduce a criterion of fit l(t, θ, e(t)), where θ is a column vector containing the parameters of the model. They then adopt an error criterion V_N such that, after N data have been collected,

    V_N(θ, y^{N−1}, u^{N−1}) = (1/N) Σ_{t=1}^{N} l(t, θ, e(t))                  (53)

where

    y^T = [y(0), y(1), …, y(t)],                                                (54)

    u^T = [u(0), u(1), …, u(t)].                                                (55)

The error criterion V_N is a scalar-valued function of θ and is minimized by adjusting the elements of θ. By far the most popular choice for the criterion of fit has been

    l(t, θ, e(t)) = |e(t)|²,                                                    (56)

a choice which results in the least squares methods of identification. Another often used choice is the maximum likelihood criterion, given by

    l(t, θ, e(t)) = −log f(t, θ, e(t))                                          (57)

where f is the probability density function of the error e(t). Note that if f is assumed to be normal, (57) reduces to (56).

5. Example: MA model, transient range data

The following is an example of the use of a moving average (MA) model to analyse data taken on the Lawrence Livermore National Laboratory Electromagnetic Transient Range. A free-field measurement u(t) is taken with a short electric probe protruding up through the surface of a conducting ground plane (Fig. 3(a)) at a distance of 3.5 meters from a transmitting biconical antenna. A conducting obstacle is then placed at the same distance away from the transmitting antenna and an E-field measurement y(t) is taken with the same probe through the skin of the conducting obstacle (Fig. 3(b)).

Fig. 3. Transient range; (a) Free field measurement; (b) Obstacle measurement.

It is assumed that this experiment is a linear system with input u(t) and output y(t). The objective is to identify the transfer function of the electromagnetic process. As a model, consider the convolution relation for a causal, linear system, viz:

    ŷ(t) = ∫_0^t b(τ) u(t − τ) dτ                                               (58)

where b(t) is the transfer function to be identified. If the data is sampled with sampling interval T, (58) becomes

    ŷ_n = Σ_{m=0}^{M−1} b_m u_{n−m},   n = 0, …, N−1,                           (59)

where, typically,

    b_m = b(mT).                                                                (60)

Define the error e_n between the model data ŷ_n and the process data y_n by

    e_n = y_n − ŷ_n                                                             (61)

and thus obtain the following model description:

    y_n = Σ_{m=0}^{M−1} b_m u_{n−m} + e_n.                                      (62)

Note that (62) is precisely the MA model defined in eq. (44). It is convenient to write (62) in vector-matrix notation, viz:

    e = y − Ub                                                                  (63)

where

    e^T = [e_0  e_1  ⋯  e_{N−1}],                                               (64)

    y^T = [y_0  y_1  ⋯  y_{N−1}],                                               (65)

    b^T = [b_0  b_1  ⋯  b_{M−1}],                                               (66)

    U = [ u_0   0     0     0
          u_1   u_0   0     0
          u_2   u_1   u_0   0                                                   (67)
          u_3   u_2   u_1   u_0
          u_4   u_3   u_2   u_1 ]

and where the U-matrix is shown for the 5 × 4 case. In (64) through (66), T denotes transpose. Adopt the least squares criterion given by (56), which is equivalent to minimizing the Euclidean norm of e, viz:

    ‖e‖² = (y − Ub)^T (y − Ub).                                                 (68)

The well-known [12] result is

    b = U† y                                                                    (69)

where U† is the pseudo-inverse given by

    U† = (U^T U)⁻¹ U^T.                                                         (70)
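Equations (63)-(70) translate directly into a few lines of linear algebra. The sketch below builds the lower-triangular convolution matrix U of (67) from a synthetic input, forms an output through a hypothetical M-point transfer function, and recovers it by the least-squares solution (69); the input, filter, and noise level are illustrative stand-ins, not the transient-range records.

```python
import numpy as np
from scipy.linalg import toeplitz

rng = np.random.default_rng(1)

N, M = 256, 16                                   # record length and model order (illustrative)
u = rng.normal(size=N)                           # synthetic "free-field" input u_k
b_true = np.exp(-0.3 * np.arange(M)) * np.cos(0.8 * np.arange(M))   # hypothetical transfer function b_m

# U is N x M with U[n, m] = u[n - m] for n >= m and zero above the diagonal, as in (67)
U = toeplitz(u, np.r_[u[0], np.zeros(M - 1)])

y = U @ b_true + 0.01 * rng.normal(size=N)       # "obstacle-field" output with a little measurement noise

# Least squares, eq. (69): b = (U^T U)^{-1} U^T y, computed via a numerically stable solver
b_hat, *_ = np.linalg.lstsq(U, y, rcond=None)
print(np.max(np.abs(b_hat - b_true)))            # on the order of the added noise: b_m is recovered
```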

If the data is zero-filled both before the start and after the end of the finite data records, (70) can be written in terms of correlation functions as follows:

    b = Φ_ff⁻¹ r_gf                                                             (71)

where

    Φ_ff = [ R_ff(0)     R_ff(1)     ⋯  R_ff(M−1)
             R_ff(1)     R_ff(0)     ⋯  R_ff(M−2)
             R_ff(2)     R_ff(1)     ⋯  R_ff(M−3)                               (72)
             ⋮                          ⋮
             R_ff(M−1)   R_ff(M−2)   ⋯  R_ff(0)   ]

and

    r_gf = [ R_gf(0)  R_gf(1)  R_gf(2)  ⋯  R_gf(M−1) ]^T.                       (73)

In (72), R_ff(n) is the sample autocorrelation function of u_k at the nth lag. In (73), R_gf(n) is the sample crosscorrelation function between y_k and u_k at the nth lag. Utilizing the Toeplitz form of (72), Harris [21] has designed a user-interactive algorithm based on Levinson's recursion [22] to accomplish the inversion indicated in (71). The following example is a usage of his algorithm.

Figures 4 and 5 illustrate the free-field u_k and the obstacle field y_k, respectively. Each data set is sampled at a 0.02 nanosecond sampling interval and contains 1024 points. Figures 6 and 7 are the Fourier transforms of the free-field and obstacle field, respectively. Both temporal data sets are subjected to the following preprocessing. First, since neither data set can have a non-zero value at zero frequency, the mean value is removed from each waveform sequence. Second, based on an examination of the Fourier transforms, a decision is made to treat all response beyond 4 gigahertz as noise. Both data sets are therefore low-pass filtered at 4 gigahertz with an eighth-order, analog-derived Butterworth filter. Finally, based on the sampling theorem, the data is decimated by 4:1, yielding a new sampling interval of 0.08 nanoseconds and a total record length of 256 points. Figures 8 and 9 are the input and output sequences after preprocessing, and Figs. 10 and 11 are their respective Fourier transforms.

Use is now made of Harris' algorithm to identify a 256 point transfer function fitting the input-output data in Figs. 8 and 9. The result is the transfer function in Fig. 12, with Fourier transform in Fig. 13. The success of the identification can be examined by comparing the output in Fig. 9 with


the result of a convolution between Fig. 8 and the transfer function (Fig. 12). This result is displayed in Fig. 14 and shows excellent agreement.
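The correlation form (71)-(73) lends itself to a Levinson-type solution because Φ_ff is Toeplitz. The sketch below reproduces that step on synthetic data: scipy's solve_toeplitz stands in for the recursion of [22] and the Harris algorithm, the input and transfer function are hypothetical, and the mean removal, 4 GHz filtering, and 4:1 decimation of the measured records are not reproduced.

```python
import numpy as np
from scipy.linalg import solve_toeplitz

rng = np.random.default_rng(2)

N, M = 256, 16                                   # record length and filter length (illustrative)
u = rng.normal(size=N)                           # stand-in for the preprocessed free-field record u_k
b_true = np.exp(-0.25 * np.arange(M))            # hypothetical transfer function b_m
y = np.convolve(u, b_true)                       # obstacle-field record under the MA model (62), zero-filled

# Sample correlations with zero-filling outside the records, as assumed for (71)-(73)
R_ff = np.array([np.dot(u[:N - k], u[k:]) for k in range(M)])   # autocorrelation of u_k, lags 0..M-1
R_gf = np.array([np.dot(u, y[k:k + N]) for k in range(M)])      # crosscorrelation of y_k with u_k

# (71): b = Phi_ff^{-1} r_gf, exploiting the Toeplitz structure of (72)
b_hat = solve_toeplitz(R_ff, R_gf)
print(np.max(np.abs(b_hat - b_true)))            # near machine precision here: b_m recovered from the correlations
```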

6. Example: DA model, computer simulated data

In the discussion of the DA model, it was pointed out that the noise sequence v(t) in (48) is not available during the identification. Eykhoff [2] has suggested an iterative technique that proceeds as follows. Consider the PZ model with noise sequence v(t) as described in (46). Minimize v(t) in the least squares sense to produce estimates of A and B, symbolized Â and B̂. Substitute these estimates into (46) to give an estimate of v(t), viz:

    v̂(t) = Â y(t) − B̂ u(t).                                                    (74)

Next, fit an autoregression D in (48) to the noise estimate v̂(t) to produce a white noise estimate ê(t). Filter the original data with the filter D, as in (51) and (52). Next repeat the process using the filtered data in (46). This procedure is continued until the estimates ê(t) are, with a high degree of confidence, white noise.

This iterative DA model has been tried on simple computer-simulated data. Consider a fourth-order system with delta function input. The output h_k is given by

    h_k = Σ_{n=1}^{4} R_n e^{s_n k}                                             (75)

where

    R_1 = R_2 = −0.5,                                                           (76)

    R_3 = R_4 = 0.5,                                                            (77)

    s_1 = s_2* = −0.035 + i0.25,                                                (78)

    s_3 = s_4* = −0.035 + i0.50.                                                (79)

The data h_k calculated from (75) are subjected to 0.1 percent zero mean Gaussian noise from a random number generator. The noisy data is then applied to an iterative DA algorithm constructed by the author. An example of the results is given in Table 1.

The middle column gives the identified poles and residues after a least squares identification (Prony's method), using 98 data points. The order of the system is overestimated by a factor of two in order to allow a fit to the noise on the data. Note that the least squares estimation produces fair results for the residues and the imaginary parts of the pole locations. The damping factors, given in the real part of the pole locations, are badly in error. The right column in Table 1 gives the DA model results after six iterations through a fifth order autoregressive filter used to whiten the residual estimates. Note the improvement in all aspects of the identification, particularly in the real part of the pole locations. During the running of the iterations, it was noted that an estimate of the autocorrelation of the residuals indicated that the least squares identification produced non-white noise, but that successive runs through the filter whitened the residuals until, after six iterations, the residuals were essentially white. The test for whiteness was that suggested by Jenkins and Watts [17].

A considerable amount of effort was expended testing the iterated DA algorithm against a variety of computer-simulated data. In some cases the results were excellent, but in other cases the algorithm failed to produce convergence to white residuals. Indeed, for some selections of noise model order, the results oscillate as a function of iteration number.

Table 1
DA model versus unfiltered pole-zero model

           Actual                      Pole-zero model                    DA model
   Residues        Poles         Residues         Poles           Residues           Poles
   Real   Imag   Real    Imag    Real     Imag    Real     Imag   Real      Imag     Real     Imag
   0.0    0.0                    0.0032   0.014   -0.34    1.3    -0.0010   0.0044   -0.26    1.1
   0.0    0.0                    0.0032  -0.014   -0.34   -1.3    -0.0010  -0.0044   -0.26   -1.1
   0.0    0.0                    0.0032   0.0013  -0.17    2.7     0.0033   0.0059   -0.28    2.8
   0.0    0.0                    0.0032  -0.0013   0.17   -2.7     0.0033  -0.0059   -0.28   -2.8
   0.50   0.0   -0.035   0.50    0.59    -0.086   -0.065   0.51    0.52    -0.00027  -0.038   0.49
   0.50   0.0   -0.035  -0.50    0.59     0.086   -0.065  -0.51    0.52     0.00027  -0.038  -0.49
  -0.50   0.0   -0.035   0.25   -0.61     0.11    -0.070   0.27   -0.53     0.0020   -0.033   0.26
  -0.50   0.0   -0.035  -0.25   -0.61     0.11    -0.070  -0.27   -0.53    -0.0020   -0.033  -0.26
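The key step in each pass of the iteration is the autoregressive whitening of the residual estimate, eq. (48). The sketch below isolates that step on synthetic data; the noise model, its order, and the whiteness check via the sample autocorrelation are hypothetical choices for illustration and do not reproduce the author's algorithm or Table 1.

```python
import numpy as np

rng = np.random.default_rng(4)

# Non-white noise v(t) generated by D(q^-1) v(t) = e(t), with a hypothetical second-order D
d_true = np.array([1.0, -1.2, 0.5])
e = rng.normal(size=5000)
v = np.zeros_like(e)
for t in range(len(e)):
    v[t] = e[t] - d_true[1] * (v[t - 1] if t >= 1 else 0.0) \
                - d_true[2] * (v[t - 2] if t >= 2 else 0.0)

# Autoregressive fit to the noise, eq. (48): estimate D by least squares
p = 2
Y = v[p:]
Phi = np.column_stack([-v[p - m:len(v) - m] for m in range(1, p + 1)])
d_hat = np.r_[1.0, np.linalg.lstsq(Phi, Y, rcond=None)[0]]

# Whitened residuals e_hat = D_hat v; their sample autocorrelation should be near zero at nonzero lags
e_hat = np.convolve(d_hat, v)[p:len(v)]
rho = np.array([np.dot(e_hat[:-k], e_hat[k:]) for k in range(1, 11)]) / np.dot(e_hat, e_hat)
print(d_hat)                  # close to [1, -1.2, 0.5]: the whitening filter is recovered
print(np.max(np.abs(rho)))    # small (order 1/sqrt(record length)): residuals essentially white
```

In the full iteration this estimated D filters the input and output records as in (51) and (52) before A and B are refit, and the autocorrelation check plays the role of the whiteness test of Jenkins and Watts [17].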

7. Discussion

Parametric identification of a transient electromagnetic system involves the following issues: selection of a model, design of the experiment, data handling, selection and running of the identification algorithm, and application of the results. The model selection issue in electromagnetics has to date been largely ignored. Primary emphasis has been on the difference equation model and the resulting complex exponential series solutions.


There is no reason to believe that this model provides the best method of determining physical properties of the electromagnetic process. Indeed, the use of the complex exponential series is an attempt at a 'global' representation of properties, in the sense that the user is attempting to find the natural resonances of the electromagnetic scatterer. Ray-optical electromagnetic solutions, on the other hand, emphasize the local structure of the scatterer and perhaps provide a guide to an alternate class of models. Indeed, there is reason to believe that models incorporating both global and local characteristics may be necessary to obtain any degree of success with actual data.

Design of the experiment [23] is concerned with making the experiment maximally informative. This issue involves such considerations as the 'best' input signal, the 'proper' sampling rate, and 'intelligent' model selection. At present, there has been little attention to these considerations in

electromagnetics. In particular, in some electromagnetic experiments it is possible to select the form of the input signal. Since parameter bias has been linked to the length of the data record, it seems desirable to investigate classes of electromagnetic input signals that would allow application of asymptotic convergence theory.

Data handling has traditionally been the role of the signal processor. In conjunction with system identification, it involves the preprocessing of the data before the data is subjected to the identification algorithm. Electromagnetic researchers, preoccupied primarily with Prony's method, are realizing that preprocessing often makes a profound difference in the identification results. Essential to any electromagnetic identification procedure is the presence of a good set of signal processing algorithms.

Selection and running of an identification algorithm is at the heart of the electromagnetic


identification process. Because of the emphasis on identification in control theory, there are many algorithms from which to choose. Electromagnetic researchers have tried few of them to date and there is much to be learned from researchers directly involved in the identification business. Prony's method is an autoregressive model which assumes a white noise input, an unrealistic assumption in electromagnetic identification. There are input-output models with noise processing available. These so-called 'black-box' models, however, are 'ignorant' in the sense that they provide no mechanism for the incorporation of any known features of the system into the model. It might be argued that Prony's method identifies the poles of an electromagnetic system. This statement is not precise. Prony's method identifies the poles in the data. As yet no one has discovered how to incorporate any electromagnetic features into the models.

It is stressed that the application of the results of the identification forms an essential part of the entire identification process. Indeed, the planned utilization of results can be the single decisive factor in all issues. For example, if it has been decided that the system characteristic poles in the complex frequency domain are the important result, this decision might lead to one particular choice of algorithm. If, on the other hand, one desires a catalog of signals scattered from different targets, the choice of algorithms could be radically different. The MA model considered earlier in this paper is an example.

Some possibilities for future work in this subject are as follows:
(1) Extension of Marin's work to lossy objects.
(2) Obtaining electromagnetic data under precisely controlled experimental conditions. A good electromagnetic transient range might form a vehicle for noise studies.
(3) Study of the limitations of transient identification. Transient identification remains one of the single most perplexing problems. To date, all convergence proofs for parameters are based on large numbers of data so that one can


apply asymptotic stochastic theory. In the transient case, there is usually a severely limited number of data available.
(4) Limitations of black-box models. Black-box models identify poles and zeros in data. It is important to learn how these relate to the poles and zeros of the actual electromagnetic system. This is an especially important issue when the electromagnetic scatterer has a complex shape.
(5) Identification using models that contain electromagnetic information. In this area, the researcher must begin to deal with inclusion of electromagnetic information in the identification algorithm. Unfortunately, such inclusion will surely involve non-linear identification techniques.

Acknowledgment

A portion of this work was accomplished during two periods when the author was a Summer Visitor at Lawrence Livermore National Laboratory, in a department under the direction of E.K. Miller. H.S. Cabayan made possible the transient range studies. E.J. Bogdan and D.M. Wythe performed the bulk of the measurements. J.V. Candy introduced the author to the signal processing algorithms. D.B. Harris wrote the MA model algorithm, and G.A. Clark and E.J. Bogdan ran many examples from the transient range. During 1981, the author had several meetings with L. Ljung, who was at that time a Visiting Professor at Stanford University. Professor Ljung's insights in identification have influenced the author's view of the subject. Finally, the author would like to thank K.J. Langenberg, whose interest led to the presentation of these ideas at the URSI General Assembly, August 1981, and subsequently to the writing of this paper.

References

[1] H.B. Mann and A. Wald, "On the statistical treatment of linear stochastic difference equations", Econometrica 11, 173-320 (1943).
[2] P. Eykhoff, System Identification, Interscience, New York (1974).
[3] K.J. Astrom and P. Eykhoff, "System identification--a survey", Automatica 7, 123-162 (1971).
[4] L. Ljung, "System identification", Department of Electrical Engineering, Linkoping University, S-581 83 Linkoping, Sweden (1981).
[5] L. Ljung and K. Glover, "Frequency domain versus time domain methods in system identification", Automatica 17, 71-86 (1981).
[6] L. Ljung, "Convergence analysis of parametric identification methods", IEEE Trans. Automat. Control 23 (5), 770-783 (1978).
[7] L. Marin, "Natural-mode representation of transient scattered fields", IEEE Trans. Antenna Propagation 21, 809-818 (1973).
[8] R. Prony, "Essai experimental et analytique sur les lois de la dilatabilite de fluides elastiques et sur celles de la force expansive de la vapeur de l'alkool, a differentes temperatures", J. l'Ecole Polytech. (Paris) 1 (2), 24-76 (1795); translation available in J.R. Auton and M.L. VanBlaricum, "Investigation of procedures for automatic resonance extraction from noisy transient electromagnetics data", Final Report, Contract N00014-80-C-0299, Office of Naval Research, Code 427, Arlington, VA 22217 (1981).
[9] D.L. Moffatt and R.K. Mains, "Detection and discrimination of radar targets", IEEE Trans. Antenna Propagation 23, 358-367 (1975).
[10] M.L. VanBlaricum and R. Mittra, "A technique for extracting the poles and residues of a system directly from its transient response", IEEE Trans. Antenna Propagation 23 (6), 777-781 (1975).
[11] E.K. Miller, "Natural mode methods in frequency and time domain analysis", in: Theoretical Methods for Determining the Interaction of Electromagnetic Waves with Structures, Sijthoff and Noordhoff, Netherlands (1981).
[12] D.G. Dudley, "Parametric modeling of transient electromagnetic systems", Radio Sci. 14 (3), 387-396 (1979).
[13] J.B. Grant, "The singularity expansion method applied to plane wave scattering from a lossy slab on a conducting half-space", Masters Thesis, University of Arizona, Tucson, AZ 85721 (1983).
[14] J.B. McLeod, "The analytic continuation of the Green's function associated with obstacle scattering", Quart. J. Math. Oxford 18 (2), 169-180 (1967).
[15] A.S. Householder, "On Prony's method of fitting exponential decay curves and multiple-hit survival curves", ORNL-455, Oak Ridge National Laboratory, Oak Ridge, TN (1950).
[16] A. Papoulis, Signal Analysis, McGraw-Hill, New York (1977) 124-133.
[17] G.M. Jenkins and D.G. Watts, Spectral Analysis and its Applications, Holden-Day, San Francisco (1968).
[18] P.E. Wellstead, "Non-parametric methods of system identification", Automatica 17 (1), 55-69 (1981).
[19] J. Makhoul, "Linear prediction--a tutorial review", Proc. IEEE 63 (4), 561-580 (1975).
[20] D.W. Clarke, "Generalized least squares estimation of the parameters of a dynamical model", Proc. 1st IFAC Symposium on Identification in Automatic Control Systems, Academia, Prague (1967).
[21] D.B. Harris, "EM waveform extrapolation", L-156, Lawrence Livermore National Laboratory, Livermore, CA 94550 (1981).
[22] S. Treitel and E.A. Robinson, "The design of high resolution digital filters", IEEE Trans. Geosci. Electron. 4 (1), 25-38 (1966).
[23] G.C. Goodwin and R.L. Payne, Dynamic System Identification: Experiment Design and Data Analysis, Academic Press, New York (1977) Chapter 6.