
Optimum Recovery of a Continuous Signal from Two Discrete Channels

by JAMES C. HUNG

Electrical Engineering Department, University of Tennessee, Knoxville, Tennessee

ABSTRACT: This paper deals with the recovery of a continuous signal from received discrete data. Very often, two sets of discrete data, obtained independently, are available for the recovery of a continuous signal. The two sets of discrete data have, in general, different data-rates. A better signal recovery will be achieved if both sets of data are employed in an optimum way, rather than when only one of the two sets is used. Optimum systems using both sets of discrete data are, in general, time-varying, as opposed to the time-invariant property of systems using only one set of discrete data, even though the signals and noise are stationary. A procedure for the optimum recovery of a continuous signal using two sets of discrete input-data will be developed, and methods for evaluating the mean square-error will be shown. An example will also be given to illustrate the method, and the quality, of the double-input system as compared with that of the single-input system.

Introduction

In communication and control systems the recovery of a continuous random signal from received discrete data is an important problem. Discrete data occur in many practical situations. There are situations in which the data-gathering devices themselves are capable of producing only discrete sets of numbers rather than a continuous variable. There are also situations in which practical advantages are gained by transmitting and processing only a sequence of numbers rather than a continuous variable. The recovery of a continuous signal from discrete data is further complicated by the random noise in the transmitting and processing channels. Therefore, the signal recovery is composed of two operations, namely, extrapolation (or interpolation) and noise removal. Very often the available discrete data appear in several different forms. For instance, in guidance and control systems, measurements of a certain quantity may be taken by two different methods having both different data-rates and different accuracies, such as separately obtained measurements of a position signal and its first derivative. The best signal recovery will be obtained if both measurements are used and their results are weighted and combined in an optimum way.


In the case of single-input optimum signal recovery, the problem was solved by Franklin (1) in the Wiener sense (2). It is important to note that the theory of double-input optimum signal recovery cannot be obtained by a simple extension of the theory for the single-input case. The basic difference is the following. For both single-input and double-input systems having a single input-data-rate, the optimum system is time-invariant if the input signals are stationary. But for the double-input system whose two input-data-rates are different, the optimum system is time-varying even though the inputs are stationary (3). In fact, the transfer function of the system is a function of two independent variables. In this paper a procedure of optimum signal recovery by the double-input system having two different input rates will be developed, the method of error analysis will be shown, and an example will be given to illustrate the method. It will also be shown in the example how the quality of the double-input system compares with that of the single-input systems. Three assumptions are used for the development of the theory: first, the time series representing the signal and noise are stationary and have rational spectral density functions; second, the performance criterion used is to minimize the continuous statistical mean square-error between the system output and the ideal signal; and third, the system operation is linear. These assumptions can be justified for many practical situations.

FIG. 1. Double-reception discrete data optimum signal recovery.

Mathematical Formulation

The hypothetical block diagram of the system is shown in Fig. 1. In this figure, r is the desired signal to be recovered. M_1 and M_2 represent the characteristics of the two transmitting and processing channels, which generate the random noise n_1 and n_2, respectively. r_1 and r_2 are the hypothetical noise-contaminated continuous signals. The actual available discrete data at the outputs of the channels are denoted by r_1^* and r_2^*. G_1 and G_2 are the optimum systems to be synthesized. The error e, whose mean squared value is to be minimized, is the difference between r and the actual system output c. Figure 2 defines the time variables used for the development of the theory. T_1 and T_2 are two different sampling periods. For simplicity of the derivation,

Vol. 277, No. 5, May 1964


FIG. 2. Definition of time variables.

it is assumed that the ratio of T_1 to T_2 is an integer. The time of observation is denoted by t, which is τ_1 and τ_2 seconds behind the last sampling instants of the channels having sampling periods T_1 and T_2, respectively. Let the impulse responses of G_1 and G_2 be

$$g_1(nT_1) = g_1(nT_1, \tau_1, \tau_2), \qquad g_2(iT_2) = g_2(iT_2, \tau_1, \tau_2)$$

where g_1(nT_1) is a function of n, τ_1 and τ_2, while g_2(iT_2) is a function of i, τ_1 and τ_2. Notice that between τ_1 and τ_2 only one is independent, since τ_2 is known once τ_1 is given. The output of the system is

$$c(t) = \sum_{n=0}^{\infty} r_1(t - \tau_1 - nT_1)\,g_1(nT_1) + \sum_{i=0}^{\infty} r_2(t - \tau_2 - iT_2)\,g_2(iT_2) \tag{1}$$

and the output error is

$$e(t) = r(t) - c(t). \tag{2}$$
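The reconstruction sum of Eq. 1 is straightforward to state in code. The sketch below (hypothetical names throughout; `g1` and `g2` stand for whatever time-varying weight functions the synthesis procedure supplies) evaluates c(t) from the two sample streams:

```python
import math

def recover(t, ch1, ch2, T1, T2, g1, g2):
    """Evaluate the double-channel output of Eq. (1).

    ch1[k] is the channel-1 sample taken at time k*T1, ch2[k] the
    channel-2 sample taken at k*T2; g1(n, tau1, tau2) and g2(i, tau1, tau2)
    are the (time-varying) filter weights of G1 and G2.
    """
    k1 = int(math.floor(t / T1))   # index of the latest channel-1 sample
    k2 = int(math.floor(t / T2))   # index of the latest channel-2 sample
    tau1 = t - k1 * T1             # time elapsed since that sample
    tau2 = t - k2 * T2
    c = 0.0
    for n in range(k1 + 1):        # n = 0 weights the most recent sample
        c += ch1[k1 - n] * g1(n, tau1, tau2)
    for i in range(k2 + 1):
        c += ch2[k2 - i] * g2(i, tau1, tau2)
    return c
```

With g_1 picking only the most recent channel-1 sample and g_2 ≡ 0, the output is simply the latest slow-rate sample — the kind of degenerate weighting the optimum solution approaches when that channel is noise-free.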

Recalling that in the theory of random processes the correlation function of two stationary time functions x(t) and y(t) is (9, 10)

$$\phi_{xy}(\tau) = \text{ensemble average of } [x(t)\,y(t+\tau)].$$

Taking the ensemble average of the square of Eq. 2 results in the mean square-error

$$\begin{aligned}
\phi_{ee}(0) = \phi_{rr}(0) &- 2\sum_n \phi_{r_1 r}(\tau_1 + nT_1)\,g_1(nT_1) - 2\sum_i \phi_{r_2 r}(\tau_2 + iT_2)\,g_2(iT_2) \\
&+ \sum_n \sum_m \phi_{r_1 r_1}(nT_1 - mT_1)\,g_1(nT_1)\,g_1(mT_1) \\
&+ 2\sum_n \sum_i \phi_{r_1 r_2}(\tau_1 + nT_1 - \tau_2 - iT_2)\,g_1(nT_1)\,g_2(iT_2) \\
&+ \sum_i \sum_j \phi_{r_2 r_2}(iT_2 - jT_2)\,g_2(iT_2)\,g_2(jT_2).
\end{aligned} \tag{3}$$

Journal of The Franklin Institute


The problem of optimum synthesis, then, is to find g_1(nT_1) and g_2(iT_2) such that Eq. 3 is a minimum.

Necessary and Sufficient Condition

To minimize the mean square-error, the method of variational calculus (4) is used. Let Δφ_ee(0) be the variation of φ_ee(0), and let Δg_1(nT_1) and Δg_2(iT_2) be the variations of g_1(nT_1) and g_2(iT_2), respectively. If g_1(nT_1) and g_2(iT_2) are the optimum system impulse responses which minimize φ_ee(0), then the variation Δφ_ee(0), as a function of the variations Δg_1(nT_1) and Δg_2(iT_2), should be zero. That is,

$$\begin{aligned}
\Delta\phi_{ee}(0) = 2\sum_n \Delta g_1(nT_1)\Big[&\sum_m \phi_{r_1 r_1}(nT_1 - mT_1)\,g_1(mT_1) \\
&+ \sum_i \phi_{r_1 r_2}(\tau_1 + nT_1 - \tau_2 - iT_2)\,g_2(iT_2) - \phi_{r_1 r}(\tau_1 + nT_1)\Big] \\
+ 2\sum_i \Delta g_2(iT_2)\Big[&\sum_n \phi_{r_1 r_2}(\tau_1 + nT_1 - \tau_2 - iT_2)\,g_1(nT_1) \\
&+ \sum_j \phi_{r_2 r_2}(iT_2 - jT_2)\,g_2(jT_2) - \phi_{r_2 r}(\tau_2 + iT_2)\Big] = 0.
\end{aligned} \tag{4}$$

Since Eq. 4 must hold for any physically realizable Δg_1(nT_1) and Δg_2(iT_2), the expressions inside the brackets must equal zero individually:

$$\sum_m \phi_{r_1 r_1}(nT_1 - mT_1)\,g_1(mT_1) + \sum_i \phi_{r_1 r_2}(\tau_1 + nT_1 - \tau_2 - iT_2)\,g_2(iT_2) = \phi_{r_1 r}(\tau_1 + nT_1), \quad n \ge 0 \tag{5}$$

$$\sum_n \phi_{r_1 r_2}(\tau_1 + nT_1 - \tau_2 - iT_2)\,g_1(nT_1) + \sum_j \phi_{r_2 r_2}(iT_2 - jT_2)\,g_2(jT_2) = \phi_{r_2 r}(\tau_2 + iT_2), \quad i \ge 0. \tag{6}$$

The above derivation has shown that the summation Eqs. 5 and 6 impose the necessary condition for the optimum g_1(nT_1) and g_2(iT_2). The condition is also sufficient, since the second variation of φ_ee(0) is never negative when Eqs. 5 and 6 are satisfied. Solution of the two equations yields the optimum g_1(nT_1) and g_2(iT_2).
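Equations 5 and 6 form an infinite system of linear equations in the filter weights. As a purely numerical illustration (not the paper's transform method), one can truncate the sums at finite horizons and solve the resulting linear system directly. The sketch below does this for the correlation functions of the later example — φ_rr(τ) = e^{−2|τ|}, a noise-free slow channel, and unit-variance white noise on the fast channel; the horizons N and M are arbitrary choices:

```python
import numpy as np

# Finite-horizon sketch of the normal equations (5)-(6): truncate both sums
# and solve the resulting symmetric linear system for g1(nT1) and g2(iT2).
T1, T2 = 1.0, 0.5
N, M = 6, 12                  # truncation horizons (an assumption)

def phi_rr(tau):
    # Signal autocorrelation of the paper's example: phi_rr(tau) = e^{-2|tau|}
    return np.exp(-2.0 * abs(tau))

def solve_weights(tau1, tau2):
    A = np.zeros((N + M, N + M))
    b = np.zeros(N + M)
    for n in range(N):
        for m in range(N):
            A[n, m] = phi_rr((n - m) * T1)                       # phi_r1r1
        for i in range(M):
            A[n, N + i] = phi_rr(tau1 + n * T1 - tau2 - i * T2)  # phi_r1r2
        b[n] = phi_rr(tau1 + n * T1)                             # phi_r1r
    for i in range(M):
        for n in range(N):
            A[N + i, n] = phi_rr(tau1 + n * T1 - tau2 - i * T2)
        for j in range(M):
            # phi_r2r2: signal part plus unit white-noise variance on the diagonal
            A[N + i, N + j] = phi_rr((i - j) * T2) + (1.0 if i == j else 0.0)
        b[N + i] = phi_rr(tau2 + i * T2)                         # phi_r2r
    g = np.linalg.solve(A, b)
    return g[:N], g[N:]       # g1(nT1), g2(iT2)
```

At τ_1 = τ_2 = 0 the solution accepts the latest noise-free slow-rate sample and rejects the noisy channel entirely (g_1 = [1, 0, …], g_2 ≈ 0), consistent with the behavior of Eqs. 36-37 in the example.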

Solution of Summation Equations

The form of Eqs. 5 and 6 suggests that their solution may be obtained by transforming the equations to, and solving them in, the frequency domain. In both Eqs. 5 and 6, first transpose the right-side terms to the left side, and then denote the left sides of the resulting equations by f_1(nT_1) and f_2(iT_2), respectively. Then

$$f_1(nT_1) = 0, \quad n \ge 0 \tag{7}$$

$$f_2(iT_2) = 0, \quad i \ge 0. \tag{8}$$

Remember it has been assumed that T_2 = T_1/a, where a is an integer. Use $\mathcal{Z}\{\ \}$ to represent the z-transformation having sampling period T_1, and $\mathcal{Z}_a\{\ \}$ to represent the z_a-transformation having sampling period T_1/a. Taking the two-sided z-transform of Eq. 5 with respect to n, and the z_a-transform of Eq. 6 with respect to i, gives

$$\Phi_{r_1 r_1}(z)\,G_1(z) + \mathcal{Z}\{\Phi_{r_1 r_2}(s)\,e^{(\tau_1-\tau_2)s}\,G_2(z_a)\} = \mathcal{Z}\{\Phi_{r_1 r}(s)\,e^{s\tau_1}\} + zF_1(z) \tag{9}$$

$$\mathcal{Z}_a\{\Phi_{r_2 r_1}(s)\,e^{(\tau_2-\tau_1)s}\}\,G_1(z) + \Phi_{r_2 r_2}(z_a)\,G_2(z_a) = \mathcal{Z}_a\{\Phi_{r_2 r}(s)\,e^{s\tau_2}\} + z_a F_2(z_a) \tag{10}$$

where s is the Laplace-transform variable, z is the z-transform variable, and z_a is the z_a-transform variable. Two-sided transformations are necessary, since the correlation functions are non-zero for negative values of n and i. The term zF_1(z), which is the z-transform of f_1(nT_1), has all its poles outside the unit-circle of the z-plane because of Eq. 7. Similarly, the term z_a F_2(z_a), which is the z_a-transform of f_2(iT_2), has all its poles outside the unit-circle of the z_a-plane because of Eq. 8.

Multiplying Eq. 10 by

$$\frac{\Phi_{r_1 r_2}(s)\,e^{(\tau_1-\tau_2)s}}{\Phi_{r_2 r_2}(z_a)},$$

z-transforming the whole expression, subtracting the result from Eq. 9, and multiplying the difference by z^{-1} yields

$$\begin{aligned}
z^{-1}\Big[\Phi_{r_1 r_1}(z) - \mathcal{Z}\Big\{\frac{\Phi_{r_1 r_2}(s)\,e^{(\tau_1-\tau_2)s}}{\Phi_{r_2 r_2}(z_a)}\,\mathcal{Z}_a\big[\Phi_{r_2 r_1}(s)\,e^{(\tau_2-\tau_1)s}\big]\Big\}\Big]\,G_1(z)
&= z^{-1}\mathcal{Z}\{\Phi_{r_1 r}(s)\,e^{s\tau_1}\} \\
- z^{-1}\mathcal{Z}\Big\{\frac{\Phi_{r_1 r_2}(s)\,e^{(\tau_1-\tau_2)s}}{\Phi_{r_2 r_2}(z_a)}\,\mathcal{Z}_a\big[\Phi_{r_2 r}(s)\,e^{s\tau_2}\big]\Big\}
&+ F_1(z) - z^{-1}\mathcal{Z}\Big\{\frac{\Phi_{r_1 r_2}(s)\,e^{(\tau_1-\tau_2)s}}{\Phi_{r_2 r_2}(z_a)}\,z_a F_2(z_a)\Big\}.
\end{aligned} \tag{11}$$

The bracketed factor on the left-hand side of Eq. 11 is symmetrical with respect to z and z^{-1}. Therefore, this bracketed factor can be expressed as the product of two factors

$$Y(z)\,Y(z^{-1}) = \Phi_{r_1 r_1}(z) - \mathcal{Z}\Big\{\frac{\Phi_{r_1 r_2}(s)\,e^{(\tau_1-\tau_2)s}}{\Phi_{r_2 r_2}(z_a)}\,\mathcal{Z}_a\big[\Phi_{r_2 r_1}(s)\,e^{(\tau_2-\tau_1)s}\big]\Big\} \tag{12}$$


where Y(z) has all its poles and zeros inside the unit-circle of the z-plane, while Y(z^{-1}) has all its poles and zeros outside the unit-circle. Substituting Eq. 12 into Eq. 11 and dividing the expression by Y(z^{-1}) gives

$$\begin{aligned}
z^{-1}Y(z)\,G_1(z) = \frac{z^{-1}}{Y(z^{-1})}\,\mathcal{Z}\{\Phi_{r_1 r}(s)\,e^{s\tau_1}\}
&- \frac{z^{-1}}{Y(z^{-1})}\,\mathcal{Z}\Big\{\frac{\Phi_{r_1 r_2}(s)\,e^{(\tau_1-\tau_2)s}}{\Phi_{r_2 r_2}(z_a)}\,\mathcal{Z}_a\big[\Phi_{r_2 r}(s)\,e^{s\tau_2}\big]\Big\} \\
&+ \frac{F_1(z)}{Y(z^{-1})} - \frac{z^{-1}}{Y(z^{-1})}\,\mathcal{Z}\Big\{\frac{\Phi_{r_1 r_2}(s)\,e^{(\tau_1-\tau_2)s}}{\Phi_{r_2 r_2}(z_a)}\,z_a F_2(z_a)\Big\}.
\end{aligned} \tag{13}$$

In Eq. 13, the term on the left-hand side has all its poles inside the unit-circle, since G_1(z) is a stable function and Y(z) has only inside poles by definition. The first term on the right-hand side of Eq. 13, which is completely known, may have poles both inside and outside the unit-circle. The partial-fraction method can be applied to this term to separate it into two parts, one having only inside poles, the other only outside poles. The second term on the right-hand side has all its poles outside the unit-circle. The last term on the right-hand side of Eq. 13,

$$\frac{z^{-1}}{Y(z^{-1})}\,\mathcal{Z}\Big\{\frac{\Phi_{r_1 r_2}(s)\,e^{(\tau_1-\tau_2)s}}{\Phi_{r_2 r_2}(z_a)}\,z_a F_2(z_a)\Big\},$$

may have poles both inside and outside the unit-circle. This term is not completely known, since F_2(z_a) is not known. However, all the inside poles of this term are known, because F_2(z_a) has only outside poles and is the only unknown factor in the term. Therefore, the part of this last term having poles inside the unit-circle may be expressed as

$$\sum_i \frac{A_i}{z - \alpha_i} = \sum_i \frac{A_i z^{-1}}{1 - \alpha_i z^{-1}} \tag{14}$$

where |α_i| < 1. Extracting all the terms of Eq. 13 whose poles are inside the unit-circle and dividing the entire expression by z^{-1}Y(z) yields the optimum transfer function G_1(z) as

$$G_1(z) = \frac{z}{Y(z)}\Big[\frac{z^{-1}}{Y(z^{-1})}\Big(\mathcal{Z}\{\Phi_{r_1 r}(s)\,e^{s\tau_1}\} - \mathcal{Z}\Big\{\frac{\Phi_{r_1 r_2}(s)\,e^{(\tau_1-\tau_2)s}}{\Phi_{r_2 r_2}(z_a)}\,\mathcal{Z}_a\big[\Phi_{r_2 r}(s)\,e^{s\tau_2}\big]\Big\}\Big)\Big]_{\rm in} + \frac{z}{Y(z)}\sum_i \frac{A_i z^{-1}}{1 - \alpha_i z^{-1}} \tag{15}$$

where the symbol [ · ]_in indicates that only the part of the bracketed terms whose poles are inside the unit-circle is taken. To obtain the optimum transfer function G_2(z_a), let

$$X(z_a)\,X(z_a^{-1}) = \Phi_{r_2 r_2}(z_a) \tag{16}$$


where all the poles and zeros of X(z_a) are inside the unit-circle of the z_a-plane, while all those of X(z_a^{-1}) are outside the unit-circle. Substituting Eq. 16 into Eq. 10, dividing both sides by X(z_a^{-1}), and rearranging the terms gives

$$z_a^{-1}X(z_a)\,G_2(z_a) = \frac{z_a^{-1}}{X(z_a^{-1})}\Big[\mathcal{Z}_a\{\Phi_{r_2 r}(s)\,e^{s\tau_2}\} - \mathcal{Z}_a\{\Phi_{r_2 r_1}(s)\,e^{(\tau_2-\tau_1)s}\}\,G_1(z)\Big] + \frac{z_a^{-1}}{X(z_a^{-1})}\,z_a F_2(z_a).$$

Therefore, the optimum transfer function G_2(z_a) is

$$G_2(z_a) = \frac{z_a}{X(z_a)}\Big[\frac{z_a^{-1}}{X(z_a^{-1})}\Big(\mathcal{Z}_a\{\Phi_{r_2 r}(s)\,e^{s\tau_2}\} - \mathcal{Z}_a\{\Phi_{r_2 r_1}(s)\,e^{(\tau_2-\tau_1)s}\}\,G_1(z)\Big)\Big]_{\rm in}. \tag{17}$$

Now it remains only to determine the constants A_i contained in Eq. 15. This is done by substituting both Eqs. 15 and 17 into Eq. 9 and comparing coefficients of the terms having like poles. It is interesting to note that if one of the two channels does not exist, G_1(z) or G_2(z_a) vanishes. Then Eqs. 15 and 17 become

$$\big[G_1(z)\big]_{G_2 = 0} = \frac{z}{Y(z)}\Big[\frac{z^{-1}}{Y(z^{-1})}\,\mathcal{Z}\{\Phi_{r_1 r}(s)\,e^{s\tau_1}\}\Big]_{\rm in} \tag{18}$$

or

$$\big[G_2(z_a)\big]_{G_1 = 0} = \frac{z_a}{X(z_a)}\Big[\frac{z_a^{-1}}{X(z_a^{-1})}\,\mathcal{Z}_a\{\Phi_{r_2 r}(s)\,e^{s\tau_2}\}\Big]_{\rm in}. \tag{19}$$

Eqs. 18 and 19 are the optimum transfer functions for the single-input systems.

Error Analysis

By substituting the necessary and sufficient condition, Eqs. 5 and 6, into Eq. 3, the mean square-error of the optimum system is obtained:

$$\phi_{ee}(0) = \phi_{rr}(0) - \sum_n \phi_{r_1 r}(\tau_1 + nT_1)\,g_1(nT_1) - \sum_i \phi_{r_2 r}(\tau_2 + iT_2)\,g_2(iT_2). \tag{20}$$

Equation 20 is an equation in the time domain. Its frequency-domain equivalent is

$$\phi_{ee}(0) = \frac{1}{2\pi j}\int_{-j\infty}^{j\infty}\Phi_{rr}(s)\,ds - \frac{1}{2\pi j}\oint G_1(z)\,\mathcal{Z}\big[\Phi_{r r_1}(s)\,e^{-s\tau_1}\big]\,\frac{dz}{z} - \frac{1}{2\pi j}\oint G_2(z_a)\,\mathcal{Z}_a\big[\Phi_{r r_2}(s)\,e^{-s\tau_2}\big]\,\frac{dz_a}{z_a}. \tag{21}$$

When correlation functions are available, Eq. 20 is used to find the mean square-error; when spectral density functions are available, Eq. 21 should be used.


Example

Consider a case where the slow-rate channel is noise-free, while the fast-rate channel is noisy. The spectral density functions of the signals and noise shown in Fig. 1 are

$$\Phi_{rr}(s) = \Phi_{r_1 r_1}(s) = \Phi_{r_1 r_2}(s) = \frac{4}{4 - s^2}, \qquad \Phi_{n_1 n_1}(s) = 0, \qquad \Phi_{n_2 n_2}(s) = 1 \ \ \text{(white noise)}. \tag{22}$$

The sampling periods T_1 and T_2 are 1 and ½ second, respectively. Equations 15 and 17 are used to obtain the optimum transfer functions. Notice that the two cases, τ_1 − τ_2 = 0 and τ_1 − τ_2 = T_2, for the optimum G_1(z) and G_2(z_a) must be calculated separately, because the modified z_a-transform method applies only to cases where the time advance is a fraction of the sampling period T_2.

Consider first the case when τ_1 − τ_2 = 0. Since the first channel is noise-free while the noise in the second channel is white, Eq. 12 reduces to

$$Y(z)\,Y(z^{-1}) = \mathcal{Z}\Big\{\frac{\Phi_{rr}(z_a)\,\Phi_{n_2 n_2}(z_a)}{\Phi_{r_2 r_2}(z_a)}\Big\}. \tag{23}$$

Substituting the values of Eq. 22 into Eq. 23 yields

$$Y(z)\,Y(z^{-1}) = \frac{0.4642}{(1 - 0.03636z^{-1})(1 - 0.03636z)}. \tag{24}$$

Therefore,

$$Y(z) = \frac{0.682}{1 - 0.03636z^{-1}} \tag{25}$$

$$Y(z^{-1}) = \frac{0.682}{1 - 0.03636z}. \tag{26}$$

The inside pole α_i in the last term of Eq. 15 is the inside pole of the factor involving Φ_{rr}(s), which is

$$z = \alpha = 0.03636. \tag{27}$$
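The constants of Eq. 24 can be checked numerically. Under the assumption that Φ_rr corresponds to φ_rr(τ) = e^{−2|τ|}, the sequence whose z_a-spectrum is Φ_rr Φ_{n_2 n_2}/Φ_{r_2 r_2} has correlation Aβ^{|k|} at lag kT_2; re-sampling at T_1 = 2T_2 keeps only the even lags, which gives a z-spectrum with pole β² and numerator constant A(1 − β⁴):

```python
import math

# Numerical check of Eq. 24, assuming phi_rr(tau) = e^{-2|tau|} so that the
# sampled signal spectrum has pole c = e^{-2*T2}.
c = math.exp(-2.0 * 0.5)                   # T2 = 1/2 s
# Numerator of Phi_r2r2(z_a) = Phi_rr(z_a) + 1 is 2 - c(z_a + z_a^-1);
# factor it as K(1 - beta*z_a^-1)(1 - beta*z_a):
beta = (1.0 - math.sqrt(1.0 - c * c)) / c  # zero inside the unit circle
K = c / beta
# Phi_rr*Phi_n2n2/Phi_r2r2 then has correlation A*beta^|k| at lag k*T2, with
A = (1.0 - c * c) / (K * (1.0 - beta * beta))
# Re-sampling at T1 = 2*T2 keeps the even lags A*(beta^2)^|m|, whose
# z-spectrum is gain / ((1 - p*z^-1)(1 - p*z)):
p = beta * beta                            # pole of Eq. 24
gain = A * (1.0 - beta ** 4)               # numerator constant of Eq. 24
```

The computed pole and gain agree with the 0.03636 and 0.4642 of Eq. 24, and the square root of the gain reproduces the 0.682 of Eqs. 25-26.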

Substituting Eqs. 22, 25, 26 and 27 into Eq. 15 gives

$$G_1(z) = 0.85(0.1907\,b_1 + b_2) + 1.466A \tag{28}$$

where

$$b_1 = \sinh 2\tau_2 \tag{29}$$

$$b_2 = \sinh 2(T_2 - \tau_2), \tag{30}$$

and A is a constant to be determined later.


To find the optimum G_2(z_a), Eq. 16 is used, giving

$$X(z_a)\,X(z_a^{-1}) = \Phi_{r_2 r_2}(z_a) = 1.93\,\frac{(1 - 0.1907z_a^{-1})(1 - 0.1907z_a)}{(1 - 0.368z_a^{-1})(1 - 0.368z_a)}. \tag{31}$$

Hence,

$$X(z_a) = 1.39\,\frac{1 - 0.1907z_a^{-1}}{1 - 0.368z_a^{-1}} \tag{32}$$

$$X(z_a^{-1}) = 1.39\,\frac{1 - 0.1907z_a}{1 - 0.368z_a}. \tag{33}$$
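The factorization constants of Eqs. 31-33 follow from a quadratic. Assuming φ_rr(τ) = e^{−2|τ|}, the sampled signal spectrum has pole c = e^{−2T_2}; adding the unit white-noise spectrum leaves the numerator 2 − c(z_a + z_a^{−1}), which factors as K(1 − βz_a^{−1})(1 − βz_a):

```python
import math

# Factorization constants of Eqs. 31-33, assuming phi_rr(tau) = e^{-2|tau|}.
c = math.exp(-2.0 * 0.5)     # pole e^{-2*T2} of the sampled spectrum (0.368)
# K*beta = c and K*(1 + beta^2) = 2, so beta solves c*b^2 - 2b + c = 0;
# take the root inside the unit circle:
beta = (1.0 - math.sqrt(1.0 - c * c)) / c
K = c / beta
gain = math.sqrt(K)          # the 1.39 multiplying X(z_a) in Eq. 32
```

The values come out as c ≈ 0.368, β ≈ 0.1907, K ≈ 1.93 and √K ≈ 1.39, matching the constants quoted in Eqs. 31-33.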

Substituting Eqs. 22, 28, 32 and 33 into Eq. 17 gives

$$G_2(z_a) = \frac{0.0729\,b_1 - 0.707A}{z_a - 0.1907} \tag{34}$$

where b_1 is given by Eq. 29. The last step is to evaluate the unknown constant A appearing in Eqs. 28 and 34. By inserting Eqs. 28 and 34 into Eq. 9 and comparing coefficients of the terms having like poles, the value of A is found to be

$$A = 0.103\,b_1. \tag{35}$$

This completes the first part of the solution, which can be written in final form as

$$G_1(z) = 0.313\,b_1 + 0.85\,b_2 = 0.313\sinh 2\tau_2 + 0.85\sinh 2(T_2 - \tau_2) \tag{36}$$

and

$$G_2(z_a) = 0. \tag{37}$$
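As a quick arithmetic check of the quoted constants (b_1 and b_2 of Eqs. 29-30 treated symbolically):

```python
# With A = 0.103*b1 (Eq. 35), the b1 part of Eq. 34's numerator cancels,
# giving G2 = 0 (Eq. 37), and Eq. 28 collapses to 0.313*b1 + 0.85*b2 (Eq. 36).
g2_b1 = 0.0729 - 0.707 * 0.103         # b1-coefficient of Eq. 34's numerator
g1_b1 = 0.85 * 0.1907 + 1.466 * 0.103  # b1-coefficient of Eq. 28
g1_b2 = 0.85                           # b2-coefficient, unaffected by A
```

Both checks close to within rounding of the three-figure constants printed in the paper.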

The vanishing of G_2(z_a) in Eq. 37 is expected, since at τ_1 − τ_2 = 0 the noiseless data received by the first channel are completely accepted, while the noisy data received by the second channel are completely rejected.

Consider then the second case, when τ_1 − τ_2 = T_2. For this case

$$G_1(z) = 0.162(0.1907\,b_1 + b_2) + 1.466A \tag{38}$$

$$G_2(z_a) = \frac{(0.1106\,b_1 + 0.198\,b_2 - 1.92A)\,z_a + (0.0379\,b_1 + 0.197\,b_2 + 1.783A)(z_a - 0.368)}{z_a - 0.1907} \tag{39}$$

where b_1 and b_2 are given by Eqs. 29 and 30, respectively. To determine the constant A, Eqs. 38 and 39 are inserted into Eq. 9 and the residues of like poles are compared. Thus we obtain

$$A = 0.0196\,b_1. \tag{40}$$

FIG. 3. Impulse response of g_1(nT_1, τ_1, τ_2).

This completes the second part of the solution, which can be written as

$$G_1(z) = 0.0596\,b_1 + 0.162\,b_2 = 0.0596\sinh 2\tau_2 + 0.162\sinh 2(T_2 - \tau_2) \tag{41}$$

and

$$G_2(z_a) = \frac{(0.146\,b_1 + 0.395\,b_2)\,z_a - (0.0296\,b_1 + 0.0725\,b_2)}{z_a - 0.1907}. \tag{42}$$

Notice that in this second part of the solution G_2(z_a) is not zero, since at τ_1 = T_2 and τ_2 = 0 the first channel has no input data at the observation instant, while the second channel does, although its data are noise-contaminated. Figure 3 shows g_1(nT_1), the impulse response of G_1(z), given by Eqs. 36 and 41. Figure 4 shows g_2(iT_2), the impulse response of G_2(z_a), given by Eqs. 37 and 42. Remember, as was indicated at the beginning of the mathematical formulation, both g_1(nT_1) and g_2(iT_2) are also functions of τ_1 and τ_2.

The mean square-error of this optimum system can best be evaluated using Eq. 21 with the aid of the theory of residues. The result is

$$\phi_{ee}(0) = 1 - 0.0164(b_1 + 2.717\,b_2)(a_1 + 0.1406\,a_2) - 0.9082\,d_2 b_2 - 0.333\,d_2 b_1 + 0.333\,d_1 b_2 + 0.115\,d_1 b_1. \tag{43}$$

This equation gives the mean square-error, φ_ee(0), averaged over the entire ensemble, as a function of τ_1. The mean value of φ_ee(0) averaged over τ_1 is given by

$$\int_0^1 \phi_{ee}(0)\,d\tau_1 = 0.67. \tag{44}$$


It is interesting to examine the reduction of mean square-error obtained with the optimum double-input system as compared to that obtained with the optimum single-input systems.

FIG. 4. Impulse response of g_2(iT_2, τ_1, τ_2).

When the slow-rate channel alone is used, the optimum system (1, 8) is given by

$$G_1 = \frac{1}{W(z)}\Big[\frac{\Phi_{r_1 r}(s)\,e^{s\tau_1}}{W(z^{-1})}\Big]_L$$

where

$$W(z)\,W(z^{-1}) = \Phi_{r_1 r_1}(z).$$

W(z) has all its poles and zeros in the left half of the s-plane, while those of W(z^{-1}) are in the right half of the s-plane. The symbol [ · ]_L indicates that only those bracketed terms having poles in the left half of the s-plane are kept. The mean square-error of this system is

$$\phi_{ee}(0) = \frac{1}{2\pi j}\int_{-j\infty}^{j\infty}\big[\Phi_{rr}(s) - \Phi_{r_1 r_1}(z)\,G_1(s)\,G_1(-s)\big]\,ds.$$

Using the data given in Eq. 22, we obtain

$$G_1(s) = \frac{1 - 0.135z^{-1}}{s + 2} \tag{45}$$

and

$$\phi_{ee}(0) = 0.755. \tag{46}$$


Comparing the mean square-errors in Eqs. 44 and 46 yields

$$\frac{0.755 - 0.67}{0.67} = 12.7\ \text{per cent}, \tag{47}$$

which shows that the double-input system reduces the mean square-error by 12.7 per cent.

FIG. 5. Mean square-error φ_ee(0).

If the fast-rate channel alone is used, the transfer function of the optimum system and the system mean square-error are

$$G_2 = \frac{1}{X(z_a)}\Big[\frac{\Phi_{r_2 r}(s)\,e^{s\tau_2}}{X(z_a^{-1})}\Big]_L$$

where X(z_a) and X(z_a^{-1}) are given in Eqs. 32 and 33, respectively. Using the values of Eq. 22, the following results are obtained:

$$G_2(s) = \frac{0.482(1 - 0.368z_a)}{(s + 2)(1 - 0.1907z_a^{-1})} \tag{48}$$

$$\phi_{ee}(0) = 0.888. \tag{49}$$

Comparing Eqs. 44 and 49, we see that the double-input optimum system reduces the mean square-error by

$$\frac{0.888 - 0.67}{0.67} = 32.6\ \text{per cent}. \tag{50}$$
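The two comparisons reduce to simple arithmetic on the three mean square-errors; note that the paper normalizes each reduction by the double-input error:

```python
# Arithmetic behind Eqs. 47 and 50: mean square-error of the double-input
# system versus each single-input system, as fractions of the double-input error.
mse_double, mse_slow_only, mse_fast_only = 0.67, 0.755, 0.888
red_slow = (mse_slow_only - mse_double) / mse_double  # Eq. 47: ~12.7 per cent
red_fast = (mse_fast_only - mse_double) / mse_double  # Eq. 50: ~32.6 per cent
```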


Extension

The method proposed above can be extended to handle the cases of prediction, delayed interpolation, and inputs which are pulse trains having finite pulse-width. In the case of prediction, the required modification is to replace r(t) in Eq. 2 by r(t + p), where p is the prediction time. When the recovered output is allowed to be delayed, a better interpolation can be achieved. This is done by modifying the lower summation limits of Eq. 1. For instance, if the allowed output delay time is βT_1 seconds, where β is a positive integer, Eq. 1 is replaced by

$$c(t) = \sum_{n=-\beta}^{\infty} r_1(t - \tau_1 - nT_1)\,g_1(nT_1) + \sum_{i=-a\beta}^{\infty} r_2(t - \tau_2 - iT_2)\,g_2(iT_2) \tag{51}$$

where a = T_1/T_2 is a positive integer.

It is possible that the impulse-train representation of the sampled signal is not satisfactory in some practical situations, and a pulse train having fixed finite pulse-width must be considered. Under this condition, instead of looking for the optimum impulse-response functions of the filters, we look for their optimum pulse-response functions, and the technique proposed in this paper is still applicable.

Discussion

Realization of the filters obtained from the proposed method can be done by first plotting the impulse-response curves of the filters and then approximating the response curves by passive network elements and synchronized switching devices. By choosing the sampling rates of both channels properly, the optimum filter of the slow-rate channel is time-invariant, while only the filter of the fast-rate channel is time-varying. It should be mentioned that optimum synthesis techniques of this type (1, 2, 3, 5, 6) are used more often as guides for practical design than as the actual design methods. For instance, consider the problem presented in this paper. An engineer facing this problem of receiving data from two channels is likely to recover the continuous signal from the two channels independently and then take the average of the two outputs to arrive at a final result. Using the previous example, the mean square-error of the result obtained this way is found to be 0.7344, which is only 9.6 per cent higher than that of the true optimum system proposed in this paper. However, the two independently obtained filters given by Eqs. 45 and 48 are time-invariant and are much easier to implement than the time-varying filters of Eqs. 36, 37, 41 and 42, though the latter give a true optimum system. Here the optimum system presented in this paper is used as a guide to evaluate the output quality of a much simpler system.


Conclusion

Very often, two sets of discrete data, obtained independently, are available for the recovery of a continuous signal. These two sets of discrete data may also have different data-rates. A better signal recovery will be achieved if both sets of data are employed in an optimum way, rather than when only one of the two sets is used. Optimum systems using both sets of discrete data are, in general, time-varying, as opposed to the time-invariant property of systems using only one set of discrete data, even though the signals and noise are stationary.

The theory and method of obtaining the optimum system for the recovery of a continuous signal from two sets of independently received discrete data have been developed here in detail. The crucial points in the development are: first, how to introduce the time-varying characteristic into the mathematical formulation of the system; and second, how to solve the set of two simultaneous summation equations involving two different sampling rates. Methods of evaluating the mean square-error of the final optimum signal-recovering system have also been presented. An example is given in detail to illustrate the design procedure. It can be seen in this typical example that the mean square-error of the double-reception system is less than those of the two single-reception systems by 12.7 per cent and 32.6 per cent, respectively.

Several extensions of the method, to handle the cases of prediction, delayed interpolation, and inputs which are pulse trains having finite pulse-width, have been given. The practical application of the proposed system is discussed.

Appendix: Z- and Z_a-Transformation of the Summation Equations

The transformation of the summation Eqs. 5 and 6, or equivalently Eqs. 7 and 8, to the frequency domain is derived here. The two summation equations are

$$f_1(nT_1) = \sum_m \phi_{r_1 r_1}(nT_1 - mT_1)\,g_1(mT_1) + \sum_i \phi_{r_1 r_2}(\tau_1 + nT_1 - \tau_2 - iT_2)\,g_2(iT_2) - \phi_{r_1 r}(\tau_1 + nT_1) \tag{52}$$

$$f_2(iT_2) = \sum_n \phi_{r_1 r_2}(\tau_1 + nT_1 - \tau_2 - iT_2)\,g_1(nT_1) + \sum_j \phi_{r_2 r_2}(iT_2 - jT_2)\,g_2(jT_2) - \phi_{r_2 r}(\tau_2 + iT_2) \tag{53}$$

where

$$f_1(nT_1) = 0, \quad n \ge 0 \tag{54}$$

$$f_2(iT_2) = 0, \quad i \ge 0. \tag{55}$$


Take the two-sided z-transform of Eq. 52 with respect to n, term by term:

$$\mathcal{Z}\{f_1(nT_1)\} = zF_1(z) \tag{56}$$

where F_1(z) has all its poles outside the unit-circle of the z-plane. The factor z in front of F_1(z) is included to indicate that the series expansion of $\mathcal{Z}\{f_1(nT_1)\}$ does not possess a constant term. These are consequences of the condition imposed by Eq. 54. Next,

$$\mathcal{Z}\Big\{\sum_m \phi_{r_1 r_1}(nT_1 - mT_1)\,g_1(mT_1)\Big\} = \sum_n \sum_m \phi_{r_1 r_1}(nT_1 - mT_1)\,g_1(mT_1)\,e^{-nT_1 s} = \sum_m \Phi_{r_1 r_1}(z)\,e^{-mT_1 s}\,g_1(mT_1) = \Phi_{r_1 r_1}(z)\,G_1(z) \tag{57}$$

$$\mathcal{Z}\Big\{\sum_i \phi_{r_1 r_2}(\tau_1 + nT_1 - \tau_2 - iT_2)\,g_2(iT_2)\Big\} = \mathcal{Z}\Big\{\mathcal{L}\Big[\sum_i \phi_{r_1 r_2}(\tau_1 + \xi - \tau_2 - iT_2)\,g_2(iT_2)\Big]\Big\} = \mathcal{Z}\{\Phi_{r_1 r_2}(s)\,e^{(\tau_1 - \tau_2)s}\,G_2(z_a)\} \tag{58}$$

where the $\mathcal{L}$-transform in the second step of Eq. 58 is taken with respect to the variable ξ = nT_1. Finally,

$$\mathcal{Z}\{\phi_{r_1 r}(\tau_1 + nT_1)\} = \mathcal{Z}\{\mathcal{L}[\phi_{r_1 r}(\tau_1 + \xi)]\} = \mathcal{Z}\{\Phi_{r_1 r}(s)\,e^{s\tau_1}\}, \qquad \xi = nT_1. \tag{59}$$

Equating the sum of Eqs. 57 and 58 to the sum of Eqs. 59 and 56 gives Eq. 9.

Similarly, take the two-sided z_a-transform of Eq. 53 with respect to i, term by term:

$$\mathcal{Z}_a\{f_2(iT_2)\} = z_a F_2(z_a) \tag{60}$$

$$\mathcal{Z}_a\Big\{\sum_n \phi_{r_1 r_2}(\tau_1 + nT_1 - \tau_2 - iT_2)\,g_1(nT_1)\Big\} = \mathcal{Z}_a\Big\{\mathcal{L}\Big[\sum_n \phi_{r_2 r_1}(\tau_2 + \rho - \tau_1 - nT_1)\,g_1(nT_1)\Big]\Big\} = \mathcal{Z}_a\{\Phi_{r_2 r_1}(s)\,e^{(\tau_2 - \tau_1)s}\,G_1(z)\} = \mathcal{Z}_a\{\Phi_{r_2 r_1}(s)\,e^{(\tau_2 - \tau_1)s}\}\,G_1(z) \tag{61}$$

where the last step is legitimate since T_1 = aT_2 (a being an integer). Also,

$$\mathcal{Z}_a\Big\{\sum_j \phi_{r_2 r_2}(iT_2 - jT_2)\,g_2(jT_2)\Big\} = \sum_i \sum_j \phi_{r_2 r_2}(iT_2 - jT_2)\,g_2(jT_2)\,e^{-iT_2 s} = \sum_j \Phi_{r_2 r_2}(z_a)\,e^{-jT_2 s}\,g_2(jT_2) = \Phi_{r_2 r_2}(z_a)\,G_2(z_a) \tag{62}$$

$$\mathcal{Z}_a\{\phi_{r_2 r}(\tau_2 + iT_2)\} = \mathcal{Z}_a\{\mathcal{L}[\phi_{r_2 r}(\tau_2 + \rho)]\} = \mathcal{Z}_a\{\Phi_{r_2 r}(s)\,e^{s\tau_2}\}, \qquad \rho = iT_2. \tag{63}$$

Equating the sum of Eqs. 61 and 62 to the sum of Eqs. 63 and 60 results in Eq. 10.

Acknowledgment

This research work was supported by the National Aeronautics and Space Administration through Research Grant No. NsG-351. The material presented here is an updated version of a paper presented at the International Telemetering Conference, London, England, September 25, 1963.

Bibliography

(1) G. F. Franklin, "Linear Filtering of Sampled Data," Tech. Report T-5/B, Dept. of Elec. Eng., Columbia University, New York, 1954.
(2) N. Wiener, "Extrapolation, Interpolation, and Smoothing of Stationary Time Series," New York, John Wiley and Sons, Inc., 1949.
(3) J. C. Hung, "Theory of Optimum Multiple Measurements," AFOSR 1550, Air Force Office of Scientific Research, Dept. of Elec. Eng., New York University, 1961.
(4) R. Courant and D. Hilbert, "Methods of Mathematical Physics," Vol. I, New York, Interscience Publishers, 1953.
(5) J. C. Hung, "Double Measurement with Both Sampled and Continuous Inputs," IRE International Convention Record, Vol. 10, Part 2, pp. 125-142, 1962.
(6) H. C. Hsieh and C. T. Leondes, "On the Optimum Synthesis of Sampled-Data Multipole Filters with Random and Nonrandom Inputs," Trans. IRE, AC-5, No. 3, pp. 193-208, 1960.
(7) J. S. Bendat, "Optimum Filters for Independent Measurements of Two Related Perturbed Messages," Trans. IRE, CT-4, No. 1, pp. 14-19, 1957.
(8) J. R. Ragazzini and G. F. Franklin, "Sampled-Data Control Systems," New York, McGraw-Hill Book Co., Inc., 1958.
(9) H. Cramér, "Mathematical Methods of Statistics," Princeton, N. J., Princeton University Press, 1946.
(10) J. H. Laning, Jr. and R. H. Battin, "Random Processes in Automatic Control," New York, McGraw-Hill Book Co., Inc., 1956.
