On the Statistical Design of Linear Random Sampling Systems

A. R. BERGEN

Introduction
In the analysis or synthesis of linear sampled-data systems subject to random inputs, the spectral density of a random process after sampling is often required. The spectral density of the signal after sampling must be related to the, presumably, specified spectral density of the signal prior to sampling. In the case of periodic sampling the relationship is well known (leading to the familiar spectrum periodic in the frequency variable)1,2. It may be of interest to consider systems in which the sampling intervals are not constant, but are independent random variables. This latter type of sampling will be called random sampling; it is hoped that this designation will not lead to confusion when employed in the context of a sampled-data system. Random sampling may occur because of 'jitter' in a sampling mechanism, or 'misses' (when the availability of data at the nominal sampling instants may only be described probabilistically)*, or when intentional randomization is employed. In this latter connection it has been suggested3 that, for reasons of economy, a time-shared digital computer, used for the control of a number of factory or plant processes, may be available to any particular process at random, rather than specified, instants.

The problem of specifying the spectral density of a random signal after random sampling is related to several problems which have been considered previously. Rice4 has considered the spectral density of 'shot noise', which may be interpreted in a sampled-data context as the spectral density of the output of a linear filter to which a randomly sampled constant signal is applied; the first-order probability density function of sampling intervals is exponential in this case. Tsien5 has derived an infinite series representation for the spectral density of a sequence of randomly spaced identical pulses. The intervals between pulses are assumed independent with a general first-order probability distribution. Grenander and Rosenblatt6 have derived a closed form expression for the spectral density of a sequence of equally spaced samples. These samples are obtained by random sampling in the sense in which it is used in this paper, but are analysed as an ordered sequence of signal values, without consideration of the time tag associated with each signal value. It is the primary purpose of this paper to derive a closed form expression for the spectral density of a random signal after a general type of random sampling, in a form suitable for use in problems in which the data is processed in real time.

* A well-known example of the occurrence of misses is provided by a scanning radar in which the detection process is such that target information is not available for each antenna scan, but only with a probability called the blip-scan ratio.
Derivation of the Spectral Density

Consider the block diagram shown in Figure 1. A continuous stationary random signal x(t), with spectral density Φ_x(s), is sampled, yielding x*(t).
Figure 1. A random sampler. (Block diagram: the signal x(t), with spectral density Φ_x(s), passes through a unit impulse modulator, yielding x*(t) with spectral density Φ_x*(s); a linear filter with gain T̄ then produces y(t).)
This signal is then filtered by a linear time-invariant filter with transfer function H(s), yielding an output y(t) with spectral density Φ_y(s). As is usual in the sampled-data literature, the switch symbol shown in Figure 1 represents a unit impulse modulator which converts the value of x(t) at a sampling instant into a Dirac delta function having the same area, i.e. unit impulses are modulated by x(t). It is assumed that the reader is familiar with the applications of this useful representation and the conditions under which its use is justified7,8,9. Usually, in the sampled-data literature, the modulation just described is called sampling. However, in treating random signals there are advantages to a slightly different definition. In the following treatment sampling will be defined as the modulation by x(t) of impulses of area T̄ (T̄ is the average sampling interval). This definition is reflected in Figure 1 by cascading the unit impulse modulator with a factor of gain T̄. While Φ_x*(s), the spectral density of x*(t), is clearly unrealizable energetically†, it is useful to define such a quantity. Then if it is required to relate Φ_x(s) to Φ_y(s), this may be accomplished conveniently in two steps. First Φ_x*(s) may be related to Φ_x(s), and then Φ_y(s) may be found from Φ_x*(s) by use of the well known relation for linear systems

\[ \Phi_y(s) = \Phi_{x^*}(s)\, H(s)\, H(-s) \tag{1} \]

† This comment is irrelevant when the sampled data are a sequence of numbers rather than a physical signal.
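For instance, with the first-order filter H(s) = a/(s + a) (chosen here purely for illustration), equation 1 gives

\[ \Phi_y(s) = \Phi_{x^*}(s)\, \frac{a}{s+a} \cdot \frac{a}{-s+a} = \frac{a^2\, \Phi_{x^*}(s)}{a^2 - s^2} \]

so that once Φ_x*(s) is known, the output spectral density follows by inspection.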
The remainder of this section is devoted to the calculation of Φ_x*(s), the spectral density of x*(t). The quantity Φ_x*(s) is defined as the bilateral Laplace transform of the autocorrelation function of x*(t). Here it is to be noted that the autocorrelation function of a sequence of impulses is not a function in the usual sense, but may be defined in a symbolic sense as the limiting expression for the autocorrelation function of an appropriate sequence of pulses as the pulse width approaches zero. Consider first, therefore, the modulation of narrow, rectangular, non-overlapping pulses of area T̄ by x(t). As the pulse widths approach zero, these pulses tend to impulses of area T̄. The result of this modulation is suggested in Figure 2, which represents a portion of a sample function of an ergodic ensemble of pulse sequences. The interval t_{n+1} − t_n is defined as the nth sampling interval T_n and is assumed to be an independent random variable with first-order probability density function p(T). p(T) must be chosen to ensure that the pulses do not overlap, but in the limit as the pulse width decreases to zero this restriction may be removed. It should be noted that the time series after random pulse modulation is characterized by both the signal values, x(t_n), and the sampling times, t_n.

Figure 2. The pulse sequence x_u(t)

The random pulse modulation may be described as multiplication of x(t) by a statistically independent stationary random process u(t)/ū. This process, whose statistics depend only on the probability density function of intervals, p(T), is a sequence of narrow rectangular pulses of width γ, occurring at the random sampling times. The multiplication yields x_u(t),

\[ x_u(t) = \frac{1}{\bar u}\, u(t)\, x(t) \tag{2} \]

x_u(t) tends to x*(t) as the pulse width approaches zero. While the amplitude of the pulses constituting u(t) is arbitrary, it is convenient to assume that these are pulses of unit amplitude. In this case ū, the average value of u(t), is simply γ/T̄.*

Consider now the cross-correlation function of x_u(t) and an arbitrary random time series z(t), which is assumed statistically independent of the sampling process. Usually z(t) is a system input.

\[ \varphi_{x_u z}(\tau) = \frac{1}{\bar u}\, E\{u(t)\, x(t)\, z(t+\tau)\} = \frac{1}{\bar u}\, E\{u(t)\}\, E\{x(t)\, z(t+\tau)\} = \varphi_{xz}(\tau) \tag{3} \]

Here E denotes the expectation or ensemble average.† Since this result does not depend on the pulse width,

\[ \varphi_{x^* z}(\tau) = \varphi_{xz}(\tau) \tag{4} \]

Thus the cross-correlation function of x*(t) and a random process z(t) is unaffected by the sampling process. This desirable property is a consequence of the definition of sampling as modulation of impulses of area T̄.

The autocorrelation function of x_u(t) is denoted by φ_{x_u}(τ); because of the statistical independence of the processes,

\[ \varphi_{x_u}(\tau) = \frac{1}{\bar u^2}\, E\{u(t)\, u(t+\tau)\, x(t)\, x(t+\tau)\} = \frac{1}{\bar u^2}\, \varphi_u(\tau)\, \varphi_x(\tau) \tag{5} \]

Here φ_u(τ) is the autocorrelation function of u(t), a sequence of randomly spaced non-overlapping pulses of unit amplitude and width γ. It has been assumed that the intervals T between the leading edges of successive pulses are statistically independent random variables with first-order probability density function p(T). The definition for the autocorrelation function is

\[ \varphi_u(\tau) = E\{u(t)\, u(t+\tau)\} \tag{6} \]

To simplify the discussion assume τ is non-negative. Now, since u(t) and u(t + τ) may each take on only the values zero and one, φ_u(τ) is simply ū multiplied by the probability that u(t + τ) equals one, given that u(t) equals one.

* The average value of u(t) is most conveniently computed as a time average. However, ergodicity being assumed, this is also the ensemble average.
† For convenience, a bar over the quantity being averaged is also used, as in ū. The bar is also used for a statistical average of a random variable, as in T̄.
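The time-average interpretation of ū may be checked numerically. The sketch below is a minimal simulation, assuming unit-amplitude pulses and (purely for illustration) the exponential interval density of example 1; the small residual discrepancy is the pulse-truncation effect, which disappears as γ → 0.

```python
import numpy as np

rng = np.random.default_rng(0)

T_bar = 1.0     # average sampling interval
gamma = 0.05    # pulse width

# Intervals between leading edges of successive pulses; the exponential
# density of example 1 is assumed here purely for illustration.
intervals = rng.exponential(T_bar, size=200_000)

# u(t) equals one for a time gamma after each leading edge (or until the
# next edge arrives; that truncation vanishes in the limit gamma -> 0).
time_at_one = np.minimum(intervals, gamma).sum()
total_time = intervals.sum()

print("empirical average of u(t):", time_at_one/total_time)   # ~ 0.0488
print("gamma / T_bar            :", gamma/T_bar)              # 0.05
```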
Suppose, then, that the instant t occurs during a pulse of the sequence u(t) shown in Figure 3, and let Δ denote the random interval from t to the trailing edge of that pulse.‡ The instant t + τ occurs during the same pulse when the random interval Δ satisfies the following inequality

\[ \tau < \Delta \le \gamma \tag{7} \]

Figure 3. The pulse sequence u(t)

‡ Considerations of ergodicity require the elapsed time from the leading edge of the pulse to t to be uniformly distributed between 0 and γ, but this is not of consequence in the derivation.

If τ is greater than Δ, then it may be deduced from Figure 3 that the interval τ, having begun on a pulse, will end on a pulse if, and only if, for some positive integer n the random interval Δ + T_1 + ... + T_n satisfies the inequality

\[ \tau < \Delta + T_1 + \dots + T_n \le \tau + \gamma \tag{8} \]

The condition of equation 7 may be combined with that of equation 8 by defining an interval T_0 of zero length and gratuitously adding τ to the upper limit of equation 7. The combined statement is now: an interval τ which starts during a pulse ends during a pulse when, for a particular non-negative n,

\[ \tau < \Delta + T_0 + T_1 + \dots + T_n \le \tau + \gamma \tag{9} \]

For a particular n this event occurs with probability

\[ \int_\tau^{\tau+\gamma} p_n(a)\, \mathrm{d}a \tag{10} \]

where p_n(a) is the probability density function of a = Δ + T_0 + ... + T_n. Since these events for n = 0, 1, 2, ..., N (N is the largest integer less than or equal to τ/γ) are mutually exclusive, it therefore follows that for non-negative τ

\[ \varphi_u(\tau) = \bar u \sum_{n=0}^{N} \int_\tau^{\tau+\gamma} p_n(a)\, \mathrm{d}a \tag{11} \]

Define now as a generating function M(λ), the one-sided Laplace transform of p(T), and M_Δ(λ), the one-sided Laplace transform of the probability density function of Δ. Since a is the sum of n identically distributed independent random intervals T_1, T_2, ..., T_n and an independent interval Δ, p_n(a) may be expressed, for positive a, in terms of these generating functions as follows:

\[ p_n(a) = \frac{1}{2\pi j} \int_{c_1 - j\infty}^{c_1 + j\infty} M^n(\lambda)\, M_\Delta(\lambda)\, e^{\lambda a}\, \mathrm{d}\lambda \tag{12} \]

The abscissa of absolute convergence of the generating function is less than zero, but assume here that c_1 is greater than zero, in which case the magnitude of M(λ) on the path of integration is less than one. Substituting equation 12 in equation 11, and interchanging various orders of integration and summation, it is found that for positive τ

\[ \varphi_u(\tau) = \frac{\bar u}{2\pi j} \int_{c_1 - j\infty}^{c_1 + j\infty} M_\Delta(\lambda)\, \frac{1 - M^{N+1}(\lambda)}{1 - M(\lambda)}\, \frac{e^{\lambda\tau}\left(e^{\lambda\gamma} - 1\right)}{\lambda}\, \mathrm{d}\lambda \tag{13} \]

Equation 13, in conjunction with equation 5, yields an expression (for positive argument) for the autocorrelation function of x_u(t). This autocorrelation function converges in a symbolic sense to the autocorrelation function of x*(t) as the pulse width γ approaches zero, if it is assumed that the autocorrelation function of the process x(t) is continuous. This assumption will be made; it is not a serious restriction in practice. The spectral density of x_u(t) converges to that of x*(t) in the usual sense.

Consider G_{x_u}(s), the one-sided Laplace transform of φ_{x_u}(τ). Substituting equation 13 in equation 5 and interchanging orders of integration (which is legitimate for Re(s) greater than c_1, i.e. s in the right half plane),

\[ G_{x_u}(s) = \frac{1}{2\pi j\, \bar u} \int_{c_1 - j\infty}^{c_1 + j\infty} M_\Delta(\lambda)\left[1 - M^{N+1}(\lambda)\right] \frac{e^{\lambda\gamma} - 1}{\lambda}\, \frac{G_x(s - \lambda)}{1 - M(\lambda)}\, \mathrm{d}\lambda \tag{14} \]

Here G_x(s) is the one-sided Laplace transform of the autocorrelation function of x(t). Now in the limit as the pulse width γ approaches zero, N approaches infinity, M^{N+1}(λ) converges uniformly to zero, M_Δ(λ) converges uniformly to one, and e^{λγ} approaches 1 + λγ. Then

\[ \lim_{\gamma \to 0} G_{x_u}(s) = G_{x^*}(s) = \frac{\bar T}{2\pi j} \int_{c_1 - j\infty}^{c_1 + j\infty} \frac{G_x(s - \lambda)}{1 - M(\lambda)}\, \mathrm{d}\lambda \tag{15} \]

where G_x*(s) is the one-sided Laplace transform of the autocorrelation function of x*(t). Finally, since the autocorrelation function is an even function of its argument,

\[ \Phi_{x^*}(s) = G_{x^*}(s) + G_{x^*}(-s) \tag{16} \]

which, in terms of equation 15, yields the spectral density of x(t) after random sampling.

The path of integration of the integral in equation 15 lies in an analytic strip which separates the singularities of G_x(s − λ), which are in the right half plane, from those of 1/[1 − M(λ)], which are in the left half plane. It is desirable to change variables so that the singularities of G_x take their more familiar locations in the left half plane. Let w = s − λ; then

\[ G_{x^*}(s) = \frac{\bar T}{2\pi j} \int_{c - j\infty}^{c + j\infty} \frac{G_x(w)}{1 - M(s - w)}\, \mathrm{d}w \tag{17} \]

Here again the path of integration separates the singularities of the two functions. Now while Re(s) must be greater than c in order that this analytic strip exist, the result of the integration, once obtained, may be extended throughout the s plane by analytic continuation. In evaluating equation 17, use may be made of the analyticity of G_x(w) in the right half plane, and that of 1/[1 − M(s − w)] in the left half plane. The reader is cautioned that in using contour integration to evaluate equation 17 the integral around a path completing the closed contour will not vanish in general. In particular, if the contour is closed in the left half plane, the integral around the closing path will not vanish. The contribution along this path may be evaluated, however, and it may then be shown that

\[ G_{x^*}(s) = \frac{\bar T}{4\pi j} \oint_\Gamma G_x(w)\, \frac{1 + M(s - w)}{1 - M(s - w)}\, \mathrm{d}w \tag{18} \]

where the contour Γ includes the path parallel to the imaginary axis and the closing path in the left half plane.
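The kernel of equation 17 has a direct interpretation. On the path of integration Re(s − w) > 0, so that |M(s − w)| < 1 and the kernel may be expanded as

\[ \frac{1}{1 - M(s - w)} = \sum_{n=0}^{\infty} M^n(s - w) \]

the nth term being the generating function of the sum of n successive sampling intervals. The integral of equation 17 thus superposes the correlation contributed by sampling instants n intervals apart, in agreement with the event decomposition of equations 9 to 12.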
Example 1. 'Purely random' sampling

Let the sampling times be purely random. Then the sampling intervals will be independent with an exponential probability density function,

\[ p(T) = \frac{1}{\bar T}\, e^{-T/\bar T}, \quad T \ge 0; \qquad p(T) = 0, \quad T < 0 \tag{19} \]

The generating function is

\[ M(s) = \frac{1}{s\bar T + 1} \tag{20} \]

Using equation 17,

\[ G_{x^*}(s) = \frac{\bar T}{2\pi j} \int_{c - j\infty}^{c + j\infty} G_x(w)\, \frac{(s - w)\bar T + 1}{(s - w)\bar T}\, \mathrm{d}w = G_x(s) + \frac{\bar T}{2}\, \varphi_x(0) \tag{21} \]

By application of equation 16,

\[ \Phi_{x^*}(s) = \Phi_x(s) + \bar T\, \varphi_x(0) \tag{22} \]

The spectral density after sampling, in this case, is equal to the spectral density before sampling, plus a white spectral component T̄φ_x(0). Note that, as T̄ approaches zero, this component also approaches zero, and the spectral density after sampling converges to the spectral density before sampling.

A useful interpretation of equation 22 is the following. With regard to the spectral density after purely random sampling, the random sampler may be replaced by a direct connection, provided a fictitious white noise of spectral density T̄φ_x(0) is added. Furthermore, by virtue of equation 4 this fictitious noise is uncorrelated with any of the system inputs.
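This white-component result is easily checked by simulation. The sketch below is a minimal Monte Carlo check, assuming a unit-variance Markov input with φ_x(τ) = e^{−β|τ|} and the first-order filter H(s) = a/(s + a) used earlier; all parameter values and names are illustrative. From equations 22 and 1 the predicted mean-square output is a/(a + β) + T̄a/2.

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative parameters (assumed for this check, not from the paper):
beta = 1.0      # input: unit-variance Markov signal, phi_x(tau) = exp(-beta*|tau|)
a = 2.0         # filter H(s) = a/(s + a)
T_bar = 0.5     # average sampling interval of the Poisson sampler
dt, n_steps = 0.005, 1_000_000

decay_x = np.exp(-beta*dt)              # exact discretization of the Markov signal
noise_x = np.sqrt(1.0 - decay_x**2)
decay_y = np.exp(-a*dt)                 # free decay of the filter between impulses

x = y = acc = 0.0
for _ in range(n_steps):
    x = x*decay_x + noise_x*rng.standard_normal()
    y *= decay_y
    if rng.random() < dt/T_bar:         # a 'purely random' sampling instant
        y += a*T_bar*x                  # impulse of area T_bar*x(t_n) through h(t) = a*exp(-a*t)
    acc += y*y

print("simulated mean-square output:", acc/n_steps)
print("predicted a/(a+beta) + T_bar*a/2:", a/(a + beta) + T_bar*a/2)
```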
Example 2. Periodic sampling

In this case an interval T_0 occurs with probability 1. Then

\[ p(T) = \delta(T - T_0) \tag{23} \]

\[ M(s) = e^{-sT_0} \tag{24} \]

By using equation 16 and equation 18,

\[ \Phi_{x^*}(s) = \frac{T_0}{2\pi j} \oint_\Gamma G_x(w)\, \frac{1 - e^{2wT_0}}{\left[1 - e^{-(s-w)T_0}\right]\left[1 - e^{(s+w)T_0}\right]}\, \mathrm{d}w \tag{25} \]

which differs from the result of Ragazzini and Franklin10 by a factor of gain T_0² because of a difference in the definitions of the gains of the sampler.
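Evaluating equation 25 by the method of residues, one may verify that it reduces to the familiar periodic spectrum, expressed here in the T̄-gain convention of this paper:

\[ \Phi_{x^*}(s) = \sum_{n=-\infty}^{\infty} \Phi_x\left(s + jn\omega_0\right), \qquad \omega_0 = \frac{2\pi}{T_0} \]

which is indeed T_0² times the corresponding unit-impulse-sampler result.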
Example 3. The missed samples case

Frequently a system with a nominal sampling period T_0 may fail to receive samples at the nominal sampling times owing to a fault of the data-gathering equipment or data link. Ordinarily it is assumed that the misses occur independently. Then if the probability of a miss is p (q = 1 − p), the intervals T are independent with a density

\[ p(T) = \sum_{n=1}^{\infty} q\, p^{n-1}\, \delta(T - nT_0) \tag{26} \]

and generating function

\[ M(s) = \frac{q\, e^{-sT_0}}{1 - p\, e^{-sT_0}} \tag{27} \]

By using equation 16 and equation 18,

\[ \Phi_{x^*}(s) = \frac{T_0}{2\pi j} \oint_\Gamma G_x(w)\, \frac{1 - e^{2wT_0}}{\left[1 - e^{-(s-w)T_0}\right]\left[1 - e^{(s+w)T_0}\right]}\, \mathrm{d}w + \frac{p\, T_0}{q}\, \varphi_x(0) \tag{28} \]

Note by comparison with equation 25 that, with respect to the spectral density, the effect of misses may be simulated by adding white noise to the output of a sampler operating without misses. This noise, furthermore, is uncorrelated with any of the system inputs.
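As a check on equation 27 (the algebra only; nothing new is assumed), the generating function follows from equation 26 by summing a geometric series, and the average interval follows from it:

\[ M(s) = \sum_{n=1}^{\infty} q\, p^{n-1} e^{-snT_0} = \frac{q\, e^{-sT_0}}{1 - p\, e^{-sT_0}}, \qquad \bar T = -\frac{\mathrm{d}M}{\mathrm{d}s}\bigg|_{s=0} = \frac{T_0}{q} \]

The average interval T_0/q is just the nominal period divided by the probability of a successful sample; this is the value of T̄ used in the servomechanism application below.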
Example 4. The limiting spectral density

Consider a general sampling interval probability density

\[ p(T) = \frac{1}{\alpha}\, p_0(T/\alpha) \tag{29} \]

where α is an adjustable parameter. Note that if the average sampling interval with α = 1 is T̄_1 then, in general, T̄ = αT̄_1. Thus, as α approaches zero, T̄ approaches zero. The generating function of p(T), with α as a parameter, is

\[ M(s, \alpha) = \int_0^\infty \frac{1}{\alpha}\, p_0(T/\alpha)\, e^{-sT}\, \mathrm{d}T = \int_0^\infty p_0(t)\, e^{-\alpha s t}\, \mathrm{d}t \tag{30} \]

As α approaches zero,

\[ M(s, \alpha) = 1 - s\bar T + O(\alpha^2) \tag{31} \]

under fairly weak conditions (i.e. E{Tⁿ/n!} < ∞). Then, using equation 17,

\[ \lim_{\bar T \to 0} G_{x^*}(s) = \frac{\bar T}{2\pi j} \int_{c - j\infty}^{c + j\infty} \frac{G_x(w)}{(s - w)\bar T}\, \mathrm{d}w = G_x(s) \tag{32} \]

and therefore

\[ \lim_{\bar T \to 0} \Phi_{x^*}(s) = \Phi_x(s) \tag{33} \]

under fairly weak conditions. This convergence of the spectral density after sampling to the spectral density before sampling, as the sampling rate approaches infinity, is an intuitively reasonable result. It should be noted that this result is a consequence of choosing T̄ as the gain associated with the sampler, and further justifies that choice.
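The expansion in equation 31 is obtained by expanding the exponential in equation 30 term by term:

\[ M(s, \alpha) = \int_0^\infty p_0(t)\left(1 - \alpha s t + \tfrac{1}{2}(\alpha s t)^2 - \dots\right)\mathrm{d}t = 1 - \alpha s \bar T_1 + O(\alpha^2) = 1 - s\bar T + O(\alpha^2) \]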
Application to the Wiener Problem

Following the usual assumptions and development of the Wiener theory it is found that the optimum filter must satisfy the equation

\[ \int_0^\infty h(\tau)\, \varphi_{r^*}(t - \tau)\, \mathrm{d}\tau = \varphi_{r c_d}(t), \qquad t \ge 0 \tag{34} \]

This differs from the usual Wiener-Hopf equation only in that the autocorrelation function (of the input to the filter) is that of a sampled signal r*(t). The function on the right of equation 34 is the cross-correlation function between the continuous input r(t) and the desired output c_d(t). In solving the equation in the usual way the spectral density of the sampled signal is needed; this may be computed by use of equation 16 and either equation 17 or 18.

The factorization step of the procedure may lead to difficulties. These difficulties may be avoided if the computed spectral density is rational in −s² or in e^{sT_1} + e^{−sT_1}. This will occur if the spectral density prior to sampling is rational in −s² and either the sampling interval density is discrete with commensurate intervals (integral multiples of T_1) or the function M(s) is rational in s (this occurs in the case of a gamma distribution). In the first case the optimum filter is the cascade of a device operating discretely in time (with period T_1) and a lumped parameter network. In the second case the optimum filter is a lumped parameter filter.
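For instance, consider designing the filter for a unit-variance Markov input, φ_r(τ) = e^{−β|τ|}, subjected to purely random sampling (the input model is assumed here only for illustration). By equation 22 the spectral density to be factored is rational in −s², and the factorization is immediate:

\[ \Phi_{r^*}(s) = \frac{2\beta}{\beta^2 - s^2} + \bar T = \frac{\bar T\left(c^2 - s^2\right)}{\beta^2 - s^2} = \left[\frac{\sqrt{\bar T}\,(c + s)}{\beta + s}\right]\left[\frac{\sqrt{\bar T}\,(c - s)}{\beta - s}\right], \qquad c^2 = \beta^2 + \frac{2\beta}{\bar T} \]

the first factor having its pole and zero in the left half plane, as the Wiener procedure requires.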
An Error-sampled Servomechanism

There are applications for devices which generate as an output a continuous version of a sampled-data input. A search radar incorporating such an extrapolator (or predictor) is commonly called a track-while-scan radar. The version of the device shown in Figure 4 is capable of tracking a constant-velocity input with zero steady-state error, for all α and β compatible with stability.

Figure 4. An error-sampled servomechanism
If in addition the input has a noise component, the steady-state mean-square tracking error is a function of α and β. Sklansky11,12 has computed this mean-square error in the case of a periodically operating sampler and a white noise input, and obtains

\[ \overline{e^2}/\overline{n^2} = f(\alpha, \beta) = \frac{6\alpha^2 - 3\alpha\beta + 6\beta - \beta^2}{3\alpha(4 - 2\alpha - \beta)} \tag{35} \]

Here n̄² is the mean-square value of the noise input. The question now arises as to the effect of misses on performance. Again, it is found that if the device is stable it follows a constant-velocity input with zero steady-state error; this is one of the attractive features of this extrapolation scheme. The steady-state mean-square error is then entirely due to the noise present in the input and may be computed by application of the result of example 3.

Here the discussion is facilitated by reference to the sequence of block diagrams shown in Figure 5. Assuming a constant-velocity input, the steady-state system error (or, inconsequentially, its negative) is simply the 'output' of the system when noise alone is applied to the system. This is shown in Figure 5(a). Note also that the constant interval T_0 has been replaced by the random interval T. In Figure 5(b) a fictitious gain and inverse gain are cascaded to introduce the sampled variable e* as defined in this paper. In Figure 5(c) the result of example 3 is used; Φ_e*(s) is shown to be generated by adding a fictitious white noise c(t) to the output of a sampler operating without misses. For economy of notation e*, which is changed in the process, is not relabelled. c(t) itself may be generated as shown in Figure 5(d). Here u(t) is white and uncorrelated with n(t). The relationship between the mean-square value of u(t) and the spectral density of r(t) may be conveniently derived using equations 16 and 18, assuming that u(t) has a Markov spectral density with a bandwidth approaching infinity. Also in Figure 5(d), T̄ has been evaluated and replaced by T_0/q. The final step is shown in Figure 5(e), where u(t) has been transferred to the left side of the sampler. The gains T_0 and 1/T_0 cancel and the factor q may be incorporated in the function G.

Figure 5. Development of an equivalent block diagram
The problem has now been reduced to the original Sklansky problem. The mean-square error may now be computed using equation 35,

\[ \overline{e^2} = \left(\overline{n^2} + \overline{u^2}\right) f(q\alpha, q\beta) \tag{36} \]

Figure 6. Mean-square tracking error. (Ordinate: relative mean-square error; abscissa: p = probability of a miss.)
To complete the calculation it remains to compute ū². Here, a consideration of the operation of the device indicates that

\[ \overline{u^2} = \frac{p}{q}\left(\overline{n^2} + \overline{e^2}\right) \tag{37} \]

Substituting equation 37 in 36 and collecting terms,

\[ \overline{e^2}/\overline{n^2} = \frac{f(q\alpha, q\beta)}{q - p\, f(q\alpha, q\beta)} \tag{38} \]

which reduces to equation 35 if the miss probability is zero. Figure 6 shows the mean-square error as a function of miss probability for α = β = 0.5. For miss probabilities larger than 0.62 the system is unstable in the mean square. The behaviour shown is typical. However, for certain choices of α and β the system may exhibit mean-square instability for relatively small miss probabilities.
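The numbers quoted above are easy to reproduce. The following sketch (illustrative only; names are arbitrary) evaluates equations 35 and 38 and locates the mean-square stability limit for α = β = 0.5:

```python
# Sketch: mean-square tracking error with misses, per equations 35 and 38
# (as reconstructed above); all names are illustrative.

def f(alpha, beta):
    """Sklansky's mean-square error ratio for the miss-free sampler (equation 35)."""
    return (6*alpha**2 - 3*alpha*beta + 6*beta - beta**2) / (3*alpha*(4 - 2*alpha - beta))

def error_ratio(p, alpha=0.5, beta=0.5):
    """Mean-square error ratio with miss probability p (equation 38).
    Returns None where the system is unstable in the mean square."""
    q = 1.0 - p
    fq = f(q*alpha, q*beta)
    den = q - p*fq
    return fq/den if den > 0 else None

for p in (0.0, 0.2, 0.4, 0.6, 0.61, 0.62):
    r = error_ratio(p)
    print(f"p = {p:4.2f}   e2/n2 = {'unstable' if r is None else round(r, 3)}")
# The ratio grows without bound as p approaches about 0.62, the
# mean-square stability limit quoted above for alpha = beta = 0.5.
```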
Conclusion

The spectral density of a signal at the output of a random sampler may be expressed in the form of equation 16. Here the function G_x*(s) is analytic in the right half plane. G_x*(s), in turn, may be expressed as a complex convolution integral, either in the form of equation 17 or that of equation 18. To make the evaluation of the integrals practicable the method of residues is indicated. In this case equation 18 is convenient if the contour is closed in the left half plane. Equation 17 should be used if the contour is closed in the right half plane. Here the contribution of the integral along the closing path may or may not vanish. The relations developed in this paper extend the applicability of various methods for the statistical design of sampled-data systems to systems in which the sampling is random.

This work was supported by the Air Force Cambridge Research Center under contract numbers AF 19(604)-1572 and -5460. A part of the work reported here was performed in partial fulfilment of the requirements of the Sc.D. Eng. degree at the Department of Electrical Engineering, Columbia University.

References
1. RAGAZZINI, J. R. and FRANKLIN, G. Sampled-Data Control Systems, p. 255. 1958. New York; McGraw-Hill
2. GRENANDER, U. and ROSENBLATT, M. Statistical Analysis of Stationary Time Series, p. 57. 1957. New York; Wiley
3. KALMAN, R. E. Analysis and synthesis of linear systems operating on randomly sampled data. Doctoral Dissertation, Department of Electrical Engineering, Columbia University, 1957, p. 4
4. RICE, S. O. Mathematical analysis of random noise. Bell Syst. tech. J. 23 (July 1944) 39. Also included in WAX, N. Selected Papers on Noise and Stochastic Processes, p. 171. 1954. New York; Dover Publications
5. TSIEN, H. S. Engineering Cybernetics, p. 118. 1954. New York; McGraw-Hill
6. GRENANDER, U. and ROSENBLATT, M. Statistical Analysis of Stationary Time Series, p. 58. 1957. New York; Wiley
7. RAGAZZINI, J. R. and FRANKLIN, G. Sampled-Data Control Systems, p. 19. 1958. New York; McGraw-Hill
8. RAGAZZINI, J. R. and ZADEH, L. A. The analysis of sampled-data systems. Trans. Amer. Inst. elec. Engrs 71, Pt II (1952) 225
9. JURY, E. I. Sampled-Data Control Systems, p. 5. 1958. New York; Wiley
10. RAGAZZINI, J. R. and FRANKLIN, G. Sampled-Data Control Systems. 1958. New York; McGraw-Hill
11. SKLANSKY, J. On closed-form expansions for mean squares in discrete-continuous systems. Inst. Radio Engrs Trans. Automatic Control, PGAC-4 (1958) 24
12. SKLANSKY, J. Optimizing the dynamic parameters of a track-while-scan system. R.C.A. Rev. 18 (1957) 163
Summary

In some sampled-data systems of interest, the sampling intervals are random variables. In the statistical design of such systems, it may be necessary to compute the spectral density of a random signal at the output of a random sampler. This spectral density may be determined from a complex convolution integral, which, in many cases of practical interest, is easily evaluated by the method of residues.

The applications of the theory include Wiener filtering and the analysis of an error-sampled servomechanism characterized by 'misses'. In the latter application, the miss probability profoundly influences the design, and neglect of this factor may lead to instability in a mean-square sense.
DISCUSSION

B. M. BROWN (U.K.)
I would like to offer a few remarks on the background of the problem discussed in this paper. If the sampling period T and the sampling phase are fixed, and if x(t) belongs to a stationary ensemble, x*(t) and y(t) are non-stationary. The ensemble of functions x*(t) possesses no auto-correlation function since the former takes the form of a train of impulse functions. However, y(t) has a time-varying auto-correlation function obtained as an ensemble average. It also has a periodic spectral density. This case has been discussed by M. Mori in Statistical Treatment of Sampled-data Control Systems for Actual Random Inputs, Trans. Amer. Soc. mech. Engrs 80, 2 (1958) 444-450. If, however, T is fixed but the phase is stochastic and uniformly distributed, x*(t) is stationary and has a fixed auto-correlation function and spectral density. The usual methods for continuous linear systems can be used to determine the stationary spectral density of y(t). The standard Wiener-Hopf technique can then be used for optimization. This is discussed by Ragazzini and Franklin (see Reference 1 of the paper). Dr. Bergen generalizes one stage further to the case where T itself is stochastic. Again x*(t) and y(t) turn out to be stationary. I have been investigating the solution of the original problem by means of a general theory for time-varying systems with non-stationary inputs and outputs. This makes use of double transforms, that is to say, transforms of correlation functions with respect to time difference and with respect to current time. It is possible to obtain a general formula relating the transforms of input and output. Unfortunately I have not succeeded, as yet, in extracting solutions to potentially useful problems by this method.
A. M. BATKOV (U.S.S.R.). Are systems with random T, and systems with fixed T and supplementary noise, equivalent from the point of view of stability?

G. P. TARTAKOVSKI (U.S.S.R.). There are two approaches to the analysis of random processes in impulse systems with variable parameters, including those with variable parameters of the impulse element, in particular periodic alteration. The first approach is associated with interest in the value of the output magnitudes only at discrete instants of time. In this case it is possible to construct the general theory of random processes in impulse variable systems, including solutions of problems arising in the case of a variable random period of sequence. The corresponding general theory has been formulated in published works. The second approach is associated with the consideration of the output value as a continuous function of time. Many cases exist in which this approach is necessary. The work under consideration is of this tendency, and in my opinion is of considerable interest since it finds, in very elegant form, the spectral density of the process at the output.

A. BERGEN, in reply. The calculation of spectral density presumes the stability of the system. Other methods must be used to investigate this stability. The situation is analogous to that occurring in system optimization by means of the Phillips method; the mean-square error may be minimized by choice of a parameter, but the stability must be checked by other means.