Journal of The Franklin Institute
DEVOTED TO SCIENCE AND THE MECHANIC ARTS
Volume 301, Number 5, May 1976

Delta Modulation of Time-discrete Signals with Independent Samples

by THEODORE S. KOUBANITSAS*

Research Centre for National Defence, Galatsi, Athens, Greece

ABSTRACT: Previous studies of Delta Modulation (DM) were predominantly concerned with various aspects of the noise introduced by the quantising process. This paper is primarily directed towards the statistics of the DM output signal, which become a concern when optimizing the DM-system behaviour against the disturbances of the transmission channel. The approximate distribution of the output signal is derived analytically under the assumption of independent input-signal samples and is compared graphically with the exact distribution. Finally, the incidence of sample correlation is considered on the basis of results produced by means of simulation.
I. Introduction
Speech or image signals that are sampled at the Nyquist rate normally exhibit a significant correlation between adjacent samples. Oversampling further increases the correlation, thereby increasing the order of the Markov process representing the signal. Delta Modulation (DM) takes advantage of this situation by employing a simple quantizing strategy, as well as a memory, in order to improve its performance (1-3). Optimality would then necessitate matching the memory size to the order of the input Markov process. The analysis and design of a DM system can, in principle, be performed analytically if the statistics of the input process are known. This, however, turns out to be a formidable task, since multidimensional distribution functions and nonlinearities are usually involved. Even for a first-order input Markov process and a simple linear DM processor, present formulations are exceedingly complex and unwieldy. Studies of DM have, therefore, been confined predominantly to its noise aspects (4, 5) and noise performance (2, 6, 7). The results presented are simplified closed-form approximations, mostly obtained through computer simulations.

* Mailing address: 116 Egnatia St., Thessaloniki, Greece.
This paper is concerned with the statistical aspects of linear DM, which are useful when the estimation of the channel performance of DM is required. The analysis is relatively simple and general, and the results are expressed in closed form. This is, however, attained somewhat at the expense of realism: it is assumed that the input-signal samples at the system clock-rate are mutually independent. Nevertheless, it is believed that the results will provide some insight into the behaviour and optimization of DM.

II. Formulation of the Problem
The ideal linear DM system under consideration is schematically shown in Fig. 1. No memory is involved and the channel is assumed noiseless.
FIG. 1. Ideal DM system. (a) Modulator. (b) Demodulator.
All signals are discrete and represented by sequences of samples. Let {x_n} and {y_n}, n ∈ I, denote the input and output signals, respectively, related by the equation

y_n = y_{n-1} + z_n.   (1)

{z_n} is the channel binary signal, defined as

z_n = d · sgn(x_n − y_{n-1}) = +d, for x_n − y_{n-1} > 0,
                             = −d, for x_n − y_{n-1} ≤ 0,   (2)

where d is the constant step-size.
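As a minimal illustration, the recursion of Eqs. (1) and (2) may be simulated directly; the sketch below assumes independent, normalized Gaussian input samples and an arbitrary step size d:

```python
import numpy as np

def delta_modulate(x, d):
    """Ideal linear DM of Fig. 1: channel signal z_n = +/-d (Eq. 2) and
    demodulator output y_n = y_{n-1} + z_n (Eq. 1), with y_0 = 0."""
    z = np.empty(len(x))
    y = np.empty(len(x))
    y_prev = 0.0
    for n, xn in enumerate(x):
        z[n] = d if xn - y_prev > 0 else -d   # z_n = d * sgn(x_n - y_{n-1})
        y[n] = y_prev + z[n]                  # y_n = y_{n-1} + z_n
        y_prev = y[n]
    return z, y

# Independent, normalized Gaussian input samples (the assumption made in this paper)
rng = np.random.default_rng(0)
x = rng.standard_normal(10_000)
z, y = delta_modulate(x, d=1.0)
print("fraction of time at level 0:", np.mean(y == 0.0))
```

The histogram of y over the levels ld then gives an empirical counterpart of the distribution derived below.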
The problem that is posed here is the determination of the statistical behaviour of {y_n} at the demodulator output. The input signal, {x_n}, is assumed to have independent samples, and its statistics to be known and stationary. Let x and X denote the p.d.f. and the c.d.f., respectively, of {x_n}. Let, also, y and Y denote the same distribution functions of {y_n}. These functions are, of course, discrete. Then, assuming that D is an integer multiple of d, we have that

Pr{y_n = D} = Pr{y_{n-1} + d = D, z_n > 0} + Pr{y_{n-1} − d = D, z_n < 0}.

Making use of Bayes' rule and the independence of the input samples it is possible to write

Pr{y_{n-1} + d = D, z_n > 0} = Pr{y_{n-1} = D − d} · Pr{z_n > 0 | y_{n-1} = D − d}
                             = Pr{y_{n-1} = D − d} · Pr{x_n > D − d}   (3)

and, similarly,

Pr{y_{n-1} − d = D, z_n < 0} = Pr{y_{n-1} = D + d} · Pr{z_n < 0 | y_{n-1} = D + d}
                             = Pr{y_{n-1} = D + d} · Pr{x_n < D + d}.

Now, since

Pr{y_n = D} = y(D),
Pr{y_{n-1} = D − d} = y(D − d),
Pr{y_{n-1} = D + d} = y(D + d),
Pr{x_n > D − d} = 1 − X(D − d),
Pr{x_n < D + d} = X(D + d),

Eq. (3) may be recast into the form

y(D) = y(D − d)·[1 − X(D − d)] + y(D + d)·X(D + d).   (4)
This is a finite difference equation that can be solved by various methods (8).
FIG. 2. Exact y(D), defined at t = ld (l ∈ I), for normalized Gaussian x.

It is in fact shown in the Appendix that

y(D) = A · Π_{l=1}^{D/d} { [1 − X((l − 1)d)] / X(ld) },   (5)
where A is a normalizing constant. Estimation of y(D), for all D, is then equivalent to knowing the distribution of {y_n}. This distribution (symmetric about t = 0) is plotted in Fig. 2 for normalized Gaussian x and various values of d. It is characteristic that the probabilities Pr{y_n > d | d ≈ 1} and Pr{y_n > 1 | d ≈ 0} are very small.
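For a numerical illustration, Eq. (5) can be evaluated directly; the sketch below assumes a normalized Gaussian X, a truncated grid of levels ld, symmetry about zero, and normalization over that grid:

```python
import numpy as np
from scipy.stats import norm

def exact_y(d, l_max):
    """Eq. (5): y(ld) = A * prod_{i=1}^{l} [1 - X((i-1)d)] / X(id), l = 0..l_max,
    with X the normalized Gaussian c.d.f.; negative levels follow by symmetry,
    and A is fixed by normalizing over the truncated grid of levels."""
    X = norm.cdf
    y_pos = np.ones(l_max + 1)
    for l in range(1, l_max + 1):
        y_pos[l] = y_pos[l - 1] * (1.0 - X((l - 1) * d)) / X(l * d)
    y_full = np.concatenate([y_pos[:0:-1], y_pos])   # levels -l_max*d .. +l_max*d
    return y_full / y_full.sum()

levels = np.arange(-20, 21) * 0.5
y = exact_y(d=0.5, l_max=20)
print("Pr{y_n = 0}   =", y[levels == 0.0][0])
print("Pr{|y_n| > 2} =", y[np.abs(levels) > 2.0].sum())
```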
It is clear from Eq. (5) that very large products are involved in the determination of the tails of y. For this reason a simple closed-form approximation to y is derived in the next section. The derivation draws from some results of Rey (9).
III. Distribution of Output Signal
Let us consider again Eq. (4). This can be rewritten as

y(D + d)·X(D + d) − y(D)·X(D) = [y(D) − y(D − d)] + [y(D − d)·X(D − d) − y(D)·X(D)],

or, formally,

Δ{y(D + d)·X(D + d)} = Δ{y(D)} − Δ{y(D)·X(D)},

where Δ is the difference operator with respect to d. Finally, Eq. (4) may be reduced to the form

Δ{y(D + d)·X(D + d) − y(D)·[1 − X(D)]} = 0.
Now, the fact that y and X are distribution functions leads to the conclusion that the operand is not periodic but constant. Furthermore, since X is bounded and y can take arbitrarily small values, it follows that the constant is zero; that is,

y(D + d)·X(D + d) = y(D)·[1 − X(D)].   (6)

If Eq. (6) is written as

y(D + d)/y(D) = [1 − X(D)] / X(D + d),   (7)
then it is possible to draw some preliminary conclusions about y. The right-hand side of Eq. (7) decreases with large positive D and increases with large negative D. Therefore it follows that y is unimodal, irrespective of the particular form of x. Also, due to the symmetry of the DM process, if x is symmetric then y is also symmetric about the same point. Before proceeding to the derivation of y from Eq. (7) some qualifying comments are in order. As stated earlier, all distribution functions involved are discrete. It is now stipulated that they are continuous functions. This places no restriction on generality, since they will implicitly be considered only for integer values of their arguments. Finally, a simplifying assumption: X is at least twice differentiable and piecewise linearizable; that is, X is smooth enough to be satisfactorily approximated by the first two terms of its Taylor expansion in a finite range of width d at any point, D, of its domain of definition. On this basis, it follows that

[1 − X(D)] / X(D + d) ≈ [1 − X(D)] / [X(D) + d·x(D)]

and, hence, Eq. (7) may be written as

y(D + d)/y(D) ≈ [1/(d·x(D)) − X(D)/(d·x(D))] / [X(D)/(d·x(D)) + 1].   (8)

This equation is formally similar to the ratio

Pr{k + 1 | K, ½} / Pr{k | K, ½} = (K − k)/(k + 1),   (9)

where the probability terms follow the binomial law (10), defined by

Pr{k | K, ½} = C(K, k)·(½)^k·(1 − ½)^(K−k) = 2^(−K)·C(K, k).   (10)

The factor C(K, k) is given by C(K, k) = Γ(K + 1)/[Γ(k + 1)·Γ(K − k + 1)], where Γ(·) is the Gamma function, defined as

Γ(v) = ∫_0^∞ u^(v−1)·exp(−u) du.

It can therefore be concluded that, except for a constant multiplying factor, y follows the binomial distribution; that is,

y(D) ≈ B·C(K, k)·2^(−K),  where K ≜ 1/(d·x(D)),  k ≜ X(D)/(d·x(D)).   (11)

The value of B, determined from ∫_{−∞}^{+∞} y(t) dt = 1, or for piecewise linearizable y from Σ_D y(D)·d = 1, turns out to be ≈ 1/d. Thus, y may finally be expressed as

y(D) ≈ (2^(−K)/d)·Γ(K + 1)/[Γ(k + 1)·Γ(K − k + 1)],   (12)

with K and k given from Eq. (11).
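As an illustration, the approximation of Eqs. (11) and (12) is easy to evaluate; the sketch below assumes a normalized Gaussian x and works with the log-Gamma function to avoid overflow for small d (large K):

```python
import math
from scipy.stats import norm

def approx_y(D, d):
    """Eqs. (11)-(12) for normalized Gaussian x:
    K = 1/(d*x(D)), k = X(D)/(d*x(D)),
    y(D) ~ 2**(-K) * Gamma(K+1) / (d * Gamma(k+1) * Gamma(K-k+1))."""
    xD, XD = norm.pdf(D), norm.cdf(D)
    K = 1.0 / (d * xD)
    k = XD / (d * xD)
    log_y = (-K * math.log(2.0) - math.log(d)
             + math.lgamma(K + 1.0) - math.lgamma(k + 1.0) - math.lgamma(K - k + 1.0))
    return math.exp(log_y)

for D in (0.0, 0.5, 1.0, 2.0):
    print("D =", D, "  approximate y(D) =", approx_y(D, d=0.25))
```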
Plots of the continuous y for uniformly distributed and normally distributed input signals are shown in Figs. 3 and 4, respectively. It may be seen that the tails of x hardly have any significant effect in the vicinity of its median. It may also be noted in passing that the binomial y may be approximated by a normal p.d.f. for large K and k, or, equivalently, for small d. Two important conclusions can immediately be drawn from Figs. 3 and 4: (i) For d smaller than the r.m.s. of the output signal, {y_n} can follow {x_n} with an exceedingly small probability. This is due to the independence of the input samples and gives rise to overload noise. (ii) For d larger than the r.m.s. of the output signal, {y_n} can follow {x_n} with a relatively high probability, but in a coarse manner. This is due to the fact that Pr{y_n > d} is very small and gives rise to granular noise.
It appears, therefore, that in order to strike a balance d should be of the same order as the r.m.s. of the input signal. Finally, a few words about the accuracy of the approximation of Eq. (12). The formal procedure followed for its derivation is legitimate and is usually employed for the derivation of Boltzmann's energy distribution (10).
FIG. 5. y(D), defined at t = ld (l ∈ I), for x uniform in [−½, +½]. Approximate and exact y-values coincide.
Figures 5 and 6 compare the approximate y [Eq. (12)] with the exact y [Eq. (5)] for uniform and normal x, respectively. It may be seen that the approximation is in general satisfactory and that it improves with decreasing d. This is so because the piecewise linear approximation improves with decreasing d. In the case of uniform x the approximate y and the exact y coincide, since X is a perfectly linearizable function.

IV. Non-zero Correlation
The mean of y is found to be

μ₀ = q,   (13)

where q is the quantile of X at level ½, and the variance

σ₀² = ½·(1 − ½)·d²·K ≈ d/[4·x(q)].   (14)
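As a rough numerical check of Eqs. (13) and (14), under the same assumptions as the earlier sketches (normalized Gaussian x, so q = 0 and x(q) = 1/√(2π)), the empirical moments of a simulated output may be compared with the predictions μ₀ = 0 and σ₀² ≈ d·√(2π)/4:

```python
import numpy as np

rng = np.random.default_rng(1)
d = 0.5
x = rng.standard_normal(200_000)          # independent, normalized Gaussian input
y = np.empty_like(x)
y_prev = 0.0
for n, xn in enumerate(x):                # Eqs. (1)-(2)
    y_prev += d if xn - y_prev > 0 else -d
    y[n] = y_prev
print("mean(y) =", y.mean(), "   predicted mu_0      =", 0.0)
print("var(y)  =", y.var(),  "   predicted sigma_0^2 ~", d * np.sqrt(2.0 * np.pi) / 4.0)
```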
It can be shown (9) that both μ₀ and σ₀ do not depend considerably on the tails of y and, hence, they may be regarded as robust estimates of location and scale, respectively. It would be interesting to see how these estimates are modified when the input samples are correlated. On account of the mathematical complexities involved, heuristic arguments will be used, while the conclusions derived will be supported by simulation results.
FIG. 6. Approximate y (solid line) and exact y (dashed line) for normalized Gaussian x.
Let μ_ρ and σ_ρ² denote the mean and variance of y in the non-zero correlation case. Due to the symmetry of the DM process the mean is not expected to change by the introduction of correlation. It is apparent, however, that y is expected to become more flat for ρ > 0, or less flat for ρ < 0, than in the zero-correlation case (ρ = 0). Thus, Eq. (14) provides an underestimation, or an overestimation, of σ_ρ². Simulation of the DM process with a first-order Gaussian-Markov input signal was performed on a digital computer. The signal can be mathematically described by

x_n = ρ·x_{n-1} + (1 − ρ²)^(1/2)·w_n,   (15)
where {w_n} is a stationary uncorrelated Gaussian process; such a signal can be produced by sampling the output signal of a single-pole linear filter driven by white noise (11). The results show that, with an error ε < 0.1, σ_ρ² is given by

(16)

for 0.4 < |ρ| < 0.7. Other Monte Carlo methods also seem to yield similar results (9). For a better approximation, or an extension of the range of validity of Eq. (16), higher powers of ρ need to be taken into account.
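The simulation just described can be sketched in a few lines; the example below assumes Eq. (15) in its unit-variance form and simply compares the empirical output variance with the zero-correlation value of Eq. (14) for a few values of ρ:

```python
import numpy as np

def dm_output_variance(rho, d, n=200_000, seed=0):
    """Drive the ideal DM loop of Eqs. (1)-(2) with the first-order
    Gauss-Markov signal of Eq. (15) and return the empirical variance of y."""
    rng = np.random.default_rng(seed)
    w = rng.standard_normal(n)
    x = np.empty(n)
    x[0] = w[0]
    c = np.sqrt(1.0 - rho ** 2)
    for k in range(1, n):                 # x_n = rho*x_{n-1} + sqrt(1-rho^2)*w_n
        x[k] = rho * x[k - 1] + c * w[k]
    y = np.empty(n)
    y_prev = 0.0
    for k in range(n):                    # y_n = y_{n-1} + d*sgn(x_n - y_{n-1})
        y_prev += d if x[k] - y_prev > 0 else -d
        y[k] = y_prev
    return y.var()

d = 0.5
sigma0_sq = d * np.sqrt(2.0 * np.pi) / 4.0     # Eq. (14) for normalized Gaussian x
for rho in (0.0, 0.5, -0.5):
    print("rho =", rho, "  var(y) =", dm_output_variance(rho, d),
          "  Eq. (14) value =", sigma0_sq)
```

For ρ > 0 the measured variance should exceed the ρ = 0 value, and fall below it for ρ < 0, in line with the flatness argument above.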
V. Conclusions

The statistical distribution of the output DM signal has been analytically derived. The analysis was based on the assumptions of independent input-signal samples and piecewise linearizability of their distribution. The treatment was general and the results were expressed in closed form. It is shown that the best results are achieved if the step-size, d, is of the same order as the r.m.s. of the input signal. It is also found through simulation that in the ρ ≠ 0 case the output variance approximately varies as a simple rational function of ρ, for moderate values of ρ. To conclude, it is emphasized that in the past, studies of DM performance were restricted to the estimation of noise or distortion introduced by the quantizing process. Knowledge of the DM output-signal statistics, however, is necessary when optimizing against the transmission-channel disturbances.
References

(1) T. Fine, "Properties of an optimum digital system and applications", IEEE Trans. Information Theory, Vol. IT-10, p. 287, Oct. 1964.
(2) P. A. Bello et al., "Statistical delta modulation", Proc. IEEE, Vol. 55, p. 308, March 1967.
(3) L. H. Zetterberg and J. Uddenfeldt, "Adaptive delta modulation with delayed decision", IEEE Trans. Commun., Vol. COM-22, p. 1195, Sept. 1974.
(4) H. Van De Weg, "Quantizing noise of a single integration delta modulation system with an N-digit code", Philips Res. Rept., Vol. 8, 1953.
(5) E. N. Protonotarios, "Overload noise in differential pulse code modulation systems", Bell Syst. Tech. J., Vol. 46, p. 2119, Nov. 1967.
(6) J. B. O'Neal, "Delta modulation quantizing noise: analytical and computer simulation results for Gaussian and television input signals", Bell Syst. Tech. J., Vol. 45, p. 117, Jan. 1966.
(7) J. E. Abate, "Linear and adaptive delta modulation", Proc. IEEE, Vol. 55, p. 298, March 1967.
(8) K. S. Miller, "Linear Difference Equations", Benjamin, New York, 1968.
(9) W. J. J. Rey, "Robust estimates of quantiles, location and scale in time series", Philips Res. Rept., Vol. 9, 1974.
(10) A. Renyi, "Probability Theory", North-Holland, Amsterdam, 1970.
(11) M. J. Levin, "Generation of a sampled Gaussian time series having a specified correlation function", IEEE Trans. Information Theory, Vol. IT-6, p. 545, Dec. 1960.
Appendix

Estimation of the exact y

As shown in the text, the finite difference Eq. (4) can be reduced to the form

y(D + d)·X(D + d) = y(D)·[1 − X(D)],

which may be written as

y(D + d) = {[1 − X(D)] / X(D + d)}·y(D).

This is a simple recursive relation that can easily be solved. Setting

W(D) ≜ [1 − X(D − d)] / X(D),

we obtain

y(D) = W(D)·y(D − d)
     = W(D)·W(D − d)·y(D − 2d)
     ...................................
     = W(D)·W(D − d)·W(D − 2d) ··· W(d)·y(0).

Thus,

y(D) = y(0) · Π_{l=1}^{D/d} { [1 − X((l − 1)d)] / X(ld) },

which is identical to Eq. (5). The constant factor A ≜ y(0) is determined by the relation

Σ_D y(D) = A · Σ_{D≠0} Π_{l=1}^{D/d} W(ld) + A = 1.