Composite Modeling of Nonstationary Signals

M. BOUDAOUD and L. F. CHAPARRO

Department of Electrical Engineering, University of Pittsburgh, Pittsburgh, PA 15261, U.S.A.
ABSTRACT: Most temporal signals of practical interest are nonstationary and need to be modeled using time-varying systems. In this paper a composite model for these signals is proposed which accounts for nonstationarities in the mean and the autocorrelation functions. Unbiased and consistent time-varying estimators for the mean and the variance functions are studied and used to produce zero-mean, constant-variance signals that can be modeled using autoregressive systems with time-varying coefficients. The identification of the coefficients is implemented recursively using the parameterization of the coefficients by a set of basis functions. We illustrate the application of the composite model in the analysis and synthesis of speech and in the estimation of instantaneous frequencies in radar return signals.

I. Introduction

Although most signals of practical interest are nonstationary in the mean and/or the autocorrelation and as such should be modeled with time-variant systems (1-6), the use of time-invariant models is more prevalent because of computational advantages. In this paper, we propose a composite model for nonstationary temporal signals which takes into consideration nonstationarities due to the mean and the autocorrelation. Using time-varying estimators for the mean and variance functions, we transform the given signal into a zero-mean, constant-variance signal which is then modeled by an autoregressive system with time-varying coefficients. The identification of these coefficients is made possible by parametrizing them using a set of basis functions (2-6). The lack of recursive algorithms to obtain the coefficients is a drawback of earlier procedures (3, 4) proposed to model nonstationary signals. In this paper, we propose to do the identification using the recursive algorithm given in (7). Such an algorithm is based on conjugate directions, and it reduces to the Levinson algorithm in the case of modeling stationary signals. The proposed modeling procedure finds application in the processing of nonstationary signals such as speech, seismic signals and radar return signals. We illustrate the application of our procedure in the analysis and synthesis of speech signals, and in the estimation of instantaneous frequencies in radar return signals. The synthesis of speech signals results in a generalization of the Linear Prediction (LP) methods used for coding purposes (3, 8).


The remainder of the paper is organized as follows. In Section II, we discuss the different issues involved in the composite modeling of nonstationary temporal signals. Namely, we consider the development of time-varying estimators for the mean and variance functions and the conditions under which these estimators are statistically unbiased and consistent. Also, we discuss the autoregressive modeling, with time-varying coefficients, of signals and the identification of the coefficients by means of a recursive algorithm. In Section III, we provide examples using speech and radar return signals to illustrate the application and the efficiency of the proposed modeling procedure.

II. Composite Model

Considering that the given nonstationary signal $x(n)$ is defined in a domain $W = \{n \mid 0 \le n \le N-1\}$, and that it is a second-order random process, a simple composite model for it is

$$x(n) = m_x(n) + \tilde{x}(n), \qquad n \in W \qquad (1)$$

where $m_x(n)$ is a function of time, representing the mean function of $x(n)$, and $\tilde{x}(n)$ is a zero-mean nonstationary signal.

A. Time-varying estimator of $m_x(n)$

We are interested in the development of statistically unbiased and consistent estimators (1, 9) of the mean function $m_x(n)$. Let $\{\beta_i(n)\}_{i=0}^{\lambda}$ be a set of orthonormal functions defined with respect to some inner product $\langle \cdot, \cdot \rangle$ and let us expand $m_x(n)$ using these functions as

$$m_x(n) = \sum_{i=0}^{\lambda} \alpha_i\, \beta_i(n). \qquad (2)$$

We then have the following results.

Theorem I
(a) Unbiased and consistent estimators of the expansion coefficients $\alpha_i$, $0 \le i \le \lambda$, are of the form

$$\hat{\alpha}_i = \langle x(n), \beta_i(n) \rangle \qquad (3)$$

provided that

$$\lim_{N \to \infty} E\,\|x(n) - m_x(n)\|^2 = 0 \qquad (4)$$

where $\|\cdot\|$ denotes the norm with respect to the inner product $\langle \cdot, \cdot \rangle$, $E$ is the expected-value operator and $N$ is the length of the sequence $x(n)$.
(b) An estimator of $m_x(n)$ is then of the form

$$\hat{m}_x(n) = \sum_{i=0}^{\lambda} \hat{\alpha}_i\, \beta_i(n) \qquad (5)$$

and the estimator $\hat{m}_x(n)$ is
(i) unbiased if and only if $\hat{\alpha}_i$ is unbiased,
(ii) consistent if $\hat{\alpha}_i$ is consistent and $\sum_{i=0}^{\lambda} |\beta_i(n)|^2$ is bounded for all $n$.

The proof of the above theorem is easily constructed using the orthogonality of the functions $\{\beta_i(n)\}$ and the Schwarz inequality.

From the above theorem, we note that requiring the consistency of the estimators $\hat{\alpha}_i$ and $\hat{m}_x(n)$ imposes certain restrictions on the set of functions $\{\beta_i\}$ and on the signal $x(n)$. For instance, if the inner product is defined as

$$\langle u(n), v(n) \rangle = \frac{1}{N} \sum_{n=0}^{N-1} u(n)\, v^*(n) \qquad (6)$$

where $*$ denotes complex conjugate, then (4) can be written as

$$\lim_{N \to \infty} \frac{1}{N} \sum_{n=0}^{N-1} \sigma_x^2(n) = 0 \qquad (7)$$

where $\sigma_x^2(n)$ is the variance of $x(n)$. Therefore, for estimators of $\alpha_i$ given by (3), that is

$$\hat{\alpha}_i = \frac{1}{N} \sum_{n=0}^{N-1} x(n)\, \beta_i^*(n), \qquad i = 0, 1, \ldots, \lambda, \qquad (8)$$

to be consistent, the variance of $x(n)$ must be bounded as the size of the data window $W$ is increased. The consistency of the estimator $\hat{m}_x(n)$ given in (5), however, cannot be guaranteed, because the functions $\beta_i(n)$ that are orthonormal with respect to the inner product (6) might not be bounded. For our applications, we use the inner product defined by (6) and the discrete Legendre polynomials for $\{\beta_i(n)\}$.

Subtracting $\hat{m}_x(n)$ from $x(n)$ we obtain the nonstationary, zero-mean signal $\tilde{x}(n)$, which we now proceed to model.
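To make the estimator concrete, the following is a minimal numpy sketch of (3) and (5), assuming the sample-average inner product of (6) and an orthonormal polynomial basis obtained by QR factorization of the monomials (a stand-in for the discrete Legendre polynomials used by the authors); all function and variable names are illustrative.

```python
import numpy as np

def orthonormal_poly_basis(N, order):
    """Orthonormal polynomial basis on n = 0..N-1 (stand-in for the discrete
    Legendre polynomials); columns are orthonormal w.r.t. the 1/N inner product (6)."""
    n = np.arange(N)
    V = np.vander(n / (N - 1), order + 1, increasing=True)  # monomials 1, n, n^2, ...
    Q, _ = np.linalg.qr(V)                                   # columns with unit 2-norm
    return Q * np.sqrt(N)                                    # rescale so (1/N) sum beta_i^2 = 1

def estimate_mean(x, order=15):
    """Time-varying mean estimate m_hat(n) = sum_i alpha_hat_i * beta_i(n), eqs (3), (5)."""
    x = np.asarray(x, dtype=float)
    N = len(x)
    B = orthonormal_poly_basis(N, order)           # B[:, i] = beta_i(n)
    alpha_hat = (B * x[:, None]).mean(axis=0)      # alpha_hat_i = <x, beta_i>, eq (3)
    return B @ alpha_hat                           # m_hat(n), eq (5)

# usage: remove a slowly varying mean ("hum") from a signal
rng = np.random.default_rng(0)
n = np.arange(1000)
x = 0.5 * np.sin(2 * np.pi * n / 1000) + rng.standard_normal(1000)
x_tilde = x - estimate_mean(x, order=15)           # zero-mean nonstationary signal
```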

B. Autoregressive modeling of $\tilde{x}(n)$

To assume that $\tilde{x}(n)$ is representable by an autoregressive system with time-varying coefficients requires that the impulse response of such a system, $h(n; m)$, be a degenerate sequence of the form (10-12):

$$h(n; m) = \sum_{i=1}^{L} v_i(n)\, \omega_i(m) \qquad (9)$$

where $\{v_i(n)\}$ and $\{\omega_i(m)\}$ are sets of linearly independent functions, and $L$ is the order of the degenerate function. If this condition is satisfied, the sequence $\tilde{x}(n)$ can then be related to an input white-noise sequence $e(n)$ by a difference equation of a certain order $p$:

$$\tilde{x}(n) + \sum_{i=1}^{p} a_i(n-i)\, \tilde{x}(n-i) = b_0(n)\, e(n). \qquad (10)$$

This equation can be thought of as a time-varying transformation that converts $\tilde{x}(n)$ into a stationary sequence $e(n)$. The coefficients $\{a_i(n-i)\}$ and $b_0(n)$ in (10) are represented as expansions in terms of the functions $\{v_i(n)\}$. Since $b_0(n)$ in (10) cannot be calculated directly, we simplify the model by letting

$$y(n) = \frac{\tilde{x}(n)}{b_0(n)} \qquad (11)$$

and require that $y(n)$ be a constant-variance signal. We then find that

$$b_0(n) = \sigma_{\tilde{x}}(n) / \sigma_y \qquad (12)$$

where $\sigma_{\tilde{x}}(n)$ will be replaced by an estimate to obtain $\hat{b}_0(n)$. Letting $\sigma_y = 1$, we obtain a new set of equations:

$$y(n) + \sum_{i=1}^{p} c_i(n-i)\, y(n-i) = e(n) \qquad (13)$$

where

$$c_i(n-i) = a_i(n-i)\, \frac{b_0(n-i)}{b_0(n)}. \qquad (14)$$
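As an illustration of the time-varying autoregressive relation (13), the following sketch generates $y(n)$ from white noise $e(n)$ for a given set of time-varying coefficients $c_i(n)$; the coefficient trajectories used in the usage lines are invented for the example.

```python
import numpy as np

def tvar_synthesize(c, e):
    """Run the time-varying AR recursion (13): y(n) = e(n) - sum_i c_i(n-i) * y(n-i).
    c has shape (p, N), with c[i-1, m] holding c_i(m); e is the driving white noise."""
    p, N = c.shape
    y = np.zeros(N)
    for n in range(N):
        acc = e[n]
        for i in range(1, p + 1):
            if n - i >= 0:
                acc -= c[i - 1, n - i] * y[n - i]
        y[n] = acc
    return y

# usage with an invented, slowly varying order-2 coefficient set (stable: roots inside unit circle)
rng = np.random.default_rng(1)
N = 1000
n = np.arange(N)
c = np.vstack([-1.6 + 0.2 * n / N,        # c_1(n), drifts slowly
               0.81 * np.ones(N)])        # c_2(n), constant
y = tvar_synthesize(c, rng.standard_normal(N))
```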

The estimation of the variance of $\tilde{x}(n)$ is done similarly to the mean of $x(n)$. We let

$$\hat{\sigma}^2_{\tilde{x}}(n) = \sum_{i=0}^{\lambda} \hat{\gamma}_i\, \beta_i(n) \qquad (15a)$$

where

$$\hat{\gamma}_i = \langle \tilde{x}^2(n), \beta_i(n) \rangle, \qquad i = 0, 1, \ldots, \lambda \qquad (15b)$$

and $\{\beta_i\}$ are the functions defined before in Section II.A. A procedure similar to the one used to show the characteristics of the mean estimator (5) can be used here to show that $\hat{\gamma}_i$ and $\hat{\sigma}^2_{\tilde{x}}(n)$ are unbiased estimators. To show their consistency, Gaussian statistics are required (9).
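A sketch of the variance estimate (15) and the normalization (11)-(12) with $\sigma_y = 1$ follows, reusing the hypothetical estimate_mean and orthonormal_poly_basis helpers from the earlier sketch; the small floor added to the variance estimate is a numerical safeguard, not part of the original formulation.

```python
import numpy as np

def estimate_variance(x_tilde, order=15):
    """Time-varying variance estimate (15): project x_tilde^2 onto the basis."""
    x_tilde = np.asarray(x_tilde, dtype=float)
    N = len(x_tilde)
    B = orthonormal_poly_basis(N, order)                       # beta_i(n), from the earlier sketch
    gamma_hat = (B * (x_tilde ** 2)[:, None]).mean(axis=0)     # eq (15b)
    return B @ gamma_hat                                       # sigma^2_hat(n), eq (15a)

def normalize(x_tilde, order=15, floor=1e-8):
    """Form y(n) = x_tilde(n) / b0_hat(n), eqs (11)-(12) with sigma_y = 1."""
    var_hat = np.maximum(estimate_variance(x_tilde, order), floor)
    b0_hat = np.sqrt(var_hat)
    return x_tilde / b0_hat, b0_hat

# usage: zero-mean, approximately constant-variance signal ready for TVAR modeling
# y, b0_hat = normalize(x_tilde, order=15)
```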

The identification of the $\{c_i(n)\}$ coefficients is done by minimizing the following quadratic error:

$$\varepsilon_p = \sum_{n \in W} \Big(\, y(n) + \sum_{i=1}^{p} c_i(n-i)\, y(n-i) \Big)^2. \qquad (16)$$

To make this identification possible, we expand the coefficients $\{c_i(n)\}$ in terms of a given set of functions $\{f_j(n)\}$:

$$c_i(n) = \sum_{j=0}^{q} c_{ij}\, f_j(n), \qquad f_0(n) = 1. \qquad (17)$$

This parametrization is a consequence of the existing relation between difference equations and degenerate impulse response functions for time-varying systems (10-12) and it has been used before (2-4). Substituting (17) into (16) results in

$$\varepsilon_{p,q} = \sum_{n \in W} \Big(\, y(n) + \sum_{i=1}^{p} \sum_{j=0}^{q} c_{ij}\, f_j(n-i)\, y(n-i) \Big)^2. \qquad (18)$$

The minimization of (18) with respect to the $c_{ij}$ coefficients requires the solution of the following set of linear equations:

$$\sum_{i=1}^{p} \sum_{j=0}^{q} c_{ij}\, \phi(k, i;\, l, j) = -\phi(k, 0;\, l, 0), \qquad k = 1, \ldots, p, \quad l = 0, \ldots, q \qquad (19)$$

where

$$\phi(k, i;\, l, j) = \sum_{n \in W} f_j(n-i)\, f_l(n-k)\, y(n-i)\, y(n-k). \qquad (20)$$

To solve the equations in (19) in a systematic manner we need to establish a certain ordering for the coefficients $\{c_{ij}\}$. The lexicographic ordering of the two-dimensional indices $(k, l)$ and $(i, j)$ states that $(k, l)$ is of higher order than $(i, j)$ if and only if $k > i$ or, when $k = i$, $l > j$. With this ordering, and assuming $p$ and $q$ are known, we write Eq. (19) in matrix form:

$$\mathbf{R}\,\mathbf{c} + \mathbf{r} = \mathbf{0} \qquad (21)$$

where $\mathbf{R}$ is a $P \times P$, $P = p(q+1)$, positive definite, symmetric block covariance matrix defined by

$$\mathbf{R} = [\,\boldsymbol{\Phi}_{ki}\,], \qquad 1 \le k, i \le p, \qquad (22)$$

where $\boldsymbol{\Phi}_{ki}$ is a $(q+1) \times (q+1)$ matrix

$$\boldsymbol{\Phi}_{ki} = [\,\phi(k, i;\, l, j)\,], \qquad 0 \le l, j \le q. \qquad (23)$$

The vectors $\mathbf{c}$ and $\mathbf{r}$ are defined as

$$\mathbf{c} = [c_{10}, c_{20}, \ldots, c_{p0}, \ldots, c_{1q}, c_{2q}, \ldots, c_{pq}]^{T}, \qquad \mathbf{r} = [\phi(1,0;\,0,0),\; \ldots,\; \phi(p,0;\,q,0)]^{T}. \qquad (24)$$
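The following is a direct, batch least-squares sketch of (18)-(24): it builds the regressors $f_j(n-i)\,y(n-i)$, solves the normal equations for the $c_{ij}$, and reassembles the time-varying coefficients $c_i(n)$ of (17). It uses simple power-law basis functions $f_j(n) = (n/(N-1))^j$ with $f_0(n) = 1$ as a stand-in, since the exact basis (26) used by the authors is not reproduced here, and it solves the equations in one shot rather than with the recursive conjugate-direction algorithm of (7).

```python
import numpy as np

def tvar_identify(y, p, q):
    """Least-squares fit of the time-varying AR model (13) with the expansion (17).
    Returns c_ij (shape (p, q+1)) and the coefficient trajectories c_i(n) (shape (p, N))."""
    y = np.asarray(y, dtype=float)
    N = len(y)
    f = [lambda n, j=j: (n / (N - 1)) ** j for j in range(q + 1)]   # stand-in f_0 = 1, f_1, ..., f_q
    cols, idx = [], []
    for j in range(q + 1):            # ordering of (24): j outer, i inner
        for i in range(1, p + 1):
            col = np.zeros(N)
            col[p:] = f[j](np.arange(p, N) - i) * y[p - i:N - i]    # f_j(n-i) * y(n-i)
            cols.append(col)
            idx.append((i, j))
    A = np.column_stack(cols)[p:]     # rows n = p..N-1, i.e. the window W of (18)
    b = y[p:]
    # normal equations R c + r = 0 of (19)-(24): R = A^T A, r = A^T b
    c_vec = np.linalg.solve(A.T @ A, -A.T @ b)
    c_ij = np.zeros((p, q + 1))
    for (i, j), v in zip(idx, c_vec):
        c_ij[i - 1, j] = v
    n = np.arange(N)
    c_of_n = np.stack([sum(c_ij[i, j] * f[j](n) for j in range(q + 1)) for i in range(p)])
    return c_ij, c_of_n

# usage: fit the signal generated by the earlier synthesis sketch
# c_ij, c_of_n = tvar_identify(y, p=2, q=1)
```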

C. Recursive identification of $\{c_{ij}\}$

Since the values of $p$ and $q$ are not known a priori (they should actually be calculated along with the $c_{ij}$'s), we must solve Eqs. (19) in a recursive manner, starting with $p = 1$, $q = 0$ and increasing them in the lexicographic order until the quadratic error $\varepsilon_{p,q}$ remains unchanged. In Ref. (7) the authors developed an algorithm based on conjugate directions to solve a symmetric set of equations recursively, and we propose to apply it to solve the equations in (19). To obtain the $\{c_{ij}\}$ coefficients we initially assume the signal is stationary and proceed to calculate the coefficients $\{c_{i0}\}$. The algorithm in (7) provides the quadratic error at each recursion and, if it becomes smaller than a preset terminal error $\varepsilon_t$, the algorithm is stopped and we have a stationary model for the signal. Otherwise, the recursion proceeds until at some recursion $m$, $\varepsilon_m - \varepsilon_{m-1}$ is smaller than a preset difference error $\varepsilon_d$. At such a point, the original signal is multiplied by $f_1(n)$ and the correlation and crosscorrelation with the original signal are computed and used to determine the coefficients $\{c_{i1}\}$. Proceeding as before, we

continue calculating the $\{c_{i1}\}$ coefficients until the quadratic error becomes smaller than $\varepsilon_t$. The recursive procedure does yield the appropriate order of the system and the degree of the expansion of $c_i(n)$.

FIG. 1. (a) Speech signal, time-varying mean and zero-mean signal.
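A sketch of the order-selection loop described above, using the hypothetical batch solver tvar_identify from the previous sketch in place of the recursive conjugate-direction algorithm of (7); the growth rule follows the description in the text, the stopping constant eps_d is illustrative, and the terminal-error test on ε_t is omitted for brevity.

```python
import numpy as np

def tvar_error(y, c_of_n):
    """Quadratic error (16)/(18): sum over n >= p of [y(n) + sum_i c_i(n-i) y(n-i)]^2."""
    y = np.asarray(y, dtype=float)
    p, N = c_of_n.shape
    e = y.copy()
    for i in range(1, p + 1):
        e[i:] += c_of_n[i - 1, :N - i] * y[:N - i]
    return float(np.sum(e[p:] ** 2))

def select_order(y, p_max=12, q_max=4, eps_d=1e-4):
    """Grow the model as in Section II.C: increase p with q = 0 (stationary-style model)
    until the normalized error stops improving, then increase q at that p."""
    energy = float(np.sum(np.asarray(y, dtype=float) ** 2))
    p, prev_err = 1, np.inf
    while p <= p_max:
        _, c_of_n = tvar_identify(y, p, 0)
        err = tvar_error(y, c_of_n) / energy          # normalized error, cf. Fig. 1(b)
        if prev_err - err < eps_d:
            p -= 1                                     # negligible improvement: keep previous p
            break
        prev_err, p = err, p + 1
    p = max(1, min(p, p_max))
    q = 0
    while q < q_max:
        _, c_of_n = tvar_identify(y, p, q + 1)
        err = tvar_error(y, c_of_n) / energy
        if prev_err - err < eps_d:
            break                                      # adding expansion terms no longer helps
        prev_err, q = err, q + 1
    c_ij, c_of_n = tvar_identify(y, p, q)
    return p, q, c_ij, c_of_n
```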

III. Examples

To compare the performance of the composite model with that of procedures using time-invariant models, we consider the modeling of the speech segment shown in Fig. 1(a). The sequence $x(n)$ represents about 100 ms of voiced and unvoiced sounds with a small additive hum introduced by the recording process. We remove this hum from the signal by estimating the mean of the signal using Eqs. (3), (5). In (3), (5) we let $\lambda = 15$, $N = 1000$ and the functions $\{\beta_i(n)\}$ be the discrete Legendre polynomials (4):

$$\beta_i(n) = \cdots \qquad (25)$$

with $s^{(k)} = s(s-1)\cdots(s-k+1)$ for a variable $s$.

FIG. 1. (b) Normalized error vs. p and q.

To simplify the procedure we assume $\hat{b}_0(n)$ is a constant, so that we do not need to estimate the variance. The $\{c_i(n)\}$ coefficients are expanded using the basis functions

$$f_i(n) = \cdots, \qquad i = 0, 1, \ldots, q, \quad 0 \le n \le N-1, \qquad (26)$$

and we then obtain recursively the values $\{c_{ij}\}$. In Fig. 1(b), we display the normalized error $\bar{\varepsilon}_{p,q}$ (27), where $\varepsilon_{p,q}$ is given in (18), for various values of $p$ and $q$. For $q = 0$, since $f_0(n) = 1$, the normalized error $\bar{\varepsilon}_{p,0}$ corresponds to the error of the covariance method of linear prediction (8). As expected, for a fixed value of $p$, the error $\bar{\varepsilon}_{p,q}$ decreases as $q$ increases, and it shows small improvement after $q = 4$. For the various values of $q$ the error flattens out for values of $p$ equal to about 10.

As a second example, we analyze and synthesize the speech segment shown in Fig. 2(a), which represents about 100 ms of unvoiced/voiced speech. For $\lambda = 15$, $N = 1000$ in (3), (5) and using the basis functions in (25), we estimate the time-varying mean and variance of the signal and then obtain the zero-mean, constant-variance signal $y(n)$. The autoregressive model for this signal is achieved by parametrizing the time-varying coefficients with the functions $\{f_j(n)\}$ defined in (26), and the $\{c_{ij}\}$ values are obtained recursively by applying the algorithm in (7). In this case, the autoregressive model is of order $p = 11$ and each coefficient $c_i(n)$ was expanded as in (17) with $q = 4$.


FIG. 2. (a) Unvoiced-voiced speech signal. (b) Prediction error. (c) Synthesized signal.

The prediction error

$$e(n) = y(n) + \sum_{i=1}^{p} \sum_{j=0}^{q} \hat{c}_{ij}\, f_j(n-i)\, y(n-i) \qquad (28)$$

is calculated and displayed in Fig. 2(b).
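A small sketch of the analysis step (28): given the fitted coefficient trajectories from the earlier hypothetical identification sketch, the residual $e(n)$ is obtained by running the time-varying predictor over $y(n)$.

```python
import numpy as np

def tvar_residual(y, c_of_n):
    """Prediction error (28): e(n) = y(n) + sum_i c_i(n-i) y(n-i),
    where c_of_n[i-1, m] = c_i(m) comes from tvar_identify."""
    y = np.asarray(y, dtype=float)
    p, N = c_of_n.shape
    e = y.copy()
    for i in range(1, p + 1):
        e[i:] += c_of_n[i - 1, :N - i] * y[:N - i]
    e[:p] = 0.0          # start-up samples outside the analysis window W
    return e

# usage: e = tvar_residual(y, c_of_n); the synthesized signal is recovered by driving
# the time-varying AR recursion of the earlier synthesis sketch with this excitation.
```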

Reversing the above analysis procedure we are able to synthesize the speech signal. To make this synthesis more efficient, we extract certain basic information from $e(n)$ that permits us to reconstruct it at the receiver end. The error sequence $e(n)$ displays the characteristics of noise and quasi-periodic peaks because of the existence of unvoiced and voiced sounds in $x(n)$. Estimating the variance or envelope of $e(n)$ and using it to pick the almost periodic pulses in $e(n)$, we can approximately recreate this signal by using

$$\hat{e}(n) = (1 - c(n))\, g(n)\, u(n) + G\, c(n) \qquad (29)$$

where

$$c(n) = \begin{cases} 1 & \text{if } |e(n)| \ge \alpha\, g(n) \\ 0 & \text{otherwise} \end{cases} \qquad (30)$$

picks the quasi-periodic pulses by using the envelope $g(n)$, $\alpha$ is a threshold value, $u(n)$ is white noise and $G$ is a constant gain. Using the expansion coefficients for the mean and the variance of $x(n)$ together with the autoregressive time-varying parameters, and using $\hat{e}(n)$ as input, we obtain the synthesized signal shown in Fig. 2(c).
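A sketch of the excitation reconstruction (29)-(30) follows; the envelope estimate g(n) is computed here with a simple moving RMS window, which is one possible reading of "variance or envelope of e(n)", and the threshold alpha and gain values are illustrative.

```python
import numpy as np

def reconstruct_excitation(e, alpha=2.0, gain=1.0, win=50, rng=None):
    """Recreate the prediction error as in (29)-(30): envelope-shaped white noise in
    unvoiced stretches, pulses of amplitude `gain` where |e(n)| >= alpha * g(n)."""
    rng = rng or np.random.default_rng()
    e = np.asarray(e, dtype=float)
    N = len(e)
    kernel = np.ones(win) / win
    g = np.sqrt(np.convolve(e ** 2, kernel, mode="same"))   # moving RMS envelope g(n)
    c = (np.abs(e) >= alpha * g).astype(float)              # pulse picker, eq (30)
    u = rng.standard_normal(N)                              # white noise u(n)
    return (1.0 - c) * g * u + gain * c                     # eq (29)
```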

As a final example, we consider the nonstationary modeling of radar return signals with the intention of estimating their instantaneous frequencies (13, 14). We illustrate our procedure using the following signals given in (13):

$$x_1(t) = \cdots \qquad (31)$$

$$x_2(t) = \cdots + \sin 2\pi (0.1 + 8 \times 10^{-4}\, t)\, t. \qquad (32)$$

The problem is to estimate the instantaneous frequencies of these signals from sampled versions $x_1(nT)$, $x_2(nT)$, $0 \le n \le N-1$. The instantaneous frequency of each of the above sinusoids is defined as the derivative with respect to time of the argument of the sinusoid. Using trigonometric identities, it was shown in (13) that $x_1(t)$ satisfies a regression equation with time-varying coefficients. Such an equation cannot be obtained, however, for $x_2(t)$. Using the autoregressive model discussed before to fit a set of discrete-time values of the signals in a time interval, we obtain regression equations for both signals having time-varying coefficients. Because of the slowly varying nature of these coefficients, the frozen-time transfer function (15)

$$H(z, nT) = \frac{\hat{b}_0}{C(z, nT)} \qquad (33a)$$

where

$$C(z, nT) = 1 + \sum_{i=1}^{p} c_i(nT - iT)\, z^{-i} \qquad (33b)$$

is used to approximate the generalized transfer function of the model.

As discussed in (13), all natural modes of the signals considered are on the unit circle at discrete frequencies corresponding to the instantaneous frequencies. Calculating the roots of $C(z, nT)$ over the interval $0 \le n \le N-1$, estimates of the instantaneous frequencies are obtained. In Fig. 3(a) we show the original (obtained from the equations) and the estimated values of the instantaneous frequencies of $x_1(t)$ and $x_2(t)$. Although the procedure provides acceptable estimates of the instantaneous frequencies for both signals, for $x_2(t)$ we need to consider smaller intervals than for $x_1(t)$. The effect of adding noise to the signal in the estimation was considered for $x_1(t)$, with the results shown in Fig. 3(b). The procedure seems to be robust for moderate signal-to-noise ratios.
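A sketch of the frequency-estimation step: at each time index the frozen-time polynomial $C(z, nT)$ of (33b) is formed from the fitted coefficient trajectories and its roots are converted to frequencies. It assumes the hypothetical c_of_n array produced by the earlier identification sketch; T is the sampling interval.

```python
import numpy as np

def instantaneous_frequencies(c_of_n, T):
    """Estimate instantaneous frequencies (in Hz) from the roots of the frozen-time
    polynomial C(z, nT) = 1 + sum_i c_i(nT - iT) z^-i, eq (33b)."""
    p, N = c_of_n.shape
    freqs = np.full((N, p), np.nan)
    for n in range(p, N):
        # coefficients of z^p + c_1(nT-T) z^(p-1) + ... + c_p(nT-pT), highest degree first
        coeffs = np.concatenate(([1.0], [c_of_n[i - 1, n - i] for i in range(1, p + 1)]))
        roots = np.roots(coeffs)
        f = np.angle(roots) / (2.0 * np.pi * T)   # modes near the unit circle map to frequencies
        freqs[n, :len(f)] = np.sort(f)
    return freqs

# usage: f_hat = instantaneous_frequencies(c_of_n, T=1.0); the positive entries of each
# row estimate the instantaneous frequencies at time nT, as displayed in Fig. 3(a).
```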

IV. Conclusions

The composite model for temporal signals proposed in this paper takes into consideration nonstationarities due to the mean and the autocorrelation functions.


FIG. 3. (a) Estimation of instantaneous frequencies for $x_1(t)$ and $x_2(t)$.

FIG. 3. (b) Estimation of instantaneous frequency for noisy $x_1(t)$ with SNRs of 7 and 1.8 dB.

The time-varying estimators developed for the mean and variance are shown to be unbiased, but their consistency depends on choices of inner product and basis. For the autoregressive modeling of zero-mean, constant-variance nonstationary signals, we propose an efficient algorithm to identify the time-varying coefficients. As shown in the examples, the composite model is more appropriate than time-invariant models for many signals of practical interest. The extra computational load required by our model is alleviated by the recursive algorithm. An interesting topic for further research is the stability of the obtained autoregressive model.

References

(1) W. A. Fuller, "Introduction to Statistical Time Series", John Wiley, New York, 1976.
(2) S. T. Rao, "The filtering of non-stationary time series models with time-dependent parameters", J. R. Stat. Soc., Ser. B, Vol. 32, pp. 312-322, Mar. 1970.


(3) L. A. Liporace, "Linear estimation of nonstationary signals", J. Acoust. Soc. of Am., Vol. 58, pp. 1288-1295, Dec. 1975.
(4) F. Kozin, "Estimation and modeling of nonstationary time series", Proc. Symp. on Appl. of Comp. Methods in Eng., pp. 603-612, Los Angeles, CA, 1977.
(5) D. Darlington, "Nonstationary smoothing and prediction using network theory concepts", IRE Trans. on Circuit Theory, pp. 1-13, May 1959.
(6) Y. Grenier, "Time varying lattices and autoregressive models: parameter estimation", Proc. ICASSP, pp. 1337-1340, May 1982.
(7) L. F. Chaparro and M. Boudaoud, "Recursive solution of covariance equations for linear prediction", J. Franklin Inst., Vol. 320, pp. 161-167, Sept. 1985.
(8) L. R. Rabiner and R. W. Schafer, "Digital Processing of Speech Signals", Prentice-Hall, Englewood Cliffs, NJ, 1978.


(9) V. Z. Marmarelis, "Practical estimation of correlation functions of nonstationary Gaussian processes", IEEE Trans. on Info. Theory, Vol. 29, pp. 937-938, Nov. 1983.
(10) H. D'Angelo, "Linear Time-Varying Systems", Allyn and Bacon, Boston, MA, 1970.
(11) F. G. Tricomi, "Integral Equations", Interscience, New York, 1967.
(12) N. Huang and J. K. Aggarwal, "On linear shift-variant digital filters", IEEE Trans. Circ. and Syst., pp. 672-679, Aug. 1980.
(13) K. C. Sharman and B. Friedlander, "Time-varying autoregressive modeling of a class of nonstationary signals", Proc. ICASSP, pp. 22.2.1-22.2.4, New York, 1984.
(14) R. G. Wiley and W. R. Carmichael, "A practical procedure for estimation of instantaneous frequency", IEEE Trans. Instr. and Meas., pp. 76-79, Mar. 1981.
(15) T. Leou and J. K. Aggarwal, "Recursive implementation of LTV filters: frozen-time transfer functions versus generalized transfer function", Proc. of IEEE, Vol. 72, pp. 980-981, Jul. 1984.