Signal Processing 5 (1983) 163-173 North-Holland Publishing Company
A DYNAMIC PROGRAMMING ALGORITHM FOR NONLINEAR SMOOTHING

Hermann NEY

Philips GmbH Forschungslaboratorium Hamburg, Vogt-Kölln-Straße 30, D-2000 Hamburg 54, Federal Republic of Germany

Received 12 March 1981; revised 15 March 1982 and 1 October 1982
Abstract. This paper presents a smoothness optimization approach to the nonlinear smoothing problem. Linear smoothing techniques fail to provide adequate results for curves which exhibit both sharp discontinuities to be preserved and, due to measurement or processing errors, outliers and noise to be filtered out. The nonlinear algorithm is based on a criterion for the overall smoothness of the curve. The smoothness criterion is optimized by a dynamic programming strategy. The resulting algorithm turns out to be computationally attractive: the computation time grows proportionally to N² and the storage requirements are 2N locations, where N is the number of samples to be smoothed. The algorithm is applied to smoothing pitch contours.
Keywords. Nonlinear smoothing, dynamic programming, elimination of outliers.
0166-1684/83/0000-0000/$03.00 © 1983 North-Holland

1. Introduction

Although in many signal processing applications the method of linear smoothing through lowpass filtering works very satisfactorily, there are a number of cases where linear smoothing leads to unacceptable results. The contours of the pitch period are an example of such a case. These cases have in common that the curve to be smoothed contains inherent sharp discontinuities. A high-frequency noise component superimposed on the time signal is thus indistinguishable from the sharp discontinuities, as far as spectral properties are involved. This is even more true if outliers are present in the data due to measurement or processing errors. As a result, a linear smoother would smear out the sharp discontinuities in the curve as well as filter out the noise and the outliers. The essential reason for this is the time or shift invariance of a linear smoother,
which means that local properties of the curve are not processed specifically.

In some cases it may be possible to approximate the curve to be smoothed by an analytic function, e.g. a polynomial, which depends on a set of parameters. By a least-squares fit these parameters may then be adjusted so that the measured data are optimally represented by the analytic function and are thus smoothed. But even in this case poor results are likely under certain conditions, since there is no way to detect outliers in the data and omit them from the least-squares fit. Moreover, the squared error criterion, which is often chosen because of its mathematical tractability, places higher weight on large errors than on small ones. The computational requirements can be an additional disadvantage. Another nonlinear smoothing method, the running median [1, 2], will also fail if the outliers to be corrected occur in clusters.

The ideal smoothing algorithm must be capable of preserving sharp discontinuities in the signal and yet simultaneously be capable of filtering out large errors and noise superimposed on the signal. A promising approach to the design of a smoothing algorithm with these features is to develop a procedure for selecting those curve samples which provide an overall smooth curve. It is the purpose of this paper to treat the smoothing problem as a nonlinear optimization problem in which the smoothness of the measured contour is to be optimized. The dynamic programming strategy [3], known as a powerful technique for solving optimization problems, is used to carry out the optimization. The use of dynamic programming for nonlinear smoothing was suggested by White [4]. In a number of applications the use of dynamic programming is severely limited by the amount of storage required to implement the basic calculations.
It is therefore remarkable that in our case the storage requirements turn out to be extremely moderate: only 2N locations are required, where N is the number of data to be smoothed.
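The failure of the running median on clustered outliers, noted above, is easy to verify. The following is a minimal sketch (our own illustration, not from the paper) of a 3-point running median applied to an isolated outlier and to a cluster of two adjacent outliers:

```python
def running_median3(x):
    """3-point running median; the two endpoints are left unchanged."""
    y = list(x)
    for i in range(1, len(x) - 1):
        y[i] = sorted(x[i - 1:i + 2])[1]  # middle of the 3-sample window
    return y

# An isolated outlier is removed:
isolated = running_median3([0, 0, 9, 0, 0])
# A cluster of two adjacent outliers passes through the filter unchanged:
clustered = running_median3([0, 0, 9, 9, 0, 0])
```

Here the isolated outlier is flattened to the background level, while the two adjacent outliers survive, since each 3-sample window around them contains a majority of outlier values.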
In Section 2 we formulate the nonlinear smoothing problem as an optimal selection procedure. In the following section we show how the dynamic programming strategy is applied to solve the problem; considerations of the amount of computation and of the storage requirements are included. In Section 4 we show some typical examples of how the method has been used to smooth pitch period contours. Finally, modifications of the method are discussed in Section 5.
2. Formulation of the nonlinear smoothing problem

The curve A to be smoothed is given by a time sequence of measured data:

A = [a(i)] = a(1), ..., a(i), ..., a(N),   (1)

where the index i denotes the time of measuring. An example of such a contour is shown in Fig. 1.
Fig. 1. Illustration of the nonlinear smoothing problem.

The data at time points i = 9, 11, 26 are obviously outliers, whereas the datum at time point i = 20 may be viewed as caused by noise superimposed on the data. Evidently it is difficult to find among the traditional algorithms one that is capable of recognizing the incorrect samples as such. The purpose of a smoothing algorithm, as we define it, is to select those samples which provide a 'smooth' curve. The selection can be represented
by a sequence of indices:

J = [j(k)] = j(1), ..., j(k), ..., j(K).   (2)

The sequence of indices along with the corresponding samples is defined to be the smoothed curve Â:

Â = [a(j(k))] = a(j(1)), ..., a(j(k)), ..., a(j(K)).   (3)

The sequence J will also be called the smoothing function. The smoothed curve Â must exhibit the same temporal structure as the original curve A. In other words, the sequence J has to preserve the chronological order of the samples, which requires the monotonicity of the sequence J:

j(k) < j(k + 1)   for k = 1, ..., K − 1.   (4)

The initial sample at i = 1 and the final sample at i = N cannot be expected to be necessarily correct, as can be seen from Fig. 1, where the final sample is not correct. Thus we have only the mild boundary condition:

1 ≤ j(k) ≤ N   for k = 1, ..., K.   (5)

In addition to the monotonicity condition (4) and the boundary condition (5) imposed on the smoothing function J, we must develop an appropriate criterion for the goodness of the smoothing procedure. A quantitative determination of the smoothing function J requires a point-by-point criterion of smoothness between two samples of the measured curve. There are a number of ways to define a criterion d(i, l) of smoothness between two samples a(i) and a(l). The following definitions are of particular interest:

(i) the absolute difference:

d(i, l) = |a(i) − a(l)|,   (6a)

(ii) the absolute slope:

d(i, l) = |a(i) − a(l)| / (i − l),   (6b)

(iii) the length or Euclidean distance:

d(i, l) = [α(i − l)² + (a(i) − a(l))²]^(1/2),   (6c)

(iv) the squared length:

d(i, l) = α(i − l)² + (a(i) − a(l))².   (6d)
The parameter α in (iii) and (iv) is necessary to take into account the different units of time and data. Due to its suitability and computational simplicity, the absolute slope is likely to be preferred to the other criteria in most applications. As we will see later, the squared length is very useful if the initial and final samples of the smoothed curve are known a priori and can thus be kept fixed.

Based on the criterion of smoothness between two consecutive samples, it is straightforward to define an overall criterion of smoothness for a curve. However, since the criteria of smoothness given by eq. (6) are never negative, the smoothed curve would consist of only one or a few samples of the same value if the endpoints of the curve were not kept fixed. To avoid this, it is necessary to introduce a nonnegative bonus or reward B which is obtained for each sample that is preserved in the smoothed curve. Thus the optimum smoothing function J is reasonably determined by minimizing the criterion of overall smoothness minus the bonus B for each sample included:

min over J of { Σ_{k=2}^{K} (d(j(k), j(k − 1)) − B) }.   (7)
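For concreteness, the four local criteria (6a)-(6d) might be coded as follows. This is a sketch in Python rather than the paper's Fortran; the function names are our own, `alpha` stands for the scale parameter α, and i > l is assumed:

```python
def d_abs(a, i, l):
    """(6a) absolute difference."""
    return abs(a[i] - a[l])

def d_slope(a, i, l):
    """(6b) absolute slope; i > l, so the gap i - l is positive."""
    return abs(a[i] - a[l]) / (i - l)

def d_length(a, i, l, alpha=1.0):
    """(6c) length (Euclidean distance), with scale factor alpha."""
    return (alpha * (i - l) ** 2 + (a[i] - a[l]) ** 2) ** 0.5

def d_sq_length(a, i, l, alpha=1.0):
    """(6d) squared length, with scale factor alpha."""
    return alpha * (i - l) ** 2 + (a[i] - a[l]) ** 2
```

Any of these can serve as the transition cost in the optimization of eq. (7); only the relative weighting of time gaps against amplitude jumps changes.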
The effect of the bonus B is visualized by Fig. 2. If we keep the samples a(i1) and a(i2) fixed, i.e., we consider them to belong a priori to the optimally smoothed curve, the sample a(i*) will be included in the smoothed curve if and only if

d(i2, i1) > d(i2, i*) + d(i*, i1) − B.   (8a)
Eq. (8a) can be illustrated by considering the following special case. Apart from one sample at time i*, all curve samples lie on a straight line of slope s:

a(i) = a(0) + s·i   for i ≠ i*,
a(i*) = a(0) + s·i* + H.

The sample a(i*) is an outlier with deviation H from the straight line. Using the absolute slope (6b) and eq. (8a), we obtain for i1 = i* − 1 and i2 = i* + 1 that the sample a(i*) is included in the smoothed curve if and only if

|s| > |H + s| + |H − s| − B.   (8b)

Evidently, eq. (8b) is still valid if we define the slope s as the slope associated with the two samples at times i1 = i* − 1 and i2 = i* + 1. For a horizontal line, i.e. s = 0, the inequality (8b) results in

B > 2|H|.   (8c)

Fig. 2. Effect of the bonus on the amount of smoothing.

The actual value of the bonus B determines the amount of smoothing: the smaller the value of the bonus, the more the curve will be smoothed. The smoothing consists in skipping the noisy or incorrect samples; it does not determine how to replace them. This could be done by linear interpolation or by substituting the nearest correct sample of the contour.

3. Dynamic programming strategy

The nonlinear optimization problem given by eq. (7) is efficiently solved by dynamic programming [3]. First we introduce a partial measure of smoothness D(i), defined for the samples [1, ..., i] as the cost of the optimally smoothed curve through the point i:

D(i) := min { Σ_{k=2}^{k*} (d(j(k), j(k − 1)) − B) : j(k*) = i }.   (9)

Splitting the sum into two parts yields:

D(i) = −B + min { d(i, j(k* − 1)) + Σ_{k=2}^{k*−1} (d(j(k), j(k − 1)) − B) }.   (10)

From the boundary condition (5) and the monotonicity condition (4) it is known that either

1 ≤ j(k* − 1) < i   (11)

or k* = 1, i.e., the sample at i has no preceding sample. Using the definition (9) for l = j(k* − 1), we obtain the fundamental recurrence relation

D(i) = −B + min{0, d(i, l) + D(l) : l = 1, ..., i − 1}.   (12)

For i = 1, eq. (12) is to be read as

D(1) = −B.   (13)

For the sake of convenience we initialize D(i) by

D(i) = 0,   i = 1, ..., N,   (13')

and can rewrite eq. (12) as

D(i) = −B + min{d(i, l) + D(l) : l = 1, ..., i}.   (12')

The dynamic programming approach has decomposed the global optimization problem, given by eq. (7), into a number of local optimization stages, as defined by eq. (12). This is illustrated by Fig. 3. At each optimization stage i, a decision is made by the operation 'min{d(i, l) + D(l) : l = 1, ..., i}'. To find the optimally smoothed curve, the decisions are stored in an additional array:

Ind(i) = position of the minimum.
(14)
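As a sketch of the recursion (12'), the decision array (14), and the backtracing, the algorithm might be written as follows in Python (for readability; the paper's own implementation is the Fortran routine of Fig. 6, and the names used here are ours). Indices are 0-based and the absolute slope (6b) serves as the local criterion:

```python
def smooth(a, bonus):
    """Nonlinear smoothing by dynamic programming, eqs. (12') and (14).

    a: list of samples; bonus: the bonus B.
    Returns the (0-based) indices of the samples kept in the smoothed curve.
    """
    n = len(a)

    def d(i, l):
        # local smoothness criterion: the absolute slope, eq. (6b)
        return abs(a[i] - a[l]) / (i - l)

    D = [0.0] * n          # cumulative scores D(i), initialized as in (13')
    ind = list(range(n))   # backpointers Ind(i); Ind(i) = i marks a starting point
    for i in range(n):
        # minimize over l = 1, ..., i; l = i contributes the '0' term of eq. (12)
        best, best_l = 0.0, i
        for l in range(i):
            s = D[l] + d(i, l)
            if s < best:
                best, best_l = s, l
        D[i] = best - bonus
        ind[i] = best_l
    # the final sample of the smoothed contour is the index of the minimum of all D(i)
    i = min(range(n), key=lambda k: D[k])
    # trace the backpointers to recover the smoothing function J
    path = [i]
    while ind[i] != i:
        i = ind[i]
        path.append(i)
    return path[::-1]
```

With B = 5, for instance, the single outlier of height H = 9 in [1, 1, 1, 1, 10, 1, 1, 1] is skipped, consistent with eq. (8c), since 2|H| = 18 > B; with B = 30 > 2|H| it is retained.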
In this way the dynamic programming strategy can be thought of as an exhaustive search procedure
where nonoptimum solutions are dropped as soon as they have been recognized. Starting with the initial condition (13'), the dynamic programming equation (12') is used to determine D(i) recursively at every time point i in ascending order. After the recursion, the unknown final sample of the optimally smoothed contour is given by the index of the minimum over all D(i). The optimum smoothing function J, and thereby the optimally smoothed curve, is determined by tracing the optimum decisions stored in the array Ind(i) back from this final point to the initial point, which is marked by Ind(i) = i. A flowchart of the algorithm is shown in Fig. 4.

Fig. 3. Illustration of the dynamic programming strategy. The global optimization problem is decomposed into a number of local optimization stages.

Fig. 4. Flowchart of the nonlinear smoothing algorithm:
  do for each time point i in ascending order:
    initialize the cumulative distance: D(i) = 0
    compute the transition cost d(i, l) to all preceding time points l ≤ i
    compute the overall costs Δ(i, l) as the sum of the cumulative costs D(l) and the transition costs d(i, l): Δ(i, l) = d(i, l) + D(l)
    locate the minimum of Δ(i, l): Δ(i, k) = min{Δ(i, l) : l = 1, ..., i}
    subtract the bonus B and store the new cumulative cost D(i) = −B + Δ(i, k), along with a backpointer to the minimum of Δ(i, l): Ind(i) = k
  determine the smoothed curve by tracing the backpointers backwards from the minimum cumulative cost D(i)

Fig. 5 shows different possible smoothings of the data sequence of Fig. 1, depending on the value of the bonus B. The smoothness criterion used was the absolute slope (6b). A value of B = 4.4 appears appropriate to isolate the erroneous data and to provide the desired amount of smoothing. Note that the endpoints of the data sequence are handled correctly.

Fig. 5. Smoothing effect for a simple test input using different values for the bonus.

It is not possible to determine the exact value of the bonus B a priori for all types of curves, because the appropriate value depends on the character of the curve data and of the outliers. In many cases, however, a rough estimate for the appropriate value of the bonus can be obtained by considering the special case of an outlier of deviation H with respect to a horizontal line, as discussed above. Considering the curve shown in Fig. 5, we assume that curve samples with deviation |H| ≤ 2.0 from a horizontal line are known to be correct. Hence, using equation (8c), we obtain the estimate B = 4.0, which approximately provides the desired degree of smoothing. An easy method for checking the degree of smoothing is to count the samples that have been eliminated from the contour. In general, this number is known to be less than a fixed percentage of the total number of samples.

It is instructive to compare the presented smoothing algorithm with a median filter. For the curve shown in Fig. 5, it is easy to verify that a 3-point median filter is not able to eliminate the two outliers at time points i = 9 and i = 11 and has the undesired effect of smoothing out the curve. A 5-point median filter is capable of eliminating the two outliers, but the curve itself is even more corrupted as a result of the smoothing out. Thus it must be concluded that a median filter does not have the properties required for the smoothing procedure.

An important issue for any smoothing algorithm is the storage and computation required to implement it. The dimensionality barrier is known to be the biggest deterrent to the use of dynamic programming. These severe restrictions are due to the number of locations in high-speed memory required to carry out the recursive optimization, an example of which is eq. (12'), and to store the optimum decisions of each optimization stage. Fortunately, however, the presented algorithm turns out to require only a very moderate number of storage locations: 2N locations are needed to store the arrays D(i) and Ind(i). Thus the storage requirements grow proportionally to the length of the data sequence. As can be seen
from Fig. 3, the innermost loop or smallest 'computation unit' is performed approximately ½N² times. Thus the computation grows quadratically with the length of the data sequence. The computation, however, involves only simple arithmetic and comparison operations and can therefore be carried out very fast. Using a medium-speed minicomputer (PDP 11/35) and 16-bit integer arithmetic (add 1.0 µs, divide 13.0 µs), the crucial loop, performed ½N² times, requires less than 50 µs if the smoothness criterion is the absolute slope of eq. (6b). This computation time can, if necessary, be reduced by using the absolute difference of eq. (6a) as the smoothness criterion, which has been found to produce only slightly inferior results. A Fortran subroutine for the smoothing algorithm is shown in Fig. 6. The criterion of local smoothness d(i, l) is implemented as an arithmetic statement function. The form of the local smoothness measure and the two
      SUBROUTINE CUVSMT(A,N,BON)
C
C     DYNAMIC PROGRAM FOR NONLINEAR CURVE SMOOTHING.
C     A(N)  IS THE ARRAY OF DATA (CURVE) TO BE SMOOTHED.
C     N     IS THE NUMBER OF POINTS IN THE ARRAY.
C     BON   IS A BONUS THAT DETERMINES THE AMOUNT OF SMOOTHING:
C           THE SMALLER THE VALUE OF BON, THE MORE THE SMOOTHING
C           WILL BE.
C     DIS(I,L) DEFINES THE CRITERION OF LOCAL SMOOTHNESS BETWEEN
C           DATA POINTS I AND L.
C
      INTEGER A(N),BON,AI,AL
      INTEGER INDI(512)
      INTEGER SDIS(512),S,SMIN,SSMIN,DIS
      DIS(I,L) = IABS(10*(AI-AL))/(1+I-L)
      IF (N.GT.512) STOP 'ARRAY OF DATA TOO LONG'
C
C     PERFORM DYNAMIC PROGRAMMING.
C     STORE MINIMUM SCORES IN ARRAY SDIS(I).
C     STORE OPTIMUM DECISIONS IN ARRAY INDI(I).
C
      SSMIN=0
      DO 10 I=1,N
        AI=A(I)
        IF (AI.LE.0) GOTO 10
        SDIS(I)=0
        SMIN=0
        DO 11 L=1,I
          AL=A(L)
          IF (AL.LE.0) GOTO 11
          S = SDIS(L) + DIS(I,L)
          IF (S.GT.SMIN) GOTO 11
          SMIN=S
          LMIN=L
   11   CONTINUE
        SMIN=SMIN-BON
        IF (SMIN.GE.0) STOP 'INTEGER OVERFLOW'
        SDIS(I)=SMIN
        INDI(I)=LMIN
        IF (SMIN.GT.SSMIN) GOTO 10
        SSMIN=SMIN
        IMIN=I
   10 CONTINUE
C
C     TRACE THE OPTIMUM DECISIONS BACKWARDS FROM THE OPTIMUM ENDING
C     POINT IMIN TO FIND THE OPTIMALLY SMOOTHED CURVE BY SKIPPING
C     POINTS OF THE CURVE WHERE NECESSARY. THE SKIPPED POINTS ARE
C     SET TO ZERO.
C
      DO 20 I=N,1,-1
        AI=0
        IF (I.NE.IMIN) GOTO 21
        AI=A(IMIN)
        IMIN=INDI(IMIN)
   21   A(I)=AI
   20 CONTINUE
C
      RETURN
      END

Fig. 6. A Fortran subroutine for the nonlinear smoothing algorithm.
IF statements that perform a comparison with zero refer to the special application described in the following section.

An additional reduction of computation time can be achieved if a moving window is introduced into the smoothing algorithm to limit the number of samples which can be skipped in one step. Eq. (12') is then altered to

D(i) = −B + min{d(i, l) + D(l) : l = i − 1 − w, ..., i},   (15)

where w is the window size. As a result, the computation is proportional to N·w. Apart from the computational aspects, a window may be used to perform finite memory smoothing. The basic idea of finite memory smoothing is that the correlation between samples decreases as the time difference between them increases. Although this finite memory smoothing significantly reduces the computational burden, it will not be studied further, because it depends strongly on the type of data to be smoothed and implies a second adjustable parameter.

In the above considerations the data sequence to be smoothed was defined for all time points i between 1 and N. It is straightforward to modify the smoothing algorithm if the data sequence has been measured only for certain times i. In this case the optimization equation (12') has to be carried out solely for the corresponding samples. In summary, starting with a fundamental definition of smoothness, we have developed a nonlinear smoothing algorithm that performs the smoothing operation by optimizing the smoothness criterion.
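The windowed recursion (15) might be sketched as follows (in Python; names and defaults are ours, not the paper's). Only the last w + 1 predecessors are examined at each stage, so the work grows like N·w instead of N²:

```python
def smooth_windowed(a, bonus, w):
    """Windowed variant of the smoothing recursion, cf. eq. (15).

    Examines only predecessors l = i - 1 - w, ..., i - 1 at each stage;
    the '0' term of eq. (12) corresponds to l = i. (Sketch under the
    absolute-slope criterion (6b).)
    """
    n = len(a)
    d = lambda i, l: abs(a[i] - a[l]) / (i - l)  # absolute slope, eq. (6b)
    D = [0.0] * n
    ind = list(range(n))
    for i in range(n):
        best, best_l = 0.0, i
        for l in range(max(0, i - 1 - w), i):  # window of eq. (15)
            s = D[l] + d(i, l)
            if s < best:
                best, best_l = s, l
        D[i] = best - bonus
        ind[i] = best_l
    i = min(range(n), key=lambda k: D[k])
    path = [i]
    while ind[i] != i:
        i = ind[i]
        path.append(i)
    return path[::-1]
```

For w ≥ N the window is inactive and the result coincides with the unwindowed recursion (12'); smaller w trades a limited maximum skip length for the reduced N·w computation.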
4. Application to pitch period contours

The smoothing algorithm described so far may be especially useful in the area of speech processing. We have applied it to pitch period contours.

Fig. 7 shows plots of the input curve as obtained from a pitch detector and the output curves from the smoothing algorithm for different degrees of smoothing. The pitch period was computed every 20 ms from a 4-s utterance transmitted via a dialed-up telephone connection. The number of outliers in the curve was increased intentionally by improperly setting certain parameters of the pitch detector. The Fortran subroutine shown in Fig. 6 was used to perform the smoothing operation. For unvoiced speech segments, the pitch period was set to zero; these samples were excluded from the smoothing operation. This is the reason for the two IF statements, mentioned above, in the Fortran subroutine of Fig. 6. For each output curve in Fig. 7 the value used for the bonus B is indicated. By contrasting Figs. 7(b), (c), (d), (e), (f) with each other, the increasing amount of smoothing obtained with smaller values of the bonus B is clearly seen. A value of B between 15 and 25 appears to ensure the appropriate degree of smoothing. In view of eq. (8c), setting B = 20 means that outliers of deviation |H| ≥ 20 (in 100 µs units) = 2 ms from horizontal lines (s = 0) are eliminated, which is a reasonable requirement for pitch period contours. As expected, for large values of B no smoothing is obtained, while for too small values of B the smoothed curve degenerates into a more or less straight line consisting of only a few samples. For finite memory smoothing with a window size w = 20, similar results are obtained. Finally, Figs. 8(a), (b), (c), (d), (e), (f) illustrate the smoothing capability of the algorithm even if the pitch period contour to be smoothed is severely corrupted by a large number of outliers occurring in clusters. It is evident that even a human viewer could not do better without additional a priori information on the contour.

5. Modifications

In this section we consider some interesting modifications of the smoothing algorithm described.
Fig. 7. Effects of the nonlinear smoothing algorithm on a pitch period contour (a) for different amounts of smoothing (b), (c), (d), (e), (f).

Fig. 8. Effects of the nonlinear smoothing algorithm on a severely corrupted pitch period contour (a) for different amounts of smoothing (b), (c), (d), (e), (f).
The first modification to be mentioned concerns the bonus. In some cases a measure of reliability may be known for each measured sample. An example is the pitch period contour, whose samples are determined by the location of the maximum of a correlation-type function. The value of the maximum itself can be viewed as a measure of how reliably the pitch sample has been measured. By relating the bonus to the value of the maximum we can incorporate the statistical reliability of the measuring process into the smoothing process.

In the case that the initial and final samples of the data sequence are kept fixed, the presented algorithm is still valid. The only modifications are that the optimization defined by eq. (12') must not include the considered sample a(i) itself, and that the backtracing of the optimum decisions must start from the endpoint of the data sequence. For this case of fixed endpoints, the squared length defined in eq. (6d) may be the most advantageous of the smoothness criteria in eq. (6): it turns out that the bonus is unnecessary, and the amount of smoothing can be adjusted simply by the scale factor α.

Another modification concerns the definition of the smoothness criterion. The smoothness criterion has been defined for pairs of samples; instead, we could also base it on a triplet of samples. In this view, the presented approach is based on the slope of the curve and not on the curvature. However, the slope criterion and the curvature criterion lead to the same algorithm if the curvature criterion is separable:

d3(i, j, l) = d2(i, j) + d2(j, l),   (16)

where d3(i, j, l) and d2(i, j) are suitable functions.
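The first modification above, a reliability-dependent bonus, can be sketched by letting each kept sample i earn its own bonus B(i), e.g. proportional to that sample's measured reliability. The following Python sketch is our own illustration (the linear scaling of the bonus is an assumption, not the paper's prescription):

```python
def smooth_reliability(a, reliability, b_scale):
    """Smoothing with a per-sample bonus B(i) = b_scale * reliability[i].

    A sample with high measured reliability earns a larger bonus and is
    therefore harder to eliminate. Uses the absolute slope (6b).
    """
    n = len(a)
    d = lambda i, l: abs(a[i] - a[l]) / (i - l)  # absolute slope, eq. (6b)
    D = [0.0] * n
    ind = list(range(n))
    for i in range(n):
        best, best_l = 0.0, i
        for l in range(i):
            s = D[l] + d(i, l)
            if s < best:
                best, best_l = s, l
        # per-sample bonus replaces the constant B of eq. (12')
        D[i] = best - b_scale * reliability[i]
        ind[i] = best_l
    i = min(range(n), key=lambda k: D[k])
    path = [i]
    while ind[i] != i:
        i = ind[i]
        path.append(i)
    return path[::-1]
```

With uniform reliability this reduces to the constant-bonus algorithm; lowering the reliability of a suspect sample lowers its bonus and causes it to be skipped where an equally tall but reliable sample would be kept.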
6. Summary

The nonlinear smoothing problem has been tackled using a smoothness criterion. The specification of the smoothness criterion and the formulation of the smoothing problem in terms of this criterion lead straightforwardly to a nonlinear optimization problem. The optimization is carried out efficiently by means of a dynamic programming strategy. The resulting smoothing algorithm has the advantage that it can be implemented in a simple and straightforward manner. The amount of computation grows quadratically with the number of data to be smoothed; by using a window, this dependence can be reduced to a linear one. The algorithm is capable of following true discontinuities in the data and of eliminating outliers, even when they are comparable in their values. The algorithm has been shown to be successful in smoothing pitch period contours versus time.
Acknowledgement The author would like to thank R. Frehse of Philips GmbH Forschungslaboratorium Hamburg for helpful discussions and for his suggestion to employ the concept of a bonus for the formulation of the smoothing problem.
References

[1] J.W. Tukey, "Nonlinear (nonsuperposable) methods for smoothing data", Congress Record, 1974 IEEE EASCON, 1974, p. 673.
[2] L.R. Rabiner, M.R. Sambur and C.E. Schmidt, "Applications of a nonlinear smoothing algorithm to speech processing", IEEE Trans. Acoust. Speech Signal Process., Vol. ASSP-23, No. 6, Dec. 1975, pp. 552-557.
[3] R. Bellman and S. Dreyfus, Applied Dynamic Programming, Princeton University Press, Princeton, NJ, 1962.
[4] G.M. White, "Dynamic programming, the Viterbi algorithm and low cost speech recognition", Proc. 1978 IEEE Int. Conf. Acoust. Speech Signal Process., Tulsa, OK, April 1978, pp. 413-417.