Signal Processing 88 (2008) 2463–2471
Selective partial update and set-membership subband adaptive filters

Mohammad Shams Esfand Abadi (a,*), John Håkon Husøy (b)

(a) Department of Electrical Engineering, Shahid Rajaee Teacher Training University, Tehran, Iran
(b) Department of Electrical and Computer Engineering, University of Stavanger, N-4036 Stavanger, Norway

* Corresponding author. Tel.: +98 21 22970003. E-mail addresses: [email protected] (M.S.E. Abadi), [email protected] (J.H. Husøy).
doi:10.1016/j.sigpro.2008.04.014
Article history: Received 2 November 2007; received in revised form 14 April 2008; accepted 16 April 2008; available online 7 May 2008.

Abstract
This paper presents three efficient subband adaptive filter (SAF) algorithms featuring low computational complexity. In the first algorithm, which is called selective partial update SAF (SPU-SAF), the filter coefficients are partially updated in each subband rather than the entire filter at every adaptation. In the second one, the concept of set-membership (SM) adaptive filtering is extended to the SAFs and a novel SM-SAF algorithm is presented. This algorithm exhibits superior performance with significant reduction in the overall computational complexity compared with the ordinary SAF. The third algorithm is based on the combination of the ideas in the SPU-SAF and SM-SAF algorithms. We demonstrate the usefulness of the proposed algorithms through simulations. © 2008 Elsevier B.V. All rights reserved.
Keywords: Subband adaptive filter; Selective partial update; Set-membership; Computational complexity
1. Introduction

Adaptive filtering is an important subfield of digital signal processing with numerous applications [1-3]. In some of these applications, a large number of filter coefficients is needed to achieve acceptable performance, so the computational complexity becomes the main problem. Several families of adaptive filter algorithms, such as the subband adaptive filters (SAFs), the adaptive filter algorithms with selective partial updates (SPU), and set-membership (SM) filtering, have been proposed to address this problem. The SPU adaptive algorithms update only a subset of the filter coefficients in each time iteration and consequently reduce the computational complexity. The Max-NLMS [4], the MMax-NLMS [5,6], variants of the SPU normalized least mean square (SPU-NLMS) algorithm [7,8] and the SPU transform domain LMS (SPU-TD-LMS) [9] are important examples of this family of adaptive filter algorithms. Unfortunately, as with many other adaptive filter algorithms, the step-size determines the trade-off between steady-state mean square error (MSE) and convergence rate. Having fast convergence, low steady-state MSE, and low computational complexity at the same time is highly desirable. The SM normalized LMS (SM-NLMS) is one of the algorithms that has these three features [10]. Based on [10], different SM adaptive algorithms have been developed; the SM affine projection algorithm (SM-APA) [11,12] and the SM binormalized data-reusing LMS (SM-BNDRLMS) algorithm [13] are important examples of this family of adaptive filters. Also, in [14] the SM-PU-NLMS is presented based on the combination of the partial updating and SM filtering approaches.

In [15], the subband adaptive algorithm called normalized SAF (NSAF) was developed based on a constrained optimization problem. The filter update equation proposed in [15] is similar to those proposed in [16,17], where fullband filters are updated instead of subfilters as in the conventional SAF structure [18]. Again, in the SAFs the step-size determines the trade-off between steady-state MSE and convergence rate [19].
What we propose in this paper can be summarized as follows:
• The establishment of the SPU-SAF algorithm, in which the filter coefficients are partially updated in each subband rather than the entire filter at every adaptation.
• Extension of the SM filtering concept to the SAF, and the establishment of a novel SM-SAF algorithm. The proposed algorithm exhibits superior performance with a significant reduction in the overall computational complexity compared with the ordinary SAF.
• Combination of the SPU-SAF and SM-SAF approaches to develop the SM-SPU-SAF algorithm.
We have organized our paper as follows: In the following section we briefly review the NLMS, the SPU-NLMS and the SM-NLMS algorithms. In the next section the SPU-SAF algorithm is introduced. The SM-SAF and the SM-SPU-SAF are introduced in Sections 4 and 5, respectively. Finally, we present several simulation results to demonstrate the good performance of the proposed algorithms. Throughout the paper, the following notations are adopted:

$|\cdot|$ : norm of a scalar
$\|\cdot\|^2$ : squared Euclidean norm of a vector
$(\cdot)^T$ : transpose of a vector or a matrix
$\mathrm{Tr}(\cdot)$ : trace of a matrix
$\mathrm{diag}(\cdot)$ : has the same meaning as the MATLAB operator with the same name: if its argument is a vector, the result is a diagonal matrix with the diagonal elements given by the vector argument; if the argument is a matrix, its diagonal is extracted into a resulting vector.

2. Background on NLMS, SPU-NLMS and SM-NLMS algorithms

In Fig. 1 we show the prototypical adaptive filter setup, where $x(n)$, $d(n)$ and $e(n)$ are the input, the desired and the output error signals, respectively. $\mathbf{h}(n)$ is the $M \times 1$ column vector of filter coefficients at time $n$.

[Fig. 1. Prototypical adaptive filter setup.]

It is well known that the NLMS algorithm can be derived from the solution of the following optimization problem:

$$\min_{\mathbf{h}(n+1)} \|\mathbf{h}(n+1)-\mathbf{h}(n)\|^2, \tag{1}$$

subject to

$$d(n) = \mathbf{x}^T(n)\,\mathbf{h}(n+1), \tag{2}$$

where $\mathbf{x}(n) = [x(n), x(n-1), \ldots, x(n-M+1)]^T$. Using the method of Lagrange multipliers to solve this optimization problem leads to the following recursion:

$$\mathbf{h}(n+1) = \mathbf{h}(n) + \mu\,\frac{\mathbf{x}(n)e(n)}{\|\mathbf{x}(n)\|^2}, \tag{3}$$

where $e(n) = d(n) - \mathbf{x}^T(n)\,\mathbf{h}(n)$, and $\mu$ is the step-size that determines the convergence speed and the excess MSE (EMSE). Now partition the input signal vector and the vector of filter coefficients into $P$ blocks, each of length $L$ (note that $P = M/L$ is assumed to be an integer), defined as

$$\mathbf{x}(n) = [\mathbf{x}_1^T(n), \mathbf{x}_2^T(n), \ldots, \mathbf{x}_P^T(n)]^T, \tag{4}$$

$$\mathbf{h}(n) = [\mathbf{h}_1^T(n), \mathbf{h}_2^T(n), \ldots, \mathbf{h}_P^T(n)]^T. \tag{5}$$

For a single block update in every iteration, the SPU-NLMS algorithm solves the following optimization problem:

$$\min_{\mathbf{h}_j(n+1)} \|\mathbf{h}_j(n+1)-\mathbf{h}_j(n)\|^2, \tag{6}$$

subject to (2), where $j$ denotes the index of the block that should be updated. Again by using the method of Lagrange multipliers, the update equation for the SPU-NLMS is given by

$$\mathbf{h}_j(n+1) = \mathbf{h}_j(n) + \mu\,\frac{\mathbf{x}_j(n)e(n)}{\|\mathbf{x}_j(n)\|^2}, \tag{7}$$

where $j = \arg\max_{1 \le i \le P} \|\mathbf{x}_i(n)\|^2$ [8].
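As an illustrative sketch of the update (6)-(7) (our NumPy rendering, not code from the paper; the names `h`, `x`, `d`, `mu`, `L` and the small regularizer `eps` are our choices):

```python
import numpy as np

def spu_nlms_update(h, x, d, mu, L, eps=1e-12):
    """One SPU-NLMS iteration: only the block of h whose input block has
    the largest energy ||x_i(n)||^2 is updated (Eqs. (6)-(7))."""
    P = len(h) // L                      # number of blocks, P = M/L
    e = d - x @ h                        # a priori error e(n)
    j = np.argmax(np.sum(x.reshape(P, L) ** 2, axis=1))
    sl = slice(j * L, (j + 1) * L)       # coefficients of block j
    h[sl] += mu * x[sl] * e / (np.sum(x[sl] ** 2) + eps)
    return h, e
```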
The SM-NLMS algorithm minimizes (1) subject to $\mathbf{h}(n+1) \in \mathcal{C}_n$, where

$$\mathcal{C}_n = \{\mathbf{h} \in \mathbb{R}^M : |d(n) - \mathbf{x}^T(n)\,\mathbf{h}| \le \gamma\}. \tag{8}$$

The set $\mathcal{C}_n$ is referred to as the constraint set, and its boundaries are hyperplanes; $\gamma$ is the magnitude of the error bound. This aim is achieved by an orthogonal projection of the previous estimate of $\mathbf{h}$ onto the closest boundary of $\mathcal{C}_n$ [10]. Doing this, the recursion for the SM-NLMS is given by

$$\mathbf{h}(n+1) = \mathbf{h}(n) + \alpha(n)\,\frac{\mathbf{x}(n)e(n)}{\|\mathbf{x}(n)\|^2}, \tag{9}$$

where

$$\alpha(n) = \begin{cases} 1 - \dfrac{\gamma}{|e(n)|} & \text{if } |e(n)| > \gamma, \\[4pt] 0 & \text{otherwise.} \end{cases} \tag{10}$$
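A corresponding sketch of (9)-(10), under the same assumptions as the snippet above:

```python
import numpy as np

def sm_nlms_update(h, x, d, gamma, eps=1e-12):
    """One SM-NLMS iteration (Eqs. (9)-(10)): the filter is updated only
    when the error magnitude exceeds the error bound gamma."""
    e = d - x @ h
    if abs(e) > gamma:
        alpha = 1.0 - gamma / abs(e)
        h += alpha * x * e / (np.sum(x ** 2) + eps)
    return h, e
```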
3. SPU-SAF algorithm

Fig. 2 shows the structure of the SAF [15]. In this figure, $f_0, f_1, \ldots, f_{N-1}$ are the unit pulse responses of the analysis filters of an $N$-channel orthogonal perfect-reconstruction critically sampled filter bank system; $x_i(n)$ and $d_i(n)$ are nondecimated subband signals. It is important to note that $n$ refers to the index of the original sequences and $k$ denotes the index of the decimated sequences; that is, in the SAF the filter vector update is performed each time $N$ new samples have entered the system.

[Fig. 2. Structure of the SAF.]

Similar to the NLMS algorithm, the SAFs can be established by the solution of the following optimization problem:

$$\min_{\mathbf{h}(k+1)} \|\mathbf{h}(k+1)-\mathbf{h}(k)\|^2, \tag{11}$$

subject to the set of $N$ constraints imposed on the decimated filter output,

$$d_{i,D}(k) = \mathbf{x}_i^T(k)\,\mathbf{h}(k+1) \quad \text{for } i = 0, \ldots, N-1, \tag{12}$$

where

$$\mathbf{x}_i(k) = [x_i(kN), x_i(kN-1), \ldots, x_i(kN-M+1)]^T. \tag{13}$$

By solving this optimization problem based on the method of Lagrange multipliers, the filter update equation for the SAF, which was called NSAF, can be stated as [15]

$$\mathbf{h}(k+1) = \mathbf{h}(k) + \mu \sum_{i=0}^{N-1} \frac{\mathbf{x}_i(k)}{\|\mathbf{x}_i(k)\|^2}\, e_{i,D}(k), \tag{14}$$

where $e_{i,D}(k) = d_{i,D}(k) - \mathbf{x}_i^T(k)\,\mathbf{h}(k)$ is the decimated subband error signal, and $\mu$ is chosen in the range $0 < \mu < 2$ [15].
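A minimal sketch of the NSAF update (14), assuming the $N$ decimated subband regressors $\mathbf{x}_i(k)$ are stacked as the rows of an array `X` and the decimated subband errors $e_{i,D}(k)$ in a vector `e_D` (the names and the regularizer `eps` are ours):

```python
import numpy as np

def nsaf_update(h, X, e_D, mu, eps=1e-12):
    """One NSAF update (Eq. (14)). X has shape (N, M); row i holds the
    subband regressor x_i(k). e_D holds the decimated subband errors."""
    norms = np.sum(X ** 2, axis=1) + eps     # ||x_i(k)||^2, one per subband
    h += mu * (X.T @ (e_D / norms))          # sum_i x_i e_{i,D} / ||x_i||^2
    return h
```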
We are now in the position to establish the SPU-SAF algorithm. Partition $\mathbf{x}_i(k)$ for $0 \le i \le N-1$ and $\mathbf{h}(k)$ into $P$ blocks, each of length $L$, defined as

$$\mathbf{x}_i(k) = [\mathbf{x}_{i,1}^T(k), \mathbf{x}_{i,2}^T(k), \ldots, \mathbf{x}_{i,P}^T(k)]^T, \tag{15}$$

$$\mathbf{h}(k) = [\mathbf{h}_1^T(k), \mathbf{h}_2^T(k), \ldots, \mathbf{h}_P^T(k)]^T. \tag{16}$$

The SPU-SAF solves the following optimization problem:

$$\min_{\mathbf{h}_j(k+1)} \|\mathbf{h}_j(k+1)-\mathbf{h}_j(k)\|^2, \tag{17}$$

subject to (12), where $j$ denotes the index of the block to be updated. Using the method of Lagrange multipliers to solve this optimization problem leads to the following update equation:

$$\mathbf{h}_j(k+1) = \mathbf{h}_j(k) + \mu\,\mathbf{X}_j(k)[\mathbf{X}_j^T(k)\mathbf{X}_j(k)]^{-1}\mathbf{e}_D(k), \tag{18}$$

where

$$\mathbf{X}_j(k) = [\mathbf{x}_{0,j}(k), \mathbf{x}_{1,j}(k), \ldots, \mathbf{x}_{N-1,j}(k)] \tag{19}$$

and $\mathbf{e}_D(k) = [e_{0,D}(k), e_{1,D}(k), \ldots, e_{N-1,D}(k)]^T$. Since the elements of row $m$ of $\mathbf{X}_j^T(k)$ are consecutive samples of subband no. $m$, it follows that the off-diagonal elements of $\mathbf{X}_j^T(k)\mathbf{X}_j(k)$ are sample cross-correlations between different subband signals, whose values are very small. This justifies the approximation $\mathbf{X}_j^T(k)\mathbf{X}_j(k) \approx \mathrm{diag}(\mathrm{diag}(\mathbf{X}_j^T(k)\mathbf{X}_j(k))) = \mathbf{K}_j(k)$, resulting in the following coefficient update equation:

$$\mathbf{h}_j(k+1) = \mathbf{h}_j(k) + \mu\,\mathbf{X}_j(k)[\mathbf{K}_j(k)]^{-1}\mathbf{e}_D(k). \tag{20}$$

This equation can be represented as

$$\mathbf{h}_j(k+1) = \mathbf{h}_j(k) + \mu \sum_{i=0}^{N-1} \frac{\mathbf{x}_{i,j}(k)}{\|\mathbf{x}_{i,j}(k)\|^2}\, e_{i,D}(k). \tag{21}$$

Now, we determine which block should be updated in each subband at every adaptation. From (17) and (20), we obtain

$$j = \arg\min_{1 \le p \le P} \|\mathbf{h}_p(k+1)-\mathbf{h}_p(k)\|^2 = \arg\min_{1 \le p \le P} \{\mathbf{e}_D^T(k)[\mathbf{K}_p(k)]^{-1}\mathbf{e}_D(k)\}, \tag{22}$$

which is equivalent to

$$j = \arg\min_{1 \le p \le P} \left\{ \sum_{i=0}^{N-1} \frac{|e_{i,D}(k)|^2}{\|\mathbf{x}_{i,p}(k)\|^2} \right\}. \tag{23}$$
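A sketch of one SPU-SAF iteration combining the selection rule (23) with the update (21), under the same array conventions as the earlier snippets (`X` has shape (N, M)):

```python
import numpy as np

def spu_saf_update(h, X, e_D, mu, L, eps=1e-12):
    """One SPU-SAF iteration: choose the block j minimizing Eq. (23),
    then update only that block via Eq. (21)."""
    N, M = X.shape
    P = M // L
    energy = np.sum(X.reshape(N, P, L) ** 2, axis=2) + eps     # ||x_{i,p}||^2
    cost = np.sum(np.abs(e_D)[:, None] ** 2 / energy, axis=0)  # Eq. (23)
    j = int(np.argmin(cost))
    sl = slice(j * L, (j + 1) * L)
    h[sl] += mu * (X[:, sl].T @ (e_D / energy[:, j]))          # Eq. (21)
    return h, j
```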
3.1. Extension to multiple blocks
In the previous section a single block of filter coefficients in each subband is updated during every adaptation. In this section we extend this approach to a multiple-block update. Suppose we want to update $S$ blocks out of the $P$ blocks in each subband at every adaptation, and let $F = \{j_1, j_2, \ldots, j_S\}$ denote the indices of these $S$ blocks. In this case, the optimization problem is defined as

$$\min_{\mathbf{h}_F(k+1)} \|\mathbf{h}_F(k+1)-\mathbf{h}_F(k)\|^2, \tag{24}$$

subject to (12). Again by using the Lagrange multipliers approach, the filter vector update equation is given by

$$\mathbf{h}_F(k+1) = \mathbf{h}_F(k) + \mu\,\mathbf{X}_F(k)[\mathbf{K}_F(k)]^{-1}\mathbf{e}_D(k), \tag{25}$$

where

$$\mathbf{X}_F(k) = [\mathbf{X}_{j_1}^T(k), \mathbf{X}_{j_2}^T(k), \ldots, \mathbf{X}_{j_S}^T(k)]^T \tag{26}$$

and $\mathbf{K}_F(k) = \mathrm{diag}(\mathrm{diag}(\mathbf{X}_F^T(k)\mathbf{X}_F(k)))$. Eq. (25) can also be represented as

$$\mathbf{h}_F(k+1) = \mathbf{h}_F(k) + \mu \sum_{i=0}^{N-1} \frac{\mathbf{x}_{i,F}(k)}{\|\mathbf{x}_{i,F}(k)\|^2}\, e_{i,D}(k), \tag{27}$$

where $\mathbf{x}_{i,F}(k) = [\mathbf{x}_{i,j_1}^T(k), \mathbf{x}_{i,j_2}^T(k), \ldots, \mathbf{x}_{i,j_S}^T(k)]^T$. Now, we determine which blocks should be updated in each subband at every adaptation. From (24) and (25), we obtain
$$F = \arg\min_F \|\mathbf{h}_F(k+1)-\mathbf{h}_F(k)\|^2 = \arg\min_F \{\mathbf{e}_D^T(k)[\mathbf{K}_F(k)]^{-1}\mathbf{e}_D(k)\} = \arg\min_F \left\{ \mathbf{e}_D^T(k)\Big[\sum_{j \in F}\mathbf{K}_j(k)\Big]^{-1}\mathbf{e}_D(k) \right\}, \tag{28}$$

which is equivalent to

$$F = \arg\min_F \left\{ \sum_{i=0}^{N-1} \frac{|e_{i,D}(k)|^2}{\|\mathbf{x}_{i,F}(k)\|^2} \right\}. \tag{29}$$
The computational complexity of the exact selection of the blocks to update may be very high. Therefore we may need to use a simplified criterion, as presented in the following.

3.2. Simplified SPU-SAF (SSPU-SAF) algorithm

To reduce the computational complexity associated with the selection of the blocks to update, we propose two alternative simplified criteria:

(1) In the first approach, we compute the values

$$\mathrm{Tr}(\mathbf{K}_p(k)) = \sum_{i=0}^{N-1} \|\mathbf{x}_{i,p}(k)\|^2 \quad \text{for } 1 \le p \le P. \tag{30}$$

The indices of the set $F$ correspond to the indices of the $S$ largest values of (30) [8].

(2) Another selection strategy is to modify (23) in such a way that, rather than identifying one index, we identify a set of indices corresponding to the $S$ smallest values.

The first approach uses a simplified form of (23) to identify the indices of $F$, whereas the second strategy uses the exact form of (23); the latter slightly increases the computational complexity but leads to somewhat better performance. Sections 6 and 7 present the computational complexity and the performance of the SSPU-SAF algorithms.
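The two simplified selection criteria can be sketched as follows (a sketch under the same array conventions as before; the helper name and the `criterion` flag are our inventions):

```python
import numpy as np

def select_blocks(X, e_D, S, L, criterion=1, eps=1e-12):
    """Return the sorted indices F of the S blocks to update.
    Criterion 1: the S largest Tr(K_p(k)) = sum_i ||x_{i,p}||^2 (Eq. (30)).
    Criterion 2: the S smallest per-block costs from Eq. (23)."""
    N, M = X.shape
    P = M // L
    energy = np.sum(X.reshape(N, P, L) ** 2, axis=2) + eps  # (N, P)
    if criterion == 1:
        score = np.sum(energy, axis=0)                      # Tr(K_p(k))
        return np.sort(np.argsort(score)[-S:])              # S largest
    score = np.sum(np.abs(e_D)[:, None] ** 2 / energy, axis=0)
    return np.sort(np.argsort(score)[:S])                   # S smallest
```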
4. SM-SAF algorithm

The SM-SAF minimizes (11) subject to

$$\mathbf{h}(k+1) \in (\mathcal{C}_{k,0} \cap \mathcal{C}_{k,1} \cap \cdots \cap \mathcal{C}_{k,N-1}), \tag{31}$$

where

$$\mathcal{C}_{k,i} = \{\mathbf{h} \in \mathbb{R}^M : |d_{i,D}(k) - \mathbf{x}_i^T(k)\,\mathbf{h}| \le \gamma\}. \tag{32}$$
This aim is obtained by an orthogonal projection of the previous estimate of $\mathbf{h}$ onto the closest boundary of $\mathcal{C}_{k,i}$ in each subband. Doing this, the filter vector update equation for the SM-SAF can be stated as

$$\mathbf{h}(k+1) = \mathbf{h}(k) + \sum_{i=0}^{N-1} \alpha_i(k)\,\frac{\mathbf{x}_i(k)}{\|\mathbf{x}_i(k)\|^2}\, e_{i,D}(k), \tag{33}$$

where

$$\alpha_i(k) = \begin{cases} 1 - \dfrac{\gamma}{|e_{i,D}(k)|} & \text{if } |e_{i,D}(k)| > \gamma, \\[4pt] 0 & \text{otherwise,} \end{cases} \tag{34}$$

which is equivalent to

$$\mathbf{h}(k+1) = \mathbf{h}(k) + \mathbf{X}(k)[\mathbf{K}(k)]^{-1}\boldsymbol{\alpha}(k)\,\mathbf{e}_D(k), \tag{35}$$

where $\boldsymbol{\alpha}(k) = \mathrm{diag}(\alpha_0(k), \alpha_1(k), \ldots, \alpha_{N-1}(k))$, $\mathbf{K}(k) = \mathrm{diag}(\mathrm{diag}(\mathbf{X}^T(k)\mathbf{X}(k)))$, and

$$\mathbf{X}(k) = [\mathbf{x}_0(k), \mathbf{x}_1(k), \ldots, \mathbf{x}_{N-1}(k)]. \tag{36}$$
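A sketch of the SM-SAF update (33)-(34), with the same conventions as the earlier snippets:

```python
import numpy as np

def sm_saf_update(h, X, e_D, gamma, eps=1e-12):
    """One SM-SAF update (Eqs. (33)-(34)): each subband contributes only
    when its decimated error magnitude exceeds the bound gamma."""
    norms = np.sum(X ** 2, axis=1) + eps
    abs_e = np.maximum(np.abs(e_D), eps)          # avoid division by zero
    alpha = np.where(abs_e > gamma, 1.0 - gamma / abs_e, 0.0)
    h += X.T @ (alpha * e_D / norms)
    return h
```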
5. SM-SPU-SAF algorithm

In this section, we combine the approaches of the SPU-SAF and SM-SAF to develop the SM-SPU-SAF algorithm. Eq. (27) can be written in the form of a full update equation,

$$\mathbf{h}(k+1) = \mathbf{h}(k) + \mu \sum_{i=0}^{N-1} \frac{\mathbf{A}_k\,\mathbf{x}_i(k)}{\|\mathbf{A}_k\,\mathbf{x}_i(k)\|^2}\, e_{i,D}(k), \tag{37}$$

where $\mathbf{A}_k$ is an $M \times M$ block diagonal matrix with $\mathbf{I}_{L\times L}$ and $\mathbf{0}_{L\times L}$ matrices on its diagonal ($\mathbf{I}$ is the identity matrix and $\mathbf{0}$ is the zeros matrix); the positions of the 1s on the diagonal determine which coefficients are updated in each subband at every adaptation. This matrix can be represented as

$$\mathbf{A}_k = \begin{bmatrix} [\mathbf{0} \text{ or } \mathbf{I}]_{L\times L} & \mathbf{0}_{L\times L} & \cdots & \mathbf{0}_{L\times L} \\ \mathbf{0}_{L\times L} & [\mathbf{0} \text{ or } \mathbf{I}]_{L\times L} & \cdots & \mathbf{0}_{L\times L} \\ \vdots & \vdots & \ddots & \vdots \\ \mathbf{0}_{L\times L} & \mathbf{0}_{L\times L} & \cdots & [\mathbf{0} \text{ or } \mathbf{I}]_{L\times L} \end{bmatrix}_{M \times M}. \tag{38}$$

The positions of the identity matrices at every adaptation are determined by $F$ from (29) or from the simplified procedure associated with (30). Now, from the previous section we obtain the SM-SPU-SAF algorithm:

$$\mathbf{h}(k+1) = \mathbf{h}(k) + \sum_{i=0}^{N-1} \alpha_i(k)\,\frac{\mathbf{A}_k\,\mathbf{x}_i(k)}{\|\mathbf{A}_k\,\mathbf{x}_i(k)\|^2}\, e_{i,D}(k), \tag{39}$$

where the $\alpha_i(k)$ values for $0 \le i \le N-1$ are obtained from (34). To differentiate the full implementation of the SM-SPU-SAF algorithm based on (29) from its simplified version that uses (23) or (30), we will refer to the latter as the SM simplified SPU-SAF (SM-SSPU-SAF) algorithm. This algorithm combines the features of the SM-SAF and SPU-SAF algorithms: we determine which filter coefficients in which subband should be updated, and then we partially update the filter coefficients.
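A sketch of the combined update (39), reusing the `select_blocks` helper from the Section 3.2 sketch; the binary mask plays the role of the diagonal of $\mathbf{A}_k$ in (37)-(38):

```python
import numpy as np

def sm_sspu_saf_update(h, X, e_D, gamma, S, L, eps=1e-12):
    """One SM-SSPU-SAF update (Eq. (39)): a data-selective step-size per
    subband combined with a partial update over the S selected blocks."""
    N, M = X.shape
    F = select_blocks(X, e_D, S, L, criterion=2)   # blocks to update
    mask = np.zeros(M)
    for j in F:
        mask[j * L:(j + 1) * L] = 1.0              # diagonal of A_k
    Xm = X * mask                                  # A_k x_i(k), per subband
    norms = np.sum(Xm ** 2, axis=1) + eps          # ||A_k x_i(k)||^2
    abs_e = np.maximum(np.abs(e_D), eps)
    alpha = np.where(abs_e > gamma, 1.0 - gamma / abs_e, 0.0)
    h += Xm.T @ (alpha * e_D / norms)
    return h
```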
6. Computational complexity

In [15], it has been shown that the computational complexity of the SAF for each input sampling period is approximately $3M + 3NK$, where $K$ is the length of the channel filters of the analysis filter bank; from [15] we obtain that the exact computational complexity of the SAF is $3M + 3NK + 1$ multiplications and 1 division. By comparing (14) and (27), it can be shown that the SSPU-SAF based on the first criterion needs $2M + SL + 3NK + 1$ multiplications, 1 division, and $O(P) + P\log_2 S$ comparisons when using the heapsort algorithm [20]. The SSPU-SAF based on the second criterion slightly increases the computational complexity but leads to somewhat better performance. In such applications as network and acoustic echo cancellation, the adaptive filter may be required to have a large number of coefficients in order to model the underlying physical system with sufficient accuracy; the reduction in computational complexity is then $M - SL$, which is large for these applications. In Section 7, we present several simulation results to show the performance of the SSPU-SAF algorithm.

In the SAF, the filter vector adaptation needs $2M + 1$ multiplications and 1 division [15], and all the coefficients in each subband are updated at every adaptation. For the SM-SAF, the adaptation in each subband is governed by the condition in (34), which determines in which subbands the filter coefficients are updated at every adaptation. If the condition in (34) were always true (which in practice it is not), the computational complexity of the SM-SAF would be $3M + 3NK + 1$ multiplications and 2 divisions, similar to the complexity of the SAF. The gains of applying the SM-SAF algorithm come through the reduced number of required updates, which cannot be accounted for a priori, and an increased performance as compared to the SAF. In Section 7, we present several applications to show the ability of the SM-SAF to decrease the overall computational complexity.

In the SM-SSPU-SAF algorithm, the filter coefficients are partially updated in each subband, which again leads to an additional reduction in the computational complexity. The adaptation in this algorithm is also governed by the condition in (34). The computational complexity of the SM-SSPU-SAF algorithm is similar to that of the SSPU-SAF for both selection criteria: it needs $2M + SL + 3NK + 1$ multiplications and 2 divisions based on the first criterion, and $2M + SL + 3NK + 2$ multiplications and 3 divisions based on the second criterion. The number of comparison operations for both algorithms is $O(P) + P\log_2 S$. Table 1 summarizes the computational complexity of the proposed algorithms.

Table 1
Computational complexity of the SAF, SSPU-SAF, SM-SAF, and SM-SSPU-SAF algorithms

Algorithm                            Multiplications      Divisions
SAF                                  3M + 3NK + 1         1
SSPU-SAF (first criterion)           2M + SL + 3NK + 1    1
SSPU-SAF (second criterion)          2M + SL + 3NK + 2    2
SM-SAF                               3M + 3NK + 1         2
SM-SSPU-SAF (first criterion)        2M + SL + 3NK + 1    2
SM-SSPU-SAF (second criterion)       2M + SL + 3NK + 2    3
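As a quick numerical illustration of the multiplication counts in Table 1 (a sketch; the analysis filter length K = 32 below is a hypothetical value, not taken from the paper):

```python
def saf_mults(M, N, K):
    """Multiplications per input sample for the SAF (Table 1)."""
    return 3 * M + 3 * N * K + 1

def sspu_saf_mults(M, N, K, S, L, criterion=1):
    """Multiplications per input sample for the SSPU-SAF (Table 1)."""
    return 2 * M + S * L + 3 * N * K + (1 if criterion == 1 else 2)

# M = 64 taps, N = 4 subbands, S = 2 blocks of length L = 8,
# with a hypothetical analysis filter length K = 32:
print(saf_mults(64, 4, 32))               # 577
print(sspu_saf_mults(64, 4, 32, 2, 8))    # 529
```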
7. Simulation results

We demonstrate the performance of the proposed algorithms by several computer simulations in a system identification scenario. The unknown systems have 32 and 64 taps and are selected at random. The input signal, $x(n)$, is a fourth-order autoregressive (AR(4)) signal, the same type of input as was used in [21], generated according to

$$x(n) = 0.6617\,x(n-1) + 0.3402\,x(n-2) + 0.5235\,x(n-3) - 0.8703\,x(n-4) + w(n), \tag{40}$$

where $w(n)$ is a zero-mean white Gaussian signal.
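A sketch of how such an input could be generated (the unit variance of the driving noise $w(n)$ and the seed handling are our assumptions; the paper does not state them):

```python
import numpy as np

def ar4_input(n_samples, seed=0):
    """Generate the AR(4) input of Eq. (40), driven by zero-mean white
    Gaussian noise w(n) (assumed here to have unit variance)."""
    rng = np.random.default_rng(seed)
    a = (0.6617, 0.3402, 0.5235, -0.8703)
    w = rng.standard_normal(n_samples)
    x = np.zeros(n_samples)
    for n in range(n_samples):
        for i, ai in enumerate(a, start=1):
            if n - i >= 0:                 # zero initial conditions
                x[n] += ai * x[n - i]
        x[n] += w[n]
    return x
```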
The measurement noise, $v(n)$, with $\sigma_v^2 = 10^{-3}$, was added to the noise-free desired signal generated through $d(n) = \mathbf{h}_t^T\,\mathbf{x}(n)$, where $\mathbf{h}_t$ is the true unknown filter vector. The adaptive filter and the unknown filter vector are assumed to have the same number of taps. For $M = 32$ and 64, the eigenvalue spreads of the input signal are 1497 and 2456, respectively. The filter bank used in the SAFs was the four-subband extended lapped transform (ELT) [22]. In all the simulations, the learning curves were obtained by ensemble averaging over 200 independent trials. For $M = 32$ the number of blocks $P$ was set to 4, and for $M = 64$ it was set to 8. The value of $\gamma$ was set to $\sqrt{5\sigma_v^2}$ [13].

Figs. 3 and 4 show the learning curves of the SAF [15] and SSPU-SAF algorithms with $M = 32$ for the two proposed block selection criteria. For the SAF, we set $\mu = 0.5$. To make the comparison fair, the step-sizes of the SSPU-SAF were chosen to obtain approximately the same steady-state MSE as the SAF. Different values of $S$ ($S = 1, 2, 3$) were employed in the SSPU-SAF algorithm. By increasing $S$, the performance of the SSPU-SAF approaches that of the ordinary SAF. Also, using the second criterion of Section 3.2 leads to better performance, especially for $S = 1$; selecting $S = 1$ and using the second criterion corresponds to the exact SPU-SAF algorithm. Figs. 5 and 6 show the learning curves of the SAF and SSPU-SAF algorithms for $M = 64$ and the two proposed selection criteria. For the SAF we set $\mu = 0.5$, and for the SSPU-SAF different values of $S$ ($S = 1, 2, 4, 6$) were employed. These figures again show the better performance of the second criterion, especially for low values of $S$.

Fig. 7 shows the learning curves of the SAF [15] and SM-SAF algorithms for $M = 32$. For the SAF algorithm, the step-size is set to $\mu = 1$ (the same step-size that was used in [15]) and 0.1, respectively. As we can see, the SM-SAF algorithm has both fast convergence similar to that of the SAF and a significantly lower steady-state MSE than the ordinary SAF. Furthermore, the average numbers of updates in the SM-SAF for the four subbands were 285, 129, 169 and 157, respectively, instead of 2000 per subband in the SAF algorithm. Fig. 8 shows the results for $M = 64$. For the SAF algorithm, the step-size is set to 0.1 and 1. Again, the SM-SAF algorithm has both fast convergence similar to that of the SAF and a significantly lower steady-state MSE than the ordinary SAF. The average numbers of updates in the SM-SAF for the four subbands were 692, 305, 408 and 388, respectively, instead of 5000 per subband in the SAF algorithm.

Fig. 9 shows the learning curves of the SSPU-SAF and SM-SSPU-SAF algorithms for $M = 32$. The parameter $S$ was set to 2 and the second block selection criterion was used. For the SSPU-SAF algorithm, the same values of the step-sizes ($\mu = 0.1, 1$) were used.
[Fig. 3. Learning curves of the SAF and SSPU-SAF algorithms with M = 32 according to block selection criterion no. 1. (Input: Gaussian AR(4).)]

[Fig. 4. Learning curves of the SAF and SSPU-SAF algorithms with M = 32 according to block selection criterion no. 2. (Input: Gaussian AR(4).)]

[Fig. 5. Learning curves of the SAF and SSPU-SAF algorithms with M = 64 according to block selection criterion no. 1. (Input: Gaussian AR(4).)]

[Fig. 6. Learning curves of the SAF and SSPU-SAF algorithms with M = 64 according to block selection criterion no. 2. (Input: Gaussian AR(4).)]

[Fig. 7. Learning curves of the SAF and SM-SAF algorithms with M = 32. (Input: Gaussian AR(4).)]

[Fig. 8. Learning curves of the SAF and SM-SAF algorithms with M = 64. (Input: Gaussian AR(4).)]
[Fig. 9. Learning curves of the SSPU-SAF and SM-SSPU-SAF algorithms with M = 32 according to block selection criterion no. 2. (Input: Gaussian AR(4).)]

[Fig. 10. Learning curves of the SSPU-SAF and SM-SSPU-SAF algorithms with M = 64 according to block selection criterion no. 2. (Input: Gaussian AR(4).)]

[Fig. 11. Learning curves of the SAF, SM-SAF and SM-SSPU-SAF algorithms with M = 32 and for different values of S. (Input: Gaussian AR(4).)]

[Fig. 12. Learning curves of the SAF and SPU-SAF algorithms with M = 32: (b) SPU-SAF with S = 1, where the filter blocks are selected based on selection criterion no. 2, and (c) SSPU-SAF with S = 2, where the filter blocks are randomly selected. (Input: Gaussian AR(4).)]
Again, the SM-SSPU-SAF has both fast convergence and low steady-state MSE compared with the ordinary SSPU-SAF. Also, the average numbers of updates in the SM-SSPU-SAF for the four subbands were 413, 182, 247 and 213, respectively, instead of 2000 per subband in the SSPU-SAF algorithm. Furthermore, in this algorithm the filter coefficients are partially updated, which again leads to an additional reduction in the computational complexity. Fig. 10 shows the learning curves of the SSPU-SAF and SM-SSPU-SAF algorithms for $M = 64$. For the SSPU-SAF algorithm, the step-sizes were set to $\mu = 0.1$ and 0.5, respectively. As we can see, the SM-SSPU-SAF again has both fast convergence and low steady-state MSE. The average numbers of updates in the SM-SSPU-SAF for the four subbands were 1033, 470, 621 and 559, respectively, instead of 5000 per subband in the SSPU-SAF algorithm. Fig. 11 shows the learning curves of the SM-SAF and SM-SSPU-SAF algorithms with $M = 32$ for different values of $S$; for $S = 3$, the performance of the SM-SSPU-SAF is very close to that of the SM-SAF. Fig. 12 presents the results for random selection of the coefficient blocks to update: the result based on the second criterion with $S = 1$ is better than the result based on random selection of the filter coefficient blocks with $S = 2$. Finally, Figs. 13 and 14 show the number of coefficients updated in the SM-SAF and SM-SSPU-SAF algorithms with $M = 32$ in each subband versus the sample number for a single realization (Fig. 13 for the SM-SAF and Fig. 14 for the SM-SSPU-SAF with $S = 2$). These figures show when the filter coefficients in each subband ($i = 0, 1, 2, 3$) are updated during the adaptation.
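For completeness, the ensemble averaging used for all learning curves in this section can be sketched as follows (`run_trial` is a hypothetical callback returning one realization's squared-error sequence; it is not from the paper):

```python
import numpy as np

def learning_curve(run_trial, n_trials=200, n_iters=2000):
    """Ensemble-average squared errors over independent trials and
    return the learning curve in dB, as plotted in Figs. 3-12."""
    mse = np.zeros(n_iters)
    for t in range(n_trials):
        mse += np.asarray(run_trial(seed=t))[:n_iters]
    return 10.0 * np.log10(mse / n_trials)
```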
[Fig. 13. Number of coefficients updated in the SM-SAF algorithm with M = 32 in each subband versus sample number in a single realization. (Input: Gaussian AR(4).)]

[Fig. 14. Number of coefficients updated in the SM-SSPU-SAF algorithm with M = 32 and S = 2 in each subband versus sample number in a single realization. (Input: Gaussian AR(4).)]

8. Conclusions

In this paper, the concepts of selective partial updates and set-membership adaptive filtering were extended to the subband adaptive filters, and the novel
SPU-SAF and SM-SAF algorithms were established, respectively. Also, by combining these two concepts, the SM-SPU-SAF was presented. These algorithms are computationally efficient. The performance of the proposed algorithms was demonstrated through several experimental results.

References

[1] S. Haykin, Adaptive Filter Theory, fourth ed., Prentice-Hall, New Jersey, 2002.
[2] P.S.R. Diniz, Adaptive Filtering: Algorithms and Practical Implementation, second ed., Kluwer, Dordrecht, 2002.
[3] A.H. Sayed, Fundamentals of Adaptive Filtering, Wiley, New York, 2003.
[4] S.C. Douglas, Analysis and implementation of the max-NLMS adaptive filter, in: Proceedings of the 29th Asilomar Conference on Signals, Systems, and Computers, Pacific Grove, CA, October 1995, pp. 659-663.
[5] T. Aboulnasr, K. Mayyas, Selective coefficient update of gradient-based adaptive algorithms, in: Proceedings of the IEEE International Conference on Acoustics, Speech, and Signal Processing, Munich, Germany, April 1997, pp. 1929-1932.
[6] T. Aboulnasr, K. Mayyas, Complexity reduction of the NLMS algorithm via selective coefficient update, IEEE Trans. Signal Processing 47 (May 1999) 1421-1424.
[7] T. Schertler, Selective block update NLMS type algorithms, in: Proceedings of the IEEE International Conference on Acoustics, Speech, and Signal Processing, Seattle, WA, May 1998, pp. 1717-1720.
[8] K. Doğançay, O. Tanrıkulu, Adaptive filtering algorithms with selective partial updates, IEEE Trans. Circuits Syst. II: Analog and Digital Signal Processing 48 (August 2001) 762-769.
[9] K. Doğançay, Complexity considerations for transform-domain adaptive filters, Signal Processing 83 (2003) 1177-1192.
[10] S. Gollamudi, S. Nagaraj, S. Kapoor, Y.F. Huang, Set-membership filtering and a set-membership normalized LMS algorithm with an adaptive step-size, IEEE Signal Processing Lett. 5 (May 1998) 111-114.
[11] S. Werner, P.S.R. Diniz, Set-membership affine projection algorithm, IEEE Signal Processing Lett. 8 (August 2001) 231-235.
[12] P.S.R. Diniz, R.P. Braga, S. Werner, Set-membership affine projection algorithm for echo cancellation, in: Proceedings of the ISCAS, Island of Kos, Greece, May 2006, pp. 405-408.
[13] P.S.R. Diniz, S. Werner, Set-membership binormalized data-reusing LMS algorithms, IEEE Trans. Signal Processing 51 (January 2003) 124-134.
[14] S. Werner, M.L.R. de Campos, P.S.R. Diniz, Partial-update NLMS algorithms with data-selective updating, IEEE Trans. Signal Processing 52 (April 2004) 938-949.
[15] K.A. Lee, W.S. Gan, Improving convergence of the NLMS algorithm using constrained subband updates, IEEE Signal Processing Lett. 11 (2004) 736-739.
[16] M. de Courville, P. Duhamel, Adaptive filtering in subbands using a weighted criterion, IEEE Trans. Signal Processing 46 (1998) 2359-2371.
[17] S.S. Pradhan, V.E. Reddy, A new approach to subband adaptive filtering, IEEE Trans. Signal Processing 47 (1999) 655-664.
[18] A. Gilloire, M. Vetterli, Adaptive filtering in subbands with critical sampling: analysis, experiments, and application to acoustic echo cancellation, IEEE Trans. Signal Processing 40 (August 1992) 1862-1875.
[19] M.S.E. Abadi, J.H. Husøy, Variable step-size Pradhan Reddy subband adaptive filter, in: Proceedings of the Fifth International Conference on Information, Communications and Signal Processing, Bangkok, Thailand, December 2005, pp. 909-912.
[20] D.E. Knuth, Sorting and Searching, The Art of Computer Programming, vol. 3, second ed., Addison-Wesley, Reading, MA, 1973.
[21] S. Werner, P.S.R. Diniz, J.E.W. Moreira, Set-membership affine projection algorithm with variable data-reuse factor, in: Proceedings of the ISCAS, Island of Kos, Greece, May 2006, pp. 261-264.
[22] H. Malvar, Signal Processing with Lapped Transforms, Artech House, Norwood, MA, 1992.