Influence of input noises on the mean-square performance of the normalized subband adaptive filter algorithm
Journal Pre-proof

Influence of input noises on the mean-square performance of the normalized subband adaptive filter algorithm
Zongsheng Zheng, Zhigang Liu

PII: S0016-0032(19)30907-X
DOI: https://doi.org/10.1016/j.jfranklin.2019.12.020
Reference: FI 4337

To appear in: Journal of the Franklin Institute

Received date: 24 December 2018
Revised date: 19 November 2019
Accepted date: 12 December 2019

Please cite this article as: Zongsheng Zheng, Zhigang Liu, Influence of input noises on the mean-square performance of the normalized subband adaptive filter algorithm, Journal of the Franklin Institute (2019), doi: https://doi.org/10.1016/j.jfranklin.2019.12.020
© 2019 The Franklin Institute. Published by Elsevier Ltd. All rights reserved.
Influence of input noises on the mean-square performance of the normalized subband adaptive filter algorithm

Zongsheng Zheng, Zhigang Liu∗

School of Electrical Engineering, Southwest Jiaotong University, Chengdu, China
Abstract

The normalized subband adaptive filter (NSAF) algorithm has attracted considerable attention owing to its fast convergence rate for correlated inputs. However, the convergence of the NSAF algorithm has not been analyzed extensively, especially in the presence of noisy inputs, which are frequently encountered in system identification and channel estimation applications. In this paper, we analyze the performance of the NSAF algorithm under noisy inputs. Based on several reasonable assumptions and approximations, expressions for the transient-state and steady-state mean-square deviation (MSD) are derived. Simulations in different kinds of environments confirm the accuracy of the theoretical expressions.

Keywords: Normalized subband adaptive filter (NSAF), mean-square deviation (MSD), transient-state, steady-state, noisy inputs
1. Introduction

As a useful technique, the adaptive filter has been applied widely in many areas, including system identification, acoustic/network echo cancellation, channel equalization and active noise control [1, 2, 3, 4, 5]. Owing to their simplicity and robustness, the least-mean-square (LMS) and normalized LMS (NLMS) algorithms are the most frequently used adaptive-filtering algorithms. However, both exhibit a slow convergence rate when the inputs are correlated. The affine projection (AP) algorithms [6, 7] attempt to overcome this problem by stacking multiple input vectors into the input matrix. The AP algorithms indeed achieve a fast convergence rate, but at the cost of high computational complexity.
Fortunately, the above problem can be addressed appropriately by using the subband adaptive filtering (SAF) algorithms [8, 9], which are known for their fast convergence rate and low computational burden. In the early SAF algorithms, because the weight adaptation was carried out independently in each subband, their performance was degraded by structural problems, for instance, the aliasing

∗Corresponding author. E-mail addresses: [email protected] (Z. Zheng), liuzg [email protected] (Z. Liu).
Preprint submitted to Journal of the Franklin Institute
December 17, 2019
and band-edge effects [10]. To overcome these structural problems, the fullband weight model was utilized in the subsequent SAF algorithms, in which the adaptive filter weights are not divided into subbands [11, 12]. In [8], by solving a multiple-constraint optimization problem, the normalized SAF (NSAF) algorithm was proposed, which has a weight-update formula similar to those of the algorithms in [11] and [12]. Simulation results showed that the NSAF algorithm achieves an improved convergence rate while retaining a computational complexity similar to that of the NLMS algorithm.
To provide guidelines for designing adaptive filters, the performance analysis of adaptive-filtering algorithms has long been an active research topic [13, 14, 15]. The behavior of the NSAF algorithm has been studied in several works. In [16], the steady-state mean-square error (MSE) of the NSAF algorithm was studied using the energy-conservation argument. Steady-state MSE analyses of the fixed and variable regularization of the NSAF algorithm were presented in [17]. Note
that, compared with the MSE, the mean-square deviation (MSD) is more useful in many applications (e.g., system identification and channel estimation), because the purpose of these applications is to identify the unknown system coefficients. The MSD analysis of the NSAF algorithm was first presented in [18]; it achieves good agreement between theoretical and simulation results but at a high computational cost. Afterwards, the MSD behavior of the NSAF algorithm was analyzed
in [19]. That analysis achieves both low computational complexity and good agreement in various environments. In [20], the MSD behavior of the NSAF algorithm was analyzed in the under-modeling scenario. To our knowledge, however, the performance of the NSAF algorithm has not yet been analyzed under noisy inputs. Input noises usually arise from sampling errors, human errors,
modeling errors and instrument errors, and they commonly appear in system identification applications such as channel identification [21]. In this paper, using some reasonable assumptions and approximations, the performance of the NSAF algorithm under noisy inputs is analyzed. Expressions for the transient-state and steady-state MSD are presented. Simulations demonstrate that the proposed expressions describe the simulation results accurately.
The main contributions of this paper are summarized as follows:
1) The MSD of the NSAF algorithm under noisy inputs is analyzed for correlated inputs;
2) Expressions for both the transient-state and steady-state MSD are presented;
3) The relationship between the NSAF algorithm with noisy inputs and other adaptive-filtering algorithms is interpreted;
4) The step-size selection is suggested for the NSAF algorithm in heavy input noise environments.

Notation: Normal letters denote scalars, boldface lowercase letters denote vectors, and boldface uppercase letters denote matrices. The remaining notation of this paper is listed in Table 1.
Table 1: Mathematical notation

Notation    Description
(·)^T       Transpose of a vector or matrix
E[·]        Expectation of a random variable
‖·‖         Euclidean norm of a vector
2. NSAF algorithm

Consider the following system, described by

    d(n) = u^T(n) w_0 + v(n)

where w_0 ∈ R^{L×1} is the unknown weight vector of length L, u(n) = [u(n), u(n−1), ..., u(n−L+1)]^T ∈ R^{L×1} is the input signal vector, d(n) represents the desired signal, and v(n) denotes the background noise with zero mean and variance σ_v². Note that the variable n indexes the original sequences and the variable k indexes the decimated sequences throughout this paper.

2.1. NSAF

The structure of the NSAF is shown in Fig. 1, where N is the number of subbands. By partitioning the desired signal d(n) and the input signal u(n) via the analysis filters H_i(z), i = 0, 1, ..., N−1, the subband signals d_i(n) and u_i(n) are obtained. The subband output signals y_i(n) are obtained by filtering the subband input signals u_i(n) through the adaptive filter. By decimating the subband signals d_i(n) and y_i(n), the signals d_{i,D}(k) and y_{i,D}(k) are acquired. Then, defining u_i(k) = [u_i(kN), u_i(kN−1), ..., u_i(kN−L+1)]^T ∈ R^{L×1} and w(k) = [w_0(k), w_1(k), ..., w_{L−1}(k)]^T ∈ R^{L×1}, the subband error signal is computed as

    e_{i,D}(k) = d_{i,D}(k) − u_i^T(k) w(k).
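As a quick sanity check, the system model above can be simulated directly. The sketch below (filter length and noise variance are illustrative choices, not the paper's simulation values) builds one regressor u(n) and the corresponding desired sample d(n) = u^T(n) w_0 + v(n):

```python
import numpy as np

rng = np.random.default_rng(0)

L = 16                        # filter length (illustrative choice)
w0 = rng.standard_normal(L)   # unknown weight vector w_0
sigma_v2 = 1e-3               # background-noise variance (illustrative)

u_seq = rng.standard_normal(200)      # input sequence u(n)
n = 100
u = u_seq[n - np.arange(L)]           # regressor [u(n), u(n-1), ..., u(n-L+1)]^T

v = np.sqrt(sigma_v2) * rng.standard_normal()
d = u @ w0 + v                        # desired signal d(n)
```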
2.2. NSAF algorithm

According to [8], the weight vector of the NSAF algorithm is updated by

    w(k+1) = w(k) + μ Σ_{i=0}^{N−1} [ u_i(k) / ‖u_i(k)‖² ] e_{i,D}(k)

where μ is the step size that controls the convergence of the algorithm.
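For concreteness, the update above can be written as a short NumPy routine. This is only a sketch of the weight-update step: the subband signals are assumed to be precomputed by a filter bank, and the small regularizer `eps` is an implementation convenience, not part of the paper's formula.

```python
import numpy as np

def nsaf_update(w, U, e_D, mu, eps=1e-8):
    """One NSAF weight update: w(k+1) = w(k) + mu * sum_i u_i e_{i,D} / ||u_i||^2.

    w   : (L,) current weight vector w(k)
    U   : (N, L) rows are the subband regressors u_i(k)
    e_D : (N,) decimated subband errors e_{i,D}(k)
    """
    norms = np.sum(U * U, axis=1) + eps                 # ||u_i(k)||^2 per subband
    return w + mu * (U * (e_D / norms)[:, None]).sum(axis=0)
```

With N = 1 the update reduces to the NLMS recursion, which gives a convenient correctness check.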
Figure 1: Structure of the NSAF [8]. (Diagram omitted in this version: the analysis filters H_i(z) split d(n) and u(n) into subbands, the N-fold decimators produce d_{i,D}(k) and y_{i,D}(k), and the subband errors e_{i,D}(k) drive the adaptation of the fullband weight vector w(k).)

3. MSD analysis

3.1. Signal model

The system identification model with noisy inputs is shown in Fig. 2. The goal of this model is to identify the unknown weight vector w_0 by using the adaptive filter. Unlike the noise-free input
vector of the unknown system, the input vector of the adaptive filter is corrupted by noise. The noisy input vector is given by

    ū(n) = u(n) + η(n)

where η(n) = [η(n), η(n−1), ..., η(n−L+1)]^T ∈ R^{L×1}, and η(n) represents the input noise signal with zero mean and variance σ_η².
Figure 2: System identification model with noisy inputs. (Diagram omitted in this version: the unknown system w_0 produces d(n) = u^T(n) w_0 + v(n) from the clean input u(n); the adaptive filter w(n) operates on the noisy input ū(n) = u(n) + η(n) and the error e(n).)
3.2. Assumptions

Assumption 1: The analysis filter bank is paraunitary [22, 14].

Assumption 2: The signals v_{i,D}(k), u_i(k), η_i(k) and w(k) are mutually independent [23, 24, 25]. Here, v_{i,D}(k) denotes the ith subband background noise with zero mean and variance δ_{i,v}², η_i(k) = [η_i(k), η_i(k−1), ..., η_i(k−L+1)]^T ∈ R^{L×1}, and η_i(k) denotes the ith subband input noise with zero mean and variance δ_{i,η}².
Note that these assumptions are common in almost all performance analyses of the NSAF algorithms.

3.3. Transient-state analysis

In the noisy input environment, the weight-vector update of the NSAF algorithm becomes

    w(k+1) = w(k) + μ Σ_{i=0}^{N−1} [ ū_i(k) ē_{i,D}(k) / ‖ū_i(k)‖² ]

which can be represented in terms of the weight error vector, w̃(k) = w_0 − w(k), as

    w̃(k+1) = w̃(k) − μ Σ_{i=0}^{N−1} [ ū_i(k) ē_{i,D}(k) / ‖ū_i(k)‖² ]    (1)

where

    ē_{i,D}(k) = d_{i,D}(k) − ū_i^T(k) w(k)
               = u_i^T(k) w_0 + v_{i,D}(k) − ū_i^T(k) w(k)
               = (u_i^T(k) + η_i^T(k)) w_0 − ū_i^T(k) w(k) + v_{i,D}(k) − η_i^T(k) w_0
               = ū_i^T(k) w̃(k) + v_{i,D}(k) − η_i^T(k) w_0.    (2)

Considering the above relation and taking the expectation on both sides of (1), we have

    E[w̃(k+1)] = E[w̃(k)] − μ E[ Σ_{i=0}^{N−1} ū_i(k) ū_i^T(k) w̃(k) / ‖ū_i(k)‖² ] − μ E[ Σ_{i=0}^{N−1} ū_i(k) v_{i,D}(k) / ‖ū_i(k)‖² ] + μ E[ Σ_{i=0}^{N−1} ū_i(k) η_i^T(k) w_0 / ‖ū_i(k)‖² ]
               = E[w̃(k)] − μ E[ Σ_{i=0}^{N−1} ū_i(k) ū_i^T(k) w̃(k) / ‖ū_i(k)‖² ] + μ Σ_{i=0}^{N−1} δ_{i,η}² E[ 1 / ‖ū_i(k)‖² ] w_0.

Given that the subband input power has a chi-square distribution with L degrees of freedom [19], we obtain

    E[w̃(k+1)] = E[w̃(k)] − μ Σ_{i=0}^{N−1} [ 1 / (γ_i L) ] E[w̃(k)] + μ Σ_{i=0}^{N−1} [ δ_{i,η}² / ((L−2) δ_{i,ū}²) ] w_0
               = E[w̃(k)] − μ A E[w̃(k)] + μ B w_0    (3)

where A = Σ_{i=0}^{N−1} 1/(γ_i L), B = Σ_{i=0}^{N−1} δ_{i,η}² / ((L−2) δ_{i,ū}²), δ_{i,ū}² denotes the subband input variance, and γ_i ≥ 1 is a constant influenced by the correlatedness of the input signal.

Define the weight-error correlation matrix as W(k) = E[w̃(k) w̃^T(k)]; then the mean-square deviation (MSD) can be expressed as

    MSD(k) = E[w̃^T(k) w̃(k)] = Tr(W(k)).
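Recursion (3) is a scalar-coefficient linear recursion and can be iterated numerically. The sketch below uses illustrative values for γ_i, δ_{i,η}² and δ_{i,ū}² (not taken from the paper's simulations) and shows E[w̃(k)] settling at a nonzero vector proportional to w_0, anticipating the steady-state result (9):

```python
import numpy as np

L, N, mu = 128, 4, 0.1
gamma = np.array([1.5, 2.0, 2.0, 1.5])     # illustrative gamma_i >= 1
d_eta2 = np.full(N, 0.01)                  # illustrative subband input-noise variances
d_ubar2 = np.full(N, 1.01)                 # illustrative subband input variances

A = np.sum(1.0 / (gamma * L))              # A = sum_i 1/(gamma_i * L)
B = np.sum(d_eta2 / ((L - 2) * d_ubar2))   # B = sum_i d_eta2_i / ((L-2) * d_ubar2_i)

rng = np.random.default_rng(2)
w0 = rng.standard_normal(L)

md = w0.copy()                             # E[w~(0)] = w0 when w(0) = 0
for _ in range(20000):
    md = (1 - mu * A) * md + mu * B * w0   # recursion (3)
```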
Post-multiplying both sides of (1) by their transpose and taking the expectation, the recursion for W(k) is obtained as

    W(k+1) = W(k) − 2μ E[ Σ_{i=0}^{N−1} ū_i(k) ē_{i,D}(k) w̃^T(k) / ‖ū_i(k)‖² ] + μ² E[ Σ_{i=0}^{N−1} ū_i(k) ū_i^T(k) ē_{i,D}²(k) / ‖ū_i(k)‖⁴ ].    (4)

Then, according to relation (2), we find

    E[ Σ_{i=0}^{N−1} ū_i(k) ē_{i,D}(k) w̃^T(k) / ‖ū_i(k)‖² ]
      = E[ Σ_{i=0}^{N−1} ū_i(k) ū_i^T(k) w̃(k) w̃^T(k) / ‖ū_i(k)‖² ] + E[ Σ_{i=0}^{N−1} ū_i(k) v_{i,D}(k) w̃^T(k) / ‖ū_i(k)‖² ] − E[ Σ_{i=0}^{N−1} ū_i(k) η_i^T(k) w_0 w̃^T(k) / ‖ū_i(k)‖² ]
      = E[ Σ_{i=0}^{N−1} ū_i(k) ū_i^T(k) w̃(k) w̃^T(k) / ‖ū_i(k)‖² ] − E[ Σ_{i=0}^{N−1} δ_{i,η}² w_0 w̃^T(k) / ‖ū_i(k)‖² ],    (5)

and

    E[ Σ_{i=0}^{N−1} ū_i(k) ū_i^T(k) ē_{i,D}²(k) / ‖ū_i(k)‖⁴ ]
      = E[ Σ_{i=0}^{N−1} ( ū_i(k) ū_i^T(k) / ‖ū_i(k)‖⁴ ) ( w̃^T(k) ū_i(k) ū_i^T(k) w̃(k) − 2 δ_{i,η}² w̃^T(k) w_0 + δ_{i,v}² + δ_{i,η}² w_0^T w_0 ) ].    (6)

Combining the results (5)-(6), and substituting them into (4), we obtain

    W(k+1) = W(k) − 2μ E[ Σ_{i=0}^{N−1} ū_i(k) ū_i^T(k) w̃(k) w̃^T(k) / ‖ū_i(k)‖² ] + 2μ E[ Σ_{i=0}^{N−1} δ_{i,η}² w_0 w̃^T(k) / ‖ū_i(k)‖² ]
             + μ² E[ Σ_{i=0}^{N−1} ( ū_i(k) ū_i^T(k) / ‖ū_i(k)‖⁴ ) ( w̃^T(k) ū_i(k) ū_i^T(k) w̃(k) − 2 δ_{i,η}² w̃^T(k) w_0 + δ_{i,v}² + δ_{i,η}² w_0^T w_0 ) ].    (7)

By taking the trace on both sides of (7), the MSD of the NSAF algorithm is given by

    MSD(k+1) = MSD(k) − (2μ − μ²) E[ Σ_{i=0}^{N−1} w̃^T(k) ū_i(k) ū_i^T(k) w̃(k) / ‖ū_i(k)‖² ] + (2μ − 2μ²) E[ Σ_{i=0}^{N−1} δ_{i,η}² w̃^T(k) w_0 / ‖ū_i(k)‖² ]
               + μ² E[ Σ_{i=0}^{N−1} δ_{i,v}² / ‖ū_i(k)‖² ] + μ² E[ Σ_{i=0}^{N−1} δ_{i,η}² / ‖ū_i(k)‖² ] w_0^T w_0
             = MSD(k) − (2μ − μ²) Σ_{i=0}^{N−1} [ 1 / (γ_i L) ] MSD(k) + (2μ − 2μ²) Σ_{i=0}^{N−1} [ δ_{i,η}² / ((L−2) δ_{i,ū}²) ] E[w̃^T(k)] w_0
               + μ² Σ_{i=0}^{N−1} δ_{i,v}² / ((L−2) δ_{i,ū}²) + μ² Σ_{i=0}^{N−1} [ δ_{i,η}² / ((L−2) δ_{i,ū}²) ] ‖w_0‖²
             = MSD(k) − (2μ − μ²) A MSD(k) + (2μ − 2μ²) B E[w̃^T(k)] w_0 + μ² C + μ² B ‖w_0‖²    (8)

where C = Σ_{i=0}^{N−1} δ_{i,v}² / ((L−2) δ_{i,ū}²).
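The coupled recursions (8) and (3) can be iterated together to produce the theoretical transient MSD curve. The constants below are illustrative stand-ins (not the paper's simulation values); the point is only to show the mechanics of evaluating the model:

```python
import numpy as np

L, N, mu = 128, 4, 0.1
gamma = np.full(N, 2.0)                    # illustrative gamma_i
d_eta2 = np.full(N, 0.01)                  # illustrative subband input-noise variances
d_v2 = np.full(N, 0.001)                   # illustrative subband background-noise variances
d_ubar2 = np.full(N, 1.01)                 # illustrative subband input variances

A = np.sum(1.0 / (gamma * L))
B = np.sum(d_eta2 / ((L - 2) * d_ubar2))
C = np.sum(d_v2 / ((L - 2) * d_ubar2))

rng = np.random.default_rng(3)
w0 = rng.standard_normal(L)
w0n2 = w0 @ w0

md = w0.copy()                              # E[w~(0)] for w(0) = 0
msd = w0n2                                  # MSD(0) = ||w0||^2
msd_curve = []
for _ in range(50000):
    msd = (msd - (2*mu - mu**2) * A * msd
           + (2*mu - 2*mu**2) * B * (md @ w0)
           + mu**2 * C + mu**2 * B * w0n2)  # recursion (8)
    md = (1 - mu * A) * md + mu * B * w0    # recursion (3)
    msd_curve.append(msd)
```

For long runs the curve settles at the closed-form steady-state value derived in Section 3.4, which serves as a consistency check.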
Remark 1: The mean deviation E[w̃(k)] is given recursively by (3). Using (8) together with (3), the transient-state MSD of the NSAF algorithm is obtained recursively.

Remark 2: To guarantee that the NSAF algorithm converges in the input noise environment, the condition |1 − (2μ − μ²)A| < 1 should be satisfied, which results in 0 < μ < 2. Here, 0 < A < 1 holds because the filter length L is always larger than the subband number N. Moreover, the fact that the mean deviation E[w̃(k)] is bounded was also used, since the condition |1 − μA| < 1 is satisfied for any step size in the range 0 < μ < 2. Note that, whether or not the input noise exists, the step-size range of the NSAF algorithm does not change; in other words, the input noise does not affect the stability bound of the NSAF algorithm.
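The step-size bound in Remark 2 is easy to verify numerically: for any 0 < A < 1, the contraction factor |1 − (2μ − μ²)A| stays below one over the whole range 0 < μ < 2. A quick check with an illustrative A:

```python
import numpy as np

A = 4.0 / (2.0 * 128)                       # illustrative A = sum_i 1/(gamma_i L), 0 < A < 1
mus = np.linspace(1e-3, 2.0 - 1e-3, 1000)   # step sizes inside (0, 2)
rho = np.abs(1.0 - (2.0 * mus - mus**2) * A)  # contraction factor of recursion (8)
```

Since 2μ − μ² = μ(2 − μ) lies in (0, 1] on this interval, (2μ − μ²)A lies in (0, A) ⊂ (0, 1), so rho < 1 everywhere.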
3.4. Steady-state analysis

Defining the mean deviation MD(k) = E[w̃(k)], (3) can be expanded as

    MD(k+1) = (1 − μA) MD(k) + μ B w_0 = Σ_{m=1}^{k+1} μ B w_0 (1 − μA)^{m−1}

where the initial-condition term (1 − μA)^{k+1} MD(0) vanishes as k → ∞ since |1 − μA| < 1. Taking the limit as k → ∞, the geometric sum converges and

    MD(∞) = (B/A) w_0.    (9)
In steady state, using (9), the recursion (8) becomes

    MSD(∞) = MSD(∞) − (2μ − μ²) A MSD(∞) + (2μ − 2μ²) B (B/A) ‖w_0‖² + μ² C + μ² B ‖w_0‖².

Then, the steady-state MSD is given by

    MSD(∞) = [ ((2 − 2μ) B/A + μ) B ‖w_0‖² + μ C ] / ((2 − μ) A).

Remark 3: The steady-state MSD can be divided into two parts:

    MSD(∞) = μ C / ((2 − μ) A) + ((2 − 2μ) B/A + μ) B ‖w_0‖² / ((2 − μ) A).

As presented in [19], the first part, μC/((2 − μ)A), is the steady-state MSD of the NSAF algorithm in the noise-free-input environment. The second part is then the excess steady-state MSD caused by the input noise. Note that the second part is always larger than zero, which means that the input noise degrades the performance of the algorithm. A larger input noise results in a larger steady-state MSD, and a larger unknown weight vector w_0 also leads to a larger steady-state MSD.
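The two-part decomposition in Remark 3 can be packaged as a small helper. The function below is my own naming and the constants are illustrative; setting B = 0 (no input noise) zeroes the excess term and recovers the noise-free steady-state MSD:

```python
import numpy as np

def steady_state_msd(mu, A, B, C, w0_norm2):
    """Steady-state MSD split as in Remark 3: noise-free part + input-noise part."""
    noise_free = mu * C / ((2.0 - mu) * A)
    input_noise = ((2.0 - 2.0 * mu) * B / A + mu) * B * w0_norm2 / ((2.0 - mu) * A)
    return noise_free, input_noise

mu, A, C, w0_norm2 = 0.1, 0.015625, 3.1e-5, 128.0   # illustrative values
B = 3.1e-4                                          # illustrative input-noise constant
nf, inp = steady_state_msd(mu, A, B, C, w0_norm2)
```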
4. Relationship to other adaptive-filtering algorithms

4.1. Leaky NSAF algorithm

To prevent unbounded growth of the weight estimates in the NSAF algorithm, a leaky NSAF algorithm was proposed, whose weight vector is recursively updated by

    w(k+1) = (1 − μλ) w(k) + μ Σ_{i=0}^{N−1} [ u_i(k) / ‖u_i(k)‖² ] e_{i,D}(k)

where λ is the leakage parameter. Note that, if λ = B, the mean behavior of the leaky NSAF algorithm is the same as that of the NSAF algorithm with noisy inputs (3). Both the leaky NSAF algorithm and the NSAF algorithm with noisy inputs reduce to the standard NSAF algorithm when λ = B = 0.

4.2. Bias-compensated NSAF algorithm

As shown in (9), the mean deviation of the NSAF algorithm with noisy inputs does not converge to zero; that is, the NSAF algorithm yields biased estimates in the presence of noisy inputs. To compensate for the estimation bias caused by noisy inputs, we previously presented the bias-compensated NSAF algorithm [26]:

    w(k+1) = w(k) + μ Σ_{i=0}^{N−1} [ ū_i(k) ē_{i,D}(k) / ‖ū_i(k)‖² ] + μ Σ_{i=0}^{N−1} [ δ_{i,η}² / ‖ū_i(k)‖² ] w(k).
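Under the same approximations that led to (3), the correction term changes the mean recursion to MD(k+1) = (1 − μ(A − B)) MD(k) (the μB w_0 bias term is cancelled by the compensation), so the mean deviation is driven to zero provided A > B. A numeric sketch with illustrative constants contrasts the two mean recursions:

```python
import numpy as np

L, mu = 128, 0.1
A = 4.0 / (2.0 * L)                  # illustrative A
B = 4.0 * 0.01 / ((L - 2) * 1.01)    # illustrative B (note A > B here)

rng = np.random.default_rng(4)
w0 = rng.standard_normal(L)

md_biased = w0.copy()                # plain NSAF mean recursion (3)
md_comp = w0.copy()                  # bias-compensated mean recursion
for _ in range(30000):
    md_biased = (1 - mu * A) * md_biased + mu * B * w0
    md_comp = (1 - mu * (A - B)) * md_comp   # the mu*B*w0 term is cancelled
```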
5. Simulation Results

Simulations in the context of system identification are presented to evaluate the analysis of the NSAF algorithm with noisy inputs. The weight vector of the unknown system is randomly generated with L = 128, and the same length is used for the adaptive filter. The system input is either a white Gaussian process or an autoregressive process. The autoregressive (correlated) input is obtained by filtering a zero-mean white Gaussian random sequence through the first-order system T(z) = 1/(1 − 0.8 z^{−1}). The input noise is a zero-mean white Gaussian process, added to the system input with a signal-to-noise ratio (SNR) of 0, 10 or 20 dB [21]. The background noise is also a zero-mean white Gaussian process, added to the system output with an SNR of 20, 30 or 40 dB [27]. The SNRs for the additive noises are calculated as

    SNR_input = 10 log10( E[u²(n)] / E[η²(n)] ),
    SNR_output = 10 log10( E[y²(n)] / E[v²(n)] )

where y(n) = u^T(n) w_0, SNR_input is the SNR at the input, and SNR_output is the SNR at the output. The cosine-modulated filter bank with subband number N = 4 is used in the following simulations [8]. To measure the performance of the algorithm, the normalized MSD (NMSD) is used:

    NMSD = 10 log10( E[‖w(k) − w_0‖²] / ‖w_0‖² ).

The simulation results are obtained by ensemble averaging over 100 trials.

5.1. Transient-state Performance

Figs. 3 and 4 show the NMSD learning curves of the theoretical and simulation results for white and correlated inputs. In these simulations, (8) and (3) are used to obtain the theoretical results. Compared with the NSAF algorithm with a small step size, the NSAF algorithm with a large step size speeds up the convergence rate but increases the steady-state NMSD. The theoretical results match the simulation results well in both the transient state and the steady state. Although there are slight differences between the theoretical and simulation results, the proposed expressions are still reasonable, because several assumptions and approximations were utilized to simplify the analysis.
NMSD (dB)
-5
-10
-15
-20 0
0.5
1
1.5
2
Step size
2.5
3
3.5
4 10 4
Figure 3: NMSD learning curve of the NSAF algorithm for white inputs. [SNRinput = 10dB, SNRoutput = 30dB]
5.2. Steady-State Performance

The steady-state NMSD curves of the theoretical and simulation results are compared in Figs. 5 and 6 under different values of SNR_output and SNR_input. As can be seen, the theoretical results match the simulation results well in the different scenarios. Note that, if SNR_input is fixed (e.g., SNR_input = 10 dB as in Fig. 5), the steady-state NMSD curves of the NSAF algorithm are the same under different values of SNR_output, which means that, in the presence of noisy inputs, the input noise affects the steady-state performance of the NSAF algorithm more severely than the background noise does. Moreover, it can
(Learning curves, theory vs. simulation, for step sizes μ = 0.05, 0.1 and 0.5; plot omitted in this version.)

Figure 4: NMSD learning curve of the NSAF algorithm for correlated inputs. [SNR_input = 10 dB, SNR_output = 30 dB]
be seen from Fig. 6 that a small SNR_input results in a higher steady-state NMSD, and that the steady-state NMSD becomes a linear function of the step size when the input noise becomes heavy. It should be pointed out that, for the conventional adaptive-filtering algorithm, the steady-state NMSD is a logarithmic function of the step size, as shown in Fig. 6 for SNR_input = 20 dB. In the heavy input noise environment, since the steady-state NMSD of the NSAF algorithm no longer improves appreciably as the step size decreases, while the convergence rate slows down noticeably (as shown in Fig. 4), step sizes smaller than 0.1 should be avoided in practice.
(Steady-state NMSD versus step size, theory vs. simulation, for SNR_output = 20, 30 and 40 dB; plot omitted in this version.)

Figure 5: Steady-state NMSD curves of the NSAF algorithm under different values of SNR_output. [correlated inputs, SNR_input = 10 dB]
(Steady-state NMSD versus step size, theory vs. simulation, for SNR_input = 0, 10 and 20 dB; plot omitted in this version.)

Figure 6: Steady-state NMSD curves of the NSAF algorithm under different values of SNR_input. [correlated inputs, SNR_output = 30 dB]
6. Conclusion

The performance of the NSAF algorithm with noisy inputs was investigated in this paper. Based on several reasonable assumptions and approximations, we derived expressions for the transient-state and steady-state MSD. The relationship among the NSAF algorithm with noisy inputs, the leaky NSAF algorithm and the bias-compensated NSAF algorithm was interpreted. Simulation results demonstrated that the theoretical results match the simulation results well in both the transient state and the steady state for different scenarios. Based on the simulation results, a step-size selection was suggested for the NSAF algorithm in the heavy input noise environment. In the future, we will analyze the performance of other adaptive-filtering algorithms under noisy inputs, such as the AP algorithm and the recursive least-squares algorithm.
Acknowledgments

This work was partially supported by the National Natural Science Foundation of China (Grants U1734202 and U1434203) and the Sichuan Province Youth Science and Technology Innovation Team (Grant 2016TD0012). The first author was also supported by the China Scholarship Council (No. 201807000035) and the Cultivation Program for the Excellent Doctoral Dissertation of Southwest Jiaotong University. The authors would like to thank the Associate Editor and the reviewers for their valuable comments and suggestions, which significantly improved the quality of the manuscript.
References

[1] W. Ma, H. Qu, G. Gui, L. Xu, J. Zhao, B. Chen, Maximum correntropy criterion based sparse adaptive filtering algorithms for robust channel estimation under non-Gaussian environments, Journal of the Franklin Institute 352 (7) (2015) 2708–2727.
[2] J. Na, Y. Xing, R. Costa-Castelló, Adaptive estimation of time-varying parameters with application to roto-magnet plant, IEEE Transactions on Systems, Man, and Cybernetics: Systems.
[3] H. A. Hashim, L. J. Brown, K. McIsaac, Nonlinear stochastic attitude filters on the special orthogonal group 3: Itô and Stratonovich, IEEE Transactions on Systems, Man, and Cybernetics: Systems.
[4] H. A. Hashim, L. J. Brown, K. McIsaac, Nonlinear pose filters on the special Euclidean group SE(3) with guaranteed transient and steady-state performance, IEEE Transactions on Systems, Man, and Cybernetics: Systems.
[5] H. A. Hashim, L. J. Brown, K. McIsaac, Nonlinear stochastic position and attitude filter on the special Euclidean group 3, Journal of the Franklin Institute 356 (7) (2019) 4144–4173.
[6] K. Ozeki, T. Umeda, An adaptive filtering algorithm using an orthogonal projection to an affine subspace and its properties, Electronics and Communications in Japan (Part I: Communications) 67 (5) (1984) 19–27.
[7] Z. Zheng, Z. Liu, Y. Dong, Steady-state and tracking analyses of the improved proportionate affine projection algorithm, IEEE Transactions on Circuits and Systems II: Express Briefs 65 (11) (2018) 1793–1797.
[8] K.-A. Lee, W.-S. Gan, Improving convergence of the NLMS algorithm using constrained subband updates, IEEE Signal Processing Letters 11 (9) (2004) 736–739.
[9] Y. Yu, H. Zhao, B. Chen, Sparse normalized subband adaptive filter algorithm with l0-norm constraint, Journal of the Franklin Institute 353 (18) (2016) 5121–5136.
[10] A. Gilloire, M. Vetterli, Adaptive filtering in subbands with critical sampling: analysis, experiments, and application to acoustic echo cancellation, IEEE Transactions on Signal Processing 40 (8) (1992) 1862–1875.
[11] M. D. Courville, P. Duhamel, Adaptive filtering in subbands using a weighted criterion, IEEE Transactions on Signal Processing 46 (9) (1998) 2359–2371.
[12] S. S. Pradhan, V. U. Reddy, A new approach to subband adaptive filtering, IEEE Transactions on Signal Processing 47 (3) (1999) 655–664.
[13] L. Lu, H. Zhao, W. Wang, Y. Yu, Performance analysis of the robust diffusion normalized least mean p-power algorithm, IEEE Transactions on Circuits and Systems II: Express Briefs.
[14] P. Wen, J. Zhang, S. Zhang, D. Li, Augmented complex-valued normalized subband adaptive filter: algorithm derivation and analysis, Journal of the Franklin Institute.
[15] Z. Zheng, Z. Liu, Steady-state mean-square performance analysis of the affine projection sign algorithm, IEEE Transactions on Circuits and Systems II: Express Briefs.
[16] K.-A. Lee, W.-S. Gan, S.-M. Kuo, Mean-square performance analysis of the normalized subband adaptive filter, in: Fortieth Asilomar Conference on Signals, Systems and Computers (ACSSC '06), IEEE, 2006, pp. 248–252.
[17] J. Ni, X. Chen, Steady-state mean-square error analysis of regularized normalized subband adaptive filters, Signal Processing 93 (9) (2013) 2648–2652.
[18] W. Yin, A. S. Mehr, Stochastic analysis of the normalized subband adaptive filter algorithm, IEEE Transactions on Circuits and Systems I: Regular Papers 58 (5) (2011) 1020–1033.
[19] J. J. Jeong, S. H. Kim, G. Koo, S. W. Kim, Mean-square deviation analysis of multiband-structured subband adaptive filter algorithm, IEEE Transactions on Signal Processing 64 (4) (2016) 985–994.
[20] Y. Yu, H. Zhao, Performance analysis of the deficient length NSAF algorithm and a variable step size method for improving its performance, Digital Signal Processing 62 (2017) 157–167.
[21] S. M. Jung, P. Park, Stabilization of a bias-compensated normalized least-mean-square algorithm for noisy inputs, IEEE Transactions on Signal Processing 65 (11) (2017) 2949–2961.
[22] M. S. E. Abadi, S. Kadkhodazadeh, A family of proportionate normalized subband adaptive filter algorithms, Journal of the Franklin Institute 348 (2) (2011) 212–238.
[23] L. Lu, H. Zhao, B. Chen, Collaborative adaptive Volterra filters for nonlinear system identification in α-stable noise environments, Journal of the Franklin Institute 353 (17) (2016) 4500–4525.
[24] W. Ma, X. Qiu, J. Duan, Y. Li, B. Chen, Kernel recursive generalized mixed norm algorithm, Journal of the Franklin Institute 355 (4) (2018) 1596–1613.
[25] W. Wang, H. Zhao, B. Chen, Bias compensated zero attracting normalized least mean square adaptive filter and its performance analysis, Signal Processing 143 (2018) 94–105.
[26] Z. Zheng, H. Zhao, Bias-compensated normalized subband adaptive filter algorithm, IEEE Signal Processing Letters 23 (6) (2016) 809–813.
[27] Y. Wang, Y. Li, R. Yang, Sparse adaptive channel estimation based on mixed controlled l2 and lp-norm error criterion, Journal of the Franklin Institute 354 (15) (2017) 7215–7239.