Generalized Maximum Correntropy Algorithm with Affine Projection for Robust Filtering Under Impulsive-Noise Environments
Highlights

• We propose a novel affine projection type robust adaptive filtering algorithm for system identification under impulsive-noise environments.
• The proposed algorithm is derived by combining affine projection with the generalized maximum correntropy criterion.
• The proposed algorithm requires no matrix inversion.
• Compared with state-of-the-art algorithms, the proposed algorithm is more robust against a great number of large outliers.
Generalized Maximum Correntropy Algorithm with Affine Projection for Robust Filtering Under Impulsive-Noise Environments

Ji Zhao¹, Hongbin Zhang¹·*, J. Andrew Zhang²

¹ School of Information and Communication Engineering, University of Electronic Science and Technology of China, Chengdu, Sichuan 611731, PR China
² Global Big Data Technologies Centre, University of Technology Sydney, Sydney, NSW 2007, Australia
Abstract

Combining affine projection (AP) with the generalized maximum correntropy (GMC) criterion, we propose a new family of AP-type filtering algorithms, called APGMC, for system identification under impulsive-noise environments. By optimizing the GMC of the a posteriori error vector under an $\ell_2$-norm constraint on the filter weight vector, APGMC avoids computing the inverse of the input data matrix. Simulation results validate that APGMC achieves better filtering accuracy and a faster convergence rate than state-of-the-art algorithms.

Keywords: Affine projection, Generalized maximum correntropy, Robust filtering, System identification
1. Introduction

Adaptive filtering (AF) algorithms are widely used in signal processing for, e.g., system identification, channel estimation, echo cancellation, and image restoration. Normalized least mean squares (NLMS) is one of the most popular AF algorithms due to its simplicity and good performance. However, highly colored input data can degrade the convergence rate of NLMS. One solution to this degradation is the affine projection algorithm (APA) [1], which uses an affine projection (AP) and updates the weight vector based on the $M$ previous input vectors. Various methods have been proposed to improve the filtering performance of APA [2]. Examples include the evolutionary technology [3], the shrinkage method [4], the coordinate descent iteration [5] and the variable step-size (VSS) method [6]. Nevertheless, the aforementioned APAs suffer from performance degradation in the presence of impulsive noise, due to the use of the $\ell_2$-norm criterion.

To achieve more robust filtering performance, various nonlinear optimization criteria have been proposed. Examples include the $\ell_p$-norm with $p \in [1, 2)$ [7], the M-estimate [8], the kernel risk-sensitive loss [9] and the Versoria cost [10]. About a decade ago, [11] proposed a robust AP-type algorithm by combining AP with $\ell_1$-norm minimization. This algorithm, called the affine projection sign algorithm (APSA), is robust to large outliers. Variations of APSA were further developed based on the VSS method [12, 13, 14] and the proportionate method [15]. Recently, [16] and [10] proposed two different AP-like robust filtering algorithms, the AP-like M-estimate (APLM) and AP Versoria (APV) algorithms, based on the M-estimate and the Versoria cost, respectively. These algorithms, including APSA, APLM and APV, are all computationally efficient, since they avoid matrix inversion. However, when the probability of large outliers increases, their performance degrades notably, especially for APLM.

* Corresponding author. Email address: [email protected] (Hongbin Zhang).
In this paper, we propose a novel robust AP-type algorithm, called the affine projection generalized maximum correntropy (APGMC) algorithm. The new algorithm is derived by minimizing the generalized correntropic loss (GC-loss) function of the a posteriori error vector under an $\ell_2$-norm constraint on the weight entries. The GC-loss scheme is based on the generalized maximum correntropy (GMC) criterion [17, 18], a generalization of the MC criterion (MCC) that is widely used in robust adaptive filtering and gene selection [19, 20]. The localization provided by the kernel width of MCC can effectively reduce the detrimental effects of outliers and impulsive noise. Examples of the MCC family include the MCC algorithm [21], the correntropy inspired VSS sign algorithm [22], kernel Kalman filtering with MCC [23] and the proportionate AP-type MCC algorithms [24, 25]. Owing to its constrained-optimization formulation, APGMC does not require matrix inversion, leading to computational complexity comparable with that of APLM, APV and APSA-type algorithms. Under the Bernoulli-Gaussian noise model, simulation results validate that our proposed APGMC algorithm outperforms a wide range of state-of-the-art algorithms in terms of convergence rate and filtering accuracy, i.e., the steady-state mismatch between the filtered outputs and the original signals, particularly in the presence of numerous large outliers.
2. Affine projection generalized maximum correntropy algorithm

In this part, we first introduce the GMC criterion, and then present the proposed algorithm, which combines the affine projection method with the GC-loss function induced by GMC.

2.1. Generalized maximum correntropy criterion

The local similarity of two random variables $(X, Y)$ can be measured by the correntropy, defined as follows [26]:
$$V_C(X - Y) = E[\kappa(X - Y)] = \int \kappa(x - y)\, dF_{X,Y}(x, y), \tag{1}$$
where $E[\cdot]$ is the expectation operator, $\kappa(\cdot)$ is a Mercer kernel, and $F_{X,Y}(x, y)$ represents the joint probability distribution function (PDF) of $(X, Y)$. In general, the default Mercer function is the Gaussian kernel defined by
$$\kappa_\lambda(x - y) = A_\sigma \exp\left(-\lambda (x - y)^2\right), \tag{2}$$
where $\lambda = 0.5\sigma^{-2}$ denotes the kernel parameter, $\sigma$ is the kernel size, and $A_\sigma = (\sqrt{2\pi}\sigma)^{-1}$ is the normalization value. However, such a Gaussian kernel may restrict the good properties of $V_C(X - Y)$. Hence, [17] introduced the generalized Gaussian density (GGD) function as a kernel, which is given by
$$\kappa_{\alpha,\lambda}(x - y) = \gamma \exp\left(-\lambda |x - y|^\alpha\right), \tag{3}$$
where $\gamma = 0.5\alpha\left(\beta\Gamma(\alpha^{-1})\right)^{-1}$ with $\Gamma(\cdot)$ denoting the Gamma function, $\alpha > 0$ is the shape parameter¹, $\beta > 0$ is the scale parameter, and $\lambda = \beta^{-\alpha}$. Then the generalized correntropy can be obtained as
$$V_{GC}(X - Y) = E\left[\kappa_{\alpha,\lambda}(X - Y)\right]. \tag{4}$$
Such a definition is very general and flexible, and it includes the original correntropy with the Gaussian kernel as a special case. Many useful properties of (4) can be found in [17].

¹ Although $0 < \alpha \le 2$ guarantees the GGD to be a Mercer kernel, $\alpha$ can also take other positive values, as investigated in [17].
Referring to the definition of the correntropic loss (C-loss) [27], a generalized C-loss (GC-loss) function can be defined as
$$J_{GC}(X - Y) = \gamma - V_{GC}(X - Y). \tag{5}$$
Obviously, minimizing $J_{GC}(X - Y)$ is equivalent to maximizing $V_{GC}(X - Y)$. In addition, $V_{GC}(X - Y)$ can be estimated from the sample pairs $\{x_j, y_j\}_{j=1}^{N}$ of the random variables $(X, Y)$, i.e., $\hat{V}_{GC}(X - Y) = \frac{1}{N}\sum_{j=1}^{N} \kappa_{\alpha,\lambda}(x_j - y_j)$, and then (5) becomes
$$\hat{J}_{GC}(X - Y) = \gamma - \frac{1}{N}\sum_{j=1}^{N} \kappa_{\alpha,\lambda}(x_j - y_j), \tag{6}$$
which has been used in adaptive filtering and kernel methods [17, 18, 28].
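For concreteness, the GGD kernel (3) and the sampled GC-loss (6) can be written in a few lines of NumPy. This is a minimal sketch under the definitions above; the function names `ggd_kernel` and `gc_loss` are ours, not from the paper.

```python
import numpy as np
from scipy.special import gamma as gamma_fn

def ggd_kernel(e, alpha=2.0, beta=1.0):
    """GGD kernel (3): kappa(e) = gamma * exp(-lam * |e|**alpha),
    with lam = beta**(-alpha) and gamma = 0.5*alpha / (beta * Gamma(1/alpha))."""
    lam = beta ** (-alpha)
    gam = 0.5 * alpha / (beta * gamma_fn(1.0 / alpha))
    return gam * np.exp(-lam * np.abs(e) ** alpha)

def gc_loss(x, y, alpha=2.0, beta=1.0):
    """Sampled GC-loss (6): gamma minus the empirical mean of the kernel values."""
    gam = 0.5 * alpha / (beta * gamma_fn(1.0 / alpha))
    return gam - np.mean(ggd_kernel(np.asarray(x) - np.asarray(y), alpha, beta))
```

With `alpha = 2.0` this reduces to the Gaussian-kernel C-loss, which is easy to check numerically.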
Note that although both the kernel recursive GMC (KRGMC) algorithm in [28] and our proposed APGMC algorithm are based on the GMC criterion, KRGMC is developed in a nonlinear kernel space and is a nonlinear algorithm. Comparatively, APGMC is developed in a linear space via the Lagrange multipliers method, and is a linear adaptive filtering algorithm.

2.2. The proposed APGMC algorithm

A general adaptive filtering model can be described as $y(n) = x^T(n)\omega^o + v(n)$, where $y(n)$ is the desired signal, $x(n) = [x_n, x_{n-1}, \ldots, x_{n-L+1}]^T \in \mathbb{R}^{L\times 1}$ is the input signal, and $\omega^o = [\omega_1^o, \omega_2^o, \ldots, \omega_L^o]^T$ denotes the intrinsic weight vector that needs to be estimated; $L$ is the number of taps; and $v(n) = v_b(n) + v_i(n)$, with $v_b(n)$ denoting the background noise and $v_i(n)$ being the impulsive interference. The proposed APGMC algorithm is obtained by minimizing the sampled GC-loss function (6) of the a posteriori error vector under the $\ell_2$-norm constraint on the filter vector, i.e.,
$$\min_{\omega(n)} \sum_{j=0}^{M-1} \hat{J}_{GC}(e_p(n-j)) = \gamma M - \sum_{j=0}^{M-1} \exp\left(-\lambda |e_p(n-j)|^\alpha\right) \quad \text{subject to} \quad \|\omega(n) - \omega(n-1)\|_2^2 \le \mu^2, \tag{7}$$
where $e_p(n-j) = y(n-j) - x^T(n-j)\omega(n)$ denotes the a posteriori error, and $\mu^2$ is a parameter used to ensure that the energy of the weight error vector at two successive time instants is small. Based on the Lagrange multipliers method, we can convert (7) to an unconstrained optimization problem, i.e.,
$$\min_{\omega(n)} J(\omega(n)) = \gamma M - \sum_{j=0}^{M-1} \exp\left(-\lambda |e_p(n-j)|^\alpha\right) + \theta\left(\|\omega(n) - \omega(n-1)\|_2^2 - \mu^2\right), \tag{8}$$
where $\theta > 0$ represents a Lagrange multiplier. Computing the gradient of the objective function in (8) with respect to $\omega(n)$ generates
$$\nabla J(\omega(n)) = -\lambda\alpha \sum_{j=0}^{M-1} b(n-j)\,\mathrm{sgn}(e_p(n-j))\,x(n-j) + 2\theta\left(\omega(n) - \omega(n-1)\right), \tag{9}$$
where $b(n-j) = \exp\left(-\lambda |e_p(n-j)|^\alpha\right) |e_p(n-j)|^{\alpha-1}$, and $\mathrm{sgn}(\cdot)$ is the sign function.

Let $B(n) = \mathrm{diag}[b(n), b(n-1), \ldots, b(n-M+1)]$ be a diagonal matrix with elements $b(n-j)$, $j \in \{0, 1, \ldots, M-1\}$, $X(n) = [x(n), x(n-1), \ldots, x(n-M+1)]$, and $\mathrm{sgn}(e_p(n)) = [\mathrm{sgn}(e_p(n)), \mathrm{sgn}(e_p(n-1)), \ldots, \mathrm{sgn}(e_p(n-M+1))]^T$. Injecting $B(n)$, $X(n)$, and $\mathrm{sgn}(e_p(n))$ into (9), and setting $\nabla J(\omega(n))$ to an all-zero vector, we obtain
$$\omega(n) - \omega(n-1) = \frac{\lambda\alpha}{2\theta}\, X(n) B(n)\, \mathrm{sgn}(e_p(n)). \tag{10}$$
Injecting (10) into the constraint in (7) and taking the equality, we can get the Lagrange multiplier $\theta$ as
$$\theta = 0.5\lambda\alpha\mu^{-1} \left\| X(n) B(n)\, \mathrm{sgn}(e_p(n)) \right\|_2. \tag{11}$$
Substituting the expression of $\theta$ in (11) into (10), the weight update equation can be obtained as
$$\omega(n) = \omega(n-1) + \mu\, \frac{X(n) B(n)\, \mathrm{sgn}(e_p(n))}{\left\| X(n) B(n)\, \mathrm{sgn}(e_p(n)) \right\|_2}. \tag{12}$$
In $B(n)$ and $\mathrm{sgn}(e_p(n))$, the a posteriori error $e_p(n-j)$ is unknown and needs to be estimated. However, $e_p(n-j)$ depends on $\omega(n)$ and cannot be computed before $\omega(n)$ is known. Here, we approximate it by the a priori error $e(n-j) = y(n-j) - x^T(n-j)\omega(n-1)$, and use $B(n)$ and $\mathrm{sgn}(e(n))$ built from $e(n-j)$, $j \in \{0, 1, \ldots, M-1\}$, as the estimates of $B(n)$ and $\mathrm{sgn}(e_p(n))$, respectively; they have the same form except that $e_p(n-j)$ is replaced by $e(n-j)$. In addition, the positive parameter $\mu$ in (12) is a step size, typically selected from $(0.005, 0.065)$, that is used to balance the filtering accuracy and the convergence rate. Hence, (12) can be rewritten as
$$\omega(n) = \omega(n-1) + \mu\, \frac{X(n) B(n)\, \mathrm{sgn}(e(n))}{\left\| X(n) B(n)\, \mathrm{sgn}(e(n)) \right\|_2}. \tag{13}$$
To this end, we obtain the key weight update equation, (13), for the proposed APGMC algorithm. For the other main parameters of the algorithm, a small $\lambda$, such as $\lambda \in (0, 0.01)$, is typically selected to overcome impulsive noise, and $\alpha$ and the projection order $M$ are selected from $(0.1, 6)$ and $(1, L/10)$, respectively.

Remark: We can establish links between APGMC and APSA. From (13), APGMC can be regarded as applying a weighted sign operation to the errors, i.e., $\exp\left(-\lambda |e(n-j)|^\alpha\right) |e(n-j)|^{\alpha-1}\, \mathrm{sgn}(e(n-j))$. Comparatively, APSA only contains the sign operation of the errors, namely $\mathrm{sgn}(e(n-j))$ [11]. When signals are disturbed by a great number of impulsive-noise samples, the weighting, acting as a gain, constrains the weight coefficients more tightly. This can be clearly seen from Figure 1, which compares the gain factors of APSA and APGMC with $\lambda = 0.0005$ and $\alpha \in \{0.5, 1, 1.5, 2, 3, 4\}$. From this figure, we can observe that: 1) the gain of APSA is always 1, which cannot efficiently suppress impulsive noise; 2) APGMC with $\alpha > 0$ can efficiently overcome the disturbance of large outliers; 3) smaller $\alpha$ values, such as 0.5 and 1, may lead to better filtering accuracy for APGMC than $\alpha \ge 2$, because a too large $\alpha$ makes the corresponding gain factors vanish, which is one of the main reasons that we use the GC-loss instead of the C-loss in APGMC. Furthermore, compared with APSA, our proposed scheme only requires a small amount of additional memory to store the $M$ diagonal elements of $B(n)$ in (13).
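To make the update (13) concrete, the following is a minimal NumPy sketch of one APGMC iteration. It is a sketch of the update as derived above, not the authors' reference implementation; the function name `apgmc_update` and the small regularizer `eps` guarding the denominator (mirroring the regularization parameter used in the simulations of Section 4) are our own choices.

```python
import numpy as np

def apgmc_update(w, X, y, mu=0.05, lam=5e-4, alpha=1.0, eps=1e-3):
    """One APGMC iteration, Eq. (13).

    w : (L,)   current weight estimate omega(n-1)
    X : (L, M) input matrix with columns [x(n), x(n-1), ..., x(n-M+1)]
    y : (M,)   desired samples [y(n), y(n-1), ..., y(n-M+1)]
    """
    e = y - X.T @ w                       # a priori errors e(n-j)
    ae = np.abs(e) + 1e-12                # guards |e|**(alpha-1) at e = 0 when alpha < 1
    b = np.exp(-lam * ae ** alpha) * ae ** (alpha - 1.0)  # diagonal of B(n)
    g = X @ (b * np.sign(e))              # X(n) B(n) sgn(e(n))
    return w + mu * g / (np.linalg.norm(g) + eps)         # Eq. (13)
```

Because $B(n)$ is diagonal, it is kept as a length-$M$ vector `b` rather than an $M \times M$ matrix, which is exactly the small additional memory noted in the Remark.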
2.3. Computational complexity

As can be seen from (13), the proposed APGMC requires no matrix inversion, and its computational complexity is almost the same as that of APSA [29], except for the weighting factors $\exp(-\lambda |e(n-j)|^\alpha)\, |e(n-j)|^{\alpha-1}$ applied to $\mathrm{sgn}(e(n-j))$. It is worth noting that $\exp(-\lambda |e(n-j)|^\alpha)$ can be approximated by $1 - \lambda |e(n-j)|^\alpha$ as $\lambda \to 0^+$. Since $\lambda$ is usually selected to be a small value close to 0 in implementation, we can estimate the complexity order $N_{\exp}^{\alpha,\lambda}$ of $\exp(-\lambda |e(n-j)|^\alpha)\, |e(n-j)|^{\alpha-1}$ from the complexity order of computing $(1 - \lambda |e(n-j)|^\alpha)\, |e(n-j)|^{\alpha-1}$. Table 1 compares the computational complexity of several algorithms in terms of multiplications and additions at each iteration. From this table, one can find that our proposed APGMC algorithm has computational complexity similar to that of the others.
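As a quick sanity check on the approximation underlying this complexity estimate, the first-order expansion of the weighting factor can be compared with its exact value numerically; the snippet below is illustrative only and not part of the paper.

```python
import numpy as np

# Exact weighting factor vs. its first-order approximation, valid as lam -> 0+
lam, alpha = 5e-4, 2.0
e = np.linspace(0.1, 10.0, 100)
exact = np.exp(-lam * e ** alpha) * e ** (alpha - 1.0)
approx = (1.0 - lam * e ** alpha) * e ** (alpha - 1.0)
print(np.max(np.abs(exact - approx)))   # on the order of lam**2 * |e|**(3*alpha - 1) / 2
```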
[Figure 1: The gain factor behaviors of APSA and APGMC with $\lambda = 0.0005$; gain factor value versus error, for APGMC with $\alpha \in \{0.5, 1, 1.5, 2, 3, 4\}$ and for APSA.]

Table 1: Computational complexity of APGMC and related algorithms. In the table, $L$ denotes the number of taps, $M$ denotes the projection order, and $N_{\exp}^{\alpha,\lambda}$ is a factor associated with the complexity of computing $\exp(-\lambda |e(n-j)|^\alpha)\, |e(n-j)|^{\alpha-1}$.
Algorithm | Number of Multiplications | Number of Additions
APSA | $ML + 2L + 1$ | $2ML + L$
APLM | $3ML + 2L + M$ | $3ML + L - 2$
APGMC, $\lambda = 0.0005$ (general $\alpha$) | $MN_{\exp}^{\alpha,\lambda} + 2ML + 2L + 1$ | $2ML + L$
APGMC, $\alpha = 1$ | $M + 2ML + 2L + 1$ | $2ML + L$
APGMC, $\alpha = 2$ | $3M + 2ML + 2L + 1$ | $2ML + L$
APGMC, $\alpha = 3$ | $6M + 2ML + 2L + 1$ | $2ML + L$
APGMC, $\alpha = 4$ | $9M + 2ML + 2L + 1$ | $2ML + L$
3. Stability analysis

We derive the necessary condition that the step size $\mu$ in (13) needs to satisfy to enable APGMC to converge. Let $\bar{\omega}(n) = \omega^o - \omega(n)$ be the weight error vector. Subtracting $\omega^o$ from both sides of (13), we can get
$$\bar{\omega}(n) = \bar{\omega}(n-1) - \mu\, \frac{X(n) B(n)\, \mathrm{sgn}(e(n))}{\left\| X(n) B(n)\, \mathrm{sgn}(e(n)) \right\|_2}. \tag{14}$$
Let $\xi(n) \triangleq E\left[\|\bar{\omega}(n)\|_2^2\right]$, where $E[x]$ denotes the expectation of $x$. Using (14), it can be computed as
$$\xi(n) = \xi(n-1) - 2\mu\, E\left[\frac{\mathrm{sgn}^T(e(n))\, B^T(n)\, X^T(n)\, \bar{\omega}(n-1)}{\left\| X(n) B(n)\, \mathrm{sgn}(e(n)) \right\|_2}\right] + \mu^2. \tag{15}$$
To guarantee the convergence and stability of APGMC, $\xi(n) - \xi(n-1)$ should be less than zero, and hence the bound of $\mu$ should satisfy
$$0 < \mu < 2E\left[\frac{\mathrm{sgn}^T(e(n))\, B^T(n)\, X^T(n)\, \bar{\omega}(n-1)}{\left\| X(n) B(n)\, \mathrm{sgn}(e(n)) \right\|_2}\right]. \tag{16}$$
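In a simulation where the true system $\omega^o$ is known, the expectation in (16) can be estimated by Monte Carlo averaging of the single-realization term below over independent runs. This diagnostic sketch (with our own function name `mu_bound_term`) is not part of the paper's analysis.

```python
import numpy as np

def mu_bound_term(w_true, w, X, y, lam=5e-4, alpha=1.0):
    """Single-realization value of the quantity inside the expectation in (16)."""
    e = y - X.T @ w                              # a priori errors e(n-j)
    ae = np.abs(e) + 1e-12
    b = np.exp(-lam * ae ** alpha) * ae ** (alpha - 1.0)   # diagonal of B(n)
    g = X @ (b * np.sign(e))                     # X(n) B(n) sgn(e(n))
    w_err = w_true - w                           # weight error vector
    return g @ w_err / (np.linalg.norm(g) + 1e-12)
```

Averaging `mu_bound_term` over many trials and doubling the result gives an empirical upper bound on $\mu$.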
4. Numerical simulations

We present simulation results for system identification to test the effectiveness of the proposed APGMC algorithm. The unknown weight vector $\omega^o$ is randomly generated following a zero-mean Gaussian distribution with variance 1, with a length of $L = 256$ taps. The input data $x(n)$ is produced by filtering a white, zero-mean Gaussian signal with unit variance through a first-order filter $H(z) = 1/(1 - 0.7z^{-1})$. The desired output $y(n) = x^T(n)\omega^o$ is contaminated by a background noise $v_b(n)$ and an impulsive noise $v_i(n)$. We use a white Gaussian model to generate $v_b(n)$, and set the signal-to-noise ratio between $y(n)$ and $v_b(n)$ to 30 dB. Let $b(n)$ be a Bernoulli process with the PDF $P[b(n) = 0] = 1 - p$ and $P[b(n) = 1] = p$, and let $g(n)$ be a zero-mean Gaussian signal with variance $\delta_g^2$. We then model $v_i(n)$ as the product of $b(n)$ and $g(n)$, i.e., $v_i(n) = b(n)g(n)$, which is known as the Bernoulli-Gaussian model. Here, we set $\delta_g^2 = 1000\delta_y^2$, where $\delta_y^2$ is the variance of $y(n)$. Unless stated otherwise, in all experiments we set $\lambda = 0.0005$ for APGMC, and the regularization parameter is 0.001 for all algorithms. To measure the filtering performance, we adopt the normalized mean-squared deviation (NMSD), given by $20\log_{10}\left(\|\omega^o - \omega(n)\|_2 / \|\omega^o\|_2\right)$. All simulation results are averaged over 100 independent trials.
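The setup above translates directly into code. The following is a minimal sketch of the data generation under the stated assumptions (all variable names are ours); the colored input follows the AR(1) recursion implied by $H(z)$.

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

rng = np.random.default_rng(0)
L, n_samples, p, snr_db = 256, 25000, 0.15, 30.0

w_true = rng.standard_normal(L)               # unknown system omega_o, N(0, 1) taps

# Colored input: white Gaussian noise through H(z) = 1 / (1 - 0.7 z^-1)
white = rng.standard_normal(n_samples)
x = np.empty(n_samples)
x[0] = white[0]
for n in range(1, n_samples):
    x[n] = 0.7 * x[n - 1] + white[n]

# Regressor matrix: row n holds [x(n), x(n-1), ..., x(n-L+1)] (zero prehistory)
xpad = np.concatenate([np.zeros(L - 1), x])
regs = sliding_window_view(xpad, L)[:, ::-1]
y_clean = regs @ w_true

# Background noise at 30 dB SNR, plus Bernoulli-Gaussian impulsive noise
var_y = np.var(y_clean)
vb = np.sqrt(var_y / 10 ** (snr_db / 10)) * rng.standard_normal(n_samples)
vi = (rng.random(n_samples) < p) * np.sqrt(1000 * var_y) * rng.standard_normal(n_samples)
y = y_clean + vb + vi
```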
We first investigate the impact of the projection order on the performance of APGMC. Figure 2 plots the NMSD curves for different projection orders $M \in \{1, 2, 4, 8, 16\}$, with $\mu = 0.05$ and $p = 0.15$. The steady-state NMSDs (SsNMSD), averaged over the last 1000 iterations, are also shown. From this figure, we can find that: 1) for APGMC with $\alpha = 1$, increasing $M$ from 1 to 16 accelerates the convergence rate without increasing the SsNMSD; 2) for APGMC with $\alpha = 2$, although a large $M$, such as 8 or 16, leads to faster convergence, the filtering accuracy degrades. Such behavior is consistent with the general observations for APSA and APA [1, 11]; 3) under the same projection order $M$, APGMC with $\alpha = 1$ achieves comparable and even better filtering performance than with $\alpha = 2$.
[Figure 2: The NMSD curves of APGMCs with $\alpha = 1$ and $\alpha = 2$ under different projection orders $M$; NMSD (dB) versus iterations. (a) $\alpha = 1$: SsNMSD = −26.526, −26.274, −26.401, −25.993, −25.362 dB for $M$ = 1, 2, 4, 8, 16. (b) $\alpha = 2$: SsNMSD = −27.523, −27.918, −27.135, −25.409, −22.690 dB for $M$ = 1, 2, 4, 8, 16.]
We then investigate the influence of the $\alpha$ value and the number of taps $L$ on the convergence behavior of APGMC. Here, we set $\alpha \in \{0.5, 1, 1.5, 2, 2.5, 3, 3.5, 4\}$, $M = 10$, and $L \in \{256, 512\}$. Figure 3 plots the NMSD curves of the tested algorithms, and reveals that: 1) APGMC with $\alpha = 0.5$ achieves the best filtering accuracy; 2) APGMC with $\alpha = 1.5$ achieves the fastest convergence rate, at the expense of increased steady-state misalignment; 3) in the case of $L = 256$, APGMC with $2.5 \le \alpha \le 4$ achieves similar filtering performance in terms of convergence rate and SsNMSD; 4) in the case of $L = 512$, increasing $\alpha$ from 2 to 4 decreases the convergence rate of APGMC, which indicates that the number of taps has a notable impact on APGMC with $\alpha \ge 2$.
[Figure 3: The NMSD curves of APGMCs with different values of $\alpha$ under $M = 10$, $L = 256$ and $L = 512$; NMSD (dB) versus iterations. (a) $L = 256$: SsNMSD = −27.368, −25.458, −22.788, −24.343, −25.226, −25.747, −26.100, −26.342 dB for $\alpha$ = 0.5, 1, 1.5, 2, 2.5, 3, 3.5, 4. (b) $L = 512$: SsNMSD = −28.367, −26.528, −24.034, −25.495, −26.298, −26.804, −27.090, −27.325 dB for $\alpha$ = 0.5, 1, 1.5, 2, 2.5, 3, 3.5, 4.]
Finally, we compare our proposed APGMC with related AP-type algorithms, including APLM, APSA, VSS-APA [6], MIP-APSA [15], RGS-PAP [25], LP-VPAP [24] and APV. In this part, the unknown weight vector $\omega^o$ is changed to $-\omega^o$ at iteration 50000, with $L = 256$ and $M = 10$. For a fair comparison, Table 2 lists the parameters, selected so that all algorithms achieve similar steady-state misalignment in the stage before the system changes (BSC). The identified system is disturbed by impulsive noise with $p = 0.01$. Table 2 summarizes the SsNMSD results of these algorithms in the BSC stage, and the corresponding NMSD curves are plotted in Figure 4(a). From them, we can make the following observations. Firstly, in the BSC stage: 1) compared with the other algorithms, APSA achieves the best filtering accuracy and LP-VPAP the worst steady-state misalignment, as shown in Table 2; 2) compared with APLM, MIP-APSA and VSS-APA, the three APGMC algorithms achieve less steady-state misalignment, at the expense of a slower convergence rate; 3) in comparison with APSA, RGS-PAP and APV, APGMC with $\alpha = 2$ achieves a faster convergence rate and similar SsNMSD. Secondly, in the stage after the system changes (ASC): 1) APLM, MIP-APSA, LP-VPAP and APGMC with $\alpha = 2$ have convergence behavior similar to that in the BSC stage; 2) APSA, APV and APGMC with $\alpha = 0.5$ have a reduced convergence rate; 3) VSS-APA, RGS-PAP and APGMC with $\alpha = 3.5$ have the worst convergence behavior.

Moreover, for the same system identification case, we increase $p$ from 0.01 to 0.5, which means more large outliers disturb the identified system. All algorithms use the same settings as before. Figure 4(b) plots the corresponding NMSD curves, and shows that: 1) APLM, LP-VPAP and VSS-APA cannot identify the system disturbed by a large number of impulsive-noise samples over the whole training process; 2) in comparison with APSA, APGMC with $\alpha = 2$ and $\alpha = 3.5$ achieves faster convergence and better steady-state misalignment in the BSC stage; 3) compared with the other algorithms, APGMC with $\alpha = 0.5$ achieves the best filtering accuracy over the whole training process, which validates that a smaller $\alpha$ can improve the filtering accuracy of APGMC; 4) MIP-APSA and APGMC with $\alpha = 2$ maintain their convergence behavior after the system changes, while APGMC realizes better filtering accuracy in the ASC stage; 5) in the ASC stage, APSA, APV, RGS-PAP and APGMC with $\alpha = 3.5$ have a reduced convergence rate, especially RGS-PAP. From Figure 4(a) and (b), we can conclude that our proposed APGMC with a proper $\alpha$ value is robust against impulsive noise, and can achieve excellent tracking performance even when the system abruptly changes.
[Figure 4: The NMSD curves for APGMC and related AP-type algorithms under different impulsive-noise environments; NMSD (dB) versus iterations. (a) $p = 0.01$; (b) $p = 0.5$. Compared algorithms: APGMC with $\alpha \in \{0.5, 2, 3.5\}$, APSA, APLM, MIP-APSA, VSS-APA, RGS-PAP, APV and LP-VPAP.]

Table 2: Parameters for APGMC and related AP-type algorithms and the corresponding SsNMSDs.
Algorithm | Parameters ($L = 256$, $M = 10$) | SsNMSD (dB), $p = 0.01$ | SsNMSD (dB), $p = 0.5$
APGMC ($\lambda = 0.0005$) | $\alpha = 0.5$, $\mu = 0.018$ | −31.81 | −30.64
APGMC ($\lambda = 0.0005$) | $\alpha = 2$, $\mu = 0.011$ | −30.96 | −27.80
APGMC ($\lambda = 0.0005$) | $\alpha = 3.5$, $\mu = 0.011$ | −31.39 | −29.84
APLM | $\mu = 0.3$, $N_w = 9$, $\lambda_\delta = 0.99$, $\delta_e^2(0) = 0.01$ | −28.68 | 29.65
APSA | $\mu = 0.01$ | −31.98 | −26.51
VSS-APA | S1, $\beta = 0.5$ | −27.878 | −0.18
APV | $\mu = 0.01$, $r = 10$ | −31.40 | −28.93
RGS-PAP | $\mu = 0.07$, $B = 256$ | −30.24 | −29.54
LP-VPAP | $\mu = 0.01$, $\rho = 4 \times 10^{-7}$ | −15.97 | 17.96
MIP-APSA | $\mu = 0.1$ | −27.34 | −12.14
5. Conclusions

We have presented the novel APGMC algorithm, which minimizes the GC-loss function based on GMC. By applying the Lagrange multiplier method to the $\ell_2$-norm constraint on the weight vector, APGMC does not require matrix inversion. Simulation results for system identification validate that APGMC is more robust and effective than existing AP-type algorithms in environments with a great number of large outliers. Furthermore, APGMC can achieve a flexible trade-off between filtering accuracy and convergence rate by adjusting its parameters. Generally, the parameter $\lambda$ is chosen such that $\lambda \to 0^+$ in GMC [17, 18]; therefore, the performance of APGMC can be enhanced by adjusting the value of $\alpha$.
Credit Author Statement

Ji Zhao: Methodology, Software, Writing - original draft, Writing - review & editing.
Hongbin Zhang: Project administration, Funding acquisition.
J. Andrew Zhang: Writing - original draft, Writing - review & editing.
Declaration of Competing Interest

The authors declare that there is no potential conflict of interest regarding this manuscript.
6. Acknowledgment

This work is supported in part by the National Natural Science Foundation of China (Grant No. 61971100) and in part by the China Scholarship Council (Grant No. 201806070013).
References

[1] K. Ozeki, T. Umeda, An adaptive filtering algorithm using an orthogonal projection to an affine subspace and its properties, Electronics & Communications in Japan 67 (5) (1984) 19–27.
[2] F. Yang, J. Yang, A comparative survey of fast affine projection algorithms, Digital Signal Processing 83 (2018) 297–322.
[3] S. Kim, S. Kong, W. Song, An affine projection algorithm with evolving order, IEEE Signal Processing Letters 16 (11) (2009) 937–940.
[4] Z. A. Bhotto, A. Antoniou, A family of shrinkage adaptive-filtering algorithms, IEEE Transactions on Signal Processing 61 (7) (2013) 1689–1697.
[5] Y. Zakharov, F. Albu, Coordinate descent iterations in fast affine projection algorithm, IEEE Signal Processing Letters 12 (5) (2005) 353–356.
[6] I. Song, P. G. Park, A variable step-size affine projection algorithm with a step-size scaler against impulsive measurement noise, Signal Processing 96 (1) (2014) 321–324.
[7] M. Shao, C. L. Nikias, Signal processing with fractional lower order moments: stable processes and their applications, Proceedings of the IEEE 81 (7) (1993) 986–1010.
[8] S. C. Chan, Y. X. Zou, A recursive least M-estimate algorithm for robust adaptive filtering in impulsive noise: fast algorithm and convergence performance analysis, IEEE Transactions on Signal Processing 52 (4) (2004) 975–991.
[9] B. Chen, L. Xing, B. Xu, H. Zhao, N. Zheng, J. C. Príncipe, Kernel risk-sensitive loss: definition, properties and application to robust adaptive filtering, IEEE Transactions on Signal Processing 65 (11) (2017) 2888–2901.
[10] F. Huang, J. Zhang, S. Zhang, Affine projection Versoria algorithm for robust adaptive echo cancellation in hands-free voice communications, IEEE Transactions on Vehicular Technology 67 (12) (2018) 11924–11935.
[11] T. Shao, Y. R. Zheng, J. Benesty, An affine projection sign algorithm robust against impulsive interferences, IEEE Signal Processing Letters 17 (4) (2010) 327–330.
[12] C. Ren, Z. Wang, Z. Zhao, A new variable step-size affine projection sign algorithm based on a posteriori estimation error analysis, Circuits, Systems, and Signal Processing 36 (5) (2017) 1989–2011.
[13] J. H. Kim, J. H. Chang, S. W. Nam, Affine projection sign algorithm with l1 minimization-based variable step-size, Signal Processing 105 (2014) 376–380.
[14] M. S. E. Abadi, H. Mesgarani, S. M. Khademiyan, Robust variable step-size affine projection sign algorithm against impulsive noises, Circuits, Systems, and Signal Processing (2019) 1–18.
[15] F. Albu, H. K. Kwan, Memory improved proportionate affine projection sign algorithm, Electronics Letters 48 (20) (2012) 1279–1281.
[16] P. Song, H. Zhao, Affine-projection-like M-estimate adaptive filter for robust filtering in impulse noise, IEEE Transactions on Circuits and Systems II: Express Briefs (2019) 1–1.
[17] B. Chen, L. Xing, H. Zhao, N. Zheng, J. C. Príncipe, Generalized correntropy for robust adaptive filtering, IEEE Transactions on Signal Processing 64 (13) (2016) 3376–3387.
[18] J. Zhao, H. Zhang, G. Wang, Fixed-point generalized maximum correntropy: convergence analysis and convex combination algorithms, Signal Processing 154 (2019) 64–73.
[19] M. Mohammadi, G. A. Hodtani, M. Yassi, A robust correntropy-based method for analyzing multisample aCGH data, Genomics 106 (5) (2015) 257–264.
[20] M. Mohammadi, H. S. Noghabi, G. A. Hodtani, H. R. Mashhadi, Robust and stable gene selection via maximum-minimum correntropy criterion, Genomics 107 (2) (2016) 83–87.
[21] A. Singh, J. C. Príncipe, Using correntropy as a cost function in linear adaptive filters, in: 2009 International Joint Conference on Neural Networks, 2009, pp. 2950–2955.
[22] W. Wang, J. Zhao, H. Qu, B. Chen, A correntropy inspired variable step-size sign algorithm against impulsive noises, Signal Processing 141 (2017) 168–175.
[23] L. Dang, B. Chen, S. Wang, Y. Gu, J. C. Príncipe, Kernel Kalman filtering with conditional embedding and maximum correntropy criterion, IEEE Transactions on Circuits and Systems I: Regular Papers (2019) 1–13.
[24] Z. Jiang, Y. Li, X. Huang, Z. Jin, A sparsity-aware variable kernel width proportionate affine projection algorithm for identifying sparse systems, Symmetry 11 (10) (2019) 1218.
[25] Z. Jiang, Y. Li, Y. Zakharov, A robust group-sparse proportionate affine projection algorithm with maximum correntropy criterion for channel estimation, in: 2019 27th European Signal Processing Conference (EUSIPCO), IEEE, 2019, pp. 1–5.
[26] W. Liu, P. P. Pokharel, J. C. Príncipe, Correntropy: properties and applications in non-Gaussian signal processing, IEEE Transactions on Signal Processing 55 (11) (2007) 5286–5298.
[27] A. Singh, R. Pokharel, J. C. Príncipe, The C-loss function for pattern classification, Pattern Recognition 47 (2014) 441–453.
[28] J. Zhao, H. Zhang, Kernel recursive generalized maximum correntropy, IEEE Signal Processing Letters 24 (12) (2017) 1832–1836.
[29] J. Ni, F. Li, Efficient implementation of the affine projection sign algorithm, IEEE Signal Processing Letters 19 (1) (2012) 24–26.