Chaos, Solitons and Fractals 35 (2008) 550–561 www.elsevier.com/locate/chaos
Global convergence of an adaptive minor component extraction algorithm

Dezhong Peng, Zhang Yi*

Computational Intelligence Laboratory, School of Computer Science and Engineering, University of Electronic Science and Technology of China, Chengdu 610054, PR China

Accepted 22 May 2006
Abstract

The convergence of neural network minor component analysis (MCA) learning algorithms is crucial for practical applications. In this paper, we analyze the global convergence of an adaptive minor component extraction algorithm via a corresponding deterministic discrete time (DDT) system. It is shown that, if the learning rate satisfies certain conditions, almost all trajectories of the DDT system are bounded and converge to the minor component of the autocorrelation matrix of the input data. Simulations are carried out to illustrate the results.
© 2006 Elsevier Ltd. All rights reserved.
1. Introduction

The minor component is the direction in which the input data have the smallest variance. Extracting the minor component from input data is an important task in many signal processing fields, for example, moving target indication [1], clutter cancellation [2], computer vision [3], curve and surface fitting [4], digital beamforming [5], frequency estimation [6], and bearing estimation [7]. Neural networks can be used to extract the minor component from input data online. Many neural network minor component analysis (MCA) learning algorithms have been proposed and analyzed. These algorithms adaptively update the weight vector of a neuron so that it converges to the minor component. In [8], Oja proposed an interesting MCA learning algorithm based on the well-known anti-Hebbian rule. Since Oja's pioneering work, many important MCA algorithms have been proposed, by Luo and Unbehauen [9], Xu et al. [4], and Cirrincione et al. [10]. Unfortunately, these MCA algorithms may suffer from the norm divergence problem [10,11]. To guarantee convergence and stability, self-stabilizing MCA learning algorithms were proposed by Douglas et al. [12] and Möller [13]. Recently, Ouyang et al. [14] proposed an adaptive minor component extraction (AMEX) algorithm that is globally convergent.
This work was supported by the National Science Foundation of China under Grant 60471055 and the Specialized Research Fund for the Doctoral Program of Higher Education under Grant 20040614017.
* Corresponding author.
E-mail addresses: [email protected] (D. Peng), [email protected] (Z. Yi).
0960-0779/$ - see front matter © 2006 Elsevier Ltd. All rights reserved. doi:10.1016/j.chaos.2006.05.051
The convergence of neural network learning algorithms is crucial for practical applications. The dynamical behaviors of many neural networks have been extensively analyzed [15–19]. However, the dynamics of MCA learning algorithms, which are described by stochastic discrete time (SDT) systems, are difficult to study directly. Traditionally, the convergence of stochastic discrete learning algorithms is analyzed by studying a corresponding continuous-time ordinary differential equation (ODE) [20]. This method requires the learning rate to converge to zero [20]. However, this restrictive condition cannot be satisfied in many practical applications, where the learning rate is usually taken to be a constant because of round-off limitations and tracking requirements. Recently, a deterministic discrete time (DDT) method has been used to analyze the dynamics of stochastic learning algorithms [21–23]. This method transforms a stochastic learning algorithm into a corresponding DDT system and does not require the learning rate to approach zero. The convergence analysis of the DDT system can shed light on the convergence characteristics of the original SDT system. In this paper, we use the DDT method to study the global convergence of the AMEX learning algorithm [14] with a constant learning rate.

This paper is organized as follows. In Section 2, the DDT formulation and some preliminaries are presented. The global convergence of the AMEX learning algorithm is analyzed via a corresponding DDT system in Section 3. In Section 4, simulations are carried out to illustrate the results. Finally, conclusions are drawn in Section 5.
2. DDT formulation and preliminaries

Consider a simple linear neuron with weight vector $w(k) \in R^n$, input $x(k) \in R^n$, and output $y(k) = w^T(k)x(k)$. The input sequence $\{x(k) \mid x(k) \in R^n\ (k = 0, 1, 2, \ldots)\}$ of the neuron is a zero-mean stationary stochastic process. Neural network MCA learning algorithms adaptively update the weight vector $w(k)$ so that it converges to the minor component of the input data. In [14], an adaptive minor component extraction algorithm, called AMEX, is proposed as follows:

$$w(k+1) = w(k) - \eta\left[x(k)x^T(k)w(k) - \frac{w(k)}{w^T(k)w(k)}\right], \qquad (1)$$

where $\eta > 0$ is the learning rate. Applying the conditional expectation operator $E\{w(k+1) \mid w(0), x(i), i < k\}$ to (1) and identifying the conditional expected value with the next iterate, we obtain the following DDT system:

$$w(k+1) = w(k) - \eta\left[Rw(k) - \frac{w(k)}{w^T(k)w(k)}\right], \qquad (2)$$

where $R = E[x(k)x^T(k)]$ is the autocorrelation matrix of the input data $\{x(k) \mid x(k) \in R^n\ (k = 0, 1, 2, \ldots)\}$. The main purpose of this paper is to study the convergence of the DDT system (2) when the learning rate $\eta$ is a constant.

Since the autocorrelation matrix $R$ is symmetric and nonnegative definite, each eigenvalue $\lambda_i\ (i = 1, 2, \ldots, n)$ of $R$ is nonnegative, and the corresponding unit eigenvectors $v_i\ (i = 1, 2, \ldots, n)$ form an orthonormal basis of $R^n$. We assume that the eigenvalues are ordered so that $\lambda_1 > \lambda_2 > \cdots > \lambda_n \geq 0$. In many practical applications, because of noise, the smallest eigenvalue of the autocorrelation matrix of the input data is usually larger than zero. Without loss of generality, we can assume $\lambda_n > 0$. Since $\{v_i \mid i = 1, 2, \ldots, n\}$ is an orthonormal basis of $R^n$, the weight vector $w(k)$ can be represented as

$$w(k) = \sum_{i=1}^{n} z_i(k)v_i \qquad (k \geq 0), \qquad (3)$$

where $z_i(k)\ (i = 1, 2, \ldots, n)$ are the coordinates of $w(k)$ in this basis. Substituting (3) into (2), we obtain

$$z_i(k+1) = \left[1 - \eta\lambda_i + \frac{\eta}{w^T(k)w(k)}\right]z_i(k) \qquad (i = 1, 2, \ldots, n), \qquad (4)$$

for all $k \geq 0$.
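For concreteness, the stochastic update (1) and its deterministic counterpart (2) can be sketched in a few lines of NumPy. This is only an illustrative sketch under assumed toy data — the construction of R, the learning rate, and the iteration count are our own choices, not taken from the paper:

```python
import numpy as np

def amex_sdt_step(w, x, eta):
    # Stochastic AMEX update (1): w <- w - eta * (x x^T w - w / (w^T w)).
    return w - eta * ((x @ w) * x - w / (w @ w))

def amex_ddt_step(w, R, eta):
    # DDT update (2): w <- w - eta * (R w - w / (w^T w)).
    return w - eta * (R @ w - w / (w @ w))

rng = np.random.default_rng(0)
# Assumed example: R built with known eigenvalues 0.5 < 1.5 < 3.0 < 6.0,
# so the minor component is Q[:, 0] with lambda_n = 0.5.
Q, _ = np.linalg.qr(rng.standard_normal((4, 4)))
R = Q @ np.diag([0.5, 1.5, 3.0, 6.0]) @ Q.T
eta = 0.04                                # eta * lambda_1 = 0.24 <= 0.25

w = rng.standard_normal(4)
for _ in range(2000):
    w = amex_ddt_step(w, R, eta)
# w now approximates +/- v_n / sqrt(lambda_n).
```

Iterating the DDT step drives `w` toward the minor eigenvector with norm $1/\sqrt{\lambda_n}$, in line with the analysis of Section 3.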
3. Convergence analysis

As discussed in Section 1, some MCA learning algorithms may suffer from the norm divergence problem. To guarantee non-divergence of the weight vector in the DDT system (2), we first introduce the notion of an invariant set.
Definition 1. A compact set $S \subset R^n$ is called an invariant set of (2) if, for any $w(0) \in S$, the trajectory of (2) starting from $w(0)$ remains in $S$ for all $k \geq 0$.

Clearly, an invariant set provides an important method to guarantee the boundedness of (2). Next, we prove some lemmas and theorems that provide an invariant set of (2).

Lemma 1. If $\eta\lambda_1 \leq 0.25$, then it holds that

$$\frac{1}{2}\sqrt{\frac{\eta}{1-\eta\lambda_1}} - \eta\sqrt{\lambda_n} > 0.$$

See the Appendix for the proof.

Lemma 2. If $\eta\lambda_1 \leq 0.25$, then it holds that

$$\|w(k+1)\| \geq 2\sqrt{\eta(1-\eta\lambda_1)},$$

for all $k \geq 0$. See the Appendix for the proof.

Theorem 1. Denote

$$S = \left\{w \,\Big|\, 2\sqrt{\eta(1-\eta\lambda_1)} \leq \|w\| \leq \sqrt{\frac{1}{\lambda_n}} + \frac{1}{2}\sqrt{\frac{\eta}{1-\eta\lambda_1}} - \eta\sqrt{\lambda_n}\right\}.$$

If $\eta\lambda_1 \leq 0.25$, then $S$ is an invariant set of (2).

Proof. Since $\eta\lambda_1 \leq 0.25$,

$$2\sqrt{\eta(1-\eta\lambda_1)} \leq \sqrt{\frac{1}{\lambda_n}}.$$

Using Lemma 1, clearly, $S$ is not an empty set. Given any $k \geq 0$, suppose $w(k) \in S$, i.e.,

$$2\sqrt{\eta(1-\eta\lambda_1)} \leq \|w(k)\| \leq \sqrt{\frac{1}{\lambda_n}} + \frac{1}{2}\sqrt{\frac{\eta}{1-\eta\lambda_1}} - \eta\sqrt{\lambda_n}.$$

Since $\eta\lambda_1 \leq 0.25$, it holds that

$$1 - \eta\lambda_i + \frac{\eta}{\|w(k)\|^2} > 0 \qquad (i = 1, 2, \ldots, n). \qquad (5)$$

Using Lemma 2, then

$$\|w(k+1)\| \geq 2\sqrt{\eta(1-\eta\lambda_1)}. \qquad (6)$$

Next, two cases will be considered to complete the proof.

Case 1: $2\sqrt{\eta(1-\eta\lambda_1)} \leq \|w(k)\| \leq \sqrt{1/\lambda_n}$.
From (3)–(5), it follows that

$$\begin{aligned}
\|w(k+1)\|^2 &= \sum_{i=1}^{n} z_i^2(k+1) \\
&= \sum_{i=1}^{n} z_i^2(k)\left[1-\eta\lambda_i+\frac{\eta}{w^T(k)w(k)}\right]^2 \\
&\leq \sum_{i=1}^{n} z_i^2(k)\left[1-\eta\lambda_n+\frac{\eta}{\|w(k)\|^2}\right]^2 \\
&= \|w(k)\|^2\left[1-\eta\lambda_n+\frac{\eta}{\|w(k)\|^2}\right]^2 \\
&= \left[(1-\eta\lambda_n)\|w(k)\| + \frac{\eta}{\|w(k)\|}\right]^2 \\
&\leq \left[(1-\eta\lambda_n)\sqrt{\frac{1}{\lambda_n}} + \frac{\eta}{2\sqrt{\eta(1-\eta\lambda_1)}}\right]^2 \\
&= \left[\sqrt{\frac{1}{\lambda_n}} + \frac{1}{2}\sqrt{\frac{\eta}{1-\eta\lambda_1}} - \eta\sqrt{\lambda_n}\right]^2,
\end{aligned}$$

i.e.,

$$\|w(k+1)\| \leq \sqrt{\frac{1}{\lambda_n}} + \frac{1}{2}\sqrt{\frac{\eta}{1-\eta\lambda_1}} - \eta\sqrt{\lambda_n}. \qquad (7)$$

Case 2: $\sqrt{1/\lambda_n} < \|w(k)\| \leq \sqrt{1/\lambda_n} + \frac{1}{2}\sqrt{\eta/(1-\eta\lambda_1)} - \eta\sqrt{\lambda_n}$.
It follows from (3)–(5) that

$$\begin{aligned}
\|w(k+1)\|^2 &= \sum_{i=1}^{n} z_i^2(k+1) \\
&= \sum_{i=1}^{n} z_i^2(k)\left[1-\eta\lambda_i+\frac{\eta}{w^T(k)w(k)}\right]^2 \\
&\leq \sum_{i=1}^{n} z_i^2(k)\left[1-\eta\lambda_n+\frac{\eta}{\|w(k)\|^2}\right]^2 \\
&= \|w(k)\|^2\left[1-\eta\lambda_n+\frac{\eta}{\|w(k)\|^2}\right]^2 \\
&\leq \|w(k)\|^2,
\end{aligned}$$

where the last step holds because $\|w(k)\|^2 > 1/\lambda_n$ implies $0 < 1-\eta\lambda_n+\eta/\|w(k)\|^2 < 1$. And then,

$$\|w(k+1)\| \leq \|w(k)\| \leq \sqrt{\frac{1}{\lambda_n}} + \frac{1}{2}\sqrt{\frac{\eta}{1-\eta\lambda_1}} - \eta\sqrt{\lambda_n}. \qquad (8)$$

From (6)–(8), it holds that if $w(k) \in S$, then $w(k+1) \in S$. This completes the proof. □
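The invariant set of Theorem 1 is easy to check numerically. The sketch below is only an illustration under an assumed random positive definite matrix (not one of the paper's experiments): it computes the two bounds of $S$ and verifies that a trajectory started inside $S$ stays inside.

```python
import numpy as np

def invariant_set_bounds(R, eta):
    # Lower and upper bounds of the set S in Theorem 1.
    lam = np.linalg.eigvalsh(R)          # ascending: lam[0]=lambda_n, lam[-1]=lambda_1
    assert eta * lam[-1] <= 0.25         # condition of Theorem 1
    low = 2.0 * np.sqrt(eta * (1.0 - eta * lam[-1]))
    up = (np.sqrt(1.0 / lam[0])
          + 0.5 * np.sqrt(eta / (1.0 - eta * lam[-1]))
          - eta * np.sqrt(lam[0]))
    return low, up

rng = np.random.default_rng(1)
A = rng.standard_normal((4, 4))
R = A @ A.T + 0.5 * np.eye(4)            # assumed symmetric positive definite matrix
eta = 0.2 / np.linalg.eigvalsh(R)[-1]    # eta * lambda_1 = 0.2 <= 0.25
low, up = invariant_set_bounds(R, eta)

w = rng.standard_normal(4)
w *= 0.5 * (low + up) / np.linalg.norm(w)   # start with norm inside [low, up]
for _ in range(2000):
    w = w - eta * (R @ w - w / (w @ w))      # DDT update (2)
    assert low - 1e-9 <= np.linalg.norm(w) <= up + 1e-9  # Theorem 1: w stays in S
```

The in-loop assertion never fires, as Theorem 1 guarantees for any symmetric positive definite $R$ with $\eta\lambda_1 \leq 0.25$.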
Theorem 2. Suppose that $\eta\lambda_1 \leq 0.25$. If $w(0) \notin S$, then there exists a positive integer $k^*$ such that $w(k) \in S$ for all $k \geq k^*$.

Proof. Since $\eta\lambda_1 \leq 0.25$, it holds that, for all $k \geq 0$,

$$1 - \eta\lambda_i + \frac{\eta}{\|w(k)\|^2} > 0 \qquad (i = 1, 2, \ldots, n). \qquad (9)$$
Using Lemma 2, it holds that

$$\|w(k)\| \geq 2\sqrt{\eta(1-\eta\lambda_1)}, \qquad (10)$$

for all $k \geq 1$. Denote

$$c = \sqrt{\frac{1}{\lambda_n}} + \frac{1}{2}\sqrt{\frac{\eta}{1-\eta\lambda_1}} - \eta\sqrt{\lambda_n}.$$

Using Lemma 1, clearly,

$$c > \sqrt{\frac{1}{\lambda_n}},$$

and then

$$0 < 1 - \eta\lambda_n + \frac{\eta}{c^2} < 1. \qquad (11)$$

Suppose that $\|w(k)\| > c$. It follows from (3), (4), and (9) that

$$\begin{aligned}
\|w(k+1)\|^2 &= \sum_{i=1}^{n} z_i^2(k+1) \\
&= \sum_{i=1}^{n} z_i^2(k)\left[1-\eta\lambda_i+\frac{\eta}{w^T(k)w(k)}\right]^2 \\
&\leq \sum_{i=1}^{n} z_i^2(k)\left[1-\eta\lambda_n+\frac{\eta}{\|w(k)\|^2}\right]^2 \\
&= \|w(k)\|^2\left[1-\eta\lambda_n+\frac{\eta}{\|w(k)\|^2}\right]^2 \\
&< \left[1-\eta\lambda_n+\frac{\eta}{c^2}\right]^2\|w(k)\|^2,
\end{aligned}$$

i.e.,

$$\|w(k+1)\| < \left[1-\eta\lambda_n+\frac{\eta}{c^2}\right]\|w(k)\|. \qquad (12)$$

From (11) and (12), there must exist a positive integer $k^*$ such that

$$\|w(k^*)\| \leq \sqrt{\frac{1}{\lambda_n}} + \frac{1}{2}\sqrt{\frac{\eta}{1-\eta\lambda_1}} - \eta\sqrt{\lambda_n}.$$

From (10), clearly, $w(k^*) \in S$. Since $S$ is an invariant set, it holds that $w(k) \in S$ for all $k \geq k^*$. The proof is completed. □
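The contraction argument of Theorem 2 can likewise be observed numerically. The sketch below uses an assumed random positive definite matrix (our own illustration, not the paper's experiment): a trajectory started far outside $S$ shrinks by at least the factor in (12) until it enters $S$.

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((4, 4))
R = A @ A.T + 0.5 * np.eye(4)            # assumed positive definite matrix
lam = np.linalg.eigvalsh(R)              # ascending eigenvalues
eta = 0.2 / lam[-1]                      # eta * lambda_1 = 0.2

low = 2.0 * np.sqrt(eta * (1.0 - eta * lam[-1]))
c = (np.sqrt(1.0 / lam[0])
     + 0.5 * np.sqrt(eta / (1.0 - eta * lam[-1]))
     - eta * np.sqrt(lam[0]))            # upper bound of S
rho = 1.0 - eta * lam[0] + eta / c**2    # contraction factor of (12); rho < 1

w = 100.0 * rng.standard_normal(4)       # w(0) far outside S
k_star = None
for k in range(1, 10000):
    w = w - eta * (R @ w - w / (w @ w))  # DDT update (2)
    if low <= np.linalg.norm(w) <= c:
        k_star = k                       # first iteration with w(k) in S
        break
```

By Lemma 2 the norm never falls below `low` after the first step, so the first iterate whose norm drops to `c` lands inside $S$, and `k_star` is found in finitely many steps.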
Theorems 1 and 2 show the boundedness of the weight vector norm in (2). Next, we further analyze the convergence of the DDT system (2). From (3), $w(k)$ can be represented as

$$w(k) = \sum_{i=1}^{n-1} z_i(k)v_i + z_n(k)v_n \qquad (k \geq 0).$$

Clearly, the convergence of $w(k)$ depends on the convergence of $z_i(k)\ (i = 1, 2, \ldots, n)$. Next, we prove some lemmas and theorems to show the convergence of $z_i(k)\ (i = 1, 2, \ldots, n-1)$ and $z_n(k)$, respectively.
Theorem 3. Suppose that $\eta\lambda_1 \leq 0.25$. If $w^T(0)v_n \neq 0$, then it holds that

$$\lim_{k\to\infty} z_i(k) = 0 \qquad (i = 1, 2, \ldots, n-1).$$

Proof. Using Lemma 2, clearly,

$$\|w(k)\|^2 \geq 4\eta(1-\eta\lambda_1), \qquad (13)$$

for all $k \geq 1$. Since $\eta\lambda_1 \leq 0.25$, it holds that

$$1 - \eta\lambda_i + \frac{\eta}{\|w(k)\|^2} > 0 \qquad (i = 1, 2, \ldots, n), \qquad (14)$$

for all $k \geq 0$. From (13) and (14), it follows that

$$\begin{aligned}
\frac{1-\eta\lambda_i+\eta/\|w(k)\|^2}{1-\eta\lambda_n+\eta/\|w(k)\|^2}
&= 1 - \frac{\eta(\lambda_i-\lambda_n)}{1-\eta\lambda_n+\eta/\|w(k)\|^2} \\
&\leq 1 - \frac{\eta(\lambda_i-\lambda_n)}{1-\eta\lambda_n+\eta/[4\eta(1-\eta\lambda_1)]} \\
&\leq 1 - \frac{\eta(\lambda_{n-1}-\lambda_n)}{1-\eta\lambda_n+1/(4-4\eta\lambda_1)} \qquad (i = 1, 2, \ldots, n-1), \qquad (15)
\end{aligned}$$

for all $k \geq 1$. Denote

$$r = \left[1 - \frac{\eta(\lambda_{n-1}-\lambda_n)}{1-\eta\lambda_n+1/(4-4\eta\lambda_1)}\right]^2.$$

Clearly, $r$ is a constant and $0 \leq r < 1$. Since $w^T(0)v_n \neq 0$, then $z_n(0) \neq 0$. From (4) and (14), clearly, $z_n(k) \neq 0$ for all $k \geq 0$. From (4), (14), and (15), it follows that

$$\frac{z_i^2(k+1)}{z_n^2(k+1)} = \left[\frac{1-\eta\lambda_i+\eta/\|w(k)\|^2}{1-\eta\lambda_n+\eta/\|w(k)\|^2}\right]^2 \frac{z_i^2(k)}{z_n^2(k)}
\leq r\,\frac{z_i^2(k)}{z_n^2(k)} \leq r^k\,\frac{z_i^2(1)}{z_n^2(1)} \qquad (i = 1, 2, \ldots, n-1),$$

for all $k \geq 1$. Thus,

$$\lim_{k\to\infty} \frac{z_i^2(k)}{z_n^2(k)} = 0 \qquad (i = 1, 2, \ldots, n-1).$$

From Theorems 1 and 2, $z_n(k)$ must be bounded, and then

$$\lim_{k\to\infty} z_i(k) = 0 \qquad (i = 1, 2, \ldots, n-1).$$

This completes the proof. □

Lemma 3. If $\eta\lambda_1 \leq 0.25$,
then

$$(1-\eta\lambda_n)x + \frac{\eta}{x} \leq \sqrt{\frac{1}{\lambda_n}},$$

for all $x \in \left[2\sqrt{\eta(1-\eta\lambda_n)}, \sqrt{1/\lambda_n}\right]$. See the Appendix for the proof.

Theorem 4. Suppose that $\eta\lambda_1 \leq 0.25$.
If $w^T(0)v_n \neq 0$, then it holds that

$$\lim_{k\to\infty} z_n(k) = \pm\sqrt{\frac{1}{\lambda_n}},$$

where the sign is determined by the initial conditions.

Proof. Using Theorem 3, it holds from (3) that $w(k)$ converges to the direction of the minor component $v_n$ as $k \to \infty$. Then, we can suppose that there exists a large enough positive integer $k_0$ such that, for all $k \geq k_0$,

$$w(k) \approx z_n(k)v_n. \qquad (16)$$

Substituting (16) into (2), we obtain

$$z_n(k+1) = z_n(k)\left[1-\eta\lambda_n+\frac{\eta}{z_n^2(k)}\right], \qquad (17)$$

for all $k \geq k_0$. Since $\eta\lambda_1 \leq 0.25$, clearly,

$$1-\eta\lambda_n+\frac{\eta}{z_n^2(k)} > 0 \qquad (k \geq k_0). \qquad (18)$$

Thus, from (17) and (18), it holds that

$$|z_n(k+1)| = |z_n(k)|\left[1-\eta\lambda_n+\frac{\eta}{z_n^2(k)}\right], \qquad (19)$$

for all $k \geq k_0$. Clearly,

$$|z_n(k+1)| = (1-\eta\lambda_n)|z_n(k)| + \frac{\eta}{|z_n(k)|} \geq 2\sqrt{\eta(1-\eta\lambda_n)}, \qquad (20)$$

for all $k \geq k_0$. From (19), it follows that, for all $k \geq k_0$,

$$\frac{|z_n(k+1)|}{|z_n(k)|} = 1 + \eta\left[\frac{1}{z_n^2(k)} - \lambda_n\right]
\begin{cases}
> 1, & \text{if } |z_n(k)| < \sqrt{1/\lambda_n}, \\
= 1, & \text{if } |z_n(k)| = \sqrt{1/\lambda_n}, \\
< 1, & \text{if } |z_n(k)| > \sqrt{1/\lambda_n}.
\end{cases} \qquad (21)$$

From (21), clearly, $\sqrt{1/\lambda_n}$ is a potential stable equilibrium point of (19). Next, three cases will be considered to complete the proof.

Case 1: $|z_n(k_0)| \leq \sqrt{1/\lambda_n}$. Using Lemma 3, it holds from (19) and (20) that

$$|z_n(k+1)| = (1-\eta\lambda_n)|z_n(k)| + \frac{\eta}{|z_n(k)|} \leq \sqrt{\frac{1}{\lambda_n}}, \qquad (22)$$

for all $k \geq k_0$. From (21) and (22), $|z_n(k)|$ is monotone increasing and bounded above for all $k \geq k_0$. Thus, $|z_n(k)|$ must converge to the equilibrium point $\sqrt{1/\lambda_n}$ as $k \to \infty$.

Case 2: $|z_n(k)| > \sqrt{1/\lambda_n}$ for all $k \geq k_0$. From (21), $|z_n(k)|$ is monotone decreasing and bounded below for all $k \geq k_0$. Clearly, $|z_n(k)|$ converges to the equilibrium point $\sqrt{1/\lambda_n}$ as $k \to \infty$.

Case 3: $|z_n(k_0)| > \sqrt{1/\lambda_n}$ and there exists a positive integer $N\ (N > k_0)$ such that $|z_n(N)| \leq \sqrt{1/\lambda_n}$. Since $|z_n(N)| \leq \sqrt{1/\lambda_n}$, in the same way as Case 1, it can be proven that $|z_n(k)|$ must converge to the equilibrium point $\sqrt{1/\lambda_n}$ as $k \to \infty$.

From the analysis of the above three cases, we obtain

$$\lim_{k\to\infty} |z_n(k)| = \sqrt{\frac{1}{\lambda_n}}.$$

It holds from (17) and (18) that $z_n(k) > 0$ for all $k > k_0$ if $z_n(k_0) > 0$, and $z_n(k) < 0$ for all $k > k_0$ if $z_n(k_0) < 0$. Thus, if $|z_n(k)|$ converges, $z_n(k)$ also converges. This completes the proof. □

Using Theorems 3 and 4, we can easily obtain the following convergence result for the DDT system (2).

Theorem 5. Suppose that $\eta\lambda_1 \leq 0.25$. If $w^T(0)v_n \neq 0$, then it holds that

$$\lim_{k\to\infty} w(k) = \pm\sqrt{\frac{1}{\lambda_n}}\,v_n.$$
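The one-dimensional recursion (17), which governs the limiting behaviour of $z_n(k)$, can be iterated directly. A small sketch ($\eta$ and $\lambda_n$ are assumed example values, not from the paper) shows $|z_n|$ converging to $\sqrt{1/\lambda_n}$ from below, from above, and from a negative start, with the sign preserved:

```python
import numpy as np

eta, lam_n = 0.04, 0.5                  # assumed values, eta * lambda_n well below 0.25
z_star = 1.0 / np.sqrt(lam_n)           # equilibrium of (17): sqrt(1/lambda_n)

finals = {}
for z0 in (0.2, 1.0, 5.0, -3.0):        # starts below, near, above; one negative
    z = z0
    for _ in range(2000):
        z = z * (1.0 - eta * lam_n + eta / z**2)   # recursion (17)
    finals[z0] = z
# |z| converges to z_star in every case; the sign of z0 is preserved,
# matching the three cases in the proof of Theorem 4.
```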
4. Simulation results

4.1. Illustration of an invariant set

Theorem 1 provides an invariant set $S$ that guarantees non-divergence of the DDT system (2). Let us first illustrate the invariance of $S$. We randomly generate a $4 \times 4$ symmetric nonnegative definite matrix as

$$R_1 = \begin{bmatrix}
4.8171 & 4.6284 & 4.5330 & 3.4158 \\
4.6284 & 5.2653 & 5.3094 & 3.4446 \\
4.5330 & 5.3094 & 6.0537 & 4.0071 \\
3.4158 & 3.4446 & 4.0071 & 3.7815
\end{bmatrix}.$$
[Fig. 1. Invariance of S. Axes: number of iterations (0–2000) vs. norm of w(k); lower bound 0.1812, upper bound 2.8232.]
The largest eigenvalue $\lambda_1$ and the smallest eigenvalue $\lambda_n$ of $R_1$ are 17.8747 and 0.1302, respectively. The learning rate $\eta$ is taken as 0.01, so that $\eta\lambda_1 \leq 0.25$. Using Theorem 1, clearly,

$$S = \{w(k) \mid 0.1812 \leq \|w(k)\| \leq 2.8232\}.$$

Fig. 1 shows that 50 trajectories of (2) starting from points in $S$ all remain in $S$ for 2000 iterations, which clearly illustrates the invariance of $S$.

4.2. Illustration of global convergence

To illustrate the global convergence of the DDT system (2), we randomly select two initial weight vectors as

$$w_1(0) = [0.0158\quad 0.0342\quad 0.0042\quad 0.0327]^T$$

and

$$w_2(0) = [12.6160\quad 27.3240\quad 3.3960\quad 26.1320]^T.$$

Since $\|w_1(0)\| = 0.05$ and $\|w_2(0)\| = 40$, clearly, $w_1(0) \notin S$ and $w_2(0) \notin S$. Figs. 2 and 3 show the convergence of the weight vector $w(k)$ starting from $w_1(0)$ and $w_2(0)$, respectively. In the two figures, $z_i(k) = w^T(k)v_i\ (i = 1, 2, 3, 4)$ is the coordinate of $w(k)$ in the direction of the eigenvector $v_i$ of $R_1$. In both simulations, $z_i(k)\ (i = 1, 2, 3)$ converges to zero and $z_4(k)$ converges to a fixed value. Thus, although the initial weight vectors are not selected from $S$, the trajectories of (2) still converge to the minor component, i.e., the system (2) is globally convergent.

[Fig. 2. Convergence of w(k) starting from w1(0). Axes: number of iterations (0–2500) vs. the components z1(k), z2(k), z3(k), z4(k) of w(k).]

[Fig. 3. Convergence of w(k) starting from w2(0). Axes: number of iterations (0–3000) vs. the components z1(k), z2(k), z3(k), z4(k) of w(k).]
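The experiments of this section can be reproduced approximately with the sketch below. The matrix $R_1$, the learning rate, and the two initial vectors are taken from the text; the iteration count and the checks are our own additions, and this is not the authors' code:

```python
import numpy as np

R1 = np.array([[4.8171, 4.6284, 4.5330, 3.4158],
               [4.6284, 5.2653, 5.3094, 3.4446],
               [4.5330, 5.3094, 6.0537, 4.0071],
               [3.4158, 3.4446, 4.0071, 3.7815]])
eta = 0.01
lam, V = np.linalg.eigh(R1)             # ascending; text reports 0.1302 ... 17.8747

# Bounds of the invariant set S (Theorem 1); the text reports 0.1812 and 2.8232.
low = 2.0 * np.sqrt(eta * (1.0 - eta * lam[-1]))
up = (np.sqrt(1.0 / lam[0])
      + 0.5 * np.sqrt(eta / (1.0 - eta * lam[-1]))
      - eta * np.sqrt(lam[0]))

# Global convergence from the two initial vectors of Section 4.2.
finals = []
for w0 in ([0.0158, 0.0342, 0.0042, 0.0327],
           [12.6160, 27.3240, 3.3960, 26.1320]):
    w = np.array(w0)
    for _ in range(20000):
        w = w - eta * (R1 @ w - w / (w @ w))   # DDT update (2)
    finals.append(w)
# Each final w approximates +/- v_4 / sqrt(lambda_4), even though w(0) is outside S.
```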
5. Conclusions

The dynamical behaviors of the adaptive minor component extraction (AMEX) algorithm proposed by Ouyang et al. [14] have been studied in this paper via a corresponding DDT system. It is shown that if the learning rate satisfies some mild conditions, almost all trajectories of the DDT system converge to the direction of the eigenvector associated with the smallest eigenvalue of the autocorrelation matrix of the input data. Simulation results illustrate the global convergence of the AMEX learning algorithm.
Appendix

Proof of Lemma 1. Since $\eta\lambda_n < \eta\lambda_1 \leq 0.25$, clearly,

$$\sqrt{\frac{1}{1-\eta\lambda_1}} > 1 \quad \text{and} \quad \sqrt{\eta\lambda_n} < \frac{1}{2}. \qquad (23)$$

It follows from (23) that

$$\frac{1}{2}\sqrt{\frac{\eta}{1-\eta\lambda_1}} - \eta\sqrt{\lambda_n} > \frac{1}{2}\sqrt{\eta} - \sqrt{\eta}\sqrt{\eta\lambda_n} = \sqrt{\eta}\left[\frac{1}{2} - \sqrt{\eta\lambda_n}\right] > 0.$$

The proof is completed. □
Proof of Lemma 2. Since $\eta\lambda_1 \leq 0.25$, it holds that, for all $k \geq 0$,

$$1-\eta\lambda_i+\frac{\eta}{\|w(k)\|^2} > 0 \qquad (i = 1, 2, \ldots, n). \qquad (24)$$

From (3), (4), and (24), it follows that

$$\begin{aligned}
\|w(k+1)\|^2 &= \sum_{i=1}^{n} z_i^2(k+1) \\
&= \sum_{i=1}^{n} z_i^2(k)\left[1-\eta\lambda_i+\frac{\eta}{w^T(k)w(k)}\right]^2 \\
&\geq \sum_{i=1}^{n} z_i^2(k)\left[1-\eta\lambda_1+\frac{\eta}{\|w(k)\|^2}\right]^2 \\
&= \|w(k)\|^2\left[1-\eta\lambda_1+\frac{\eta}{\|w(k)\|^2}\right]^2 \\
&= \left[(1-\eta\lambda_1)\|w(k)\|+\frac{\eta}{\|w(k)\|}\right]^2 \\
&\geq \left[2\sqrt{\eta(1-\eta\lambda_1)}\right]^2,
\end{aligned}$$

where the last step uses the inequality $a+b \geq 2\sqrt{ab}$ for $a, b \geq 0$, i.e.,

$$\|w(k+1)\| \geq 2\sqrt{\eta(1-\eta\lambda_1)},$$

for all $k \geq 0$. The proof is completed. □
Proof of Lemma 3. Define the differentiable function

$$f(x) = (1-\eta\lambda_n)x + \frac{\eta}{x}$$

on the interval $\left[2\sqrt{\eta(1-\eta\lambda_n)}, \sqrt{1/\lambda_n}\right]$. It follows that

$$\dot{f}(x) = 1-\eta\lambda_n-\frac{\eta}{x^2}.$$

Clearly,

$$\dot{f}(x) \geq 0, \quad \text{if } x \in \left[\sqrt{\frac{\eta}{1-\eta\lambda_n}}, +\infty\right). \qquad (25)$$

By $\eta\lambda_1 \leq 0.25$, it holds that

$$\sqrt{\frac{\eta}{1-\eta\lambda_n}} < 2\sqrt{\eta(1-\eta\lambda_n)} < \sqrt{\frac{1}{\lambda_n}}.$$

This means that

$$\left[2\sqrt{\eta(1-\eta\lambda_n)}, \sqrt{\frac{1}{\lambda_n}}\right] \subset \left[\sqrt{\frac{\eta}{1-\eta\lambda_n}}, +\infty\right). \qquad (26)$$

From (25) and (26), it holds that $\dot{f}(x) \geq 0$ for all $x \in \left[2\sqrt{\eta(1-\eta\lambda_n)}, \sqrt{1/\lambda_n}\right]$, i.e., $f(x)$ is monotone increasing on this interval. Thus,

$$f(x) \leq f\!\left(\sqrt{\frac{1}{\lambda_n}}\right) = \sqrt{\frac{1}{\lambda_n}},$$

for all $x \in \left[2\sqrt{\eta(1-\eta\lambda_n)}, \sqrt{1/\lambda_n}\right]$. The proof is completed. □
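A quick numerical sanity check of Lemma 3, with assumed example values of $\eta$ and $\lambda_n$ satisfying the hypothesis:

```python
import numpy as np

eta, lam_n = 0.04, 0.5                        # assumed example values, eta*lam_n <= 0.25
a = 2.0 * np.sqrt(eta * (1.0 - eta * lam_n))  # left end of the interval in Lemma 3
b = np.sqrt(1.0 / lam_n)                      # right end of the interval
xs = np.linspace(a, b, 1000)
f = (1.0 - eta * lam_n) * xs + eta / xs       # the function bounded in Lemma 3
assert np.all(np.diff(f) > 0)                 # f is monotone increasing on [a, b]
assert f.max() <= np.sqrt(1.0 / lam_n) + 1e-12  # f(x) <= sqrt(1/lambda_n)
```

The maximum is attained at the right endpoint, where $f(\sqrt{1/\lambda_n}) = \sqrt{1/\lambda_n}$ exactly, as the proof shows.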
References

[1] Klemm R. Adaptive airborne MTI: an auxiliary channel approach. Proc Inst Elect Eng F 1987;134:269–76.
[2] Barbarossa S, Daddio E, Galati G. Comparison of optimum and linear prediction techniques for clutter cancellation. Proc Inst Elect Eng F 1987;134:277–82.
[3] Cirrincione G. A neural approach to the structure from motion problem. PhD dissertation, LIS INPG Grenoble, December 1998.
[4] Xu L, Oja E, Suen C. Modified Hebbian learning for curve and surface fitting. Neural Networks 1992;5:441–57.
[5] Griffiths JW. Adaptive array processing: a tutorial. Proc Inst Elect Eng F 1983;130:3–10.
[6] Mathew G, Reddy V. Development and analysis of a neural network approach to Pisarenko's harmonic retrieval method. IEEE Trans Signal Process 1994;42:663–7.
[7] Schmidt R. Multiple emitter location and signal parameter estimation. IEEE Trans Antennas Propag 1986;34:276–80.
[8] Oja E. Principal components, minor components, and linear neural networks. Neural Networks 1992;5:927–35.
[9] Luo FL, Unbehauen R. A minor subspace analysis algorithm. IEEE Trans Neural Networks 1997;8(5):1149–55.
[10] Cirrincione G, Cirrincione M, Herault J, Van Huffel S. The MCA EXIN neuron for the minor component analysis. IEEE Trans Neural Networks 2002;13(1):160–87.
[11] Taleb A, Cirrincione G. Against the convergence of the minor component analysis neurons. IEEE Trans Neural Networks 1999;10:207–10.
[12] Douglas SC, Kung SY, Amari S. A self-stabilized minor subspace rule. IEEE Signal Process Lett 1998;5(12):330–2.
[13] Möller R. A self-stabilizing learning rule for minor component analysis. Int J Neural Syst 2004;14:1–8.
[14] Ouyang S, Bao Z, Liao G, Ching PC. Adaptive minor component extraction with modular structure. IEEE Trans Signal Process 2001;49:2127–37.
[15] Zhang Q, Wei X, Xu J. New stability conditions for neural networks with constant and variable delays. Chaos, Solitons & Fractals 2005;26:1391–8.
[16] Arik S. Global robust stability analysis of neural networks with discrete time delays. Chaos, Solitons & Fractals 2005;26:1407–14.
[17] Li Y. Existence and stability of periodic solutions for Cohen–Grossberg neural networks with multiple delays. Chaos, Solitons & Fractals 2004;20:459–66.
[18] Li Y, Xing W, Lu L. Existence and global exponential stability of periodic solution of a class of neural networks with impulses. Chaos, Solitons & Fractals 2006;27:437–45.
[19] Zhang Q, Wei X, Xu J. Stability analysis for cellular neural networks with variable delays. Chaos, Solitons & Fractals 2006;28:331–6.
[20] Ljung L. Analysis of recursive stochastic algorithms. IEEE Trans Automat Contr 1977;22:551–75.
[21] Zufiria PJ. On the discrete-time dynamics of the basic Hebbian neural-network node. IEEE Trans Neural Networks 2002;13(6):1342–52.
[22] Zhang Q. On the discrete-time dynamics of a PCA learning algorithm. Neurocomputing 2003;55:761–9.
[23] Yi Z, Ye M, Lv JC, Tan KK. Convergence analysis of a deterministic discrete time system of Oja's PCA learning algorithm. IEEE Trans Neural Networks 2005;16(6):1318–28.