Multistability of complex-valued neural networks with discontinuous activation functions

Accepted Manuscript

Jinling Liang, Weiqiang Gong, Tingwen Huang

PII: S0893-6080(16)30112-5
DOI: http://dx.doi.org/10.1016/j.neunet.2016.08.008
Reference: NN 3656

To appear in: Neural Networks

Received date: 30 January 2016
Revised date: 21 June 2016
Accepted date: 23 August 2016

Please cite this article as: Liang, J., Gong, W., & Huang, T. Multistability of complex-valued neural networks with discontinuous activation functions. Neural Networks (2016), http://dx.doi.org/10.1016/j.neunet.2016.08.008


Multistability of Complex-Valued Neural Networks with Discontinuous Activation Functions

Jinling Liang∗, Weiqiang Gong and Tingwen Huang

Abstract—In this paper, based on the geometrical properties of the discontinuous activation functions and Brouwer's fixed point theory, the multistability issue is tackled for complex-valued neural networks with discontinuous activation functions and time-varying delays. To deal with the discontinuity of the activations, the Filippov solution of the system is adopted. Through rigorous analysis, several sufficient criteria are obtained to ensure the existence of $25^n$ equilibrium points. Among them, $9^n$ points are locally stable and $16^n-9^n$ equilibrium points are unstable. Furthermore, to enlarge the attraction basins of the $9^n$ locally stable equilibrium points, some mild conditions are imposed. Finally, one numerical example is provided to illustrate the effectiveness of the obtained results.

Index Terms—Complex-valued neural networks, multistability, discontinuous function, attraction basin.

I. INTRODUCTION

The dynamic behaviors of complex-valued neural networks have recently received much attention due to their useful applications, such as associative memory, filtering, image processing, computer vision, optoelectronics and speech synthesis; see [1]–[9] and the references therein for more information. Different from real-valued networks, all the states and connection weights of complex-valued networks are complex-valued, which makes them better suited than real-valued ones for processing complex signals. Generally speaking, complex-valued networks have much more complicated properties than real-valued ones in many aspects. Take the XOR problem and the detection-of-symmetry problem for example: neither can be solved by a single real-valued neuron, yet both can easily be solved by a single complex-valued neuron with orthogonal decision boundaries [10], [11]. Hence, it is important to explore the dynamics of complex-valued networks, especially the stability issue [12]. Up till now, many complex-valued models have been developed and extensively investigated; see [13]–[16] and the references cited therein. In [17], the complete stability, global attractivity and boundedness have been discussed for discrete-time neural networks with complex-valued linear threshold neurons. A new stability condition has been derived in [18] for complex-valued neural networks by the energy function approach; the obtained criterion not only permits a slight relaxation of the Hermitian assumption on the connection matrix but also generalizes some existing results. It is well known that the required number of equilibrium points depends on the practical application of the neural network. Some applications, such as optimization, require the network to have only one equilibrium point, i.e., to be monostable.
However, multiple stable equilibrium points are demanded in some other applications such as associative memory and pattern recognition; see Refs. [11], [19], [20] and the references cited therein. Therefore, it is necessary and important to investigate the multistability of the networks. For real-valued networks, many significant results on multistability have been developed in the literature [21]–[25]. In [26], multistability (including the total number of equilibrium points, their locations and their stability) has been investigated for the neural network with Mexican-hat-type activation functions. As pointed out in [27], increasing the storage capacity of the network is a fundamental problem when neural networks are applied to associative memory, and hence the main purpose of investigating the multistability of neural networks is to increase the number of network equilibria. Compared with the storage capacity of real-valued networks [28], the storage capacity of complex-valued ones of the same dimension is clearly larger [27]. Therefore, from the application viewpoint, it is very important and necessary to investigate the multistability of complex-valued neural networks. However, it should be pointed out that, compared with the multistability of real-valued neural networks, the multistability of complex-valued ones [27], [29] has gained much less attention, which forms one of the main motivations of the present research. There is no doubt that the activation functions play an important role when analyzing the dynamics of neural networks. Hence, choosing appropriate activation functions is one of the main challenges for complex-valued networks. Compared with the complex-valued functions given in Refs. [30] and [31], the partial derivatives of the real parts and the imaginary parts of the functions in [13] are no longer required to exist and be bounded.

This work is supported in part by the National Natural Science Foundation of China under Grant 61673110, the Natural Science Foundation of Jiangsu Province of China under Grant BK20130017, the Six Talent Peaks Project for the High Level Personnel from the Jiangsu Province of China under Grant 2015-DZXX-003 and the Graduate Research and Innovation Program of Jiangsu Province (No. KYLX 0083). J. Liang and W. Gong are with the Department of Mathematics, Southeast University, Nanjing 210096, China. T. Huang is with the Science Program, Texas A&M University, Doha, Qatar. ∗Corresponding author (Email: [email protected]).
Regarding the multistability of neural networks with discontinuous activation functions, many significant works have been done [32]–[34]. However, to the best of the authors' knowledge, although there have been some results on the multistability of complex-valued networks with continuous activation functions [27], [29], few works can be found in the literature concerning the multistability of complex-valued neural networks with discontinuous activation functions, which forms another motivation of the present research. Normally, time delays are unavoidably encountered in practical systems, including inferred grinding models, automatic control systems and so on [35]. To be specific, in the electronic implementation of neural networks, owing to the finite switching speed of the amplifiers, time delays may occur during signal transmission, which may induce undesirable dynamics such as oscillation, instability and bifurcation [36]. Therefore, when investigating the dynamics of complex-valued networks, special attention should be paid to time delays. Recently, there have been a great deal of significant works in this area [37], [38]. For example, in [39], based on linear matrix inequalities, some new criteria have been established to ensure the existence, uniqueness and global asymptotic stability of the equilibrium point for complex-valued recurrent neural networks with constant time delays. For more results on the stability analysis of delayed real-valued and complex-valued networks, one could refer to Refs. [40]–[43]. However, concerning the effects of time delays on the multistability of complex-valued neural networks, few works can be found in the existing literature [29], which also forms one of the main motivations of the present research.
Different from mono-stability analysis, when analyzing the multistability of complex-valued networks, both the existence and the explicit number of the equilibrium points have to be determined in the first place, and then local analysis such as local stability, instability and attraction basins is carried out one by one for each of these equilibria. In this paper, the neural networks under consideration have discontinuous rather than continuous activation functions, which brings great difficulty to the multistability analysis of the complex-valued networks. It should be pointed out that the equilibrium points whose components are located exactly at the discontinuity points of the activation functions should also be taken into account when counting the number of equilibria. Instead of constructing appropriate Lyapunov functionals, Brouwer's fixed point theory and the geometrical properties of the discontinuous activation functions are both utilized to investigate the existence and


local stability of the multiple equilibria of the complex-valued networks. Inspired by the above discussions, the multistability of complex-valued neural networks with time-varying delays and discontinuous activation functions is investigated in this paper. Conditions are derived assuring the existence of $25^n$ equilibrium points (including $9^n$ locally stable equilibrium points and $16^n-9^n$ unstable ones). Besides, the attraction basins of the stable equilibrium points are estimated and further enlarged. Compared with the previous related multistability works on real-valued and complex-valued networks, the main contributions of the present paper can be summarized as follows. 1) The neural activation functions utilized here are in more general forms than the ones in [44]: both the real and the imaginary parts of the complex-valued activation functions in this paper are discontinuous rather than continuous [45], [46]. The effect of the discontinuity on the coexistence and local stability of the multiple equilibria is revealed, which provides an important insight that discontinuous networks may have higher storage capacity than continuous ones. 2) Instead of resorting to the general Lyapunov method, Brouwer's fixed point theory and the geometrical properties of the discontinuous activation functions are utilized to investigate the multistability of the complex-valued neural networks with time-varying delays. 3) Not only local stability but also instability of the equilibria is investigated. Besides that, attraction basins of the locally stable equilibrium points are estimated and enlarged. The remaining part of the paper is organized as follows. In Section II, the complex-valued model with delays and discontinuous activation functions is presented, and some preliminaries are briefly outlined. In Section III, the existence of the multiple equilibrium points is established and their explicit dynamical analysis is carried out. In Section IV, one numerical example is given to demonstrate the effectiveness of the obtained criteria. Finally, conclusions are drawn in Section V.

Notations: The notation used in this paper is fairly standard. $\mathbb{C}^n$, $\mathbb{C}^{m\times n}$ and $\mathbb{R}^{m\times n}$ denote the sets of $n$-dimensional complex vectors, $m\times n$ complex matrices and $m\times n$ real matrices, respectively. Let $i$ be the imaginary unit, i.e., $i=\sqrt{-1}$. The superscript '$T$' means matrix transposition. $P^R$ and $P^I$ refer to, respectively, the real and the imaginary parts of the matrix $P\in\mathbb{C}^{m\times n}$. $C([-\tau,0],\mathbb{R}^n)$ represents the Banach space of continuous vector-valued functions mapping the interval $[-\tau,0]$ into $\mathbb{R}^n$ with the topology of uniform convergence. For a vector $z=(z_1,z_2,\dots,z_n)^T\in\mathbb{R}^n$, $\|z\|_\xi$ denotes the norm of $z$ with $\|z\|_\xi=\max_k\{\xi_k|z_k|\}$, where $\xi=(\xi_1,\xi_2,\dots,\xi_n)^T$ with $\xi_k>0$ for $k=1,2,\dots,n$. $f^-(s)$ and $f^+(s)$ denote, respectively, the left and the right limits of the function $f(\cdot)$ at the point $s\in\mathbb{R}$. $co(M)$ is the closure of the convex hull of the set $M$. For a vector-valued function $f(x)=(f_1(x_1),f_2(x_2),\dots,f_n(x_n))^T\in\mathbb{R}^n$, define $co[f(x)]=co[f_1(x_1)]\times co[f_2(x_2)]\times\dots\times co[f_n(x_n)]$, where $co[f_k(x_k)]=[f_k^-(x_k),f_k^+(x_k)]$ for $k=1,2,\dots,n$ and '$\times$' represents the Cartesian product.

II. PROBLEM FORMULATION AND SOME PRELIMINARIES

Consider the complex-valued neural networks with asynchronous time delays described by the following nonlinear delayed differential equations:

$$
\dot{u}_k(t)=-c_k u_k(t)+\sum_{j=1}^{n}a_{kj}f_j(u_j(t))+\sum_{j=1}^{n}b_{kj}f_j(u_j(t-\tau_{kj}(t)))+L_k,\tag{1}
$$

where $k=1,2,\dots,n$; $u_k(t)\in\mathbb{C}$ is the state of the $k$th neuron at time $t$; $C=\mathrm{diag}\{c_1,c_2,\cdots,c_n\}\in\mathbb{R}^{n\times n}$ with $c_k>0$ is the self-feedback connection weight matrix; $A=(a_{kj})_{n\times n}$ and $B=(b_{kj})_{n\times n}\in\mathbb{C}^{n\times n}$ are, respectively, the connection weight matrix and the delayed connection weight matrix; $L=(L_1,L_2,\dots,L_n)^T\in\mathbb{C}^n$ is the external input vector; $\tau_{kj}(t)$ is the time-varying transmission delay satisfying $0\le\tau_{kj}(t)\le\tau$; and $f_k(\cdot):\mathbb{C}\to\mathbb{C}$ denotes the nonlinear activation function, which is assumed to satisfy the condition given below.

Assumption 1: Let $\nu=\tilde{\nu}+i\hat{\nu}$ with $\tilde{\nu},\hat{\nu}\in\mathbb{R}$. $f_k(\nu)$ can be expressed by its real and imaginary parts as $f_k(\nu)=f_k^R(\tilde{\nu})+if_k^I(\hat{\nu})$, where $f_k^R(\cdot),f_k^I(\cdot):\mathbb{R}\to\mathbb{R}$ are discontinuous functions defined as follows:
$$
f_k^R(\tilde{\nu})=\begin{cases}
\mu_k, & -\infty<\tilde{\nu}<r_k\\
f_{k,1}^R(\tilde{\nu}), & r_k\le\tilde{\nu}\le s_k\\
f_{k,2}^R(\tilde{\nu}), & s_k<\tilde{\nu}\le p_k\\
\omega_k, & p_k<\tilde{\nu}<+\infty
\end{cases}
\qquad\text{and}\qquad
f_k^I(\hat{\nu})=\begin{cases}
\bar{\mu}_k, & -\infty<\hat{\nu}<\bar{r}_k\\
f_{k,1}^I(\hat{\nu}), & \bar{r}_k\le\hat{\nu}\le\bar{s}_k\\
f_{k,2}^I(\hat{\nu}), & \bar{s}_k<\hat{\nu}\le\bar{p}_k\\
\bar{\omega}_k, & \bar{p}_k<\hat{\nu}<+\infty
\end{cases}
$$
in which $f_{k,1}^R(s_k)=f_{k,2}^R(s_k)$, $f_{k,1}^I(\bar{s}_k)=f_{k,2}^I(\bar{s}_k)$, $f_{k,1}^R(r_k)=f_{k,2}^R(p_k)=\mu_k$, $f_{k,1}^I(\bar{r}_k)=f_{k,2}^I(\bar{p}_k)=\bar{\mu}_k$, $\omega_k\ne\mu_k$ and $\bar{\omega}_k\ne\bar{\mu}_k$.

Remark 1: It follows from Assumption 1 that the functions $f_k^R(\cdot)$ and $f_k^I(\cdot)$ are discontinuous at the points $p_k$ and $\bar{p}_k$, respectively. Moreover, it is easy to see that the left and the right limits exist, with $f_k^{R-}(p_k)=\mu_k$, $f_k^{I-}(\bar{p}_k)=\bar{\mu}_k$ and $f_k^{R+}(p_k)=\omega_k$, $f_k^{I+}(\bar{p}_k)=\bar{\omega}_k$. To proceed, the following assumption is further made on the nonlinear activation functions.

Assumption 2: $f_{k,1}^R(\cdot)$ and $f_{k,1}^I(\cdot)$ are continuous and monotonically increasing functions, and $f_{k,2}^R(\cdot)$ and $f_{k,2}^I(\cdot)$ are continuous and monotonically decreasing functions; i.e., there exist positive numbers $m_k,\bar{m}_k,\sigma_k,\bar{\sigma}_k$ as well as negative scalars $q_k,\bar{q}_k,\beta_k,\bar{\beta}_k$ such that
$$
m_k\le\frac{f_{k,1}^R(\tilde{\nu}_1)-f_{k,1}^R(\tilde{\nu}_2)}{\tilde{\nu}_1-\tilde{\nu}_2}\le\bar{m}_k,\ \forall\,\tilde{\nu}_1,\tilde{\nu}_2\in[r_k,s_k];\qquad
\sigma_k\le\frac{f_{k,1}^I(\hat{\nu}_1)-f_{k,1}^I(\hat{\nu}_2)}{\hat{\nu}_1-\hat{\nu}_2}\le\bar{\sigma}_k,\ \forall\,\hat{\nu}_1,\hat{\nu}_2\in[\bar{r}_k,\bar{s}_k];
$$
$$
q_k\le\frac{f_{k,2}^R(\tilde{\nu}_1)-f_{k,2}^R(\tilde{\nu}_2)}{\tilde{\nu}_1-\tilde{\nu}_2}\le\bar{q}_k,\ \forall\,\tilde{\nu}_1,\tilde{\nu}_2\in(s_k,p_k];\qquad
\beta_k\le\frac{f_{k,2}^I(\hat{\nu}_1)-f_{k,2}^I(\hat{\nu}_2)}{\hat{\nu}_1-\hat{\nu}_2}\le\bar{\beta}_k,\ \forall\,\hat{\nu}_1,\hat{\nu}_2\in(\bar{s}_k,\bar{p}_k].
$$
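To make the shape required by Assumption 1 concrete, the following minimal Python sketch implements one admissible real-part activation $f_k^R$. The breakpoints and levels ($r_k=-1$, $s_k=0$, $p_k=1$, $\mu_k=0$, $\omega_k=2$) are illustrative assumptions only, not values used later in the paper: the function is constant at $\mu_k$ left of $r_k$, increases on $[r_k,s_k]$, decreases back to $\mu_k$ on $(s_k,p_k]$, and jumps to $\omega_k\ne\mu_k$ right of $p_k$.

```python
def f_R(v, r=-1.0, s=0.0, p=1.0, mu=0.0, omega=2.0):
    """One admissible discontinuous activation in the shape of Assumption 1
    (illustrative parameter choices, not the paper's example)."""
    if v < r:
        return mu            # constant level mu on (-inf, r)
    if v <= s:
        return v - r + mu    # f_{k,1}: increasing branch with f(r) = mu
    if v <= p:
        return p - v + mu    # f_{k,2}: decreasing branch, continuous at s, f(p) = mu
    return omega             # jump discontinuity at p: omega != mu

# The jump at p: value f_R(p) = mu on the left, level omega immediately right of p.
print(f_R(1.0), f_R(1.0 + 1e-9))
```

Note how the two inner branches meet continuously at $s_k$ while the only jump sits at $p_k$, exactly the structure Remark 1 exploits through the left and right limits.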

Remark 2: Global stability of complex-valued neural networks with mixed delays has been investigated in Ref. [31], where the activation functions are of the form $f_k(\nu)=f_k^R(\tilde{\nu},\hat{\nu})+if_k^I(\tilde{\nu},\hat{\nu})$ with $\nu=\tilde{\nu}+i\hat{\nu}$, and the partial derivatives of $f_k^R(\cdot,\cdot)$ and $f_k^I(\cdot,\cdot)$ are assumed to exist and be continuous. In the present discussion, such restrictions are no longer required. On the other hand, in Ref. [27], the multistability of complex-valued neural networks with real-imaginary-type activation functions has been addressed, where both the real and the imaginary parts of the activation functions are required to be continuous, which is not required any more in the present paper.

Denote $u_k(t)=x_k(t)+iy_k(t)$ with $x_k(t),y_k(t)\in\mathbb{R}$; then the network (1) can be rewritten in the following equivalent form:
$$
\dot{x}_k(t)=-c_k x_k(t)+\sum_{j=1}^{n}\left(a_{kj}^R f_j^R(x_j(t))-a_{kj}^I f_j^I(y_j(t))\right)+L_k^R
+\sum_{j=1}^{n}\left(b_{kj}^R f_j^R(x_j(t-\tau_{kj}(t)))-b_{kj}^I f_j^I(y_j(t-\tau_{kj}(t)))\right),\tag{2a}
$$
$$
\dot{y}_k(t)=-c_k y_k(t)+\sum_{j=1}^{n}\left(a_{kj}^R f_j^I(y_j(t))+a_{kj}^I f_j^R(x_j(t))\right)+L_k^I
+\sum_{j=1}^{n}\left(b_{kj}^R f_j^I(y_j(t-\tau_{kj}(t)))+b_{kj}^I f_j^R(x_j(t-\tau_{kj}(t)))\right).\tag{2b}
$$

First of all, the solution and the equilibrium point of the delayed differential equations (2a)-(2b) with discontinuous right-hand sides are defined in the sense of Filippov [47].

Definition 1: For given continuous functions $\tilde{\varphi}_k(s)$ and $\hat{\varphi}_k(s)$ defined on $[-\tau,0]$, as well as measurable functions $\tilde{\psi}_k(s)\in co[f_k^R(\tilde{\varphi}_k(s))]$ and $\hat{\psi}_k(s)\in co[f_k^I(\hat{\varphi}_k(s))]$ for almost all $s\in[-\tau,0]$, the absolutely continuous function $(x(t),y(t))$ with $x(t)=(x_1(t),x_2(t),\dots,x_n(t))^T$, $y(t)=(y_1(t),y_2(t),\dots,y_n(t))^T$ and $x_k(s)=\tilde{\varphi}_k(s)$, $y_k(s)=\hat{\varphi}_k(s)$ for all $s\in[-\tau,0]$ is said to be a solution of the system (2a)-(2b) on $[0,T)$ if there exist measurable functions $\tilde{\gamma}_k(t)\in co[f_k^R(x_k(t))]$ and $\hat{\gamma}_k(t)\in co[f_k^I(y_k(t))]$ for almost all $t\in[0,T)$ such that
$$
\begin{cases}
\dot{x}_k(t)=-c_k x_k(t)+\sum_{j=1}^{n}\left(a_{kj}^R\tilde{\gamma}_j(t)-a_{kj}^I\hat{\gamma}_j(t)\right)+L_k^R+\sum_{j=1}^{n}\left(b_{kj}^R\tilde{\gamma}_j(t-\tau_{kj}(t))-b_{kj}^I\hat{\gamma}_j(t-\tau_{kj}(t))\right), & \text{a.e. } t\in[0,T),\\[1mm]
\dot{y}_k(t)=-c_k y_k(t)+\sum_{j=1}^{n}\left(a_{kj}^R\hat{\gamma}_j(t)+a_{kj}^I\tilde{\gamma}_j(t)\right)+L_k^I+\sum_{j=1}^{n}\left(b_{kj}^R\hat{\gamma}_j(t-\tau_{kj}(t))+b_{kj}^I\tilde{\gamma}_j(t-\tau_{kj}(t))\right), & \text{a.e. } t\in[0,T),
\end{cases}\tag{3}
$$
and $\tilde{\gamma}_k(s)=\tilde{\psi}_k(s)$, $\hat{\gamma}_k(s)=\hat{\psi}_k(s)$ for almost all $s\in[-\tau,0]$, where $k=1,2,\dots,n$.

Definition 2: The point $(x^*,y^*)$ with $x^*=(x_1^*,x_2^*,\dots,x_n^*)^T$ and $y^*=(y_1^*,y_2^*,\dots,y_n^*)^T$ is said to be an equilibrium point of the system (2a)-(2b) if
$$
0\in-c_k x_k^*+\sum_{j=1}^{n}\left((a_{kj}^R+b_{kj}^R)\,co[f_j^R(x_j^*)]-(a_{kj}^I+b_{kj}^I)\,co[f_j^I(y_j^*)]\right)+L_k^R,\tag{4a}
$$
$$
0\in-c_k y_k^*+\sum_{j=1}^{n}\left((a_{kj}^R+b_{kj}^R)\,co[f_j^I(y_j^*)]+(a_{kj}^I+b_{kj}^I)\,co[f_j^R(x_j^*)]\right)+L_k^I\tag{4b}
$$
hold for all $k=1,2,\dots,n$.

Remark 3: According to Definition 2, if the equilibrium point $(x^*,y^*)$ of the system (2a)-(2b) is a continuity point of the activation functions, i.e., $co[f_j^R(x_j^*)]=\{f_j^R(x_j^*)\}$ and $co[f_j^I(y_j^*)]=\{f_j^I(y_j^*)\}$ for $j=1,2,\dots,n$, then (4a)-(4b) reduce to the usual definition of an equilibrium point.

Next, define the following real-valued functions, which are continuous except at the points $p_k$ (respectively, $\bar{p}_k$), $k=1,2,\dots,n$:
$$
\overline{F}_k(\eta)=-c_k\eta+a_{kk}^R f_k^R(\eta)+\sum_{j=1,j\ne k}^{n}\max\{\mu_j a_{kj}^R,\omega_j a_{kj}^R\}+\sum_{j=1}^{n}\max\{\mu_j b_{kj}^R,\omega_j b_{kj}^R\}
-\sum_{j=1}^{n}\min\{\bar{\mu}_j a_{kj}^I,\bar{\omega}_j a_{kj}^I\}-\sum_{j=1}^{n}\min\{\bar{\mu}_j b_{kj}^I,\bar{\omega}_j b_{kj}^I\}+L_k^R,\tag{5a}
$$
$$
\underline{F}_k(\eta)=-c_k\eta+a_{kk}^R f_k^R(\eta)+\sum_{j=1,j\ne k}^{n}\min\{\mu_j a_{kj}^R,\omega_j a_{kj}^R\}+\sum_{j=1}^{n}\min\{\mu_j b_{kj}^R,\omega_j b_{kj}^R\}
-\sum_{j=1}^{n}\max\{\bar{\mu}_j a_{kj}^I,\bar{\omega}_j a_{kj}^I\}-\sum_{j=1}^{n}\max\{\bar{\mu}_j b_{kj}^I,\bar{\omega}_j b_{kj}^I\}+L_k^R,\tag{5b}
$$
$$
\overline{G}_k(\eta)=-c_k\eta+a_{kk}^R f_k^I(\eta)+\sum_{j=1,j\ne k}^{n}\max\{\bar{\mu}_j a_{kj}^R,\bar{\omega}_j a_{kj}^R\}+\sum_{j=1}^{n}\max\{\bar{\mu}_j b_{kj}^R,\bar{\omega}_j b_{kj}^R\}
+\sum_{j=1}^{n}\max\{\mu_j a_{kj}^I,\omega_j a_{kj}^I\}+\sum_{j=1}^{n}\max\{\mu_j b_{kj}^I,\omega_j b_{kj}^I\}+L_k^I,\tag{6a}
$$
$$
\underline{G}_k(\eta)=-c_k\eta+a_{kk}^R f_k^I(\eta)+\sum_{j=1,j\ne k}^{n}\min\{\bar{\mu}_j a_{kj}^R,\bar{\omega}_j a_{kj}^R\}+\sum_{j=1}^{n}\min\{\bar{\mu}_j b_{kj}^R,\bar{\omega}_j b_{kj}^R\}
+\sum_{j=1}^{n}\min\{\mu_j a_{kj}^I,\omega_j a_{kj}^I\}+\sum_{j=1}^{n}\min\{\mu_j b_{kj}^I,\omega_j b_{kj}^I\}+L_k^I,\tag{6b}
$$
where $\eta\in\mathbb{R}$. In terms of Assumption 1, one obtains
$$
\lim_{\eta\to+\infty}\overline{F}_k(\eta)=-\infty,\quad\lim_{\eta\to-\infty}\underline{F}_k(\eta)=+\infty,\quad\lim_{\eta\to+\infty}\overline{G}_k(\eta)=-\infty,\quad\lim_{\eta\to-\infty}\underline{G}_k(\eta)=+\infty,
$$
which imply that there must exist constants $r_k^l<0$, $p_k^r>0$, $\bar{r}_k^l<0$ and $\bar{p}_k^r>0$ ($k=1,2,\dots,n$) such that $r_k^l<r_k$, $\bar{r}_k^l<\bar{r}_k$, $p_k<p_k^r$, $\bar{p}_k<\bar{p}_k^r$ and
$$
\overline{F}_k(p_k^r)<0,\quad\underline{F}_k(r_k^l)>0,\quad\overline{G}_k(\bar{p}_k^r)<0,\quad\underline{G}_k(\bar{r}_k^l)>0.\tag{7}
$$
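Sign conditions of the type (7) are easy to check numerically for a concrete network. The sketch below evaluates the bounding functions (5a)-(5b) on a hypothetical 2-neuron example; every number (weights, inputs, saturation levels, activation breakpoints) is an illustrative assumption, not data from the paper.

```python
# Hypothetical 2-neuron data (illustrative assumptions only).
n = 2
c  = [1.0, 1.0]
aR = [[4.0, 0.1], [0.1, 4.0]]; aI = [[0.1, 0.0], [0.0, 0.1]]
bR = [[0.2, 0.1], [0.1, 0.2]]; bI = [[0.1, 0.0], [0.0, 0.1]]
LR = [0.0, 0.0]
mu,  om  = [0.0, 0.0], [2.0, 2.0]   # mu_j, omega_j  (real-part levels)
mub, omb = [0.0, 0.0], [2.0, 2.0]   # imaginary-part counterparts

def fR(v):  # activation shaped as in Assumption 1 (r = -1, s = 0, p = 1 assumed)
    if v < -1.0: return 0.0
    if v <= 0.0: return v + 1.0
    if v <= 1.0: return 1.0 - v
    return 2.0

def F_upper(k, eta):
    """Upper bounding function, pattern of (5a)."""
    return (-c[k]*eta + aR[k][k]*fR(eta)
            + sum(max(mu[j]*aR[k][j], om[j]*aR[k][j]) for j in range(n) if j != k)
            + sum(max(mu[j]*bR[k][j], om[j]*bR[k][j]) for j in range(n))
            - sum(min(mub[j]*aI[k][j], omb[j]*aI[k][j]) for j in range(n))
            - sum(min(mub[j]*bI[k][j], omb[j]*bI[k][j]) for j in range(n)) + LR[k])

def F_lower(k, eta):
    """Lower bounding function, pattern of (5b)."""
    return (-c[k]*eta + aR[k][k]*fR(eta)
            + sum(min(mu[j]*aR[k][j], om[j]*aR[k][j]) for j in range(n) if j != k)
            + sum(min(mu[j]*bR[k][j], om[j]*bR[k][j]) for j in range(n))
            - sum(max(mub[j]*aI[k][j], omb[j]*aI[k][j]) for j in range(n))
            - sum(max(mub[j]*bI[k][j], omb[j]*bI[k][j]) for j in range(n)) + LR[k])

# Since -c_k * eta dominates for large |eta| while f stays bounded
# (Assumption 1), constants p^r and r^l realising the signs in (7) exist:
print(F_upper(0, 50.0) < 0, F_lower(0, -50.0) > 0)
```

The same pattern, with $f_k^I$ and the roles of the real/imaginary coefficient blocks exchanged, evaluates the $\overline{G}_k$, $\underline{G}_k$ pair of (6a)-(6b).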

Finally, for the convenience of description, denote
$$
\Omega_\alpha=\{(x_1,x_2,\dots,x_n,y_1,y_2,\dots,y_n)\in\mathbb{R}^{2n}\mid x_k\in\Omega_k^{\alpha_k},\ y_k\in\bar{\Omega}_k^{\alpha_{n+k}}\ \text{for}\ k=1,2,\dots,n\}
$$
with $\alpha=(\alpha_1,\alpha_2,\dots,\alpha_{2n})$, where $\alpha_j=$ 'I' or 'II' or 'III' or 'IV' for $j=1,2,\dots,2n$ (herein, 'I', 'II', 'III' and 'IV' represent, respectively, 'First', 'Second', 'Third' and 'Fourth') and
$$
\Omega_k^{\mathrm{I}}=\{x\in\mathbb{R}\mid r_k^l<x<r_k\},\quad
\Omega_k^{\mathrm{II}}=\{x\in\mathbb{R}\mid r_k\le x\le s_k\},\quad
\Omega_k^{\mathrm{III}}=\{x\in\mathbb{R}\mid s_k<x\le p_k\},\quad
\Omega_k^{\mathrm{IV}}=\{x\in\mathbb{R}\mid p_k<x\le p_k^r\};
$$
$$
\bar{\Omega}_k^{\mathrm{I}}=\{y\in\mathbb{R}\mid\bar{r}_k^l<y<\bar{r}_k\},\quad
\bar{\Omega}_k^{\mathrm{II}}=\{y\in\mathbb{R}\mid\bar{r}_k\le y\le\bar{s}_k\},\quad
\bar{\Omega}_k^{\mathrm{III}}=\{y\in\mathbb{R}\mid\bar{s}_k<y\le\bar{p}_k\},\quad
\bar{\Omega}_k^{\mathrm{IV}}=\{y\in\mathbb{R}\mid\bar{p}_k<y\le\bar{p}_k^r\}.
$$
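The interval bookkeeping above is easy to verify mechanically: each of the $2n$ real coordinates of $(x,y)$ ranges over four interval types, three of which (I, III and IV) avoid the type-II segment $[r_k,s_k]$ (respectively $[\bar{r}_k,\bar{s}_k]$). A short enumeration — purely illustrative, using itertools — reproduces the region counts used throughout the paper:

```python
from itertools import product

TYPES = ("I", "II", "III", "IV")

def region_counts(n):
    """Count the Cartesian-product regions formed when each of the 2n real
    coordinates lies in one of the interval types I-IV; types I, III, IV
    are the ones avoiding the type-II segment."""
    coords = 2 * n
    total = sum(1 for _ in product(TYPES, repeat=coords))
    no_II = sum(1 for a in product(TYPES, repeat=coords) if "II" not in a)
    return total, no_II, total - no_II

for n in (1, 2):
    total, without_II, with_II = region_counts(n)
    print(n, total, without_II, with_II)   # matches 16^n, 9^n, 16^n - 9^n
```

Per complex neuron the counts are $4^2=16$, $3^2=9$ and $16-9=7$, so over $n$ neurons one obtains $16^n$, $9^n$ and $16^n-9^n$ disjoint regions respectively.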

Similarly, set
$$
\Lambda_\alpha=\{(x_1,\dots,x_n,y_1,\dots,y_n)\in\mathbb{R}^{2n}\mid x_k\in\Omega_k^{\mathrm{I}}\ \text{or}\ \Omega_k^{\mathrm{III}}\ \text{or}\ \Omega_k^{\mathrm{IV}},\ y_k\in\bar{\Omega}_k^{\mathrm{I}}\ \text{or}\ \bar{\Omega}_k^{\mathrm{III}}\ \text{or}\ \bar{\Omega}_k^{\mathrm{IV}}\ \text{for}\ k=1,2,\dots,n\},
$$
$$
\Theta_\alpha=\{(x_1,\dots,x_n,y_1,\dots,y_n)\in\mathbb{R}^{2n}\mid x_k\in\Omega_k^{\alpha_k},\ y_k\in\bar{\Omega}_k^{\alpha_{n+k}}\ \text{for}\ k=1,2,\dots,n,\ \text{and there exists at least one component}\ x_k\ \text{or}\ y_k\ \text{satisfying}\ x_k\in\Omega_k^{\mathrm{II}}\ \text{or}\ y_k\in\bar{\Omega}_k^{\mathrm{II}}\}.
$$
It is easy to see that in $\mathbb{C}^n$ there are, respectively, $16^n$, $9^n$ and $16^n-9^n$ such disjoint regions of the types $\Omega_\alpha$, $\Lambda_\alpha$ and $\Theta_\alpha$. The main purpose of this paper is to explore the number and locations of the equilibrium points of network (1). Dynamical characteristics of the network model around these equilibrium points are also investigated.

III. MAIN RESULTS

In this section, based on the geometrical properties of the activation functions and the fixed point theory, several sufficient criteria are obtained to ascertain the existence of $25^n$ equilibrium points. Among them, $9^n$ points are locally stable and $16^n-9^n$ points are unstable. Moreover, the attraction basins of the $9^n$ stable equilibrium points are also estimated. The main results are stated one by one as follows.

A. Existence and location of the equilibrium points

Firstly, the special case is investigated where all components of the equilibrium points are continuity points of the activation functions.

Lemma 1: For any given region $\Omega_\alpha$, there exists at least one equilibrium point of the system (2a)-(2b) (or equivalently, network (1)) located in $\Omega_\alpha$ if, for $k=1,2,\dots,n$,
$$
\widetilde{\overline{F}}_k(r_k)<0,\quad\widetilde{\underline{F}}_k(s_k)>0,\quad\widetilde{\underline{F}}{}_k^{+}(p_k)>0;\qquad
\widetilde{\overline{G}}_k(\bar{r}_k)<0,\quad\widetilde{\underline{G}}_k(\bar{s}_k)>0,\quad\widetilde{\underline{G}}{}_k^{+}(\bar{p}_k)>0,\tag{8}
$$


where the functions $\widetilde{\overline{F}}_k(\cdot)$, $\widetilde{\underline{F}}_k(\cdot)$, $\widetilde{\overline{G}}_k(\cdot)$ and $\widetilde{\underline{G}}_k(\cdot)$ are defined as follows:
$$
\widetilde{\overline{F}}_k(\eta)=-c_k\eta+(a_{kk}^R+b_{kk}^R)f_k^R(\eta)+\sum_{j=1,j\ne k}^{n}\max\{\mu_j(a_{kj}^R+b_{kj}^R),\omega_j(a_{kj}^R+b_{kj}^R)\}
-\sum_{j=1}^{n}\min\{\bar{\mu}_j(a_{kj}^I+b_{kj}^I),\bar{\omega}_j(a_{kj}^I+b_{kj}^I)\}+L_k^R,\tag{9a}
$$
$$
\widetilde{\underline{F}}_k(\eta)=-c_k\eta+(a_{kk}^R+b_{kk}^R)f_k^R(\eta)+\sum_{j=1,j\ne k}^{n}\min\{\mu_j(a_{kj}^R+b_{kj}^R),\omega_j(a_{kj}^R+b_{kj}^R)\}
-\sum_{j=1}^{n}\max\{\bar{\mu}_j(a_{kj}^I+b_{kj}^I),\bar{\omega}_j(a_{kj}^I+b_{kj}^I)\}+L_k^R,\tag{9b}
$$
$$
\widetilde{\overline{G}}_k(\eta)=-c_k\eta+(a_{kk}^R+b_{kk}^R)f_k^I(\eta)+\sum_{j=1,j\ne k}^{n}\max\{\bar{\mu}_j(a_{kj}^R+b_{kj}^R),\bar{\omega}_j(a_{kj}^R+b_{kj}^R)\}
+\sum_{j=1}^{n}\max\{\mu_j(a_{kj}^I+b_{kj}^I),\omega_j(a_{kj}^I+b_{kj}^I)\}+L_k^I,\tag{10a}
$$
$$
\widetilde{\underline{G}}_k(\eta)=-c_k\eta+(a_{kk}^R+b_{kk}^R)f_k^I(\eta)+\sum_{j=1,j\ne k}^{n}\min\{\bar{\mu}_j(a_{kj}^R+b_{kj}^R),\bar{\omega}_j(a_{kj}^R+b_{kj}^R)\}
+\sum_{j=1}^{n}\min\{\mu_j(a_{kj}^I+b_{kj}^I),\omega_j(a_{kj}^I+b_{kj}^I)\}+L_k^I,\tag{10b}
$$
in which $\eta\in\mathbb{R}$.

Proof: To facilitate the discussion, two more functions are introduced:
$$
F_k(\eta)=-c_k\eta+(a_{kk}^R+b_{kk}^R)f_k^R(\eta)+\sum_{j=1,j\ne k}^{n}(a_{kj}^R+b_{kj}^R)f_j^R(x_j)-\sum_{j=1}^{n}(a_{kj}^I+b_{kj}^I)f_j^I(y_j)+L_k^R,\tag{11}
$$
$$
G_k(\eta)=-c_k\eta+(a_{kk}^R+b_{kk}^R)f_k^I(\eta)+\sum_{j=1,j\ne k}^{n}(a_{kj}^R+b_{kj}^R)f_j^I(y_j)+\sum_{j=1}^{n}(a_{kj}^I+b_{kj}^I)f_j^R(x_j)+L_k^I,\tag{12}
$$
where $\eta,x_j,y_j\in\mathbb{R}$ for $j=1,2,\dots,n$. It follows immediately from Assumption 1 that
$$
\underline{F}_k(\eta)\le\widetilde{\underline{F}}_k(\eta)\le F_k(\eta)\le\widetilde{\overline{F}}_k(\eta)\le\overline{F}_k(\eta),\qquad
\underline{G}_k(\eta)\le\widetilde{\underline{G}}_k(\eta)\le G_k(\eta)\le\widetilde{\overline{G}}_k(\eta)\le\overline{G}_k(\eta).\tag{13}
$$
Arbitrarily choose $x_y=(x_1,\dots,x_n,y_1,\dots,y_n)^T\in\mathbb{R}^{2n}$ and substitute it into (11) and (12). For $k=1,2,\dots,n$, consider the function $F_k(\cdot)$, which is continuous with respect to $\eta$ except at the point $p_k$. Moreover, it follows from (7), (13) and (8) that
$$
F_k(r_k^l)\ge\underline{F}_k(r_k^l)>0,\quad F_k(r_k)\le\widetilde{\overline{F}}_k(r_k)<0,\quad F_k(s_k)\ge\widetilde{\underline{F}}_k(s_k)>0,\quad F_k(p_k^r)\le\overline{F}_k(p_k^r)<0.
$$
At the point $\eta=p_k$, from the fact that $f_k^R(p_k)=f_k^R(r_k)$ and the condition (8), one derives that $F_k(p_k)\le F_k(r_k)\le\widetilde{\overline{F}}_k(r_k)<0$ and $F_k^{+}(p_k)\ge\widetilde{\underline{F}}{}_k^{+}(p_k)>0$. The discussion given beforehand immediately implies that there exists $\hat{x}_k\in\Omega_k^{\mathrm{I}}$ or $\Omega_k^{\mathrm{II}}$ or $\Omega_k^{\mathrm{III}}$ or $\Omega_k^{\mathrm{IV}}$ satisfying $F_k(\hat{x}_k)=0$. Such a point $\hat{x}_k$ might not be unique; if there is more than one, denote the smallest one as $\hat{x}_k$, and the same choice is adopted in the similar discussions below. Similar discussions reveal that there exists $\hat{y}_k\in\bar{\Omega}_k^{\mathrm{I}}$ or $\bar{\Omega}_k^{\mathrm{II}}$ or $\bar{\Omega}_k^{\mathrm{III}}$ or $\bar{\Omega}_k^{\mathrm{IV}}$ such that $G_k(\hat{y}_k)=0$.
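In the Brouwer fixed-point argument that follows, each component of the constructed map picks a zero of $F_k$ (choosing the smallest when several exist). A generic numerical counterpart of that choice — purely illustrative, with an assumed interval and test function — is a grid scan for the first sign change followed by bisection:

```python
def smallest_zero(F, a, b, tol=1e-12, grid=2048):
    """Return (approximately) the smallest zero of F on [a, b]: scan a grid
    for the first sign change, then bisect. Mirrors the convention of
    picking the smallest zero when several exist (grid resolution permitting)."""
    xs = [a + (b - a) * i / grid for i in range(grid + 1)]
    for x0, x1 in zip(xs, xs[1:]):
        if F(x0) == 0.0:
            return x0
        if F(x0) * F(x1) < 0.0:
            lo, hi = x0, x1
            while hi - lo > tol:
                mid = 0.5 * (lo + hi)
                if F(lo) * F(mid) <= 0.0:
                    hi = mid
                else:
                    lo = mid
            return 0.5 * (lo + hi)
    raise ValueError("no sign change detected on the interval")

# Illustrative use: F(x) = 1 - x has its zero at x = 1 on [0, 2].
print(smallest_zero(lambda x: 1.0 - x, 0.0, 2.0))
```

Applied on each region $\Omega_k^{\mathrm{I}},\dots,\Omega_k^{\mathrm{IV}}$, where the sign changes established above hold, such a routine realises the component maps $\hat{x}_k$ and $\hat{y}_k$ numerically; the discontinuity at $p_k$ is harmless here because each search interval avoids crossing it.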


Now, for any given region $\Omega_\alpha$, define a mapping $\mathcal{FG}:co(\Omega_\alpha)\to co(\Omega_\alpha)$ with
$$
\mathcal{FG}(x_1,\dots,x_n,y_1,\dots,y_n)=(\hat{x}_1,\dots,\hat{x}_n,\hat{y}_1,\dots,\hat{y}_n);
$$
it is easy to verify that the mapping $\mathcal{FG}$ is continuous. According to Brouwer's fixed point theorem, the mapping $\mathcal{FG}$ has at least one fixed point $x_y^*=(x_1^*,\dots,x_n^*,y_1^*,\dots,y_n^*)$ with $\hat{x}_k=x_k^*$ and $\hat{y}_k=y_k^*$ ($k=1,2,\dots,n$) located in $co(\Omega_\alpha)$. One can further conclude that the fixed point $x_y^*$ in fact belongs to the interior of $\Omega_\alpha$. Substituting the fixed point $x_y^*$ into equations (11) and (12), one easily finds that the fixed point $x_y^*$ of the mapping $\mathcal{FG}$ is exactly an equilibrium point of the system (2a)-(2b) (or equivalently, system (1)). The proof is complete.

Remark 4: It follows from the condition $\widetilde{\overline{F}}_k(r_k)<0<\widetilde{\underline{F}}_k(s_k)$ in (8) of Lemma 1 that $a_{kk}^R+b_{kk}^R>0$. Moreover, the inequality constraints $\widetilde{\overline{F}}_k(r_k)<0<\widetilde{\underline{F}}{}_k^{+}(p_k)$ and $\widetilde{\overline{G}}_k(\bar{r}_k)<0<\widetilde{\underline{G}}{}_k^{+}(\bar{p}_k)$ imply, respectively, that $\omega_k>\mu_k$ and $\bar{\omega}_k>\bar{\mu}_k$ hold.

Secondly, we consider the case where some components of the equilibrium points are exactly the discontinuity points of the activation functions.

Lemma 2: Suppose that the conditions in Lemma 1 hold; then the system (2a)-(2b) (or equivalently, network (1)) has at least $\sum_{k=1}^{2n}C_{2n}^{k}4^{2n-k}$ equilibrium points with the characteristic that some of their components are exactly the discontinuity points of the activation functions.

Proof: For any given $\Omega_\alpha$, take the point $x_y=(x_1,\dots,x_n,y_1,\dots,y_n)^T\in co(\Omega_\alpha)$ and define
$$
N=\{1,2,\dots,n\},\quad\kappa=\{k\in N\mid x_k=p_k\},\quad\iota=\{\varrho\in N\mid y_\varrho=\bar{p}_\varrho\}.
$$
Let $N(\kappa)$ and $N(\iota)$ be, respectively, the numbers of elements of $\kappa$ and $\iota$. Without loss of generality, suppose in the following discussion that $x_k=p_k$ for $k=1,2,\dots,N(\kappa)$ and $y_\varrho=\bar{p}_\varrho$ for $\varrho=1,2,\dots,N(\iota)$, and denote correspondingly
$$
\widetilde{\Omega}_\alpha=\{(x_{N(\kappa)+1},\dots,x_n,y_{N(\iota)+1},\dots,y_n)\mid x_k\in\Omega_k^{\alpha_k}\ \text{for}\ k=N(\kappa)+1,\dots,n,\ y_j\in\bar{\Omega}_j^{\alpha_{n+j}}\ \text{for}\ j=N(\iota)+1,\dots,n\}.
$$
Define
$$
H_k(\eta)=-c_k\eta+(a_{kk}^R+b_{kk}^R)f_k^R(\eta)+\sum_{j=1}^{N(\kappa)}\gamma_{kj}+\sum_{j\ne k,\,j=N(\kappa)+1}^{n}(a_{kj}^R+b_{kj}^R)f_j^R(x_j)
-\sum_{j=1}^{N(\iota)}\vartheta_{kj}-\sum_{j=N(\iota)+1}^{n}(a_{kj}^I+b_{kj}^I)f_j^I(y_j)+L_k^R,\quad k=N(\kappa)+1,\dots,n,\tag{14a}
$$
$$
Q_k(\eta)=-c_k\eta+(a_{kk}^R+b_{kk}^R)f_k^I(\eta)+\sum_{j=1}^{N(\iota)}\bar{\gamma}_{kj}+\sum_{j\ne k,\,j=N(\iota)+1}^{n}(a_{kj}^R+b_{kj}^R)f_j^I(y_j)
+\sum_{j=1}^{N(\kappa)}\bar{\vartheta}_{kj}+\sum_{j=N(\kappa)+1}^{n}(a_{kj}^I+b_{kj}^I)f_j^R(x_j)+L_k^I,\quad k=N(\iota)+1,\dots,n,\tag{14b}
$$
where $\eta,x_j,y_j\in\mathbb{R}$, and
$$
\gamma_{kj}\in\big[\min\{\mu_j(a_{kj}^R+b_{kj}^R),\omega_j(a_{kj}^R+b_{kj}^R)\},\ \max\{\mu_j(a_{kj}^R+b_{kj}^R),\omega_j(a_{kj}^R+b_{kj}^R)\}\big],
$$
$$
\vartheta_{kj}\in\big[\min\{\bar{\mu}_j(a_{kj}^I+b_{kj}^I),\bar{\omega}_j(a_{kj}^I+b_{kj}^I)\},\ \max\{\bar{\mu}_j(a_{kj}^I+b_{kj}^I),\bar{\omega}_j(a_{kj}^I+b_{kj}^I)\}\big],
$$
$$
\bar{\gamma}_{kj}\in\big[\min\{\bar{\mu}_j(a_{kj}^R+b_{kj}^R),\bar{\omega}_j(a_{kj}^R+b_{kj}^R)\},\ \max\{\bar{\mu}_j(a_{kj}^R+b_{kj}^R),\bar{\omega}_j(a_{kj}^R+b_{kj}^R)\}\big],
$$
$$
\bar{\vartheta}_{kj}\in\big[\min\{\mu_j(a_{kj}^I+b_{kj}^I),\omega_j(a_{kj}^I+b_{kj}^I)\},\ \max\{\mu_j(a_{kj}^I+b_{kj}^I),\omega_j(a_{kj}^I+b_{kj}^I)\}\big].
$$
For the given $\widetilde{\Omega}_\alpha$, take $x_y=(p_1,\dots,p_{N(\kappa)},x_{N(\kappa)+1},\dots,x_n,\bar{p}_1,\dots,\bar{p}_{N(\iota)},y_{N(\iota)+1},\dots,y_n)^T\in\mathbb{R}^{2n}$ with $\tilde{x}_y=(x_{N(\kappa)+1},\dots,x_n,y_{N(\iota)+1},\dots,y_n)^T\in\widetilde{\Omega}_\alpha$ and substitute $\tilde{x}_y$ into (14a)-(14b). From the conditions in Lemma 1, we then have the following four facts:
$$
H_k(r_k)\le-c_k r_k+(a_{kk}^R+b_{kk}^R)f_k^R(r_k)+\sum_{j=1,j\ne k}^{n}\max\{\mu_j(a_{kj}^R+b_{kj}^R),\omega_j(a_{kj}^R+b_{kj}^R)\}
-\sum_{j=1}^{n}\min\{\bar{\mu}_j(a_{kj}^I+b_{kj}^I),\bar{\omega}_j(a_{kj}^I+b_{kj}^I)\}+L_k^R=\widetilde{\overline{F}}_k(r_k)<0,
$$
$$
H_k(s_k)\ge-c_k s_k+(a_{kk}^R+b_{kk}^R)f_k^R(s_k)+\sum_{j=1,j\ne k}^{n}\min\{\mu_j(a_{kj}^R+b_{kj}^R),\omega_j(a_{kj}^R+b_{kj}^R)\}
-\sum_{j=1}^{n}\max\{\bar{\mu}_j(a_{kj}^I+b_{kj}^I),\bar{\omega}_j(a_{kj}^I+b_{kj}^I)\}+L_k^R=\widetilde{\underline{F}}_k(s_k)>0,
$$
$$
H_k^{-}(p_k)<-c_k r_k+(a_{kk}^R+b_{kk}^R)f_k^R(r_k)+\sum_{j=1,j\ne k}^{n}\max\{\mu_j(a_{kj}^R+b_{kj}^R),\omega_j(a_{kj}^R+b_{kj}^R)\}
-\sum_{j=1}^{n}\min\{\bar{\mu}_j(a_{kj}^I+b_{kj}^I),\bar{\omega}_j(a_{kj}^I+b_{kj}^I)\}+L_k^R=\widetilde{\overline{F}}_k(r_k)<0,
$$
$$
H_k^{+}(p_k)\ge-c_k p_k+(a_{kk}^R+b_{kk}^R)\omega_k+\sum_{j=1,j\ne k}^{n}\min\{\mu_j(a_{kj}^R+b_{kj}^R),\omega_j(a_{kj}^R+b_{kj}^R)\}
-\sum_{j=1}^{n}\max\{\bar{\mu}_j(a_{kj}^I+b_{kj}^I),\bar{\omega}_j(a_{kj}^I+b_{kj}^I)\}+L_k^R=\widetilde{\underline{F}}{}_k^{+}(p_k)>0,
$$
where the facts that $f_k^R(p_k)=f_k^R(r_k)$ and $f_k^{R+}(p_k)=\omega_k$ have been utilized. By further considering that $H_k(r_k^l)\ge\underline{F}_k(r_k^l)>0$ and $H_k(p_k^r)\le\overline{F}_k(p_k^r)<0$, one concludes that there exists $\hat{x}_k\in\Omega_k^{\mathrm{I}}$ or $\Omega_k^{\mathrm{II}}$ or $\Omega_k^{\mathrm{III}}$ or $\Omega_k^{\mathrm{IV}}$ satisfying $H_k(\hat{x}_k)=0$ for $k=N(\kappa)+1,\dots,n$. By similar discussions, there exists $\hat{y}_k\in\bar{\Omega}_k^{\mathrm{I}}$ or $\bar{\Omega}_k^{\mathrm{II}}$ or $\bar{\Omega}_k^{\mathrm{III}}$ or $\bar{\Omega}_k^{\mathrm{IV}}$ such that $Q_k(\hat{y}_k)=0$ for $k=N(\iota)+1,\dots,n$.

Now, define a mapping $\mathcal{HQ}:co(\widetilde{\Omega}_\alpha)\to co(\widetilde{\Omega}_\alpha)$ with $\mathcal{HQ}(x_{N(\kappa)+1},\dots,x_n,y_{N(\iota)+1},\dots,y_n)=(\hat{x}_{N(\kappa)+1},\dots,\hat{x}_n,\hat{y}_{N(\iota)+1},\dots,\hat{y}_n)$. Utilizing the same method as in Lemma 1, one gets that there exists at least one fixed point $\tilde{x}_y^*=(x_{N(\kappa)+1}^*,\dots,x_n^*,y_{N(\iota)+1}^*,\dots,y_n^*)$ located in the interior of $\widetilde{\Omega}_\alpha$. Substituting the fixed point $\tilde{x}_y^*$ into (14a)-(14b),


one obtains that
$$
0\in-c_k x_k^*+\sum_{j=1}^{N(\kappa)}(a_{kj}^R+b_{kj}^R)[\mu_j,\omega_j]+\sum_{j=N(\kappa)+1}^{n}(a_{kj}^R+b_{kj}^R)f_j^R(x_j^*)
-\sum_{j=1}^{N(\iota)}(a_{kj}^I+b_{kj}^I)[\bar{\mu}_j,\bar{\omega}_j]-\sum_{j=N(\iota)+1}^{n}(a_{kj}^I+b_{kj}^I)f_j^I(y_j^*)+L_k^R,\quad k=N(\kappa)+1,\dots,n,\tag{15a}
$$
$$
0\in-c_k y_k^*+\sum_{j=1}^{N(\iota)}(a_{kj}^R+b_{kj}^R)[\bar{\mu}_j,\bar{\omega}_j]+\sum_{j=N(\iota)+1}^{n}(a_{kj}^R+b_{kj}^R)f_j^I(y_j^*)
+\sum_{j=1}^{N(\kappa)}(a_{kj}^I+b_{kj}^I)[\mu_j,\omega_j]+\sum_{j=N(\kappa)+1}^{n}(a_{kj}^I+b_{kj}^I)f_j^R(x_j^*)+L_k^I,\quad k=N(\iota)+1,\dots,n.\tag{15b}
$$
On the other hand, computation reveals that for $k=1,2,\dots,N(\kappa)$,
$$
-c_k p_k+(a_{kk}^R+b_{kk}^R)\mu_k+\sum_{j=1,j\ne k}^{N(\kappa)}\min\{(a_{kj}^R+b_{kj}^R)\mu_j,(a_{kj}^R+b_{kj}^R)\omega_j\}+\sum_{j=N(\kappa)+1}^{n}(a_{kj}^R+b_{kj}^R)f_j^R(x_j^*)
$$
$$
-\sum_{j=1}^{N(\iota)}\max\{(a_{kj}^I+b_{kj}^I)\bar{\mu}_j,(a_{kj}^I+b_{kj}^I)\bar{\omega}_j\}-\sum_{j=N(\iota)+1}^{n}(a_{kj}^I+b_{kj}^I)f_j^I(y_j^*)+L_k^R<\widetilde{\overline{F}}_k(r_k)<0
$$
and
$$
-c_k p_k+(a_{kk}^R+b_{kk}^R)\omega_k+\sum_{j=1,j\ne k}^{N(\kappa)}\max\{(a_{kj}^R+b_{kj}^R)\mu_j,(a_{kj}^R+b_{kj}^R)\omega_j\}+\sum_{j=N(\kappa)+1}^{n}(a_{kj}^R+b_{kj}^R)f_j^R(x_j^*)
$$
$$
-\sum_{j=1}^{N(\iota)}\min\{(a_{kj}^I+b_{kj}^I)\bar{\mu}_j,(a_{kj}^I+b_{kj}^I)\bar{\omega}_j\}-\sum_{j=N(\iota)+1}^{n}(a_{kj}^I+b_{kj}^I)f_j^I(y_j^*)+L_k^R\ge\widetilde{\underline{F}}{}_k^{+}(p_k)>0,
$$
which immediately imply that
$$
0\in-c_k p_k+\sum_{j=1}^{N(\kappa)}(a_{kj}^R+b_{kj}^R)[\mu_j,\omega_j]+\sum_{j=N(\kappa)+1}^{n}(a_{kj}^R+b_{kj}^R)f_j^R(x_j^*)
-\sum_{j=1}^{N(\iota)}(a_{kj}^I+b_{kj}^I)[\bar{\mu}_j,\bar{\omega}_j]-\sum_{j=N(\iota)+1}^{n}(a_{kj}^I+b_{kj}^I)f_j^I(y_j^*)+L_k^R.\tag{16}
$$
Similar analysis shows that for $k=1,2,\dots,N(\iota)$, one also has
$$
0\in-c_k\bar{p}_k+\sum_{j=1}^{N(\iota)}(a_{kj}^R+b_{kj}^R)[\bar{\mu}_j,\bar{\omega}_j]+\sum_{j=N(\iota)+1}^{n}(a_{kj}^R+b_{kj}^R)f_j^I(y_j^*)
+\sum_{j=1}^{N(\kappa)}(a_{kj}^I+b_{kj}^I)[\mu_j,\omega_j]+\sum_{j=N(\kappa)+1}^{n}(a_{kj}^I+b_{kj}^I)f_j^R(x_j^*)+L_k^I.\tag{17}
$$


Therefore, according to Definition 2, it follows from (15a), (15b), (16) and (17) that $x_y^*=(p_1,\dots,p_{N(\kappa)},x_{N(\kappa)+1}^*,\dots,x_n^*,\bar{p}_1,\dots,\bar{p}_{N(\iota)},y_{N(\iota)+1}^*,\dots,y_n^*)^T\in\mathbb{R}^{2n}$ is an equilibrium point of the system (2a)-(2b). In terms of the definition of $\widetilde{\Omega}_\alpha$, there are in total $4^{2n-(N(\kappa)+N(\iota))}$ such disjoint regions in $\mathbb{C}^n$; i.e., there exist at least $4^{2n-(N(\kappa)+N(\iota))}$ equilibrium points of this form for the system (2a)-(2b). Moreover, it is easy to see that for any point $x_y=(x_1,\dots,x_n,y_1,\dots,y_n)^T\in co(\Omega_\alpha)$, there are $C_{2n}^{N(\kappa)+N(\iota)}$ ways to choose the $N(\kappa)+N(\iota)$ components to be discontinuity points with $x_k=p_k$, $k\in\kappa$, and $y_\varrho=\bar{p}_\varrho$, $\varrho\in\iota$. It follows that there exist at least
$$
\sum_{N(\kappa)+N(\iota)=1}^{2n}C_{2n}^{N(\kappa)+N(\iota)}\,4^{2n-(N(\kappa)+N(\iota))}
$$
equilibrium points for the system (2a)-(2b) (or equivalently, network (1)). The proof is then completed.

From Lemma 1 and Lemma 2, one immediately gets the following result, which guarantees the existence of multiple equilibrium points for the network (1).

Theorem 1: Suppose that the condition (8) holds; then there exist at least $5^{2n}$ equilibrium points for the complex-valued neural network (1).

Proof: It follows from Lemma 1 and Lemma 2 that there are at least $\sum_{k=1}^{2n}C_{2n}^{k}4^{2n-k}+4^{2n}=5^{2n}$ equilibrium points for the complex-valued neural network (1). The proof is complete.

B. Stability analysis of the equilibrium points

In this section, the local dynamics of the network (1) around the equilibrium points are further analyzed.

Lemma 3: The complex-valued neural network (1) has at least $5^{2n}$ equilibrium points if, for $k=1,2,\dots,n$,
$$
\overline{F}_k(r_k)<0,\quad\underline{F}_k(s_k)>0,\quad\underline{F}_k^{+}(p_k)>0;\qquad
\overline{G}_k(\bar{r}_k)<0,\quad\underline{G}_k(\bar{s}_k)>0,\quad\underline{G}_k^{+}(\bar{p}_k)>0.\tag{18}
$$

Moreover, the region Λα is positive invariant. Proof: It follows from (13) and (18) that condition (8) holds, which, according to Theorem 1, assures the existence of at least 52n equilibrium points for the network (1). Next, we shall prove that set Λα is positive invariant. Without loss of generality, suppose that the initial condition for network (1) is given by uk (s) = xk (s) + iyk (s) = φ ek (s) + iφ bk (s), s ∈ [−τ, 0]

where k = 1, 2, . . . , n; xk (s) = φ ek (s) ∈ (sk , pk ] and yk (s) = φ bk (s) ∈ (sk , pk ]. Then one can conclude that for all t > 0, the solution xk (t) + iyk (t) of network (1) satisfies sk < xk (t) ≤ pk ,

sk < yk (t) ≤ pk ;

k = 1, 2, . . . , n.

(19)

We prove (19) by contradiction. Suppose that (19) does not hold. Then there exist an index $k_1\in\{1,2,\ldots,n\}$ and a time point $t^*>0$ such that for all $k=1,2,\ldots,n$,

$$s_k < x_k(t)\le p_k,\qquad \bar s_k < y_k(t)\le \bar p_k;\qquad t\in[0,t^*), \qquad (20)$$

and one of the following cases holds:

$$x_{k_1}(t^*) = p_{k_1}\ \text{and}\ \dot x_{k_1}(t^*) > 0, \qquad (21a)$$
$$x_{k_1}(t^*) = s_{k_1}\ \text{and}\ \dot x_{k_1}(t^*) \le 0, \qquad (21b)$$
$$y_{k_1}(t^*) = \bar p_{k_1}\ \text{and}\ \dot y_{k_1}(t^*) > 0, \qquad (21c)$$
$$y_{k_1}(t^*) = \bar s_{k_1}\ \text{and}\ \dot y_{k_1}(t^*) \le 0. \qquad (21d)$$

If (21a) is valid, it follows from (2a), Assumption 1 and (18) that

$$\begin{aligned}
\dot x_{k_1}(t^*) ={}& -c_{k_1}x_{k_1}(t^*) + \sum_{j=1}^n\big(a^R_{k_1j}f^R_j(x_j(t^*)) - a^I_{k_1j}f^I_j(y_j(t^*))\big) + L^R_{k_1}\\
&+ \sum_{j=1}^n\big(b^R_{k_1j}f^R_j(x_j(t^*-\tau_{k_1j}(t^*))) - b^I_{k_1j}f^I_j(y_j(t^*-\tau_{k_1j}(t^*)))\big)\\
\le{}& -c_{k_1}p_{k_1} + a^R_{k_1k_1}f^R_{k_1}(p_{k_1}) + \sum_{j=1,\,j\ne k_1}^n a^R_{k_1j}f^R_j(x_j(t^*)) - \sum_{j=1}^n a^I_{k_1j}f^I_j(y_j(t^*)) + L^R_{k_1}\\
&+ \sum_{j=1}^n \max\{b^R_{k_1j}\mu_j,\,b^R_{k_1j}\omega_j\} - \sum_{j=1}^n \min\{b^I_{k_1j}\bar\mu_j,\,b^I_{k_1j}\bar\omega_j\}\\
<{}& -c_{k_1}r_{k_1} + a^R_{k_1k_1}f^R_{k_1}(r_{k_1}) + \sum_{j=1,\,j\ne k_1}^n a^R_{k_1j}f^R_j(x_j(t^*)) - \sum_{j=1}^n a^I_{k_1j}f^I_j(y_j(t^*)) + L^R_{k_1}\\
&+ \sum_{j=1}^n \max\{b^R_{k_1j}\mu_j,\,b^R_{k_1j}\omega_j\} - \sum_{j=1}^n \min\{b^I_{k_1j}\bar\mu_j,\,b^I_{k_1j}\bar\omega_j\}\\
\le{}& \bar F_{k_1}(r_{k_1}) < 0,
\end{aligned}$$

which is in contradiction with $\dot x_{k_1}(t^*)>0$ presented in (21a). If (21d) holds, it follows from (2b), Assumption 1 and (18) that

$$\begin{aligned}
\dot y_{k_1}(t^*) ={}& -c_{k_1}y_{k_1}(t^*) + \sum_{j=1}^n\big(a^R_{k_1j}f^I_j(y_j(t^*)) + a^I_{k_1j}f^R_j(x_j(t^*))\big) + L^I_{k_1}\\
&+ \sum_{j=1}^n\big(b^R_{k_1j}f^I_j(y_j(t^*-\tau_{k_1j}(t^*))) + b^I_{k_1j}f^R_j(x_j(t^*-\tau_{k_1j}(t^*)))\big)\\
\ge{}& -c_{k_1}\bar s_{k_1} + a^R_{k_1k_1}f^I_{k_1}(\bar s_{k_1}) + \sum_{j=1,\,j\ne k_1}^n a^R_{k_1j}f^I_j(y_j(t^*)) + \sum_{j=1}^n a^I_{k_1j}f^R_j(x_j(t^*)) + L^I_{k_1}\\
&+ \sum_{j=1}^n \min\{b^R_{k_1j}\bar\mu_j,\,b^R_{k_1j}\bar\omega_j\} + \sum_{j=1}^n \min\{b^I_{k_1j}\mu_j,\,b^I_{k_1j}\omega_j\}\\
\ge{}& \underline G_{k_1}(\bar s_{k_1}) > 0,
\end{aligned}$$

which is in contradiction with $\dot y_{k_1}(t^*)\le 0$ shown in (21d). Similar discussions show that neither (21b) nor (21c) can occur. Up to now, we have demonstrated that the set

$$\{(x_1,x_2,\ldots,x_n,y_1,y_2,\ldots,y_n)\in\mathbb{R}^{2n}\;|\;x_k\in\Omega^{III}_k,\ y_k\in\bar\Omega^{III}_k\ \text{for}\ k=1,2,\ldots,n\}$$

is positively invariant. By similar discussions, every set $\Lambda^\alpha$ is also positively invariant. The proof is complete.

Theorem 2: For any given $\Lambda^\alpha$, network (1) has an equilibrium point in $\Lambda^\alpha$ which is locally stable, provided that condition (18) holds and there exist positive numbers $\xi_k,\zeta_k$ ($k=1,2,\ldots,n$) such that

$$(-c_k+\widetilde\lambda_{kk})\xi_k^{-1} - \sum_{j=1,\,j\ne k}^n |a^R_{kj}|\bar q_j\xi_j^{-1} - \sum_{j=1}^n |a^I_{kj}|\bar\beta_j\zeta_j^{-1} - \sum_{j=1}^n |b^R_{kj}|\bar q_j\xi_j^{-1} - \sum_{j=1}^n |b^I_{kj}|\bar\beta_j\zeta_j^{-1} < 0, \qquad (22a)$$

$$(-c_k+\widehat\lambda_{kk})\zeta_k^{-1} - \sum_{j=1,\,j\ne k}^n |a^R_{kj}|\bar\beta_j\zeta_j^{-1} - \sum_{j=1}^n |a^I_{kj}|\bar q_j\xi_j^{-1} - \sum_{j=1}^n |b^R_{kj}|\bar\beta_j\zeta_j^{-1} - \sum_{j=1}^n |b^I_{kj}|\bar q_j\xi_j^{-1} < 0; \qquad (22b)$$

where $k=1,2,\ldots,n$, $\widetilde\lambda_{kk} = \max\{a^R_{kk}\underline q_k,\,a^R_{kk}\bar q_k\}$ and $\widehat\lambda_{kk} = \max\{a^R_{kk}\underline\beta_k,\,a^R_{kk}\bar\beta_k\}$.

Proof: For any given $\Lambda^\alpha$, it follows from Lemma 3 that network (1) has an equilibrium point $u^* = (u_1^*,u_2^*,\ldots,u_n^*)^T$ in $\Lambda^\alpha$ with $u_k^* = x_k^* + i y_k^*$ for $k=1,2,\ldots,n$. Let $x_y(t) = (x_1(t),\ldots,x_n(t),y_1(t),\ldots,y_n(t))^T$ be a solution of system (2a)-(2b) with initial condition $x_y(s)\in\Lambda^\alpha$ ($s\in[-\tau,0]$); owing to the positive invariance of $\Lambda^\alpha$, $x_y(t)$ stays in $\Lambda^\alpha$ for all $t\ge 0$. By further considering Assumption 1, it follows that if the $k$th component of the elements of $\Lambda^\alpha$ belongs to $\Omega^{I}_k$ or $\Omega^{IV}_k$, then $f^R_k(x_k(t)) = f^R_k(x_k^*) \equiv \mu_k$ or $\omega_k$; similarly, $f^I_\varrho(y_\varrho(t)) = f^I_\varrho(y_\varrho^*) \equiv \bar\mu_\varrho$ or $\bar\omega_\varrho$ if the $(n+\varrho)$th component belongs to $\bar\Omega^{I}_\varrho$ or $\bar\Omega^{IV}_\varrho$. Hence, by setting $\widetilde e_k(t) = x_k(t)-x_k^*$ and $\widehat e_k(t) = y_k(t)-y_k^*$, system (2a)-(2b) can be transformed into the following form:

$$\begin{aligned}
\dot{\widetilde e}_k(t) ={}& -c_k\widetilde e_k(t) + \sum_{j\in N_1} a^R_{kj}\big(f^R_j(x_j(t))-f^R_j(x_j^*)\big) - \sum_{j\in N_2} a^I_{kj}\big(f^I_j(y_j(t))-f^I_j(y_j^*)\big)\\
&+ \sum_{j\in N_1} b^R_{kj}\big(f^R_j(x_j(t-\tau_{kj}(t)))-f^R_j(x_j^*)\big) - \sum_{j\in N_2} b^I_{kj}\big(f^I_j(y_j(t-\tau_{kj}(t)))-f^I_j(y_j^*)\big),
\end{aligned}\qquad (23a)$$

$$\begin{aligned}
\dot{\widehat e}_k(t) ={}& -c_k\widehat e_k(t) + \sum_{j\in N_2} a^R_{kj}\big(f^I_j(y_j(t))-f^I_j(y_j^*)\big) + \sum_{j\in N_1} a^I_{kj}\big(f^R_j(x_j(t))-f^R_j(x_j^*)\big)\\
&+ \sum_{j\in N_2} b^R_{kj}\big(f^I_j(y_j(t-\tau_{kj}(t)))-f^I_j(y_j^*)\big) + \sum_{j\in N_1} b^I_{kj}\big(f^R_j(x_j(t-\tau_{kj}(t)))-f^R_j(x_j^*)\big),
\end{aligned}\qquad (23b)$$

where $k=1,2,\ldots,n$, $N_1 = \{k\,|\,x_k(t)\in\Omega^{III}_k,\ k=1,2,\ldots,n\}$ and $N_2 = \{k\,|\,y_k(t)\in\bar\Omega^{III}_k,\ k=1,2,\ldots,n\}$. It follows from (22a)-(22b) that there exists a positive number $\varepsilon$ such that the following two inequalities also hold:

$$(-c_k+\varepsilon+\widetilde\lambda_{kk})\xi_k^{-1} - \sum_{j=1,\,j\ne k}^n|a^R_{kj}|\bar q_j\xi_j^{-1} - \sum_{j=1}^n|a^I_{kj}|\bar\beta_j\zeta_j^{-1} - e^{\varepsilon\tau}\Big(\sum_{j=1}^n|b^R_{kj}|\bar q_j\xi_j^{-1} + \sum_{j=1}^n|b^I_{kj}|\bar\beta_j\zeta_j^{-1}\Big) < 0, \qquad (24a)$$

$$(-c_k+\varepsilon+\widehat\lambda_{kk})\zeta_k^{-1} - \sum_{j=1,\,j\ne k}^n|a^R_{kj}|\bar\beta_j\zeta_j^{-1} - \sum_{j=1}^n|a^I_{kj}|\bar q_j\xi_j^{-1} - e^{\varepsilon\tau}\Big(\sum_{j=1}^n|b^R_{kj}|\bar\beta_j\zeta_j^{-1} + \sum_{j=1}^n|b^I_{kj}|\bar q_j\xi_j^{-1}\Big) < 0. \qquad (24b)$$
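The existence of such an $\varepsilon$ is a continuity argument: at $\varepsilon=0$ the left-hand sides of (24a)-(24b) coincide with those of (22a)-(22b) and depend continuously on $\varepsilon$. A minimal numerical sketch of this step, with illustrative scalar values standing in for the sums (none of these numbers are taken from the paper):

```python
import math

# Continuity argument behind (24a): if the epsilon-free inequality (22a)
# holds strictly, a small epsilon > 0 keeps the left-hand side negative even
# after the delay terms are inflated by exp(epsilon * tau).
def lhs(eps, c=2.0, lam=-10.0, xi=7.0, a_term=-0.1, b_term=-0.15, tau=1.3):
    # a_term / b_term play the roles of the (negative-coefficient) sums
    return (-c + eps + lam) / xi - a_term - math.exp(eps * tau) * b_term

assert lhs(0.0) < 0.0  # this is exactly condition (22a) for the toy data

# scan a grid for the largest epsilon that still satisfies (24a)
eps_ok = max(e / 1000 for e in range(0, 2001) if lhs(e / 1000) < 0.0)
```

Since the left-hand side is continuous and strictly negative at $\varepsilon=0$, the scan always returns a strictly positive `eps_ok` for data satisfying the toy analogue of (22a).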

Let $\widetilde V_k(t) = e^{\varepsilon t}\widetilde e_k(t)$, $\widehat V_k(t) = e^{\varepsilon t}\widehat e_k(t)$ and $XY(t) = (X(t)^T, Y(t)^T)^T$ with $X(t)=(X_1(t),\ldots,X_n(t))^T$ and $Y(t)=(Y_1(t),\ldots,Y_n(t))^T$, where, for $k=1,2,\ldots,n$,

$$X_k(t)=\begin{cases}\widetilde V_k(t), & \text{if } k\in N_1,\\ 0, & \text{else,}\end{cases}\qquad Y_k(t)=\begin{cases}\widehat V_k(t), & \text{if } k\in N_2,\\ 0, & \text{else.}\end{cases}$$

Define

$$N(t) = \sup_{t-\tau\le s\le t}\|XY(s)\|_\xi,\qquad t\ge 0, \qquad (25)$$

where $\|XY(s)\|_\xi = \max\{\|X(s)\|_\xi, \|Y(s)\|_\zeta\}$ with $\|X(s)\|_\xi = \max_k\{\xi_k|X_k(s)|\}$ and $\|Y(s)\|_\zeta = \max_k\{\zeta_k|Y_k(s)|\}$. We claim that $N(t)$ is nonincreasing on its domain $[0,+\infty)$; the detailed reasons are given below. From the definition of $N(t)$, it is easy to see that $\|XY(t)\|_\xi \le N(t)$ for all $t\ge 0$. If at some time point $t$ one has $\|XY(t)\|_\xi = N(t)$, then one of the following two cases occurs:

1) there exists an index $\widetilde k = \widetilde k(t)\in N_1$ depending on $t$ satisfying
$$\|XY(t)\|_\xi = \|X(t)\|_\xi = \xi_{\widetilde k}\,|\widetilde V_{\widetilde k}(t)|; \qquad (26)$$

2) there exists an index $\bar k = \bar k(t)\in N_2$ depending on $t$ such that
$$\|XY(t)\|_\xi = \|Y(t)\|_\zeta = \zeta_{\bar k}\,|\widehat V_{\bar k}(t)|. \qquad (27)$$

For the first case, it follows from condition (24a) and equation (23a) that

$$\begin{aligned}
\xi_{\widetilde k}^{-1} D^-\|XY(t)\|_\xi ={}& D^-|\widetilde V_{\widetilde k}(t)|\\
={}& \varepsilon|\widetilde V_{\widetilde k}(t)| + \mathrm{sign}(\widetilde e_{\widetilde k}(t))\,e^{\varepsilon t}\Big[-c_{\widetilde k}\widetilde e_{\widetilde k}(t) + \sum_{j\in N_1} a^R_{\widetilde kj}\big(f^R_j(x_j(t))-f^R_j(x_j^*)\big)\\
&- \sum_{j\in N_2} a^I_{\widetilde kj}\big(f^I_j(y_j(t))-f^I_j(y_j^*)\big) + \sum_{j\in N_1} b^R_{\widetilde kj}\big(f^R_j(x_j(t-\tau_{\widetilde kj}(t)))-f^R_j(x_j^*)\big)\\
&- \sum_{j\in N_2} b^I_{\widetilde kj}\big(f^I_j(y_j(t-\tau_{\widetilde kj}(t)))-f^I_j(y_j^*)\big)\Big]\\
\le{}& (-c_{\widetilde k}+\varepsilon+\widetilde\lambda_{\widetilde k\widetilde k})|\widetilde V_{\widetilde k}(t)| - \sum_{j\in N_1,\,j\ne\widetilde k}|a^R_{\widetilde kj}|\bar q_j|\widetilde V_j(t)| - \sum_{j\in N_2}|a^I_{\widetilde kj}|\bar\beta_j|\widehat V_j(t)|\\
&- \sum_{j\in N_1}|b^R_{\widetilde kj}|\bar q_j e^{\varepsilon\tau_{\widetilde kj}(t)}|\widetilde V_j(t-\tau_{\widetilde kj}(t))| - \sum_{j\in N_2}|b^I_{\widetilde kj}|\bar\beta_j e^{\varepsilon\tau_{\widetilde kj}(t)}|\widehat V_j(t-\tau_{\widetilde kj}(t))|\\
\le{}& (-c_{\widetilde k}+\varepsilon+\widetilde\lambda_{\widetilde k\widetilde k})\xi_{\widetilde k}^{-1}\|XY(t)\|_\xi - \sum_{j\in N_1,\,j\ne\widetilde k}|a^R_{\widetilde kj}|\bar q_j\xi_j^{-1}\|X(t)\|_\xi - \sum_{j\in N_2}|a^I_{\widetilde kj}|\bar\beta_j\zeta_j^{-1}\|Y(t)\|_\zeta\\
&- \sum_{j\in N_1}|b^R_{\widetilde kj}|\bar q_j e^{\varepsilon\tau}\xi_j^{-1}\|X(t-\tau_{\widetilde kj}(t))\|_\xi - \sum_{j\in N_2}|b^I_{\widetilde kj}|\bar\beta_j e^{\varepsilon\tau}\zeta_j^{-1}\|Y(t-\tau_{\widetilde kj}(t))\|_\zeta\\
\le{}& \Big[(-c_{\widetilde k}+\varepsilon+\widetilde\lambda_{\widetilde k\widetilde k})\xi_{\widetilde k}^{-1} - \sum_{j\in N_1,\,j\ne\widetilde k}|a^R_{\widetilde kj}|\bar q_j\xi_j^{-1} - \sum_{j\in N_2}|a^I_{\widetilde kj}|\bar\beta_j\zeta_j^{-1}\\
&- \sum_{j\in N_1}|b^R_{\widetilde kj}|\bar q_j e^{\varepsilon\tau}\xi_j^{-1} - \sum_{j\in N_2}|b^I_{\widetilde kj}|\bar\beta_j e^{\varepsilon\tau}\zeta_j^{-1}\Big]\, N(t),
\end{aligned}\qquad (28)$$

where "$D^-$" denotes the upper-left Dini derivative operator. For the second case, one similarly obtains from (24b) and (23b) that

$$\begin{aligned}
\zeta_{\bar k}^{-1} D^-\|XY(t)\|_\xi ={}& D^-|\widehat V_{\bar k}(t)|\\
\le{}& (-c_{\bar k}+\varepsilon+\widehat\lambda_{\bar k\bar k})|\widehat V_{\bar k}(t)| - \sum_{j\in N_2,\,j\ne\bar k}|a^R_{\bar kj}|\bar\beta_j|\widehat V_j(t)| - \sum_{j\in N_1}|a^I_{\bar kj}|\bar q_j|\widetilde V_j(t)|\\
&- \sum_{j\in N_2}|b^R_{\bar kj}|\bar\beta_j e^{\varepsilon\tau_{\bar kj}(t)}|\widehat V_j(t-\tau_{\bar kj}(t))| - \sum_{j\in N_1}|b^I_{\bar kj}|\bar q_j e^{\varepsilon\tau_{\bar kj}(t)}|\widetilde V_j(t-\tau_{\bar kj}(t))|\\
\le{}& \Big[(-c_{\bar k}+\varepsilon+\widehat\lambda_{\bar k\bar k})\zeta_{\bar k}^{-1} - \sum_{j\in N_2,\,j\ne\bar k}|a^R_{\bar kj}|\bar\beta_j\zeta_j^{-1} - \sum_{j\in N_1}|a^I_{\bar kj}|\bar q_j\xi_j^{-1}\\
&- \sum_{j\in N_2}|b^R_{\bar kj}|\bar\beta_j e^{\varepsilon\tau}\zeta_j^{-1} - \sum_{j\in N_1}|b^I_{\bar kj}|\bar q_j e^{\varepsilon\tau}\xi_j^{-1}\Big]\, N(t).
\end{aligned}\qquad (29)$$

The above two inequalities imply that, whenever $N(t)\ne 0$, one has $D^-\|XY(t)\|_\xi < 0$; that is, $\|XY(t)\|_\xi$ strictly decreases as time passes through $t$. Combining this fact with the definition of $N(t)$, one concludes that $N(t)$ is nonincreasing. Thus there exists a scalar $\varsigma>0$ such that $N(t)<\varsigma$ for all $t\ge 0$, which immediately yields

$$|\widetilde e_k(t)| < \widetilde\varsigma\, e^{-\varepsilon t},\ k\in N_1;\qquad |\widehat e_k(t)| < \widehat\varsigma\, e^{-\varepsilon t},\ k\in N_2, \qquad (30)$$

where $t\ge 0$, $\widetilde\varsigma = \varsigma\max_{k\in N_1}\{\xi_k^{-1}\}$ and $\widehat\varsigma = \varsigma\max_{k\in N_2}\{\zeta_k^{-1}\}$. The inequalities in (30) further imply that for every $\varepsilon_1>0$ there exists $\delta_1>0$ such that, for all $k=1,2,\ldots,n$, the inequality

$$\sum_{j\in N_1}|a^R_{kj}|\big|f^R_j(x_j(t))-f^R_j(x_j^*)\big| + \sum_{j\in N_1}|b^R_{kj}|\big|f^R_j(x_j(t-\tau_{kj}(t)))-f^R_j(x_j^*)\big| + \sum_{j\in N_2}|a^I_{kj}|\big|f^I_j(y_j(t))-f^I_j(y_j^*)\big| + \sum_{j\in N_2}|b^I_{kj}|\big|f^I_j(y_j(t-\tau_{kj}(t)))-f^I_j(y_j^*)\big| < \varepsilon_1$$

holds for $t\ge 0$, provided the initial distance restrictions $|x_j(s)-x_j^*|<\delta_1$ for $j\in N_1$ and $|y_j(s)-y_j^*|<\delta_1$ for $j\in N_2$ are satisfied, where $s\in[-\tau,0]$. Combining this with (23a), one gets

$$D^-|\widetilde e_k(t)| \le -c_k|\widetilde e_k(t)| + \varepsilon_1,\qquad t\ge 0,\ k\notin N_1,$$

which immediately assures the validity of the following inequalities:

$$|\widetilde e_k(t)| \le e^{-c_k t}|\widetilde e_k(0)| + \frac{\varepsilon_1}{c_k}\big(1-e^{-c_k t}\big),\qquad t\ge 0,\ k\notin N_1. \qquad (31)$$

Similarly, one obtains that for every $\varepsilon_2>0$ there exists $\delta_2>0$ such that

$$|\widehat e_k(t)| \le e^{-c_k t}|\widehat e_k(0)| + \frac{\varepsilon_2}{c_k}\big(1-e^{-c_k t}\big),\qquad t\ge 0,\ k\notin N_2 \qquad (32)$$

holds under the condition that $|x_j(s)-x_j^*|<\delta_2$ for $j\in N_1$ and $|y_j(s)-y_j^*|<\delta_2$ for $j\in N_2$, where $s\in[-\tau,0]$. It then follows from (30)-(32) that the equilibrium point $x_y^*$ located in $\Lambda^\alpha$ is locally asymptotically stable. The proof is complete.

C. Instability of the equilibrium points

In this section, the instability issue is tackled for the multiple equilibrium points of network (1).
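Before turning to instability, note that the estimate (31) used above is the standard comparison bound for $D^-|\widetilde e_k(t)| \le -c_k|\widetilde e_k(t)| + \varepsilon_1$. A quick numerical check of this bound on the worst-case scalar equation $\dot e = -c\,e + \varepsilon_1$ (a sketch; all values are illustrative, not from the paper):

```python
import math

# Forward-Euler simulation of the worst case e' = -c e + eps1; the trajectory
# must stay below the closed-form bound (31) at every step.
c, eps1, e0, dt, T = 2.0, 0.3, 5.0, 1e-4, 4.0

e, t = e0, 0.0
while t < T:
    e += dt * (-c * e + eps1)
    t += dt
    bound = math.exp(-c * t) * e0 + (eps1 / c) * (1.0 - math.exp(-c * t))
    assert e <= bound + 1e-6  # the simulated trajectory never exceeds (31)
```

As $t$ grows, both the trajectory and the bound settle near $\varepsilon_1/c$, which is why making $\varepsilon_1$ (hence $\delta_1$) small keeps the off-set components close to the equilibrium.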

Theorem 3: For any given $\Theta^\alpha$, network (1) has an equilibrium point in $\Theta^\alpha$ which is unstable, provided that condition (18) holds and there exist positive numbers $\xi_k,\zeta_k$ ($k=1,2,\ldots,n$) such that

$$(-c_k+\widetilde S_{kk})\xi_k^{-1} - \sum_{j=1,\,j\ne k}^n|a^R_{kj}|m^q_j\xi_j^{-1} - \sum_{j=1}^n|b^R_{kj}|m^q_j\xi_j^{-1} - \sum_{j=1}^n\big(|a^I_{kj}|+|b^I_{kj}|\big)\beta^\sigma_j\zeta_j^{-1} > 0, \qquad (33a)$$

$$(-c_k+\widehat S_{kk})\zeta_k^{-1} - \sum_{j=1,\,j\ne k}^n|a^R_{kj}|\beta^\sigma_j\zeta_j^{-1} - \sum_{j=1}^n|b^R_{kj}|\beta^\sigma_j\zeta_j^{-1} - \sum_{j=1}^n\big(|a^I_{kj}|+|b^I_{kj}|\big)m^q_j\xi_j^{-1} > 0; \qquad (33b)$$

$$(-c_k+\widetilde\lambda_{kk})\xi_k^{-1} + \sum_{j=1,\,j\ne k}^n|a^R_{kj}|m^q_j\xi_j^{-1} + \sum_{j=1}^n|b^R_{kj}|m^q_j\xi_j^{-1} + \sum_{j=1}^n\big(|a^I_{kj}|+|b^I_{kj}|\big)\beta^\sigma_j\zeta_j^{-1} < 0, \qquad (33c)$$

$$(-c_k+\widehat\lambda_{kk})\zeta_k^{-1} + \sum_{j=1,\,j\ne k}^n|a^R_{kj}|\beta^\sigma_j\zeta_j^{-1} + \sum_{j=1}^n|b^R_{kj}|\beta^\sigma_j\zeta_j^{-1} + \sum_{j=1}^n\big(|a^I_{kj}|+|b^I_{kj}|\big)m^q_j\xi_j^{-1} < 0; \qquad (33d)$$

where $k=1,2,\ldots,n$; $\widetilde S_{kk} = \min\{a^R_{kk}\underline m_k,\,a^R_{kk}\bar m_k\}$, $\widehat S_{kk} = \min\{a^R_{kk}\underline\sigma_k,\,a^R_{kk}\bar\sigma_k\}$; $\widetilde\lambda_{kk}$ and $\widehat\lambda_{kk}$ are the same as defined in Theorem 2; $m^q_k = \max\{\bar m_k, -\underline q_k\}$ and $\beta^\sigma_k = \max\{-\underline\beta_k, \bar\sigma_k\}$.

Proof: For any given $\Theta^\alpha$, under condition (18) it follows from Lemma 3 that system (2a)-(2b) has at least one equilibrium point $x_y^* = (x_1^*,\ldots,x_n^*,y_1^*,\ldots,y_n^*)^T$ in the interior of $\Theta^\alpha$. Let $x_y(t) = (x_1(t),\ldots,x_n(t),y_1(t),\ldots,y_n(t))^T$ be a solution of system (2a)-(2b) with initial condition $x_y(s)\in\Theta^\alpha$ for $s\in[-\tau,0]$, chosen close enough to $x_y^*$. Without loss of generality, suppose that $x_y(t)$ stays in $\Theta^\alpha$ for all $t\ge 0$ (if this does not hold, then clearly $x_y^*$ is not stable and the conclusion to be proved is valid). By setting $\widetilde e_k(t) = x_k(t)-x_k^*$ and $\widehat e_k(t) = y_k(t)-y_k^*$ ($k=1,2,\ldots,n$), with a discussion similar to that in Theorem 2, system (2a)-(2b) can be transformed into the following form:

$$\begin{aligned}
\dot{\widetilde e}_k(t) ={}& -c_k\widetilde e_k(t) + \sum_{j\in N_1\cup N_3} a^R_{kj}\big(f^R_j(x_j(t))-f^R_j(x_j^*)\big) - \sum_{j\in N_2\cup N_4} a^I_{kj}\big(f^I_j(y_j(t))-f^I_j(y_j^*)\big)\\
&+ \sum_{j\in N_1\cup N_3} b^R_{kj}\big(f^R_j(x_j(t-\tau_{kj}(t)))-f^R_j(x_j^*)\big) - \sum_{j\in N_2\cup N_4} b^I_{kj}\big(f^I_j(y_j(t-\tau_{kj}(t)))-f^I_j(y_j^*)\big),
\end{aligned}\qquad (34a)$$

$$\begin{aligned}
\dot{\widehat e}_k(t) ={}& -c_k\widehat e_k(t) + \sum_{j\in N_2\cup N_4} a^R_{kj}\big(f^I_j(y_j(t))-f^I_j(y_j^*)\big) + \sum_{j\in N_1\cup N_3} a^I_{kj}\big(f^R_j(x_j(t))-f^R_j(x_j^*)\big)\\
&+ \sum_{j\in N_2\cup N_4} b^R_{kj}\big(f^I_j(y_j(t-\tau_{kj}(t)))-f^I_j(y_j^*)\big) + \sum_{j\in N_1\cup N_3} b^I_{kj}\big(f^R_j(x_j(t-\tau_{kj}(t)))-f^R_j(x_j^*)\big);
\end{aligned}\qquad (34b)$$

where $N_3 = \{k\,|\,x_k(t)\in\Omega^{II}_k,\ k=1,2,\ldots,n\}$, $N_4 = \{k\,|\,y_k(t)\in\bar\Omega^{II}_k,\ k=1,2,\ldots,n\}$, and $N_1$, $N_2$ are the same as defined in Theorem 2. Define

$$N_l(t) = \sup_{t-\tau\le s\le t}\|XY^{(l)}(s)\|_\xi,\qquad l=1,2,3\ \text{and}\ t\ge 0, \qquad (35)$$

where $XY^{(l)}(s) = ((X^{(l)}(s))^T,(Y^{(l)}(s))^T)^T$ with $X^{(l)}(s)=(X^{(l)}_1(s),\ldots,X^{(l)}_n(s))^T$ and $Y^{(l)}(s)=(Y^{(l)}_1(s),\ldots,Y^{(l)}_n(s))^T$, $\|XY^{(l)}(s)\|_\xi = \max\{\|X^{(l)}(s)\|_\xi,\|Y^{(l)}(s)\|_\zeta\}$ with $\|X^{(l)}(s)\|_\xi = \max_k\{\xi_k|X^{(l)}_k(s)|\}$ and $\|Y^{(l)}(s)\|_\zeta = \max_k\{\zeta_k|Y^{(l)}_k(s)|\}$, in which

$$X^{(1)}_k(t)=\begin{cases}\widetilde e_k(t), & \text{if } k\in N_1,\\0,&\text{else,}\end{cases}\qquad Y^{(1)}_k(t)=\begin{cases}\widehat e_k(t), & \text{if } k\in N_2,\\0,&\text{else,}\end{cases}$$
$$X^{(2)}_k(t)=\begin{cases}\widetilde e_k(t), & \text{if } k\in N_3,\\0,&\text{else,}\end{cases}\qquad Y^{(2)}_k(t)=\begin{cases}\widehat e_k(t), & \text{if } k\in N_4,\\0,&\text{else,}\end{cases}$$
$$X^{(3)}_k(t)=\begin{cases}\widetilde e_k(t), & \text{if } k\in N_1\cup N_3,\\0,&\text{else,}\end{cases}\qquad Y^{(3)}_k(t)=\begin{cases}\widehat e_k(t), & \text{if } k\in N_2\cup N_4,\\0,&\text{else.}\end{cases}$$

When taking the initial function of system (2a)-(2b), we further impose the restriction $N_2(0) = \|XY^{(2)}(0)\|_\xi > N_1(0) = \|XY^{(1)}(0)\|_\xi > 0$, which intrinsically implies that, along the dynamical evolution of network (1), one of the following two cases occurs (we treat them one by one in the sequel):

Case 1: $N_3(t) = N_2(t)$ for all $t\ge 0$.

Case 2: there exists $t_1>0$, the least time point, such that $N_3(t_1) = N_2(t_1) = N_1(t_1)$.

Firstly, we begin with Case 1. If at some time point $t$, $N_3(t) = N_2(t) = \|XY^{(2)}(t)\|_\xi$ (such a time point exists, for example $t=0$), then at least one of the following two cases occurs.

1) There exists an index $\widetilde k\in N_3$, depending on the time point $t$, such that $\|XY^{(2)}(t)\|_\xi = \|X^{(2)}(t)\|_\xi = \xi_{\widetilde k}|\widetilde e_{\widetilde k}(t)|$. It follows directly from (34a), by bounding the activation differences with the slope bounds of Assumption 2 ($\bar m_j$ on $\Omega^{II}_j$-type regions, $-\underline q_j$ and $-\underline\beta_j$ on $\Omega^{III}_j$-type regions), that

$$\begin{aligned}
\xi_{\widetilde k}^{-1}D^-\|X^{(2)}(t)\|_\xi ={}& D^-|\widetilde e_{\widetilde k}(t)|\\
\ge{}& (-c_{\widetilde k}+\widetilde S_{\widetilde k\widetilde k})\xi_{\widetilde k}^{-1}N_2(t) - \Big[\sum_{j\in N_3,\,j\ne\widetilde k}|a^R_{\widetilde kj}|\bar m_j\xi_j^{-1} + \sum_{j\in N_3}|b^R_{\widetilde kj}|\bar m_j\xi_j^{-1} + \sum_{j\in N_4}\big(|a^I_{\widetilde kj}|+|b^I_{\widetilde kj}|\big)\bar\sigma_j\zeta_j^{-1}\Big]N_2(t)\\
&- \Big[\sum_{j\in N_1}|a^R_{\widetilde kj}|(-\underline q_j)\xi_j^{-1} + \sum_{j\in N_1}|b^R_{\widetilde kj}|(-\underline q_j)\xi_j^{-1} + \sum_{j\in N_2}\big(|a^I_{\widetilde kj}|+|b^I_{\widetilde kj}|\big)(-\underline\beta_j)\zeta_j^{-1}\Big]N_1(t)\\
\ge{}& \Big[(-c_{\widetilde k}+\widetilde S_{\widetilde k\widetilde k})\xi_{\widetilde k}^{-1} - \sum_{j=1,\,j\ne\widetilde k}^n|a^R_{\widetilde kj}|m^q_j\xi_j^{-1} - \sum_{j=1}^n|b^R_{\widetilde kj}|m^q_j\xi_j^{-1} - \sum_{j=1}^n\big(|a^I_{\widetilde kj}|+|b^I_{\widetilde kj}|\big)\beta^\sigma_j\zeta_j^{-1}\Big]N_3(t),
\end{aligned}\qquad (36)$$

where the last step uses $N_1(t)\le N_3(t)=N_2(t)$ together with $m^q_j = \max\{\bar m_j,-\underline q_j\}$ and $\beta^\sigma_j = \max\{-\underline\beta_j,\bar\sigma_j\}$.

From condition (33a), one obtains that $D^-\|X^{(2)}(t)\|_\xi$ is positive whenever $N_3(t)\ne 0$, which implies that there exists $\epsilon_1>0$ such that $\|X^{(2)}(\eta)\|_\xi > \|X^{(2)}(t)\|_\xi$ for all $\eta\in(t,t+\epsilon_1)$.

2) There exists an index $\bar k\in N_4$, depending on the given time point $t$, such that $\|XY^{(2)}(t)\|_\xi = \|Y^{(2)}(t)\|_\zeta = \zeta_{\bar k}|\widehat e_{\bar k}(t)|$. One immediately gets from (34b), in the same way, that

$$\begin{aligned}
\zeta_{\bar k}^{-1}D^-\|Y^{(2)}(t)\|_\zeta ={}& D^-|\widehat e_{\bar k}(t)|\\
\ge{}& (-c_{\bar k}+\widehat S_{\bar k\bar k})\zeta_{\bar k}^{-1}N_2(t) - \Big[\sum_{j\in N_4,\,j\ne\bar k}|a^R_{\bar kj}|\bar\sigma_j\zeta_j^{-1} + \sum_{j\in N_4}|b^R_{\bar kj}|\bar\sigma_j\zeta_j^{-1} + \sum_{j\in N_3}\big(|a^I_{\bar kj}|+|b^I_{\bar kj}|\big)\bar m_j\xi_j^{-1}\Big]N_2(t)\\
&- \Big[\sum_{j\in N_2}|a^R_{\bar kj}|(-\underline\beta_j)\zeta_j^{-1} + \sum_{j\in N_2}|b^R_{\bar kj}|(-\underline\beta_j)\zeta_j^{-1} + \sum_{j\in N_1}\big(|a^I_{\bar kj}|+|b^I_{\bar kj}|\big)(-\underline q_j)\xi_j^{-1}\Big]N_1(t)\\
\ge{}& \Big[(-c_{\bar k}+\widehat S_{\bar k\bar k})\zeta_{\bar k}^{-1} - \sum_{j=1,\,j\ne\bar k}^n|a^R_{\bar kj}|\beta^\sigma_j\zeta_j^{-1} - \sum_{j=1}^n|b^R_{\bar kj}|\beta^\sigma_j\zeta_j^{-1} - \sum_{j=1}^n\big(|a^I_{\bar kj}|+|b^I_{\bar kj}|\big)m^q_j\xi_j^{-1}\Big]N_3(t).
\end{aligned}\qquad (37)$$

By further considering condition (33b), it is derived that $D^-\|Y^{(2)}(t)\|_\zeta$ is positive whenever $N_3(t)\ne 0$, which means that there exists $\epsilon_2>0$ such that $\|Y^{(2)}(\eta)\|_\zeta > \|Y^{(2)}(t)\|_\zeta$ for all $\eta\in(t,t+\epsilon_2)$.

Summarizing the two situations in Case 1: if at some time point $t$ one has $N_3(t) = N_2(t) = \|XY^{(2)}(t)\|_\xi$, then $N_2(\cdot)$ strictly increases when passing the time $t$; that is, there exists $\epsilon>0$ such that $N_3(\eta) = N_2(\eta) > N_2(t)$ for all $\eta\in(t,t+\epsilon)$. Together with the initial constraints $N_2(0) = \|XY^{(2)}(0)\|_\xi > N_1(0) = \|XY^{(1)}(0)\|_\xi > 0$, this directly implies that

$$N_3(t) = N_2(t) = \|XY^{(2)}(t)\|_\xi,\qquad \forall t\ge 0. \qquad (38)$$

Next, we deal with Case 2. If there exists $t_1>0$, the first time point satisfying $N_3(t_1)=N_2(t_1)=N_1(t_1)$, then from the above discussion one knows that $N_2(t_1) = \|XY^{(2)}(t_1)\|_\xi$ and $D^-\|XY^{(2)}(t_1)\|_\xi > 0$; moreover, $N_1(t_1) = \|XY^{(1)}(t_1)\|_\xi$. At this point, the following four subcases are discussed separately.

(1) There exist indexes $k_1\in N_1$ and $\widetilde k_1\in N_3$ such that $\|XY^{(1)}(t_1)\|_\xi = \|X^{(1)}(t_1)\|_\xi = \xi_{k_1}|\widetilde e_{k_1}(t_1)|$ and

$\|XY^{(2)}(t_1)\|_\xi = \|X^{(2)}(t_1)\|_\xi = \xi_{\widetilde k_1}|\widetilde e_{\widetilde k_1}(t_1)|$. It follows from (34a), with the same slope bounds as in (36), that

$$\begin{aligned}
\xi_{k_1}^{-1}D^-\|X^{(1)}(t)\|_\xi\big|_{t=t_1} ={}& D^-|\widetilde e_{k_1}(t)|\big|_{t=t_1}\\
\le{}& (-c_{k_1}+\widetilde\lambda_{k_1k_1})\xi_{k_1}^{-1}N_1(t_1) + \Big[\sum_{j\in N_1,\,j\ne k_1}|a^R_{k_1j}|(-\underline q_j)\xi_j^{-1} + \sum_{j\in N_1}|b^R_{k_1j}|(-\underline q_j)\xi_j^{-1}\\
&+ \sum_{j\in N_2}\big(|a^I_{k_1j}|+|b^I_{k_1j}|\big)(-\underline\beta_j)\zeta_j^{-1}\Big]N_1(t_1) + \Big[\sum_{j\in N_3}|a^R_{k_1j}|\bar m_j\xi_j^{-1} + \sum_{j\in N_3}|b^R_{k_1j}|\bar m_j\xi_j^{-1}\\
&+ \sum_{j\in N_4}\big(|a^I_{k_1j}|+|b^I_{k_1j}|\big)\bar\sigma_j\zeta_j^{-1}\Big]N_2(t_1)\\
\le{}& \Big[(-c_{k_1}+\widetilde\lambda_{k_1k_1})\xi_{k_1}^{-1} + \sum_{j=1,\,j\ne k_1}^n|a^R_{k_1j}|m^q_j\xi_j^{-1} + \sum_{j=1}^n|b^R_{k_1j}|m^q_j\xi_j^{-1} + \sum_{j=1}^n\big(|a^I_{k_1j}|+|b^I_{k_1j}|\big)\beta^\sigma_j\zeta_j^{-1}\Big]N_3(t_1),
\end{aligned}\qquad (39)$$

since $N_1(t_1)=N_2(t_1)=N_3(t_1)$. By also considering condition (33c), it can be obtained that $D^-\|X^{(1)}(t_1)\|_\xi$ is negative whenever $N_3(t_1)\ne 0$. This, together with the fact that $D^-\|XY^{(2)}(t_1)\|_\xi = D^-\|X^{(2)}(t_1)\|_\xi > 0$, implies that there exists $\delta_1>0$ such that $\|X^{(2)}(\eta)\|_\xi > \|X^{(2)}(t_1)\|_\xi = \|X^{(1)}(t_1)\|_\xi > \|X^{(1)}(\eta)\|_\xi$ for all $\eta\in(t_1,t_1+\delta_1)$, i.e., $N_3(\eta) = N_2(\eta) > N_1(\eta)$ for all $\eta\in(t_1,t_1+\delta_1)$.

(2) There exist indexes $k_2\in N_2$ and $\widetilde k_2\in N_3$ such that $\|XY^{(1)}(t_1)\|_\xi = \|Y^{(1)}(t_1)\|_\zeta = \zeta_{k_2}|\widehat e_{k_2}(t_1)|$ and $\|XY^{(2)}(t_1)\|_\xi = \|X^{(2)}(t_1)\|_\xi = \xi_{\widetilde k_2}|\widetilde e_{\widetilde k_2}(t_1)|$.

It follows from (34b) that

$$\begin{aligned}
\zeta_{k_2}^{-1}D^-\|Y^{(1)}(t)\|_\zeta\big|_{t=t_1} ={}& D^-|\widehat e_{k_2}(t)|\big|_{t=t_1}\\
\le{}& (-c_{k_2}+\widehat\lambda_{k_2k_2})\zeta_{k_2}^{-1}N_1(t_1) + \Big[\sum_{j\in N_2,\,j\ne k_2}|a^R_{k_2j}|(-\underline\beta_j)\zeta_j^{-1} + \sum_{j\in N_2}|b^R_{k_2j}|(-\underline\beta_j)\zeta_j^{-1}\\
&+ \sum_{j\in N_1}\big(|a^I_{k_2j}|+|b^I_{k_2j}|\big)(-\underline q_j)\xi_j^{-1}\Big]N_1(t_1) + \Big[\sum_{j\in N_4}|a^R_{k_2j}|\bar\sigma_j\zeta_j^{-1} + \sum_{j\in N_4}|b^R_{k_2j}|\bar\sigma_j\zeta_j^{-1}\\
&+ \sum_{j\in N_3}\big(|a^I_{k_2j}|+|b^I_{k_2j}|\big)\bar m_j\xi_j^{-1}\Big]N_2(t_1)\\
\le{}& \Big[(-c_{k_2}+\widehat\lambda_{k_2k_2})\zeta_{k_2}^{-1} + \sum_{j=1,\,j\ne k_2}^n|a^R_{k_2j}|\beta^\sigma_j\zeta_j^{-1} + \sum_{j=1}^n|b^R_{k_2j}|\beta^\sigma_j\zeta_j^{-1} + \sum_{j=1}^n\big(|a^I_{k_2j}|+|b^I_{k_2j}|\big)m^q_j\xi_j^{-1}\Big]N_3(t_1).
\end{aligned}\qquad (40)$$

Combining this with condition (33d), the above inequality implies that $D^-\|Y^{(1)}(t_1)\|_\zeta$ is negative whenever $N_3(t_1)\ne 0$, which together with $D^-\|XY^{(2)}(t_1)\|_\xi = D^-\|X^{(2)}(t_1)\|_\xi > 0$ implies that there exists $\delta_2>0$ such that $\|X^{(2)}(\eta)\|_\xi > \|X^{(2)}(t_1)\|_\xi = \|Y^{(1)}(t_1)\|_\zeta > \|Y^{(1)}(\eta)\|_\zeta$ for all $\eta\in(t_1,t_1+\delta_2)$, i.e., $N_3(\eta)=N_2(\eta)>N_1(\eta)$ for all $\eta\in(t_1,t_1+\delta_2)$.

(3) There exist indexes $k_3\in N_1$ and $\widetilde k_3\in N_4$ such that $\|XY^{(1)}(t_1)\|_\xi = \|X^{(1)}(t_1)\|_\xi = \xi_{k_3}|\widetilde e_{k_3}(t_1)|$ and $\|XY^{(2)}(t_1)\|_\xi = \|Y^{(2)}(t_1)\|_\zeta = \zeta_{\widetilde k_3}|\widehat e_{\widetilde k_3}(t_1)|$. A discussion similar to that in (1) yields $\delta_3>0$ such that $N_3(\eta)=N_2(\eta)>N_1(\eta)$ for all $\eta\in(t_1,t_1+\delta_3)$.

(4) There exist indexes $k_4\in N_2$ and $\widetilde k_4\in N_4$ such that $\|XY^{(1)}(t_1)\|_\xi = \|Y^{(1)}(t_1)\|_\zeta = \zeta_{k_4}|\widehat e_{k_4}(t_1)|$ and $\|XY^{(2)}(t_1)\|_\xi = \|Y^{(2)}(t_1)\|_\zeta = \zeta_{\widetilde k_4}|\widehat e_{\widetilde k_4}(t_1)|$. A discussion similar to that in (2) reveals that there exists $\delta_4>0$ such that $N_3(\eta)=N_2(\eta)>N_1(\eta)$ for all $\eta\in(t_1,t_1+\delta_4)$.

Summarizing the above discussions, whether Case 1 or Case 2 holds, we have $N_3(t)=N_2(t)\ge N_2(0)>0$ for all $t\ge 0$, which implies that there exists an increasing time sequence $\{t_k\}_{k=1}^{+\infty}$ with $\lim_{k\to+\infty}t_k=+\infty$ satisfying $N_2(t_k)=\|X^{(2)}(t_k)\|_\xi$ or $N_2(t_k)=\|Y^{(2)}(t_k)\|_\zeta$. Therefore, there exist a time subsequence $\{t_{k_j}\}_{j=1}^{+\infty}\subset\{t_k\}_{k=1}^{+\infty}$ and indexes $k_{xy}\in N_3$ (or $N_4$), depending on $t_{k_j}$, such that

$$N_2(t_{k_j}) = \|X^{(2)}(t_{k_j})\|_\xi = \xi_{k_{xy}}|\widetilde e_{k_{xy}}(t_{k_j})| \ge N_2(0) > 0,\qquad j=1,2,\ldots \qquad (41)$$

or

$$N_2(t_{k_j}) = \|Y^{(2)}(t_{k_j})\|_\zeta = \zeta_{k_{xy}}|\widehat e_{k_{xy}}(t_{k_j})| \ge N_2(0) > 0,\qquad j=1,2,\ldots \qquad (42)$$

hold. It follows from (41) and (42) that $\{\widetilde e_{k_{xy}}(t_{k_j})\}_{j=1}^{+\infty}$ or $\{\widehat e_{k_{xy}}(t_{k_j})\}_{j=1}^{+\infty}$ does not converge to $0$ as $t\to+\infty$, which implies that the equilibrium point $x_y^*$ located in the given $\Theta^\alpha$ is unstable. The proof is complete.

D. Attraction basins of the equilibrium points

In the following, the attraction basins of the equilibrium points for the complex-valued network (1) are further investigated. It follows from (13) and (18) that

$$\underline F_k(r_k) \le \bar F_k(r_k) < 0,\qquad \bar F_k(s_k) \ge \underline F_k(s_k) > 0;\qquad k=1,2,\ldots,n,$$

which implies that each of the functions $\bar F_k(\cdot)$ and $\underline F_k(\cdot)$ has at least one zero point in the domain $(r_k,s_k)$. Denote by $\widetilde\varrho_k$ the minimal zero of $\bar F_k(\cdot)$ in $(r_k,s_k)$ and by $\widehat\varrho_k$ the maximal zero of $\underline F_k(\cdot)$. By similar analysis, both $\bar G_k(\cdot)$ and $\underline G_k(\cdot)$ have at least one zero in $(\bar r_k,\bar s_k)$. Let $\widetilde\iota_k$ be the minimal zero of $\bar G_k(\cdot)$ in $(\bar r_k,\bar s_k)$ and $\widehat\iota_k$ the maximal zero of $\underline G_k(\cdot)$. Then it is easy to see that $\widetilde\varrho_k \le \widehat\varrho_k$ and $\widetilde\iota_k \le \widehat\iota_k$. In the sequel, denote

$$\widetilde\omega_{k1} = \inf_{\eta\in(-\infty,\,r^l_k]}\underline F_k(\eta),\quad \widehat\omega_{k1} = \sup_{\eta\in[p^r_k,\,+\infty)}\bar F_k(\eta);\qquad \widetilde\omega_{k2} = \inf_{\eta\in(-\infty,\,\bar r^l_k]}\underline G_k(\eta),\quad \widehat\omega_{k2} = \sup_{\eta\in[\bar p^r_k,\,+\infty)}\bar G_k(\eta).$$
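Computing $\widetilde\varrho_k$ and $\widehat\varrho_k$ in practice amounts to root-finding on $(r_k,s_k)$, where (18) guarantees a sign change. A sketch using a grid scan followed by bisection (the stand-in function `F` below is hypothetical, not one of the bound functions of the paper):

```python
# Locate the minimal zero of a bound function on (r, s): scan a grid for the
# first sign change, then bisect the bracketing interval.
def first_zero(F, r, s, grid=1000, tol=1e-10):
    assert F(r) < 0.0 < F(s)          # guaranteed by condition (18)
    lo = r
    for i in range(1, grid + 1):
        hi = r + (s - r) * i / grid
        if F(hi) >= 0.0:
            break                      # [lo, hi] brackets the first zero
        lo = hi
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if F(mid) < 0.0 else (lo, mid)
    return 0.5 * (lo + hi)

F = lambda eta: eta**3 - 2.0           # hypothetical stand-in
root = first_zero(F, -1.0, 3.0)
```

The maximal zero $\widehat\varrho_k$ is found the same way by scanning downward from $s_k$.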

Theorem 4: If condition (18) in Lemma 3 holds, then the following statements hold:

(1) Suppose that $\widehat\omega_{k1}<0$ and $x_k(0)\in[p^r_k,+\infty)$. Then there exists $t_1>0$ such that $x_k(t)\in(p_k,p^r_k)$ for all $t\ge t_1$.
(2) Suppose that $\widetilde\omega_{k1}>0$ and $x_k(0)\in(-\infty,r^l_k]$. Then there exists $t_2>0$ such that $x_k(t)\in(r^l_k,r_k)$ for all $t\ge t_2$.
(3) Suppose that $r_k\le x_k(0)<\widetilde\varrho_k$. Then there exists $t_3>0$ such that $x_k(t)\in(r^l_k,r_k)$ for all $t\ge t_3$.
(4) Suppose that $\widehat\varrho_k<x_k(0)\le s_k$. Then there exists $t_4>0$ such that $x_k(t)\in(s_k,p_k)$ for all $t\ge t_4$.
(5) Suppose that $\widehat\omega_{k2}<0$ and $y_k(0)\in[\bar p^r_k,+\infty)$. Then there exists $t_5>0$ such that $y_k(t)\in(\bar p_k,\bar p^r_k)$ for all $t\ge t_5$.
(6) Suppose that $\widetilde\omega_{k2}>0$ and $y_k(0)\in(-\infty,\bar r^l_k]$. Then there exists $t_6>0$ such that $y_k(t)\in(\bar r^l_k,\bar r_k)$ for all $t\ge t_6$.
(7) Suppose that $\bar r_k\le y_k(0)<\widetilde\iota_k$. Then there exists $t_7>0$ such that $y_k(t)\in(\bar r^l_k,\bar r_k)$ for all $t\ge t_7$.
(8) Suppose that $\widehat\iota_k<y_k(0)\le\bar s_k$. Then there exists $t_8>0$ such that $y_k(t)\in(\bar s_k,\bar p_k)$ for all $t\ge t_8$.

Proof: Firstly, the validity of statement (1) is demonstrated. If $\widehat\omega_{k1}<0$ and $x_k(0)\in[p^r_k,+\infty)$, then from the definitions of $\bar F_k(\cdot)$ and $\widehat\omega_{k1}$, whenever $x_k(t)\ge p^r_k>0$ one has

$$\dot x_k(t) \le \bar F_k(x_k(t)) \le \widehat\omega_{k1} < 0,$$

which implies that $x_k(t)$ is strictly decreasing. Therefore there must exist $t_1>0$ such that $p_k<x_k(t_1)<p^r_k$. Then, from Lemma 3, it immediately follows that $x_k(t)\in(p_k,p^r_k)$ for all $t\ge t_1$. Statement (2) can be proved similarly and is hence omitted here.

Next, let $\widehat\omega_{k3} = \sup_{\eta\in[r_k,\,\widetilde\varrho_k-\epsilon]}\bar F_k(\eta)$, where $\epsilon$ is a sufficiently small positive number; from the definition of the scalar $\widetilde\varrho_k$, it is easy to obtain that $\widehat\omega_{k3}<0$. If $r_k\le x_k(0)<\widetilde\varrho_k$, then whenever $x_k(t)\in[r_k,\widetilde\varrho_k-\epsilon)$,

$$\dot x_k(t) \le \bar F_k(x_k(t)) \le \widehat\omega_{k3} < 0$$

holds, which implies that $x_k(t)$ is again strictly decreasing. Therefore there exists $t_3>0$ such that $r^l_k<x_k(t_3)<r_k$; it then follows immediately from Lemma 3 that $x_k(t)\in(r^l_k,r_k)$ for all $t\ge t_3$. Statement (4) can be similarly verified and is omitted for brevity.
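The monotone-decrease argument behind statements (1) and (3) can be visualized with a one-dimensional Euler simulation: whenever the upper bound function is negative on the region currently occupied, the trajectory must leave it in finite time and, by positive invariance, stays in the smaller region afterwards. A sketch with a hypothetical right-hand side (`p`, `p_r` and `Fbar` below are illustrative only, not from the example in Section IV):

```python
# Statement (1): sup of Fbar over [p_r, +inf) is negative, so a trajectory
# started in [p_r, +inf) decreases into (p, p_r) in finite time.
p, p_r = 1.0, 4.0
Fbar = lambda x: -0.5 * (x - 2.0)   # hypothetical; sup over [4, inf) is -1 < 0

x, t, dt = 6.0, 0.0, 1e-3           # x(0) in [p_r, +inf)
t1 = None
while t < 20.0:
    x += dt * Fbar(x)                # worst case x' = Fbar(x)
    t += dt
    if t1 is None and p < x < p_r:
        t1 = t                       # first entry time into (p, p_r)
```

Here the trajectory enters $(p,p^r)$ at $t_1 \approx 2\ln 2$ and remains there afterwards, since the hypothetical `Fbar` has its equilibrium inside that interval.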

Finally, statements (5)-(8) can be obtained in the same way. The proof is complete.

Remark 5: It follows from Theorem 4 that, under these additional conditions, the attraction basins of the equilibrium points can be extended beyond the corresponding positive invariant sets $\Lambda^\alpha$.

Remark 6: In Ref. [27], the multistability of complex-valued recurrent neural networks without delay was investigated; however, those results cannot be utilized to analyze the multistability of the complex-valued network (1) with time-varying delays. On the other hand, the activation functions addressed in [27], [29] are required to be continuous, so the results obtained there cannot be applied to discontinuous activation functions.

Remark 7: In Refs. [23], [44], the multistability of real-valued neural networks was investigated with activation functions of various types which, although all discontinuous, non-monotonic and piecewise linear, are special cases of the ones considered here under Assumptions 1 and 2; i.e., the activation functions utilized in this paper are more general. Moreover, not only the existence, number and stability but also the attraction basins of the locally stable equilibrium points are addressed here, whereas the attraction-basin issue was not taken into account in those works. Furthermore, for networks of the same dimension, the number of locally stable equilibrium points obtained in this paper is much larger than that in Ref. [23], which further demonstrates that complex-valued networks are better suited to high-capacity associative memory tasks.

IV. NUMERICAL EXAMPLES

In this section, one numerical example is given to illustrate the effectiveness of the obtained results.
Example 1: Consider a two-neuron complex-valued neural network described as in (1) with $C = \mathrm{diag}\{2,2\}$, $\tau_{11}(t)=\tau_{12}(t)=\tau_{21}(t)=\tau_{22}(t)=1.3$, $L = (12.8+13.1i,\ 5.2+15.7i)^T$,

$$A = \begin{bmatrix} 7+0.3i & 0.5+0.2i\\ 0.3+0.1i & 7+0.7i \end{bmatrix},\qquad B = \begin{bmatrix} 0.02+0.6i & 0.03+0.2i\\ -0.02+0.2i & 0.03+0.6i \end{bmatrix},$$

and the activation functions are taken to be

$$f^R_1(\eta) = f^I_2(\eta) = \begin{cases} -\dfrac{113}{7}, & -\infty<\eta<-3,\\[1mm] \dfrac{132}{63}\eta - \dfrac{621}{63}, & -3\le\eta\le 6,\\[1mm] -\dfrac{2}{7}\eta^2 + 2\eta + 1, & 6<\eta\le 12,\\[1mm] \dfrac{47}{7}, & 12<\eta<+\infty, \end{cases}$$

$$f^R_2(\eta) = f^I_1(\eta) = \begin{cases} -\dfrac{53}{7}, & -\infty<\eta<-3,\\[1mm] -\dfrac{2}{7}\eta^2 + 2\eta + 1, & -3\le\eta\le 2,\\[1mm] -\dfrac{40}{49}\eta + \dfrac{269}{49}, & 2<\eta\le 16,\\[1mm] \dfrac{55}{7}, & 16<\eta<+\infty, \end{cases}$$

whose images are depicted in Fig. 1.
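Under our reading of the garbled constants in the scanned source (the values $47/7$ and $55/7$ on the rightmost pieces are reconstructions chosen for consistency with Figs. 1-2 and condition (18)), the activations can be coded and their continuity/discontinuity pattern checked exactly with rational arithmetic:

```python
from fractions import Fraction as Fr

# Piecewise activations of Example 1 as reconstructed above; the rightmost
# constants 47/7 and 55/7 are our reading of garbled numerals in the source.
def f1R(eta):                           # f1R = f2I
    if eta < -3:  return Fr(-113, 7)
    if eta <= 6:  return Fr(132, 63) * eta - Fr(621, 63)
    if eta <= 12: return Fr(-2, 7) * eta**2 + 2 * eta + 1
    return Fr(47, 7)

def f2R(eta):                           # f2R = f1I
    if eta < -3:  return Fr(-53, 7)
    if eta <= 2:  return Fr(-2, 7) * eta**2 + 2 * eta + 1
    if eta <= 16: return Fr(-40, 49) * eta + Fr(269, 49)
    return Fr(55, 7)

# continuity at the interior break points ...
assert f1R(Fr(-3)) == Fr(-113, 7) and f1R(Fr(6)) == Fr(19, 7)
assert f2R(Fr(-3)) == Fr(-53, 7) and f2R(Fr(2)) == Fr(27, 7)
# ... and genuine jumps at the discontinuity points p1 = 12, p2 = 16
assert f1R(Fr(12)) == Fr(-113, 7) and f1R(Fr(13)) == Fr(47, 7)
assert f2R(Fr(16)) == Fr(-53, 7) and f2R(Fr(17)) == Fr(55, 7)
```

Each function is continuous on its first three pieces and jumps upward at its discontinuity point, as required by Assumption 1.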


Fig. 1. Graphs of the activation functions $f^R_k(\eta)$, $f^I_k(\eta)$ for $k=1,2$.

The upper bound functions $\bar F_k(\eta)$, $\bar G_k(\eta)$ and the lower bound functions $\underline F_k(\eta)$, $\underline G_k(\eta)$ defined in (5a)-(6b) can be obtained as follows:

$$\bar F_1(\eta) = -2\eta + 7f^R_1(\eta) + 30.3700,\qquad \underline F_1(\eta) = -2\eta + 7f^R_1(\eta) - 1.2929;$$
$$\bar G_1(\eta) = -2\eta + 7f^I_1(\eta) + 26.0014,\qquad \underline G_1(\eta) = -2\eta + 7f^I_1(\eta) - 13.1643;$$
$$\bar F_2(\eta) = -2\eta + 7f^R_2(\eta) + 31.0300,\qquad \underline F_2(\eta) = -2\eta + 7f^R_2(\eta) - 11.0900;$$
$$\bar G_2(\eta) = -2\eta + 7f^I_2(\eta) + 30.6386,\qquad \underline G_2(\eta) = -2\eta + 7f^I_2(\eta) - 1.8986,$$

with their graphs illustrated in Fig. 2 and Fig. 3. Take $r^l_1=-60$, $p^r_1=50$, $r^l_2=-40$, $p^r_2=50$ and $\bar r^l_1=-40$, $\bar p^r_1=50$, $\bar r^l_2=-60$, $\bar p^r_2=50$ such that condition (7) holds.
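These bound functions are straightforward to evaluate; a sketch (again using our reconstruction of the rightmost activation constants):

```python
# Bound functions of Example 1; f1R/f1I use the reconstructed constants
# 47/7 and 55/7 on their rightmost pieces (see caveat above).
def f1R(eta):
    if eta < -3:  return -113.0 / 7
    if eta <= 6:  return (132.0 * eta - 621.0) / 63
    if eta <= 12: return -2.0 / 7 * eta**2 + 2 * eta + 1
    return 47.0 / 7

def f1I(eta):                           # f1I = f2R
    if eta < -3:  return -53.0 / 7
    if eta <= 2:  return -2.0 / 7 * eta**2 + 2 * eta + 1
    if eta <= 16: return (-40.0 * eta + 269.0) / 49
    return 55.0 / 7

Fbar1 = lambda eta: -2 * eta + 7 * f1R(eta) + 30.37
Flow1 = lambda eta: -2 * eta + 7 * f1R(eta) - 1.2929
Gbar1 = lambda eta: -2 * eta + 7 * f1I(eta) + 26.0014
Glow1 = lambda eta: -2 * eta + 7 * f1I(eta) - 13.1643
```

For instance, $\bar F_1(0) = 7 f^R_1(0) + 30.37 = -69 + 30.37 = -38.63$, consistent with the sign pattern that condition (18) requires between $r_1$ and $s_1$.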


Fig. 2. Graphs of the upper bound and the lower bound functions $\bar F_k(\eta)$, $\underline F_k(\eta)$ for $k=1,2$.


Fig. 3. Graphs of the upper bound and the lower bound functions $\bar G_k(\eta)$, $\underline G_k(\eta)$ for $k=1,2$.

Firstly, it follows from the activation functions given above that Assumption 2 is satisfied with $\underline m_1 = \bar m_1 = \underline\sigma_2 = \bar\sigma_2 = \frac{132}{63}$, $\underline q_1 = \underline\beta_2 = -\frac{34}{7}$, $\bar q_1 = \bar\beta_2 = -\frac{10}{7}$, $\underline q_2 = \bar q_2 = \underline\beta_1 = \bar\beta_1 = -\frac{40}{49}$, $\underline m_2 = \underline\sigma_1 = \frac{6}{7}$ and $\bar m_2 = \bar\sigma_1 = \frac{26}{7}$. Moreover, it is easy to verify that condition (18) is satisfied. Therefore, according to Lemma 3, the complex-valued network (1) with the parameters given above has at least $5^4$ equilibrium points.
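These slope bounds follow from the derivatives of the individual pieces: the linear pieces contribute the constant slopes $132/63$ and $-40/49$, while the quadratic piece $-\frac{2}{7}\eta^2+2\eta+1$ has derivative $-\frac{4}{7}\eta+2$, whose range over each relevant interval gives the remaining parameters. A numeric check of this reading of the (garbled) values:

```python
# Slope bounds in Assumption 2, recovered from the derivatives of each piece.
assert abs(132 / 63 - 2.0952380952380953) < 1e-12      # linear slope of f1R
assert abs(-40 / 49 + 0.8163265306122449) < 1e-12      # linear slope of f2R

# quadratic piece -(2/7) eta^2 + 2 eta + 1 has derivative -(4/7) eta + 2
d = lambda eta: -4.0 / 7 * eta + 2.0

# on (6, 12] (piece of f1R): derivative ranges over [-34/7, -10/7]
vals = [d(6 + 6 * i / 10000) for i in range(10001)]
assert abs(min(vals) + 34.0 / 7) < 1e-9 and abs(max(vals) + 10.0 / 7) < 1e-9

# on [-3, 2] (piece of f2R): derivative ranges over [6/7, 26/7]
vals = [d(-3 + 5 * i / 10000) for i in range(10001)]
assert abs(min(vals) - 6.0 / 7) < 1e-9 and abs(max(vals) - 26.0 / 7) < 1e-9
```

This cross-check is why we read the scrambled numerals as $-34/7$, $-10/7$, $-40/49$, $6/7$ and $26/7$: they match the derivative ranges of the reconstructed pieces exactly.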


Secondly, for the given activation functions, one can find positive scalars $\xi_1=7$, $\xi_2=8$, $\zeta_1=9$, $\zeta_2=10$ satisfying conditions (22a) and (22b). It follows from Theorem 2 that $81$ of the $5^4$ equilibrium points are locally stable. However, it can be verified that the conditions of Theorem 1 in Ref. [27] are not satisfied, which implies that the criteria given in Ref. [27] cannot ascertain whether the equilibrium points whose real and imaginary parts are located in the positive invariant sets $\Lambda^\alpha$ are locally stable. From this point of view, the results given in this paper are more general than those in Ref. [27]. For simulation, take the following six positive invariant sets as examples: $\Lambda^1 = (-60,-3)\times(2,16)\times(-40,-3)\times(12,50)$, $\Lambda^2 = (6,12)\times(2,16)\times(2,16)\times(6,12)$, $\Lambda^3 = (12,50)\times(16,50)\times(16,50)\times(-60,-3)$, $\Lambda^4 = (12,50)\times(-40,-3)\times(2,16)\times(12,50)$, $\Lambda^5 = (-60,-3)\times(-40,-3)\times(-40,-3)\times(-60,-3)$ and $\Lambda^6 = (12,50)\times(16,50)\times(16,50)\times(12,50)$. Fig. 4 and Fig. 5 illustrate the time responses of the states of the neural network (1), and Fig. 6 shows the phase plots among the real parts and the imaginary parts of the states. Figs. 4-6 further demonstrate that in each given positive invariant set $\Lambda^i$ ($i=1,2,\ldots,6$) there exists one equilibrium point which is locally stable.
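Conditions (22a)-(22b) can be verified mechanically for these data. The sketch below uses the slope bounds as reconstructed above (so the numbers inherit that caveat); all four left-hand sides come out strictly negative:

```python
# Numerical verification of (22a)-(22b) for Example 1 with xi = (7, 8),
# zeta = (9, 10); slope bounds are our reconstruction of garbled values.
c  = [2.0, 2.0]
aR = [[7.0, 0.5], [0.3, 7.0]];      aI = [[0.3, 0.2], [0.1, 0.7]]
bR = [[0.02, 0.03], [-0.02, 0.03]]; bI = [[0.6, 0.2], [0.2, 0.6]]
q_lo, q_hi = [-34/7, -40/49], [-10/7, -40/49]   # underline q, bar q
b_lo, b_hi = [-40/49, -34/7], [-40/49, -10/7]   # underline beta, bar beta
xi, zeta   = [7.0, 8.0], [9.0, 10.0]

ok = True
for k in range(2):
    lam_t = max(aR[k][k] * q_lo[k], aR[k][k] * q_hi[k])
    lam_h = max(aR[k][k] * b_lo[k], aR[k][k] * b_hi[k])
    lhs_a = ((-c[k] + lam_t) / xi[k]
             - sum(abs(aR[k][j]) * q_hi[j] / xi[j] for j in range(2) if j != k)
             - sum(abs(aI[k][j]) * b_hi[j] / zeta[j] for j in range(2))
             - sum(abs(bR[k][j]) * q_hi[j] / xi[j] for j in range(2))
             - sum(abs(bI[k][j]) * b_hi[j] / zeta[j] for j in range(2)))
    lhs_b = ((-c[k] + lam_h) / zeta[k]
             - sum(abs(aR[k][j]) * b_hi[j] / zeta[j] for j in range(2) if j != k)
             - sum(abs(aI[k][j]) * q_hi[j] / xi[j] for j in range(2))
             - sum(abs(bR[k][j]) * b_hi[j] / zeta[j] for j in range(2))
             - sum(abs(bI[k][j]) * q_hi[j] / xi[j] for j in range(2)))
    ok = ok and lhs_a < 0.0 and lhs_b < 0.0
```

The dominant term in each inequality is the diagonal self-excitation $(-c_k+\widetilde\lambda_{kk})\xi_k^{-1}$ (respectively $(-c_k+\widehat\lambda_{kk})\zeta_k^{-1}$), which is strongly negative because the slopes in the middle regions are negative; the off-diagonal corrections are small for this coefficient choice.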


Fig. 4. Trajectories of the real part (x1(t), x2(t))ᵀ of the state u(t) for the network (1).


Fig. 5. Trajectories of the imaginary part (y1(t), y2(t))ᵀ of the state u(t) for the network (1).

Fig. 6. Phase plots among the real parts and the imaginary parts of the states for the neural network (1).

Next, we estimate the attraction basins of the equilibrium points located in the positive invariant sets. For brevity, only the attraction basins of the six equilibrium points located in the sets Λk (k = 1, 2, . . . , 6) given above are estimated. Calculation shows that ϱ̃1 = 3.1, ϱ̂1 = 5.5, ϱ̃2 = −2.3, ϱ̂2 = 0.4 and ι̃1 = −2.1, ι̂1 = 0.6, ι̃2 = 2.9, ι̂2 = 5.3. Moreover, it can be checked that all the conditions in Theorem 4 hold. Therefore, the attraction basins of the six equilibrium points located in Λk (k = 1, 2, . . . , 6) are, respectively, Υ1 = (−∞, 3.1) × (0.4, 16) × (−∞, −2.1) × (12, +∞), Υ2 = (5.5, 12) × (0.4, 16) × (0.6, 16) × (5.3, 12), Υ3 = (12, +∞) × (16, +∞) × (16, +∞) × (−∞, 2.9), Υ4 = (12, +∞) × (−∞, −2.3) × (0.6, 16) × (12, +∞), Υ5 = (−∞, 3.1) × (−∞, −2.3) × (−∞, −2.1) × (−∞, 2.9) and Υ6 = (12, +∞) × (16, +∞) × (16, +∞) × (12, +∞), which implies that each estimated attraction basin Υk is strictly larger than Λk for k = 1, 2, . . . , 6. Simulation results with random initial values located in the estimated attraction basins Υk are depicted in Figs. 7-9.
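Since each Υk and Λk is a product of four intervals, the inclusion Λk ⊆ Υk can be verified mechanically, coordinate by coordinate. The sketch below checks it for k = 1 and k = 5, using the interval endpoints listed above and encoding ±∞ as floats:

```python
import math

INF = math.inf

def contains(outer, inner):
    """True if every interval of `outer` contains the matching interval of `inner`."""
    return all(a <= c and d <= b for (a, b), (c, d) in zip(outer, inner))

# Invariant sets and estimated attraction basins from the example (k = 1 and k = 5).
Lambda1 = [(-60, -3), (2, 16), (-40, -3), (12, 50)]
Upsilon1 = [(-INF, 3.1), (0.4, 16), (-INF, -2.1), (12, INF)]
Lambda5 = [(-60, -3), (-40, -3), (-40, -3), (-60, -3)]
Upsilon5 = [(-INF, 3.1), (-INF, -2.3), (-INF, -2.1), (-INF, 2.9)]

print(contains(Upsilon1, Lambda1), contains(Upsilon5, Lambda5))  # True True
```

The same check passes for the remaining pairs (k = 2, 3, 4, 6), confirming that the estimated basins enlarge the invariant sets.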

Fig. 7. Trajectories of the real part x(t) of the state u(t) for the network (1) with random initial states in Υk .


Fig. 8. Trajectories of the imaginary part y(t) of u(t) for the network (1) with random initial states in Υk .


Fig. 9. Phase plots among the real parts and the imaginary parts for the network (1) with initial states in Υk.

Finally, when the initial value u(0) is taken from the estimated attraction basin Υk, it follows from Lemma 3, Theorem 2 and Theorem 4 that the corresponding trajectory u(t) enters the positive invariant set Λk and converges to the equilibrium point located there. However, it can be checked that the criterion (Corollary 2) in Ref. [29] cannot guarantee this conclusion; the results obtained here are therefore less conservative than those in [29]. To be specific, Ref. [23] gives an illustrative example of a two-dimensional real-valued neural network with time-varying delays and discontinuous activation functions and proves that it has only 9 locally stable equilibria, whereas it is ascertained in this paper that the corresponding complex-valued network has at least 81 locally stable equilibrium points. From this point of view, complex-valued networks are preferable to real-valued ones for high-capacity associative memory tasks.

V. CONCLUSIONS

In this paper, the multistability problem for complex-valued neural networks with time-varying delays and discontinuous activation functions has been investigated. Several sufficient criteria have been derived to ascertain the existence of at least 25ⁿ equilibrium points, of which 9ⁿ are locally stable and 16ⁿ − 9ⁿ are unstable. The corresponding attraction basins have also been estimated and enlarged. Finally, one numerical example has been presented to show the effectiveness of the obtained results.

REFERENCES

[1] I. Cha and S. A. Kassam, Channel equalization using adaptive complex radial basis function networks, IEEE Journal on Selected Areas in Communications, Vol. 13, No. 1, pp. 122-131, 1995.
[2] G. Tanaka and K. Aihara, Complex-valued multistate associative memory with nonlinear multilevel functions for gray-level image reconstruction, IEEE Transactions on Neural Networks, Vol. 20, No. 9, pp. 1463-1473, 2009.
[3] M. F. Amin and K. Murase, Single-layered complex-valued neural network for real-valued classification problems, Neurocomputing, Vol. 72, Nos. 4-6, pp. 945-955, 2009.


[4] T. Nitta, Orthogonality of decision boundaries in complex-valued neural networks, Neural Computation, Vol. 16, No. 1, pp. 73-97, 2004.
[5] S. L. Goh and D. P. Mandic, An augmented extended Kalman filter algorithm for complex-valued recurrent neural networks, Neural Computation, Vol. 19, No. 4, pp. 1039-1055, 2007.
[6] T. Kim and T. Adali, Fully complex multi-layer perceptron network for nonlinear signal processing, Journal of VLSI Signal Processing, Vol. 32, pp. 29-43, 2002.
[7] T. Kim and T. Adali, Approximation by fully complex multilayer perceptrons, Neural Computation, Vol. 15, No. 7, pp. 1641-1666, 2003.
[8] R. Savitha, S. Suresh and N. Sundararajan, A fully complex-valued radial basis function network and its learning algorithm, International Journal of Neural Systems, Vol. 19, No. 4, pp. 253-267, 2009.
[9] R. Savitha, S. Suresh and N. Sundararajan, Projection-based fast learning fully complex-valued relaxation neural network, IEEE Transactions on Neural Networks and Learning Systems, Vol. 24, No. 4, pp. 529-541, 2013.
[10] A. Hirose, Dynamics of fully complex-valued neural networks, Electronics Letters, Vol. 28, No. 16, pp. 1492-1494, 1992.
[11] S. Jankowski, A. Lozowski and J. M. Zurada, Complex-valued multistate neural associative memory, IEEE Transactions on Neural Networks, Vol. 7, No. 6, pp. 1491-1496, 1996.
[12] M. Mostafa, W. G. Teich and J. Lindner, Local stability analysis of discrete-time, continuous-state, complex-valued recurrent neural networks with inner state feedback, IEEE Transactions on Neural Networks and Learning Systems, Vol. 25, No. 4, pp. 830-836, 2014.
[13] A. Hirose, Complex-Valued Neural Networks, Springer-Verlag, Berlin Heidelberg, 2006.
[14] C. Li, X. Liao and J. Yu, Complex-valued recurrent neural network with IIR neuron model: training and applications, Circuits, Systems and Signal Processing, Vol. 21, No. 5, pp. 461-471, 2002.
[15] V. S. H. Rao and G. R. Murthy, Global dynamics of a class of complex valued neural networks, International Journal of Neural Systems, Vol. 18, No. 2, pp. 165-171, 2008.
[16] S. L. Goh and D. P. Mandic, A complex-valued RTRL algorithm for recurrent neural networks, Neural Computation, Vol. 16, No. 12, pp. 2699-2713, 2004.
[17] W. Zhou and J. M. Zurada, Discrete-time recurrent neural networks with complex-valued linear threshold neurons, IEEE Transactions on Circuits and Systems-II: Express Briefs, Vol. 56, No. 8, pp. 669-673, 2009.
[18] D.-L. Lee, Relaxation of the stability condition of the complex-valued neural networks, IEEE Transactions on Neural Networks, Vol. 12, No. 5, pp. 1260-1262, 2001.
[19] D.-L. Lee, Improvements of complex-valued Hopfield associative memory by using generalized projection rules, IEEE Transactions on Neural Networks, Vol. 17, No. 5, pp. 1341-1347, 2006.
[20] S. V. Chakravarthy and J. Ghosh, A complex-valued associative memory for storing patterns as oscillatory states, Biological Cybernetics, Vol. 75, No. 3, pp. 229-238, 1996.
[21] C.-Y. Cheng, K.-H. Lin and C.-W. Shih, Multistability in recurrent neural networks, SIAM Journal on Applied Mathematics, Vol. 66, No. 4, pp. 1301-1320, 2006.
[22] G. Huang and J. Cao, Delay-dependent multistability in recurrent neural networks, Neural Networks, Vol. 23, No. 2, pp. 201-209, 2010.
[23] X. Nie and W. X. Zheng, Multistability of neural networks with discontinuous non-monotonic piecewise linear activation functions and time-varying delays, Neural Networks, Vol. 65, pp. 65-79, 2015.
[24] G. Bao and Z. Zeng, Multistability of periodic delayed recurrent neural network with memristors, Neural Computing & Applications, Vol. 23, No. 7, pp. 1963-1967, 2013.
[25] L. Wang and T. Chen, Multiple µ-stability of neural networks with unbounded time-varying delays, Neural Networks, Vol. 53, pp. 109-118, 2014.
[26] L. Wang and T. Chen, Multistability of neural networks with Mexican-hat-type activation functions, IEEE Transactions on Neural Networks and Learning Systems, Vol. 23, No. 11, 2012.
[27] Y. Huang, H. Zhang and Z. Wang, Multistability of complex-valued recurrent neural networks with real-imaginary-type activation functions, Applied Mathematics and Computation, Vol. 229, pp. 187-200, 2014.
[28] L. Wang, W. Lu and T. Chen, Coexistence and local stability of multiple equilibria in neural networks with piecewise linear nondecreasing activation functions, Neural Networks, Vol. 23, No. 2, pp. 189-200, 2010.
[29] R. Rakkiyappan, G. Velmurugan and J. Cao, Multiple µ-stability analysis of complex-valued neural networks with unbounded time-varying delays, Neurocomputing, Vol. 149, Part B, pp. 594-607, 2015.
[30] J. Hu and J. Wang, Global stability of complex-valued recurrent neural networks with time-delays, IEEE Transactions on Neural Networks and Learning Systems, Vol. 23, No. 6, pp. 853-865, 2012.
[31] X. Xu, J. Zhang and J. Shi, Exponential stability of complex-valued neural networks with mixed delays, Neurocomputing, Vol. 128, pp. 483-490, 2014.
[32] G. Huang and J. Cao, Multistability of neural networks with discontinuous activation function, Communications in Nonlinear Science and Numerical Simulation, Vol. 13, No. 10, pp. 2279-2289, 2008.


[33] Y. Huang, H. Zhang and Z. Wang, Dynamical stability analysis of multiple equilibrium points in time-varying delayed recurrent neural networks with discontinuous activation functions, Neurocomputing, Vol. 91, pp. 21-28, 2012.
[34] Y. Wang and L. Huang, Global stability analysis of competitive neural networks with mixed time-varying delays and discontinuous neuron activations, Neurocomputing, Vol. 152, pp. 85-96, 2015.
[35] J. Liang, Z. Wang, Y. Liu and X. Liu, State estimation for two-dimensional complex networks with randomly occurring nonlinearities and randomly varying sensor delays, International Journal of Robust and Nonlinear Control, Vol. 24, No. 1, pp. 18-38, 2014.
[36] S. Khajanchi and S. Banerjee, Stability and bifurcation analysis of delay induced tumor immune interaction model, Applied Mathematics and Computation, Vol. 248, pp. 652-671, 2014.
[37] X. Zeng, C. Li, T. Huang and X. He, Stability analysis of complex-valued impulsive systems with time delay, Applied Mathematics and Computation, Vol. 256, pp. 75-82, 2015.
[38] Q. Song, Z. Zhao and Y. Liu, Stability analysis of complex-valued neural networks with probabilistic time-varying delays, Neurocomputing, Vol. 159, pp. 96-104, 2015.
[39] T. Fang and J. Sun, Further investigate the stability of complex-valued recurrent neural networks with time-delays, IEEE Transactions on Neural Networks and Learning Systems, Vol. 25, No. 9, pp. 1709-1713, 2014.
[40] Z. Wang, L. Liu, Q.-H. Shan and H. Zhang, Stability criteria for recurrent neural networks with time-varying delay based on secondary delay partitioning method, IEEE Transactions on Neural Networks and Learning Systems, Vol. 26, No. 10, pp. 2589-2595, 2015.
[41] Z. Wang, S. Ding, Z. Huang and H. Zhang, Exponential stability and stabilization of delayed memristive neural networks based on quadratic convex combination method, IEEE Transactions on Neural Networks and Learning Systems, DOI: 10.1109/TNNLS.2015.2485259.
[42] Z. Zhang, C. Lin and B. Chen, Global stability criterion for delayed complex-valued recurrent neural networks, IEEE Transactions on Neural Networks and Learning Systems, Vol. 25, No. 9, pp. 1704-1708, 2014.
[43] B. Zhou and Q. Song, Boundedness and complete stability of complex-valued neural networks with time delay, IEEE Transactions on Neural Networks and Learning Systems, Vol. 24, No. 8, pp. 1227-1238, 2013.
[44] X. Nie and W. X. Zheng, Dynamical behaviors of multiple equilibria in competitive neural networks with discontinuous nonmonotonic piecewise linear activation functions, IEEE Transactions on Cybernetics, Vol. 46, No. 3, pp. 679-693, 2016.
[45] J. Hu and J. Wang, Multistability and multiperiodicity analysis of complex-valued neural networks, Proc. 11th International Symposium on Neural Networks, Vol. 8866, pp. 59-68, 2014.
[46] R. Rakkiyappan, R. Sivaranjani, G. Velmurugan and J. Cao, Analysis of global O(t⁻ᵅ) stability and global asymptotical periodicity for a class of fractional-order complex-valued neural networks with time-varying delays, Neural Networks, Vol. 77, pp. 51-69, 2016.
[47] A. F. Filippov, Differential Equations with Discontinuous Right-hand Sides, Dordrecht: Kluwer, 1988.

Author 1: Jinling Liang, Department of Mathematics, Southeast University, Nanjing 210096, China. Email: [email protected]
Author 2: Weiqiang Gong, Department of Mathematics, Southeast University, Nanjing 210096, China. Email: [email protected]
Author 3: Tingwen Huang, Science Program, Texas A&M University, Doha, Qatar. Email: [email protected]