Comparison principle and stability of stochastic delayed neural networks with Markovian switching

Dan Li a, Quanxin Zhu a,b,*

a Department of Mathematics, Ningbo University, Ningbo 315211, Zhejiang, China
b School of Mathematical Sciences and Institute of Finance and Statistics, Nanjing Normal University, Nanjing 210023, Jiangsu, China

Neurocomputing 123 (2014) 436–442

Article history: Received 22 April 2013; received in revised form 23 June 2013; accepted 13 July 2013; available online 21 August 2013. Communicated by Y. Liu.

Abstract

This paper deals with the stability problem for a class of stochastic delayed neural networks with Markovian switching. The jumping parameters are determined by a continuous-time, discrete-state Markov chain. Departing from the usual Lyapunov–Krasovskii functional and linear matrix inequality methods, we first introduce and study a new comparison principle in the field of stochastic delayed neural networks. We then apply this comparison principle to obtain several novel stability criteria for the suggested system. Moreover, an example is given to illustrate the theoretical results.

Keywords: Comparison principle; Stochastic delayed neural network; pth moment stability; Markovian switching; Stability in probability

☆ This work was jointly supported by the National Natural Science Foundation of China (61374080, 10801056), the Natural Science Foundation of Zhejiang Province (LY12F03010), the Natural Science Foundation of Ningbo (2012A610032) and the K.C. Wong Magna Fund of Ningbo University.
* Corresponding author at: School of Mathematical Sciences and Institute of Finance and Statistics, Nanjing Normal University, Nanjing 210023, Jiangsu, China. E-mail address: [email protected] (Q. Zhu).

1. Introduction

As is well known, an artificial neural network consists of an interconnected group of artificial neurons that process information by a computational approach. During the past decades, artificial neural networks have received a great deal of attention because they can be applied in many areas, such as robotics, aerospace, associative memory, pattern recognition, signal processing, automatic control engineering, fault diagnosis, telecommunications, parallel computation and combinatorial optimization. Such applications depend on the existence and uniqueness of equilibrium points and on qualitative stability properties, so stability analysis is important in the practical design and application of neural networks. As a consequence, a large number of results have appeared in the literature; see e.g. [1–32] and the references therein.

On the other hand, noises and delays are two main factors affecting the stability of neural networks. In fact, a real nervous system is usually affected by external perturbations, which in many cases are of great uncertainty and hence may be treated as random. Also, time delays are unavoidable in neural networks for various reasons, such as the finite switching speed of amplifiers in the circuit implementation of a neural network; moreover, time delays can change a network from stable to unstable. Hence, noises and delays should be taken into consideration when modeling neural networks. As in [22–32], neural networks with noises and delays are called stochastic delayed neural networks.

In many real systems there often appears the phenomenon of information latching, as well as abrupt phenomena such as random failures or repairs of components, sudden environmental changes, and changing subsystem interconnections. Stochastic delayed neural networks with Markovian switching have been recognized as a suitable class of systems to model such phenomena. Generally speaking, this class of neural networks is a hybrid system whose state has two components, x(t) and r(t), where x(t) denotes the state and r(t) is a continuous-time Markov chain with a finite state space S = {1, 2, …, N}, usually regarded as the mode. In its operation, this class of neural networks switches from one mode to another in a random way determined by the Markov chain r(t). Therefore, it is significant and challenging to investigate the stability of stochastic delayed neural networks with Markovian switching.

Recently, a large number of results on the stability of stochastic delayed neural networks with Markovian switching have been


reported in the literature; for instance, see [25–32] and the references therein. In [25], Balasubramaniam and Rakkiyappan discussed the global asymptotic stability problem for a class of Markovian jumping stochastic Cohen–Grossberg neural networks with discrete interval and distributed delays. Zhu and Cao studied the exponential stability of several new classes of Markovian jump stochastic neural networks with mixed time delays, with and without impulse control, in [26–28], and they investigated the asymptotic stability of a class of stochastic neural networks of neutral type with both Markovian switching and mixed time delays in [29]. In [30], Zhang et al. dealt with the asymptotic stability analysis of neutral-type impulsive neural networks with mixed time-varying delays and Markovian switching. It should be mentioned that the methods and techniques used in the previous literature (see [25–32]) mainly depend on the Lyapunov–Krasovskii functional and linear matrix inequality method, and the comparison principle has not been applied in this setting. This situation motivates our present research.

Inspired by the above discussion, in this paper we investigate the stability problem for a class of stochastic delayed neural networks with Markovian switching. Instead of using the usual Lyapunov–Krasovskii functional and linear matrix inequality method, we first introduce and study a new comparison principle in the field of stochastic delayed neural networks. We then apply this comparison principle to obtain several novel stability criteria for the suggested system. Moreover, an example is given to illustrate the theoretical results.

The rest of the paper is arranged as follows. In Section 2, we introduce the model of stochastic delayed neural networks, together with notations and several definitions. Section 3 presents our main results: we first establish a new comparison principle and then apply it to obtain several novel stability criteria for the suggested system, including stability in probability, pth moment stability and pth moment exponential stability. In Section 4, an example is provided to illustrate the effectiveness of the proposed results. Finally, we conclude the paper with some general remarks in Section 5.

2. Model description and problem formulation

Throughout this paper, unless otherwise specified, let $(\Omega, \mathcal{F}, P)$ be a complete probability space with a filtration $\{\mathcal{F}_t\}_{t \ge 0}$ satisfying the usual conditions (i.e. it is right continuous and $\mathcal{F}_0$ contains all $P$-null sets). Let $w(t) = (w_1(t), \ldots, w_m(t))^T$ be an $m$-dimensional Brownian motion defined on the probability space. $R^n$ and $R^{n \times m}$ denote the $n$-dimensional Euclidean space and the set of all $n \times m$ real matrices, respectively; $R_+ = [0, \infty)$ and the superscript "$T$" denotes the transpose of a matrix or vector. Let $|\cdot|$ denote the Euclidean norm in $R^n$. If $A$ is a vector or matrix, its transpose is denoted by $A^T$. If $A$ is a matrix, its trace norm is denoted by $|A| = \sqrt{\mathrm{tr}(A^T A)}$, while its operator norm is denoted by $\|A\| = \sup\{|Ax| : |x| = 1\}$. If $A$ is a symmetric matrix, we use $\lambda_{\max}(A)$ and $\lambda_{\min}(A)$ to denote its largest and smallest eigenvalues, respectively.

Let $\{r(t), t \ge 0\}$ be a right-continuous Markov chain on the probability space taking values in a finite state space $S = \{1, 2, \ldots, N\}$ with generator $Q = (q_{ij})_{N \times N}$ given by
$$P\{r(t+\Delta) = j \mid r(t) = i\} = \begin{cases} q_{ij}\Delta + o(\Delta) & \text{if } i \ne j, \\ 1 + q_{ii}\Delta + o(\Delta) & \text{if } i = j, \end{cases}$$
where $\Delta > 0$. Here, $q_{ij} \ge 0$ is the transition rate from $i$ to $j$ if $i \ne j$, while $q_{ii} = -\sum_{j \ne i} q_{ij}$. As usual, we assume that the Markov chain $r(t)$ is independent of the Brownian motion $w(t)$. It is known that almost every sample path of $r(t)$ is a right-continuous step function with a finite number of jumps in any finite subinterval of $R_+$.

In this paper, we consider the following stochastic delayed neural network with Markovian switching:
$$dx(t) = [-C(r(t))x(t) + A(r(t))f(x(t)) + B(r(t))g(x(t-\tau(t)))]\,dt + \sigma(x(t), x(t-\tau(t)), t, r(t))\,dw(t), \quad (1)$$
where $x(t) = [x_1(t), x_2(t), \ldots, x_n(t)]^T$ is the state vector associated with the $n$ neurons, and the diagonal matrix $C(r(t)) = \mathrm{diag}(c_1(r(t)), c_2(r(t)), \ldots, c_n(r(t)))$ has nonnegative entries $c_i(r(t)) \ge 0$ $(i = 1, 2, \ldots, n)$. $A(r(t))$ denotes the feedback matrix, and $B(r(t))$ represents the delayed feedback matrix. Both $f(x(t)) = [f_1(x_1(t)), f_2(x_2(t)), \ldots, f_n(x_n(t))]^T$ and $g(x(t)) = [g_1(x_1(t)), g_2(x_2(t)), \ldots, g_n(x_n(t))]^T$ are the neuron activation functions. The noise perturbation $\sigma : R^n \times R^n \times R_+ \times S \to R^{n \times m}$ is a Borel measurable function, and $\tau(t)$ is the time-varying delay satisfying $0 \le \tau(t) \le \tau$.

The matrices $C(r(t))$, $A(r(t))$ and $B(r(t))$ will be written simply as $C_i$, $A_i$ and $B_i$, respectively, when $r(t)$ takes the value $i \in S$; then we can rewrite (1) in the form
$$dx(t) = [-C_i x(t) + A_i f(x(t)) + B_i g(x(t-\tau(t)))]\,dt + \sigma(x(t), x(t-\tau(t)), t, i)\,dw(t). \quad (2)$$

Let $C([-\tau, 0]; R^n)$ denote the family of continuous functions $\phi$ from $[-\tau, 0]$ to $R^n$ with the uniform norm $\|\phi\| = \sup_{-\tau \le \theta \le 0} |\phi(\theta)|$. For any $p > 0$, denote by $L^p_{\mathcal{F}_0}([-\tau, 0]; R^n)$ the family of all $\mathcal{F}_0$-measurable, $C([-\tau, 0]; R^n)$-valued random variables $\phi = \{\phi(s) : -\tau \le s \le 0\}$ such that $\int_{-\tau}^0 E|\phi(s)|^p\,ds < \infty$, where $E[\cdot]$ stands for the expectation operator with respect to the given probability measure $P$.

The main aim of this paper is to establish a new comparison principle to study the stability of system (1) (or (2)). To this end, we assume that $\sigma$, $f$ and $g$ satisfy the local Lipschitz and linear growth conditions. Then, it is clear that for every initial datum $x(s) = \phi(s)$ on $-\tau \le s \le 0$ in $L^p_{\mathcal{F}_0}([-\tau, 0]; R^n)$, system (1) (or (2)) has a unique solution, which is denoted by $x(t; \phi)$. Furthermore, we assume that $f(0) = g(0) = 0$ and $\sigma(0, 0, t, i) \equiv 0$ for all $i \in S$, so that system (1) (or (2)) admits a trivial (zero) solution $x(t; 0) \equiv 0$ corresponding to the initial datum $\phi = 0$.

Let $C^{2,1}(R^n \times R_+ \times S; R_+)$ denote the family of all nonnegative functions $V$ on $R^n \times R_+ \times S$ that are continuously twice differentiable in $x$ and once differentiable in $t$. We then define an operator $\mathcal{L}V$ from $R^n \times R_+ \times S$ to $R$ by
$$\mathcal{L}V(x, t, i) = V_t(x, t, i) + V_x(x, t, i)[-C_i x + A_i f(x) + B_i g(y)] + \frac{1}{2}\mathrm{tr}[\sigma^T(x, y, t, i) V_{xx}(x, t, i)\sigma(x, y, t, i)] + \sum_{j=1}^N q_{ij} V(x, t, j),$$
where
$$V_t(x, t, i) = \frac{\partial V(x, t, i)}{\partial t}, \quad V_x(x, t, i) = \left(\frac{\partial V(x, t, i)}{\partial x_1}, \ldots, \frac{\partial V(x, t, i)}{\partial x_n}\right), \quad V_{xx}(x, t, i) = \left(\frac{\partial^2 V(x, t, i)}{\partial x_j \partial x_k}\right)_{n \times n}.$$
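To make the switching dynamics concrete, the following is a minimal Euler–Maruyama sketch of a system of the form (2). All numerical data here (two modes, the matrices, tanh activations, the noise map, a constant delay, and the step size) are illustrative assumptions, not values from the paper; the mode process is advanced with the one-step transition probabilities $P\{r(t+\Delta)=j \mid r(t)=i\} \approx \delta_{ij} + q_{ij}\Delta$.

```python
# Minimal Euler-Maruyama sketch of system (2) with Markovian switching.
# All numerical data (two modes, matrices, tanh activations, noise map,
# delay, step size) are illustrative assumptions, not values from the paper.
import numpy as np

rng = np.random.default_rng(0)
n, m, dt, tau, T = 2, 2, 1e-3, 0.5, 10.0
Q = np.array([[-1.0, 1.0],
              [1.0, -1.0]])                       # generator of r(t)
C = [np.diag([3.0, 3.0]), np.diag([4.0, 2.0])]    # C_i (positive diagonal)
A = [0.2 * np.eye(n), 0.1 * np.eye(n)]            # feedback matrices A_i
B = [0.1 * np.eye(n), 0.2 * np.eye(n)]            # delayed feedback B_i
f = g = np.tanh                                   # activation functions
sigma = lambda x, y, i: 0.1 * np.diag(x + y)      # noise intensity, n x m

d = int(tau / dt)                                 # delay measured in steps
hist = np.ones((d + 1, n))                        # constant initial segment
r = 0
for _ in range(int(T / dt)):
    # one-step mode transition: P{r(t+dt)=j | r(t)=i} ~ delta_ij + q_ij*dt
    r = rng.choice(2, p=np.eye(2)[r] + Q[r] * dt)
    x_now, x_del = hist[-1], hist[0]              # x(t) and x(t - tau)
    drift = -C[r] @ x_now + A[r] @ f(x_now) + B[r] @ g(x_del)
    dw = rng.normal(0.0, np.sqrt(dt), m)          # Brownian increment
    x_new = x_now + drift * dt + sigma(x_now, x_del, r) @ dw
    hist = np.vstack([hist[1:], x_new])
print("|x(T)| =", np.linalg.norm(hist[-1]))       # small if trajectory decays
```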

Definition 1. A function $\varphi(u)$ is said to belong to the class $\mathcal{K}$ if $\varphi$ is a continuous function such that $\varphi(0) = 0$ and $\varphi(u)$ is strictly increasing in $u$. A function $\varphi$ is said to belong to the class $\mathcal{VK}$ if $\varphi$ belongs to $\mathcal{K}$ and $\varphi$ is convex.

Definition 2. The trivial solution of (1) (or (2)) is said to be

(1) stable in probability, if for each $\varepsilon > 0$ and $\eta > 0$ there exists $\delta = \delta(\varepsilon, \eta) > 0$ such that $P\{\|x(t; \phi)\| \ge \varepsilon\} < \eta$, $t \ge -\tau$, whenever $E(\|\phi\|) \le \delta$;


(2) pth moment stable, if for each $\varepsilon > 0$ there exists $\delta = \delta(\varepsilon) > 0$ such that $E(\|x(t; \phi)\|^p) < \varepsilon$, $t \ge -\tau$, whenever $E(\|\phi\|^p) < \delta$, where $p$ is a positive constant;

(3) pth moment exponentially stable, if there exist two constants $K > 0$ and $\delta > 0$ such that $E(\|x(t; \phi)\|^p) \le K E(\|\phi\|^p)\exp(-\delta t)$, $t \ge -\tau$.
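The pth moment notions above can be illustrated on a toy case where the moment is known in closed form. For the scalar equation $dx = -a\,x\,dt + s\,x\,dw$ (an assumed test equation, not the paper's model), $E|x(t)|^p = |x_0|^p \exp\big(p(-a + \tfrac{1}{2}(p-1)s^2)t\big)$, so the pth moment decays exponentially whenever $a > \tfrac{1}{2}(p-1)s^2$; the sketch below checks this by Monte Carlo.

```python
# Monte Carlo illustration of pth moment exponential stability on the scalar
# test equation dx = -a*x dt + s*x dw (an assumed toy case, not the paper's
# system): E|x(t)|^p = |x0|^p * exp(p*(-a + (p-1)*s**2/2)*t).
import numpy as np

rng = np.random.default_rng(1)
a, s, x0, p, t, N = 2.0, 0.5, 1.0, 3.0, 1.0, 200_000
w = rng.normal(0.0, np.sqrt(t), N)                 # w(t) ~ N(0, t)
x_t = x0 * np.exp((-a - 0.5 * s**2) * t + s * w)   # exact strong solution
mc = np.mean(np.abs(x_t) ** p)                     # Monte Carlo pth moment
exact = abs(x0)**p * np.exp(p * (-a + 0.5 * (p - 1) * s**2) * t)
print(f"E|x(t)|^p: MC {mc:.5f} vs exact {exact:.5f}")
```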

Lemma 1. If $A \in Z^{N \times N}$, then the following statements are equivalent:

(1) $A$ is a nonsingular M-matrix.
(2) $A$ is semi-positive; that is, there exists $x \gg 0$ in $R^N$ such that $Ax \gg 0$.
(3) $A$ is inverse-positive; that is, $A^{-1}$ exists and $A^{-1} \gg 0$.
(4) All the leading principal minors of $A$ are positive, i.e.
$$\begin{vmatrix} a_{11} & \cdots & a_{1k} \\ \vdots & \ddots & \vdots \\ a_{k1} & \cdots & a_{kk} \end{vmatrix} > 0 \quad \text{for every } k = 1, 2, \ldots, N.$$
(5) The matrix obtained by interchanging the $i$th row of $A$ with its $j$th row and then the $i$th column with the $j$th column is a nonsingular M-matrix.
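Item (4) of Lemma 1 yields a simple computational test, and item (3) can then be confirmed directly; the sketch below uses an assumed test matrix.

```python
# Sketch of Lemma 1(4) as a computational test: a Z-matrix is a nonsingular
# M-matrix iff all leading principal minors are positive. Test data assumed.
import numpy as np

def is_nonsingular_M_matrix(A: np.ndarray) -> bool:
    N = A.shape[0]
    off_diag = A - np.diag(np.diag(A))
    if np.any(off_diag > 0):
        return False              # not in Z^{N x N}: off-diagonals must be <= 0
    return all(np.linalg.det(A[:k, :k]) > 0 for k in range(1, N + 1))

A = np.array([[4.0, -1.0, 0.0],
              [-1.0, 5.0, -1.0],
              [-1.0, -2.0, 9.0]])     # an assumed Z-matrix
print(is_nonsingular_M_matrix(A))     # True
print(np.all(np.linalg.inv(A) > 0))   # Lemma 1(3): inverse is positive -- True
```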

3. Comparison principle and stability

In this section, we shall establish a new comparison principle to investigate the stability of system (1) (or (2)). First of all, let us consider the following deterministic comparison system:
$$\frac{du(t)}{dt} = -C_i u(t) + A_i f(u(t)) + B_i g(u(t+s)), \quad u(0) = \psi, \quad (3)$$
where $\psi \in C([-\tau, 0]; R_+)$, $s \in [-\tau, 0]$, $t \ge 0$. Denote by $u(0, \psi)(t)$, $t \ge -\tau$, the solutions of Eq. (3) with the initial value $u(0) = \psi$, and let $\bar{u}(0, \psi)(t)$ denote the largest solution of (3).

Theorem 1. Assume that the following conditions are satisfied:

(H1) $\mathcal{L}V(x(t), t, i) \le -C_i V(x(t), t, i) + A_i f(V(x(t), t, i)) + B_i g(V(x(t+s), t+s, r(t+s)))$ holds for all $i \in S$ and $s \in [-\tau, 0]$;

(H2) For each $s \in [-\tau, 0]$, $r(t) \in S$ and $t \ge -\tau$, the following holds:
$$E[-C_i V(x(t), t, i) + A_i f(V(x(t), t, i)) + B_i g(V(x(t+s), t+s, r(t+s)))] \le -C_i EV(x(t), t, i) + A_i f(EV(x(t), t, i)) + B_i g(EV(x(t+s), t+s, r(t+s))).$$

Also, assume that for the solution $x(t) = x(t; \phi)$ of (1) (or (2)), $EV(x(t), t, r(t))$ exists for $t \ge -\tau$. If
$$EV(x(s), s, r(s)) \le \psi(s), \quad s \in [-\tau, 0], \quad (4)$$
then
$$EV(x(t), t, i) \le \bar{u}(0, \psi)(t), \quad t \ge -\tau. \quad (5)$$

Proof. Let
$$m(t) = EV(x(t), t, r(t)), \quad t \ge 0.$$
The assertion is trivial when $-\tau \le t \le 0$, so we only need to prove the case $t > 0$. It follows from the generalized Itô formula that
$$m(t) - m(0) = \int_0^t E\mathcal{L}V(x(s), s, r(s))\,ds.$$
For sufficiently small $\rho > 0$, by the above equality and assumptions $(H_1)$–$(H_2)$, we get
$$m(t+\rho) - m(t) = \int_t^{t+\rho} E\mathcal{L}V(x(u), u, r(u))\,du$$
$$\le \int_t^{t+\rho} E[-C(r(u))V(x(u), u, r(u)) + A(r(u))f(V(x(u), u, r(u)))]\,du + \int_t^{t+\rho} E[B(r(u))g(V(x(u+s), u+s, r(u+s)))]\,du$$
$$\le \int_t^{t+\rho} [-C(r(u))EV(x(u), u, r(u)) + A(r(u))f(EV(x(u), u, r(u)))]\,du + \int_t^{t+\rho} B(r(u))g(EV(x(u+s), u+s, r(u+s)))\,du$$
$$= \int_t^{t+\rho} [-C(r(u))m(u) + A(r(u))f(m(u)) + B(r(u))g(m(u+s))]\,du,$$
and so
$$\frac{m(t+\rho) - m(t)}{\rho} \le \frac{1}{\rho}\int_t^{t+\rho} [-C(r(u))m(u) + A(r(u))f(m(u)) + B(r(u))g(m(u+s))]\,du.$$
Letting $\rho \to 0$ in the above inequality, we obtain
$$D^+ m(t) \le -C_i m(t) + A_i f(m(t)) + B_i g(m(t+s)),$$
where $D^+$ denotes the upper right Dini derivative, $D^+ m(t) = \limsup_{\rho \to 0^+} (m(t+\rho) - m(t))/\rho$. If
$$m(s) = EV(x(s), s, r(s)) \le \psi(s), \quad s \in [-\tau, 0],$$
then by Theorem 8.1.4 in [33], we obtain
$$m(t) = EV(x(t), t, r(t)) \le \bar{u}(0, \psi)(t), \quad t \ge -\tau.$$
This completes the proof of Theorem 1. □

Theorem 2. Let $(H_1)$ and $(H_2)$ hold, and assume that the following condition is satisfied:

(H3) There exists a continuous positive definite function $V(x(t), t, r(t)) \in C^{2,1}(R^n \times R_+ \times S; R_+)$ which satisfies
$$\mathcal{L}V(x(t), t, r(t)) \le cV(x(t), t, r(t)), \quad t \ge -\tau,\ r(t) \in S,\ x(t) \in R^n,\ c \in R.$$

Then, if the trivial solution of (3) is stable, the trivial solution of (1) (or (2)) is stable in probability.

Proof. Since $V(x(t), t, i)$ is a continuous positive definite function, we have $V(0, t, i) = 0$. Moreover, there exists a function $b \in \mathcal{K}$ such that
$$b(\|x(t)\|) \le V(x(t), t, r(t)) \quad (6)$$
for all $t \ge 0$. By the generalized Itô formula, we get
$$EV(x(t), t, r(t)) = EV(x(0), 0, r(0)) + \int_0^t E\mathcal{L}V(x(s), s, r(s))\,ds. \quad (7)$$
Letting $M$ be the maximum value of $EV(x(s), s, r(s))$ on the interval $[0, t]$, it follows from $(H_3)$ that
$$\int_0^t E\mathcal{L}V(x(s), s, r(s))\,ds \le \int_0^t cEV(x(s), s, r(s))\,ds \le cMt,$$
which together with (7) yields
$$EV(x(t), t, r(t)) \le EV(x(0), 0, r(0)) + cMt.$$
Hence, we have
$$EV(x(s), s, r(s)) \le EV(x(0), 0, r(0)) + cMs.$$
Let us choose
$$\psi(s) = EV(x(0), 0, r(0)) + cMs.$$
Then, we obtain
$$EV(x(s), s, r(s)) \le \psi(s), \quad s \in [-\tau, 0]. \quad (8)$$
Now, assume that the trivial solution of (3) is stable, and let $\varepsilon$ and $\eta$ ($\eta < 1$) be given positive numbers. Then for $\varepsilon_1 = \eta b(\varepsilon)$ there exists a positive number $\delta_1 = \delta_1(\varepsilon, \eta)$ such that $\|\bar{u}(0, \psi)(t)\| < \varepsilon_1$, $t \ge -\tau$, when $\|\psi\| < \delta_1$. Thus, we have
$$\bar{u}(0, \psi)(t) < \eta b(\varepsilon), \quad t \ge -\tau. \quad (9)$$
By Theorem 1 and (9), we obtain
$$EV(x(t), t, r(t)) \le \bar{u}(0, \psi)(t) < \eta b(\varepsilon), \quad t \ge -\tau. \quad (10)$$
Then, it follows from (10) and the Chebyshev inequality that
$$P\{\|x(t; \phi)\| \ge \varepsilon\} = P\{b(\|x(t; \phi)\|) \ge b(\varepsilon)\} \le P\{V(x(t), t, r(t)) \ge b(\varepsilon)\} \le \frac{EV(x(t), t, r(t))}{b(\varepsilon)} \le \frac{\bar{u}(0, \psi)(t)}{b(\varepsilon)} < \frac{\eta b(\varepsilon)}{b(\varepsilon)} = \eta,$$
which verifies that the trivial solution of (1) (or (2)) is stable in probability. This completes the proof of Theorem 2. □

Theorem 3. Let $(H_1)$–$(H_3)$ hold, and assume that the following condition is satisfied:

(H4) There exists a function $b \in \mathcal{VK}$ such that
$$b(\|x(t)\|^p) \le V(x(t), t, i)$$
for all $t \ge -\tau$, $i \in S$ and $x(t) \in R^n$.

Then, if the trivial solution of (3) is stable, the trivial solution of (1) (or (2)) is pth moment stable.

Proof. As in the proof of Theorem 2, we have
$$EV(x(s), s, r(s)) \le \psi(s), \quad s \in [-\tau, 0].$$
Assume that the trivial solution of (3) is stable. We shall prove that the trivial solution of (1) (or (2)) is pth moment stable. Let $\varepsilon$ be a given positive constant; then there exists a positive constant $\delta_1 = \delta_1(\varepsilon)$ such that
$$\|u(0, \psi)(t)\| < b(\varepsilon), \quad t \ge -\tau, \quad (11)$$
when $\|\psi\| < \delta_1$. Obviously, this fact yields
$$\bar{u}(0, \psi)(t) < b(\varepsilon), \quad t \ge -\tau.$$
By Theorem 1, we obtain
$$EV(x(t), t, r(t)) \le \bar{u}(0, \psi)(t) \le b(\varepsilon), \quad t \ge -\tau. \quad (12)$$
Using $(H_4)$ and the Jensen inequality (recall that $b$ is convex), we have
$$0 \le b(E\|x(t)\|^p) \le E[b(\|x(t)\|^p)] \le EV(x(t), t, r(t)), \quad t \ge -\tau. \quad (13)$$
Combining (12) and (13), we get
$$b(E\|x(t)\|^p) \le b(\varepsilon), \quad t \ge -\tau.$$
Noting that the function $b$ belongs to $\mathcal{VK}$ and is strictly increasing, we have
$$E\|x(t)\|^p \le \varepsilon, \quad t \ge -\tau.$$
Hence the trivial solution of (1) (or (2)) is pth moment stable. The proof of Theorem 3 is completed. □

Theorem 4. Let $(H_2)$ hold, and assume that the following condition is satisfied:

(H5) There are constants $\alpha_i, \beta_i, \lambda_i \in R$ and $\omega_i \ge 0$, $\theta_i \ge 0$, $\rho_i \ge 0$, $\eta_i \ge 0$ such that
$$-x^T C_i x \le \alpha_i |x|^2, \quad x^T A_i f(x) \le \beta_i |x|^2, \quad x^T B_i g(y) \le \lambda_i |x||y|,$$
$$|\sigma(x, y, t, i)|^2 \le \rho_i |x|^2 + \theta_i |y|^2, \quad |x^T \sigma(x, y, t, i)|^2 \le \omega_i |x|^4 + \eta_i |y|^4;$$
moreover, for $p > 0$, there exists an $N \times N$ nonsingular M-matrix $M(p)$, where
$$M(p) = \mathrm{diag}(\pi_1(p), \ldots, \pi_N(p)) - Q, \quad \pi_i(p) = -p\alpha_i - p\beta_i - p\lambda_i - \tfrac{1}{2}p\rho_i - \tfrac{1}{2}p(p-2)\omega_i, \quad Q = (q_{ij})_{N \times N},$$
such that
$$\left[\frac{1}{p\lambda_1 + \frac{1}{2}p\theta_1 + \frac{1}{2}p(p-2)\eta_1}, \ldots, \frac{1}{p\lambda_N + \frac{1}{2}p\theta_N + \frac{1}{2}p(p-2)\eta_N}\right]^T \gg M^{-1}(p)\vec{1}, \quad (14)$$
where $\vec{1} = [1, \ldots, 1]^T$. Then the trivial solution of (1) (or (2)) is pth moment exponentially stable.
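Under the notation of Theorem 4, condition (14) can be checked mechanically: form $M(p)$, test the M-matrix property via Lemma 1(4), and compare the vector of reciprocals with $M^{-1}(p)\vec{1}$ componentwise. The sketch below is one assumed way to organize that test; the parameter arrays stand in for the per-mode bounds of $(H_5)$.

```python
# Sketch of a numerical test of Theorem 4 (assumed organization): build
# M(p) = diag(pi_1(p), ..., pi_N(p)) - Q, verify the nonsingular M-matrix
# property via Lemma 1(4), then check condition (14) componentwise.
import numpy as np

def theorem4_check(p, Q, alpha, beta, lam, rho, omega, theta, eta):
    """All parameter arguments are 1-D NumPy arrays of per-mode constants
    standing in for the bounds of (H5)."""
    pi = (-p * alpha - p * beta - p * lam
          - 0.5 * p * rho - 0.5 * p * (p - 2) * omega)
    M = np.diag(pi) - Q
    N = len(pi)
    if not all(np.linalg.det(M[:k, :k]) > 0 for k in range(1, N + 1)):
        return False                              # M(p) is not an M-matrix
    lhs = 1.0 / (p * lam + 0.5 * p * theta + 0.5 * p * (p - 2) * eta)
    return bool(np.all(lhs > np.linalg.inv(M) @ np.ones(N)))  # condition (14)

# Assumed two-mode data for which the test passes:
print(theorem4_check(
    p=2.0, Q=np.array([[-1.0, 1.0], [1.0, -1.0]]),
    alpha=np.array([-3.0, -3.0]), beta=np.array([0.5, 0.5]),
    lam=np.array([0.1, 0.1]), rho=np.array([0.2, 0.2]),
    omega=np.array([0.1, 0.1]), theta=np.array([0.1, 0.1]),
    eta=np.array([0.0, 0.0])))                    # True
```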

Proof. Let us consider the Lyapunov–Krasovskii function
$$V(x(t), t, i) = \gamma_i |x(t)|^p,$$
where $\gamma_i > 0$, $i \in S$. Taking $q_1 = \min_{1 \le i \le N} \gamma_i$ and $q_2 = \max_{1 \le i \le N} \gamma_i$, it is clear that
$$q_1 |x(t)|^p \le V(x(t), t, i) \le q_2 |x(t)|^p. \quad (15)$$
A direct computation yields
$$\mathcal{L}V(x, t, i) = p\gamma_i |x|^{p-2} x^T [-C_i x + A_i f(x) + B_i g(y)] + \tfrac{1}{2} p\gamma_i |x|^{p-2} |\sigma(x, y, t, i)|^2 + \tfrac{1}{2} p(p-2)\gamma_i |x|^{p-4} |x^T \sigma(x, y, t, i)|^2 + \sum_{j=1}^N q_{ij} \gamma_j |x|^p.$$
Using $(H_5)$ together with the elementary inequalities $|x|^{p-1}|y| \le |x|^p + |y|^p$, $|x|^{p-2}|y|^2 \le |x|^p + |y|^p$ and $|x|^{p-4}|y|^4 \le |x|^p + |y|^p$, we obtain
$$\mathcal{L}V(x, t, i) \le \frac{1}{\gamma_i}\Big\{\Big[\big(p\alpha_i + p\beta_i + p\lambda_i + \tfrac{1}{2}p\rho_i + \tfrac{1}{2}p(p-2)\omega_i\big)\gamma_i + \sum_{j=1}^N q_{ij}\gamma_j\Big] V(x(t), t, i) + \big(p\lambda_i + \tfrac{1}{2}p\theta_i + \tfrac{1}{2}p(p-2)\eta_i\big)\gamma_i \max_{0 \le s \le \tau} V(x(t-s), t-s, r(t-s))\Big\}.$$
By (14), we can choose a sufficiently small constant $\vartheta > 0$ such that
$$\left[\frac{1}{p\lambda_1 + \frac{1}{2}p\theta_1 + \frac{1}{2}p(p-2)\eta_1}, \ldots, \frac{1}{p\lambda_N + \frac{1}{2}p\theta_N + \frac{1}{2}p(p-2)\eta_N}\right]^T \gg (1+\vartheta)M^{-1}(p)\vec{1}.$$
Noting that $M(p)$ is a nonsingular M-matrix, by Lemma 1 there exists $\gamma \gg 0$ such that $M(p)\gamma \gg 0$. Let $\gamma = [\gamma_1, \ldots, \gamma_N]^T = (1+\vartheta)M^{-1}(p)\vec{1}$; then $M(p)\gamma = (1+\vartheta)\vec{1} \gg 0$. This fact yields
$$-\Big[\big(p\alpha_i + p\beta_i + p\lambda_i + \tfrac{1}{2}p\rho_i + \tfrac{1}{2}p(p-2)\omega_i\big)\gamma_i + \sum_{j=1}^N q_{ij}\gamma_j\Big] = 1 + \vartheta, \qquad \big(p\lambda_i + \tfrac{1}{2}p\theta_i + \tfrac{1}{2}p(p-2)\eta_i\big)\gamma_i < 1 + \vartheta, \quad i = 1, 2, \ldots, N.$$
Thus, there exists a constant $0 \le \kappa < 1$ such that
$$\max_{1 \le i \le N} \big(p\lambda_i + \tfrac{1}{2}p\theta_i + \tfrac{1}{2}p(p-2)\eta_i\big)\gamma_i = (1+\vartheta)\kappa.$$
Then we get
$$\mathcal{L}V(x, t, i) \le \frac{1}{\gamma_i}\Big[-(1+\vartheta)V(x(t), t, i) + (1+\vartheta)\kappa \max_{0 \le s \le \tau} V(x(t-s), t-s, r(t-s))\Big].$$
So, for Eq. (1) the comparison function can be chosen as
$$\frac{du(t)}{dt} = -u(t) + \kappa \max_{0 \le s \le \tau} u(t-s). \quad (16)$$
By Theorem 2 in [34], we see that the trivial solution of (16) is stable. Therefore, there exist two positive constants $K_1$ and $\delta > 0$ such that
$$u(t) \le K_1 u(0) \exp(-\delta t), \quad t \ge 0.$$
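The decay asserted for (16) when $0 \le \kappa < 1$ is easy to observe numerically; the following forward-Euler sketch uses assumed values of $\kappa$, $\tau$ and the step size.

```python
# Forward-Euler sketch of the scalar comparison equation (16),
#   du/dt = -u(t) + kappa * max_{0 <= s <= tau} u(t - s),
# with assumed parameters; for 0 <= kappa < 1 the solution decays to zero.
kappa, tau, dt, T = 0.5, 0.8, 1e-3, 20.0
d = int(tau / dt)
u = [1.0] * (d + 1)                  # constant initial segment on [-tau, 0]
for _ in range(int(T / dt)):
    u.append(u[-1] + dt * (-u[-1] + kappa * max(u[-d - 1:])))
print("u(T) =", u[-1])               # exhibits exp(-delta*T)-type decay
```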

Let $\bar{u}(t)$ be the largest solution of Eq. (16). Then, we have
$$\bar{u}(t) \le K_1 u(0) \exp(-\delta t), \quad t \ge 0. \quad (17)$$
In particular, choosing $u(0) = EV(x(0), 0, r(0))$, it follows from (17) and Theorem 1 that
$$EV(x(t), t, i) \le \bar{u}(t) \le K_1 u(0) \exp(-\delta t), \quad t \ge 0. \quad (18)$$
Combining (15) and (18), we obtain
$$q_1 E|x(t)|^p \le EV(x(t), t, i) \le K_1 EV(x(0), 0, r(0)) \exp(-\delta t) \le K_1 q_2 E|x(0)|^p \exp(-\delta t).$$

Taking $K = K_1 q_2 / q_1$, we then have
$$E|x(t)|^p \le K E|x(0)|^p \exp(-\delta t) \le K E(\|\phi\|^p) \exp(-\delta t),$$
which implies that the trivial solution of (1) (or (2)) is pth moment exponentially stable. □

Remark 1. This paper provides a new way to investigate the stability of a class of stochastic delayed neural networks with Markovian switching. More explicitly, we first introduce and study a new comparison principle in the field of stochastic delayed neural networks, and then apply this comparison principle to obtain several novel stability criteria for the suggested system. It is worth pointing out that our method can also be applied to investigate the stability of other classes of stochastic or deterministic neural networks with leakage delay or mixed delays (see, for instance, [1–3,9,14,27,34–37]), although the corresponding conditions should be adjusted according to the real system.

4. An example

Consider the following three-dimensional stochastic neural network with Markovian switching:
$$dx(t) = [-C_i x(t) + A_i f(x(t)) + B_i g(x(t-\tau(t)))]\,dt + \sigma(x(t), x(t-\tau(t)), t, i)\,dw(t), \quad (19)$$

where $x(t) = (x_1(t), x_2(t), x_3(t))^T$, $\tau(t) = 0.5 + 0.3\sin t$, $w(t)$ is a three-dimensional Brownian motion, and $r(t)$ is a right-continuous Markov chain taking values in $S = \{1, 2, 3\}$ with generator
$$Q = \begin{pmatrix} -1 & 1 & 0 \\ 1 & -2 & 1 \\ 1 & 2 & -3 \end{pmatrix}.$$

Take $p = 3$; then we get $\pi_i(3) = -3\alpha_i - 3\beta_i - 3\lambda_i - \tfrac{3}{2}\rho_i - \tfrac{3}{2}\omega_i$, $i = 1, 2, 3$. Also let
$$f_1(x) = \begin{pmatrix} x \\ x \\ x \end{pmatrix}, \quad f_2(x) = \begin{pmatrix} x \\ 2x \\ x \end{pmatrix}, \quad f_3(x) = \begin{pmatrix} x \\ x \\ x \end{pmatrix}, \qquad g_1(y) = \begin{pmatrix} y \\ y \\ y \end{pmatrix}, \quad g_2(y) = \begin{pmatrix} y \\ y \\ y \end{pmatrix}, \quad g_3(y) = \begin{pmatrix} 2y \\ y \\ y \end{pmatrix},$$
$$\sigma(x, y, t, 1) = \begin{pmatrix} 0 & x & y \\ 2x & 0 & y \\ y & x & 0 \end{pmatrix}, \quad \sigma(x, y, t, 2) = \begin{pmatrix} x & y & 0 \\ 0 & x & y \\ x & 0 & y \end{pmatrix}, \quad \sigma(x, y, t, 3) = \begin{pmatrix} x & 0 & y \\ y & x & 0 \\ 0 & y & x \end{pmatrix},$$
together with $C_1 = \mathrm{diag}(5, 6, 5)$, $C_2 = \mathrm{diag}(6, 8, 6)$, $C_3 = \mathrm{diag}(2, 4, 4)$, and feedback matrices $A_i$ and delayed feedback matrices $B_i$ chosen such that
$$x^T A_1 f_1(x) = 6x^2, \quad x^T A_2 f_2(x) = 12x^2, \quad x^T A_3 f_3(x) = x^2,$$
$$x^T B_1 g_1(y) \le 4|x||y|, \quad x^T B_2 g_2(y) \le 3|x||y|, \quad x^T B_3 g_3(y) \le 4|x||y|.$$

It is easy to check that $(H_2)$ is satisfied. Using a simple calculation, we obtain
$$x^T C_1 x = 16x^2, \quad x^T C_2 x = 20x^2, \quad x^T C_3 x = 10x^2,$$
$$|\sigma(x, y, t, 1)|^2 = \mathrm{tr}(\sigma_1^T \sigma_1) = 6x^2 + 3y^2, \quad |\sigma(x, y, t, 2)|^2 = \mathrm{tr}(\sigma_2^T \sigma_2) = 3x^2 + 3y^2, \quad |\sigma(x, y, t, 3)|^2 = \mathrm{tr}(\sigma_3^T \sigma_3) = 3x^2 + 3y^2,$$
$$|x^T \sigma(x, y, t, 1)|^2 \le 4x^4, \quad |x^T \sigma(x, y, t, 2)|^2 \le 5x^4, \quad |x^T \sigma(x, y, t, 3)|^2 \le 3x^4.$$

Take
$$\alpha_1 = -16, \quad \alpha_2 = -20, \quad \alpha_3 = -10, \quad \beta_1 = 6, \quad \beta_2 = 12, \quad \beta_3 = 1, \quad \lambda_1 = 4, \quad \lambda_2 = 3, \quad \lambda_3 = 4,$$
$$\rho_1 = 6, \quad \rho_2 = 3, \quad \rho_3 = 3, \quad \omega_1 = 4, \quad \omega_2 = 5, \quad \omega_3 = 3,$$
and then it is easy to compute that $\pi_1(3) = 3$, $\pi_2(3) = 3$, $\pi_3(3) = 6$. Thus, we have
$$M(3) = \mathrm{diag}(\pi_1(3), \pi_2(3), \pi_3(3)) - Q = \begin{pmatrix} 3 & 0 & 0 \\ 0 & 3 & 0 \\ 0 & 0 & 6 \end{pmatrix} - \begin{pmatrix} -1 & 1 & 0 \\ 1 & -2 & 1 \\ 1 & 2 & -3 \end{pmatrix} = \begin{pmatrix} 4 & -1 & 0 \\ -1 & 5 & -1 \\ -1 & -2 & 9 \end{pmatrix}.$$
Let $\gamma = [1\ 1\ 2]^T$, and then we derive that
$$M(3)\gamma = \begin{pmatrix} 4 & -1 & 0 \\ -1 & 5 & -1 \\ -1 & -2 & 9 \end{pmatrix} \begin{pmatrix} 1 \\ 1 \\ 2 \end{pmatrix} = \begin{pmatrix} 3 \\ 2 \\ 15 \end{pmatrix} = \vartheta.$$
Obviously, $\vartheta_i > 0$, $i = 1, 2, 3$; hence $M(3)$ is a nonsingular M-matrix. Therefore, by Theorem 4 we see that system (19) is 3rd moment exponentially stable.
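As a quick arithmetic check of this example, the matrix $M(3)$, the vector $M(3)\gamma$ and the leading principal minors can be recomputed directly:

```python
# Direct arithmetic check of the example: M(3) = diag(pi(3)) - Q, the vector
# M(3) @ gamma, and the leading principal minors (Lemma 1(4)).
import numpy as np

Q = np.array([[-1.0, 1.0, 0.0],
              [1.0, -2.0, 1.0],
              [1.0, 2.0, -3.0]])
pi3 = np.array([3.0, 3.0, 6.0])      # pi_1(3), pi_2(3), pi_3(3)
M3 = np.diag(pi3) - Q
gamma = np.array([1.0, 1.0, 2.0])
print(M3 @ gamma)                    # [ 3.  2. 15.] -- componentwise positive
print(all(np.linalg.det(M3[:k, :k]) > 0 for k in (1, 2, 3)))  # True
```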

5. Concluding remarks

In this paper, we have studied the stability problem for a class of stochastic delayed neural networks with Markovian switching. Instead of using the usual Lyapunov–Krasovskii functional and linear matrix inequality method, we introduced and applied a new comparison principle to obtain several novel stability criteria for the suggested system. To the best of our knowledge, few authors have so far used a comparison principle to discuss the stability problem for this class of systems. An example has been given to verify the theoretical results.

References

[1] S. Arik, Stability analysis of delayed neural networks, IEEE Transactions on Circuits and Systems I 47 (2000) 1089–1092.
[2] S. Arik, Global asymptotic stability of a larger class of neural networks with constant time delay, Physics Letters A 311 (2003) 504–511.
[3] S. Arik, Global robust stability analysis of neural networks with discrete time delays, Chaos, Solitons & Fractals 26 (2005) 1407–1414.


[4] P. Balasubramaniam, M.S. Ali, Robust stability of uncertain fuzzy cellular neural networks with time-varying delays and reaction diffusion terms, Neurocomputing 74 (2010) 439–446.
[5] R.L. Marichal, E.J. González, G.N. Marichal, Hopf bifurcation stability in Hopfield neural networks, Neural Networks 36 (2012) 51–58.
[6] Y. Guo, S. Liu, Global exponential stability analysis for a class of neural networks with time delays, International Journal of Robust and Nonlinear Control 13 (2012) 1484–1494.
[7] O.M. Kwon, J.H. Park, New delay-dependent robust stability criterion for uncertain neural networks with time-varying delays, Applied Mathematics and Computation 205 (2008) 417–427.
[8] S. Blythe, X. Mao, X. Liao, Stability of stochastic delay neural networks, Journal of the Franklin Institute 338 (2001) 481–495.
[9] B. Tojtovska, S. Jankovic, On a general decay stability of stochastic Cohen–Grossberg neural networks with time-varying delays, Applied Mathematics and Computation 219 (2012) 2289–2302.
[10] X. Liao, G. Chen, E.N. Sanchez, Delay-dependent exponential stability analysis of delayed neural networks: an LMI approach, Neural Networks 15 (2002) 855–866.
[11] S. Xu, J. Lam, A new approach to exponential stability analysis of neural networks with time-varying delays, Neural Networks 19 (2006) 76–83.
[12] O. Faydasicok, S. Arik, Equilibrium and stability analysis of delayed neural networks under parameter uncertainties, Applied Mathematics and Computation 218 (2012) 6716–6726.
[13] J. Zhang, Globally exponential stability of neural networks with variable delays, IEEE Transactions on Circuits and Systems I 50 (2003) 288–291.
[14] Y. Liu, Z. Wang, X. Liu, On global exponential stability of generalized stochastic neural networks with mixed time-delays, Neurocomputing 70 (2006) 314–326.
[15] Z. Wang, Y. Liu, X. Liu, On global asymptotic stability of neural networks with discrete and distributed delays, Physics Letters A 345 (2005) 299–308.
[16] M.U. Akhmet, E. Yilmaz, Global exponential stability of neural networks with non-smooth and impact activations, Neural Networks 34 (2012) 18–27.
[17] S. Mohamad, Global exponential stability in continuous-time and discrete-time delayed bidirectional neural networks, Physica D 159 (2001) 233–251.
[18] S. Mohamad, Exponential stability in Hopfield-type neural networks with impulses, Chaos, Solitons & Fractals 32 (2007) 456–467.
[19] O. Faydasicok, S. Arik, Robust stability analysis of a class of neural networks with discrete time delays, Neural Networks 29–30 (2012) 52–59.
[20] Q. Song, Z. Wang, Stability analysis of impulsive stochastic Cohen–Grossberg neural networks with mixed time delays, Physica A 387 (2008) 3314–3326.
[21] P. Cheng, Z. Wu, L. Wang, New results on global exponential stability of impulsive functional differential systems with delayed impulses, Abstract and Applied Analysis 2012 (2012), Article ID 376464, 13 pp.
[22] C. Huang, J. Cao, Stochastic dynamics of nonautonomous Cohen–Grossberg neural networks, Abstract and Applied Analysis 2011 (2011), Article ID 297147, 17 pp.
[23] Q. Zhu, X. Li, Exponential and almost sure exponential stability of stochastic fuzzy delayed Cohen–Grossberg neural networks, Fuzzy Sets and Systems 203 (2012) 74–94.
[24] R. Rakkiyappan, P. Balasubramaniam, Delay-dependent asymptotic stability for stochastic delayed recurrent neural networks with time varying delays, Applied Mathematics and Computation 198 (2008) 526–533.
[25] P. Balasubramaniam, R. Rakkiyappan, Delay-dependent robust stability analysis for Markovian jumping stochastic Cohen–Grossberg neural networks with discrete interval and distributed time-varying delays, Nonlinear Analysis: Hybrid Systems 3 (2009) 207–214.
[26] Q. Zhu, J. Cao, Robust exponential stability of Markovian jump impulsive stochastic Cohen–Grossberg neural networks with mixed time delays, IEEE Transactions on Neural Networks 21 (2010) 1314–1325.
[27] Q. Zhu, J. Cao, Exponential stability of stochastic neural networks with both Markovian jump parameters and mixed time delays, IEEE Transactions on Systems, Man, and Cybernetics, Part B 41 (2011) 341–353.
[28] Q. Zhu, J. Cao, Stability analysis of Markovian jump stochastic BAM neural networks with impulse control and mixed time delays, IEEE Transactions on Neural Networks and Learning Systems 23 (2012) 467–479.
[29] Q. Zhu, J. Cao, Stability analysis for stochastic neural networks of neutral type with both Markovian jump parameters and mixed time delays, Neurocomputing 73 (2010) 2671–2680.
[30] H. Zhang, M. Dong, Y. Wang, N. Sun, Stochastic stability analysis of neutral-type impulsive neural networks with mixed time-varying delays and Markovian jumping, Neurocomputing 73 (2010) 2689–2695.
[31] H. Zhang, Y. Wang, Stability analysis of Markovian jumping stochastic Cohen–Grossberg neural networks with mixed time delays, IEEE Transactions on Neural Networks 19 (2008) 366–370.
[32] Y. Liu, Z. Wang, X. Liu, On delay-dependent robust exponential stability of stochastic neural networks with mixed time delays and Markovian switching, Nonlinear Dynamics 54 (2008) 199–212.
[33] V. Lakshmikantham, S. Leela, Differential and Integral Inequalities, vol. II, Academic Press, New York, 1969.
[34] Y. Zhang, Stability of large-scale delay systems, Acta Mathematica Sinica 33 (1990) 381–387 (in Chinese).
[35] Q. Song, Synchronization analysis in an array of asymmetric neural networks with time-varying delays and nonlinear coupling, Applied Mathematics and Computation 216 (2010) 1605–1613.


[36] Q. Song, Synchronization analysis of coupled connected neural networks with mixed time delays, Neurocomputing 72 (2009) 3907–3914.
[37] Q. Song, J. Cao, Passivity of uncertain neural networks with both leakage delay and time-varying delay, Nonlinear Dynamics 67 (2012) 1695–1707.

Dan Li received her B.S. degree in mathematics and applied mathematics from Ningbo University, Zhejiang, China, in 2013. Her current research interests include stochastic differential equations and stochastic stability.

Quanxin Zhu received the Ph.D. degree from Sun Yat-sen (Zhongshan) University, Guangzhou, China, in probability and statistics. From July 2005 to May 2009, he was with South China Normal University. From May 2009 to August 2012, he was with Ningbo University. He is currently a professor at Nanjing Normal University. Prof. Zhu is an associate editor of the Transnational Journal of Mathematical Analysis and Applications, and he is a reviewer for Mathematical Reviews and Zentralblatt MATH. He is a reviewer for more than 40 other journals and is the author or coauthor of more than 50 journal papers. His research interests include random processes, stochastic control, stochastic differential equations, stochastic partial differential equations, stochastic stability, nonlinear systems, Markovian jump systems and stochastic complex networks.