Passivity and robust passivity of stochastic reaction-diffusion neural networks with time-varying delays

Yin Sheng, Zhigang Zeng

Accepted Manuscript, Journal of the Franklin Institute
PII: S0016-0032(17)30151-5
DOI: 10.1016/j.jfranklin.2017.03.014
Reference: FI 2946

Received date: 20 December 2016
Revised date: 7 March 2017
Accepted date: 21 March 2017

Please cite this article as: Yin Sheng, Zhigang Zeng, Passivity and robust passivity of stochastic reaction-diffusion neural networks with time-varying delays, Journal of the Franklin Institute (2017), doi: 10.1016/j.jfranklin.2017.03.014

This is a PDF file of an unedited manuscript that has been accepted for publication. As a service to our customers we are providing this early version of the manuscript. The manuscript will undergo copyediting, typesetting, and review of the resulting proof before it is published in its final form. Please note that during the production process errors may be discovered which could affect the content, and all legal disclaimers that apply to the journal pertain.

Passivity and robust passivity of stochastic reaction-diffusion neural networks with time-varying delays

Yin Sheng a,b, Zhigang Zeng a,b,*

a School of Automation, Huazhong University of Science and Technology, Wuhan 430074, China
b Key Laboratory of Image Processing and Intelligent Control of Education Ministry of China, Wuhan 430074, China

Abstract

In this paper, passivity and robust passivity are considered for a general class of stochastic reaction-diffusion neural networks with Dirichlet boundary conditions and discrete time-varying delays. With the help of inequality techniques and stochastic analysis, sufficient conditions are developed to guarantee passivity and robust passivity of the addressed neural networks. The results obtained in this study include some existing ones as special cases. A numerical example is carried out to illustrate the feasibility of the proposed theoretical criteria.

Keywords: stochastic reaction-diffusion neural networks; passivity; parameter uncertainty; stochastic analysis

1. Introduction


Nowadays, neural networks have captured increasing attention from various areas of science and engineering, owing to their valuable applications in associative content-addressable memory, combinatorial optimization problems, and image and signal processing [1, 2]. Some of these applications are closely associated with the dynamical behaviors of those neural networks [3]. Consequently, it is highly important, and indeed imperative, to investigate the dynamical properties of neural networks. To date, several remarkable achievements on this topic have been reported [4-9], and the references cited therein.

Passivity theory, which is tightly linked to circuit analysis, is a significant concept describing the input and output characteristics of a neural network [10]. It should be emphasized that the passive properties of a neural network can maintain its internal stability [11-13]. Over the past few decades, considerable effort has been devoted to passivity analysis of various delayed neural networks [14-22], to name a few.

Recall that diffusion phenomena should be taken into account in neural networks and electric circuits when electrons move in an asymmetric electromagnetic field [23-27]. In this situation, the state variables of a neuron are affected by time and space variables simultaneously. In view of this fact, many results on the qualitative analysis of dynamical behaviors of various reaction-diffusion neural networks have recently been reported in [28-36].

On the other hand, parameter uncertainties often occur in real-world systems due to measurement errors, environmental noise, and parameter fluctuations, which may lead to divergence or instability [25]. Besides, stochastic perturbations exist in real-world applications owing to human disturbances and environmental noise [37]. Generally, deterministic neural network models fail to capture such fluctuations. Parameter uncertainties and stochastic noise are therefore introduced into recurrent neural network models, which not only reveal more realistic dynamical characteristics of those neural networks, but also make it possible to imitate more intricate dynamical behaviors.

✩ This work was supported by the Natural Science Foundation of China under Grant 61673188, the National Key Research and Development Program of China under Grant 2016YFB0800402, and the Science and Technology Support Program of Hubei Province under Grant 2015BHE013.
∗ Corresponding author at: School of Automation, Huazhong University of Science and Technology, Wuhan 430074, China. Fax: +86 27 87543130.
Email addresses: [email protected] (Yin Sheng), [email protected] (Zhigang Zeng)

Preprint submitted to Journal of the Franklin Institute

March 27, 2017


Currently, there are a great many prominent results on passivity analysis of various delayed neural networks [10-12, 14-17, 20-22] and the relevant references therein. It should be noted that there are few results concerning passivity analysis of reaction-diffusion neural networks [13, 18, 19, 38], let alone stochastic passivity and robust stochastic passivity of stochastic reaction-diffusion neural networks with time-varying delays. How to deal with passivity, parameter uncertainties, and stochastic noise in a unified framework, and how to guarantee passivity and robust passivity of stochastic reaction-diffusion neural networks with Dirichlet boundary conditions and time-varying delays, are among the existing challenges.

Motivated by the aforementioned discussion, in this study we consider stochastic passivity and robust stochastic passivity for a class of stochastic reaction-diffusion neural networks with Dirichlet boundary conditions and discrete time-varying delays. By means of inequality techniques and stochastic analysis methods, sufficient criteria are proposed, which include Theorems 3.1 and 4.1 in [13] as special cases. The effectiveness of the obtained theoretical results is substantiated by a numerical simulation.

The remainder of this paper is structured as follows. In Section 2, some preliminaries are given. Our main results are presented in Section 3. A numerical example is provided in Section 4. Conclusions are collected in Section 5.

Notations: Let (Ω, F, P) be a complete probability space with a filtration {F_t}_{t≥0} satisfying the usual conditions, that is, it is right continuous and increasing while F_0 contains all P-null sets. E is the mathematical expectation with respect to the probability measure P. Let B_i(t), i = 1, 2, ..., n, be independent standard Brownian motions defined on this probability space. R^n and R are the n-dimensional Euclidean space and the set of real numbers, respectively. A = [a_{ij}]_{n×n} ≤ 0 means that A is a symmetric negative semi-definite matrix of order n × n; symmetric terms in a matrix are denoted by *, and A^T denotes the transpose of A. X = { x | x = [x_1, x_2, ..., x_m]^T, |x_l| ≤ k_l, l = 1, 2, ..., m } is a bounded compact set with smooth boundary ∂X and mes X > 0, where mes X denotes the measure of X, and k_l, l = 1, 2, ..., m, are given scalars. X̄ is the closure of X. max{a, b} represents the maximum of a and b.

2. Preliminaries

Consider a class of stochastic reaction-diffusion neural networks with Dirichlet boundary conditions and time-varying delays

dw_i(t, x) = { Σ_{l=1}^{m} ∂/∂x_l ( D_{il} ∂w_i(t, x)/∂x_l ) − a_i h_i(w_i(t, x)) + Σ_{j=1}^{n} b_{ij} f_j(w_j(t, x)) + Σ_{j=1}^{n} c_{ij} g_j(w_j(t − τ_j(t), x)) + u_i(t, x) } dt + Σ_{j=1}^{n} d_{ij}(t, w_j(t, x), w_j(t − τ_j(t), x)) dB_j(t),
v_i(t, x) = p_i w_i(t, x) + q_i u_i(t, x),   (1)

in which i = 1, 2, ..., n; w_i(t, x) is the state of the ith neuron at time t and in space x; D_{il} ≥ 0 denotes the transmission diffusion coefficient, and Σ_{l=1}^{m} ∂/∂x_l ( D_{il} ∂w_i(t, x)/∂x_l ) stands for the reaction-diffusion term; a_i > 0 corresponds to the rate with which the ith neuron resets its potential to the resting state in isolation when disconnected from the network; h_i(·) represents an appropriately behaved function; b_{ij} and c_{ij} denote the connection weight and the delayed connection weight, respectively; τ_j(t) is a time-varying delay; f_j(·) and g_j(·) are the activation functions without and with time delays, respectively; d_{ij}(t, ·, ·) corresponds to the noise intensity function; u_i(t, x) and v_i(t, x) are the input and output functions, respectively; and p_i and q_i are real constants.

As to stochastic neural networks (1), the initial and boundary conditions are

w_i(s, x) = φ_i(s, x), (s, x) ∈ [−τ, 0] × X,   (2)
w_i(t, x) = 0, (t, x) ∈ [−τ, +∞) × ∂X,   (3)

where i = 1, 2, ..., n, the real constant τ is the upper bound of τ_i(t) (see Assumption 1 for details), and Φ(s, x) = [φ_1(s, x), φ_2(s, x), ..., φ_n(s, x)]^T is bounded and continuous.


Remark 1. In [13], Wang et al. initiated the study of passivity analysis for reaction-diffusion neural networks. Yet, stochastic noise was not taken into consideration therein. Besides, if we let h_i(w) = w, f_i(·) = g_i(·), and d_{ij}(t, ·, ·) = 0, i, j = 1, 2, ..., n, then stochastic neural networks (1) in this study reduce to the neural network model in [13].

To investigate passivity and robust passivity of stochastic neural networks (1), some assumptions on the time-varying delays τ_i(t) and the functions h_i(·), f_i(·), g_i(·), d_{ij}(t, ·, ·), i, j = 1, 2, ..., n, are given.

Assumption 1. The time-varying delay τ_i(t) in stochastic neural networks (1) is bounded and differentiable, and there exist real constants τ and µ such that

0 ≤ τ_i(t) ≤ τ,  τ̇_i(t) ≤ µ < 1,  i = 1, 2, ..., n.   (4)

Assumption 2. The function h_i(·) in stochastic neural networks (1) is differentiable with inf_{ϱ∈R} ḣ_i(ϱ) = H_i > 0 and h_i(0) = 0, i = 1, 2, ..., n.

Assumption 3. The activation functions f_j(·) and g_j(·), j = 1, 2, ..., n, in stochastic neural networks (1) are bounded, and there exist real constants F_j, G_j > 0 such that for w_s ∈ R, s = 1, 2, 3, 4,

|f_j(w_1) − f_j(w_2)| ≤ F_j |w_1 − w_2|,   (5)
|g_j(w_3) − g_j(w_4)| ≤ G_j |w_3 − w_4|.   (6)

Furthermore, f_j(0) = g_j(0) = 0, j = 1, 2, ..., n.

Assumption 4. The noise intensity function d_{ij}(t, ·, ·) in stochastic neural networks (1) is Lipschitz continuous, and there exist real constants Λ_{ij}, ∆_{ij} ≥ 0 such that for w_5, w_6 ∈ R,

d_{ij}²(t, w_5, w_6) ≤ Λ_{ij} w_5² + ∆_{ij} w_6²,  i, j = 1, 2, ..., n.   (7)


Before moving on, in view of the definition of passivity proposed for reaction-diffusion neural networks in [13, 18, 19, 38], a definition of stochastic passivity for stochastic reaction-diffusion neural networks is presented.

Definition 1. A stochastic reaction-diffusion neural network with input u(t, x) and output v(t, x), where u(t, x), v(t, x) ∈ R^r, is said to be stochastically passive if there exists a real constant ρ ∈ R such that

E ∫_0^{t_p} ∫_X v^T(s, x) u(s, x) dx ds ≥ −ρ²   (8)

for all t_p ≥ 0, in which E is the mathematical expectation. Additionally, if there exist real constants ϖ̃_1, ϖ̃_2 ≥ 0 such that

E ∫_0^{t_p} ∫_X v^T(s, x) u(s, x) dx ds ≥ −ρ² + ϖ̃_1 E ∫_0^{t_p} ∫_X u^T(s, x) u(s, x) dx ds + ϖ̃_2 E ∫_0^{t_p} ∫_X v^T(s, x) v(s, x) dx ds   (9)

for all t_p ≥ 0, then the stochastic reaction-diffusion neural network is stochastically input-strictly passive if ϖ̃_1 > 0 and stochastically output-strictly passive if ϖ̃_2 > 0.

Remark 2. In [13, 18, 19, 38], a novel definition of passivity for deterministic reaction-diffusion neural networks has been proposed, which is a generalization of the classical definition of passivity in [20, 39, 40]. Definition 1 of this study gives a definition of stochastic passivity for stochastic reaction-diffusion neural networks. When stochastic noise is not considered, Definition 1 reduces to the ones in [13, 18, 19, 38].

To derive the main results, two lemmas are given, which play important roles in the proofs.

Lemma 1 ([25, 26]). Let f(x) = f(x_1, x_2, ..., x_m) be a real-valued function defined on X̄. If f(x) ∈ C¹(X̄) and f(x)|_{x∈∂X} = 0, then

∫_X f²(x) dx ≤ (4k_l² / π²) ∫_X ( ∂f(x)/∂x_l )² dx.   (10)

Lemma 2 ([37]). Let M = [ M_1, M_2 ; *, M_3 ] be a symmetric matrix, in which M_1 and M_3 are both square and symmetric. If M_3 is negative definite, then the following properties are equivalent:

1. M ≤ 0;
2. M_1 − M_2 M_3^{−1} M_2^T ≤ 0.

3. Main Results
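As a quick numerical sanity check (not part of the original proofs), both lemmas can be illustrated on concrete data. The test function f(x) = 1 − x² on X = [−1, 1] and the block matrices below are illustrative choices, not taken from the paper.

```python
import numpy as np

# Lemma 1 (a Wirtinger/Poincaré-type inequality) on X = [-k, k] with k = 1:
# for f vanishing on the boundary, ∫ f² dx ≤ (4k²/π²) ∫ (f')² dx.
x = np.linspace(-1.0, 1.0, 20001)
dx = x[1] - x[0]
f = 1.0 - x**2                      # smooth test function with f(±1) = 0
fp = -2.0 * x                       # its exact derivative
lhs = np.sum(f**2) * dx             # ≈ 16/15
rhs = (4.0 / np.pi**2) * np.sum(fp**2) * dx   # ≈ (4/π²)·(8/3)
print(lhs <= rhs)                   # → True

# Lemma 2 (Schur complement): with M3 negative definite,
# M ≤ 0 iff M1 − M2 M3^{-1} M2^T ≤ 0.
M1 = np.array([[-2.0, 0.0], [0.0, -2.0]])
M2 = np.array([[1.0], [0.0]])
M3 = np.array([[-1.0]])
M = np.block([[M1, M2], [M2.T, M3]])
schur = M1 - M2 @ np.linalg.inv(M3) @ M2.T
print(np.all(np.linalg.eigvalsh(M) <= 1e-12),
      np.all(np.linalg.eigvalsh(schur) <= 1e-12))  # both hold together
```

For this example both sides of the equivalence in Lemma 2 hold simultaneously; perturbing the off-diagonal block until the Schur complement loses negative semi-definiteness makes M lose it at the same point.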

In this section, the main results are provided. Passivity and robust passivity of stochastic reaction-diffusion neural networks (1) are presented in Subsections 3.1 and 3.2, respectively.

3.1. Passivity Analysis

In this subsection, we investigate passivity of stochastic reaction-diffusion neural networks (1). By utilizing inequality techniques and stochastic analysis methods, some sufficient results are developed.

Theorem 1. Given constants τ, µ, H_i, F_i, G_i, Λ_{ij}, and ∆_{ij}, i, j = 1, 2, ..., n, suppose that Assumptions 1-4 are satisfied. Stochastic neural networks (1) are input-strictly passive in the sense of Definition 1 if there exist positive constants m_i, ϖ, and nonnegative constants α_{ij}, β_{ij}, θ_{ij}, η_{ij}, i, j = 1, 2, ..., n, such that

[ Υ_i   m_i − p_i ]
[ *     ϖ − 2q_i  ] ≤ 0,   (11)

where

Υ_i = − Σ_{l=1}^{m} m_i D_{il} π² / (2k_l²) − 2 m_i a_i H_i + Σ_{j=1}^{n} m_i |b_{ij}|^{2(1−α_{ij})} F_j^{2(1−β_{ij})} + Σ_{j=1}^{n} m_j |b_{ji}|^{2α_{ji}} F_i^{2β_{ji}} + Σ_{j=1}^{n} m_i |c_{ij}|^{2(1−θ_{ij})} G_j^{2(1−η_{ij})} + Σ_{j=1}^{n} m_j Λ_{ji} + (1/(1−µ)) Σ_{j=1}^{n} ( m_j |c_{ji}|^{2θ_{ji}} G_i^{2η_{ji}} + ∆_{ji} ).   (12)
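With α_{ij} = β_{ij} = θ_{ij} = η_{ij} = 0.5 the exponents in (12) collapse, so Υ_i becomes a plain weighted sum. The sketch below implements this special case only (the general exponents are omitted), evaluated with the parameters of Example 1 in Section 4; taking Λ_{ij} = ∆_{ij} = 0.1 for the noise bounds reproduces the values reported in (38).

```python
import numpy as np

# Formula (12) specialized to α = β = θ = η = 0.5; an illustrative helper,
# not code from the paper.
def upsilon(i, D, a, b, c, F, G, H, Lam, Del, mu, k, m_w=None):
    n = b.shape[0]
    m_w = np.ones(n) if m_w is None else m_w            # criterion weights m_i
    val = -np.sum(m_w[i] * D[i] * np.pi**2 / (2 * k**2)) - 2 * m_w[i] * a[i] * H[i]
    val += np.sum(m_w[i] * np.abs(b[i, :]) * F)         # Σ_j m_i |b_ij| F_j
    val += np.sum(m_w * np.abs(b[:, i]) * F[i])         # Σ_j m_j |b_ji| F_i
    val += np.sum(m_w[i] * np.abs(c[i, :]) * G)         # Σ_j m_i |c_ij| G_j
    val += np.sum(m_w * Lam[:, i])                      # Σ_j m_j Λ_ji
    val += np.sum(m_w * np.abs(c[:, i]) * G[i] + Del[:, i]) / (1 - mu)
    return val

D = np.array([[2.0], [2.0]]); a = np.array([1.0, 0.8])
b = np.array([[0.6, -0.8], [0.9, 0.5]]); c = np.array([[-0.8, 1.0], [1.1, 0.5]])
F = G = H = np.ones(2); Lam = Del = 0.1 * np.ones((2, 2))
u1 = upsilon(0, D, a, b, c, F, G, H, Lam, Del, mu=0.2, k=np.array([1.0]))
u2 = upsilon(1, D, a, b, c, F, G, H, Lam, Del, mu=0.2, k=np.array([1.0]))
print(round(u1, 4), round(u2, 4))   # → -4.3446 -4.8446
```

Both values are negative, and since m_i = p_i here the off-diagonal entry of (11) vanishes, so condition (11) reduces to checking the two diagonal entries.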

Proof. To prove this theorem, a nonnegative function is constructed as follows:

V(t) = Σ_{i=1}^{n} ∫_X m_i w_i²(t, x) dx + (1/(1−µ)) Σ_{i=1}^{n} Σ_{j=1}^{n} ∫_{t−τ_j(t)}^{t} ∫_X m_i ( |c_{ij}|^{2θ_{ij}} G_j^{2η_{ij}} + ∆_{ij} ) w_j²(s, x) dx ds.   (13)

Calculating the time derivative of the nonnegative function (13) along the trajectories of stochastic neural networks (1), and combining with Assumptions 1-4, gives

LV(t) = Σ_{i=1}^{n} ∫_X 2 m_i w_i(t, x) { Σ_{l=1}^{m} ∂/∂x_l ( D_{il} ∂w_i(t, x)/∂x_l ) − a_i h_i(w_i(t, x)) + Σ_{j=1}^{n} b_{ij} f_j(w_j(t, x)) + Σ_{j=1}^{n} c_{ij} g_j(w_j(t − τ_j(t), x)) + u_i(t, x) } dx
+ Σ_{i=1}^{n} ∫_X m_i Σ_{j=1}^{n} d_{ij}²(t, w_j(t, x), w_j(t − τ_j(t), x)) dx
+ (1/(1−µ)) Σ_{i=1}^{n} Σ_{j=1}^{n} ∫_X m_i ( |c_{ij}|^{2θ_{ij}} G_j^{2η_{ij}} + ∆_{ij} ) w_j²(t, x) dx
− (1/(1−µ)) Σ_{i=1}^{n} Σ_{j=1}^{n} ∫_X m_i [1 − τ̇_j(t)] ( |c_{ij}|^{2θ_{ij}} G_j^{2η_{ij}} + ∆_{ij} ) w_j²(t − τ_j(t), x) dx
≤ Σ_{i=1}^{n} ∫_X 2 m_i w_i(t, x) Σ_{l=1}^{m} ∂/∂x_l ( D_{il} ∂w_i(t, x)/∂x_l ) dx − Σ_{i=1}^{n} ∫_X 2 m_i a_i H_i w_i²(t, x) dx
+ Σ_{i=1}^{n} Σ_{j=1}^{n} ∫_X 2 m_i |b_{ij}| F_j |w_i(t, x)| |w_j(t, x)| dx
+ Σ_{i=1}^{n} Σ_{j=1}^{n} ∫_X 2 m_i |c_{ij}| G_j |w_i(t, x)| |w_j(t − τ_j(t), x)| dx
+ Σ_{i=1}^{n} ∫_X 2 m_i w_i(t, x) u_i(t, x) dx
+ Σ_{i=1}^{n} Σ_{j=1}^{n} ∫_X m_i Λ_{ij} w_j²(t, x) dx + Σ_{i=1}^{n} Σ_{j=1}^{n} ∫_X m_i ∆_{ij} w_j²(t − τ_j(t), x) dx
+ (1/(1−µ)) Σ_{i=1}^{n} Σ_{j=1}^{n} ∫_X m_i ( |c_{ij}|^{2θ_{ij}} G_j^{2η_{ij}} + ∆_{ij} ) w_j²(t, x) dx
− Σ_{i=1}^{n} Σ_{j=1}^{n} ∫_X m_i ( |c_{ij}|^{2θ_{ij}} G_j^{2η_{ij}} + ∆_{ij} ) w_j²(t − τ_j(t), x) dx,   (14)

where the last term uses τ̇_j(t) ≤ µ from Assumption 1.

Now we deal with the reaction-diffusion term in (14). From Lemma 1 and the boundary condition (3),

Σ_{i=1}^{n} ∫_X 2 m_i w_i(t, x) Σ_{l=1}^{m} ∂/∂x_l ( D_{il} ∂w_i(t, x)/∂x_l ) dx = − Σ_{i=1}^{n} Σ_{l=1}^{m} ∫_X 2 m_i D_{il} ( ∂w_i(t, x)/∂x_l )² dx ≤ − Σ_{i=1}^{n} Σ_{l=1}^{m} ∫_X ( m_i D_{il} π² / (2k_l²) ) w_i²(t, x) dx.   (15)

On the other hand,

Σ_{i=1}^{n} Σ_{j=1}^{n} ∫_X 2 m_i |b_{ij}| F_j |w_i(t, x)| |w_j(t, x)| dx
= Σ_{i=1}^{n} Σ_{j=1}^{n} ∫_X 2 m_i |b_{ij}|^{1−α_{ij}+α_{ij}} F_j^{1−β_{ij}+β_{ij}} |w_i(t, x)| |w_j(t, x)| dx
≤ Σ_{i=1}^{n} Σ_{j=1}^{n} ∫_X m_i |b_{ij}|^{2(1−α_{ij})} F_j^{2(1−β_{ij})} w_i²(t, x) dx + Σ_{i=1}^{n} Σ_{j=1}^{n} ∫_X m_i |b_{ij}|^{2α_{ij}} F_j^{2β_{ij}} w_j²(t, x) dx
= Σ_{i=1}^{n} Σ_{j=1}^{n} ∫_X m_i |b_{ij}|^{2(1−α_{ij})} F_j^{2(1−β_{ij})} w_i²(t, x) dx + Σ_{i=1}^{n} Σ_{j=1}^{n} ∫_X m_j |b_{ji}|^{2α_{ji}} F_i^{2β_{ji}} w_i²(t, x) dx,   (16)

and

Σ_{i=1}^{n} Σ_{j=1}^{n} ∫_X 2 m_i |c_{ij}| G_j |w_i(t, x)| |w_j(t − τ_j(t), x)| dx
= Σ_{i=1}^{n} Σ_{j=1}^{n} ∫_X 2 m_i |c_{ij}|^{1−θ_{ij}+θ_{ij}} G_j^{1−η_{ij}+η_{ij}} |w_i(t, x)| |w_j(t − τ_j(t), x)| dx
≤ Σ_{i=1}^{n} Σ_{j=1}^{n} ∫_X m_i |c_{ij}|^{2(1−θ_{ij})} G_j^{2(1−η_{ij})} w_i²(t, x) dx + Σ_{i=1}^{n} Σ_{j=1}^{n} ∫_X m_i |c_{ij}|^{2θ_{ij}} G_j^{2η_{ij}} w_j²(t − τ_j(t), x) dx.   (17)

Substituting (15)-(17) into (14) yields

LV(t) ≤ Σ_{i=1}^{n} ∫_X Υ_i w_i²(t, x) dx + Σ_{i=1}^{n} ∫_X 2 m_i w_i(t, x) u_i(t, x) dx,   (18)

where Υ_i is defined in (12). In view of condition (11), one obtains

LV(t) − 2 Σ_{i=1}^{n} ∫_X v_i(t, x) u_i(t, x) dx + ϖ Σ_{i=1}^{n} ∫_X u_i²(t, x) dx ≤ Σ_{i=1}^{n} ∫_X ( w_i(t, x), u_i(t, x) ) [ Υ_i, m_i − p_i ; *, ϖ − 2q_i ] ( w_i(t, x), u_i(t, x) )^T dx ≤ 0.   (19)

Integrating both sides of (19) with respect to the time variable t over the interval [0, t_p] and applying the generalized Itô formula [37] yield

2 E Σ_{i=1}^{n} ∫_0^{t_p} ∫_X v_i(t, x) u_i(t, x) dx dt ≥ E V(t_p) − E V(0) + ϖ E Σ_{i=1}^{n} ∫_0^{t_p} ∫_X u_i²(t, x) dx dt ≥ − E V(0) + ϖ E Σ_{i=1}^{n} ∫_0^{t_p} ∫_X u_i²(t, x) dx dt.   (20)

Taking ρ =

n Z X

n Z X

the proof is now completed according to Definition 1.

AC

Remark 3. In [13], passivity of reaction-diffusion neural networks is analyzed. Recall that [[13], Lemma 2.1] is utilized therein to deal with reaction-diffusion terms. As pointed out in [26], Lemma 1 in this study is less conservative than [[13], Lemma 2.1]. Meanwhile, let hi (w) = w, fi (·) = gi (·), di j (t, ·, ·) = 0, and αi j = βi j = θi j = ηi j = 0.5, i, j = 1, 2, . . . , n, then the criterion in Theorem 1 reduces to the one in [[13], Theorem 3.1]. In Theorem 1, input strict passivity of stochastic neural networks (1) is considered. With similar methods, output strict passivity of stochastic neural networks (1) is derived in the following corollary. Corollary 1. Given constants τ, µ, Hi , Fi , Gi , Λi j , and ∆i j , i, j = 1, 2, . . . , n. Suppose that Assumptions 1-4 are satisfied, stochastic neural networks (1) are output-strictly passive in the sense of Definition 1, if there exist positive constants m ˆ i , $, ˆ and nonnegative constants αˆ i j , βˆ i j , θˆi j , ηˆ i j , i, j = 1, 2, . . . , n such that " # ˆi m Υ ˆ i − pi + $p ˆ i qi ≤ 0, (21) ∗ $q ˆ 2i − 2qi 6

ACCEPTED MANUSCRIPT

where

+

m X m ˆ i Dil π2 l=1

n X j=1

+

2kl2

n X

− 2m ˆ i ai Hi + 2βˆ ji

m ˆ j |b ji |2αˆ ji Fi m ˆ j Λ ji +

j=1

+

n X j=1

1 1−µ

n X j=1

n X j=1

2(1−βˆ i j )

m ˆ i |bi j |2(1−αˆ i j ) F j 2(1−ˆηi j )

ˆ

m ˆ i |ci j |2(1−θi j )G j

  2ηˆ ˆ m ˆ j |c ji |2θ ji Gi ji + ∆ ji + $p ˆ 2i .

CR IP T

ˆi =− Υ

(22)

Proof. To prove this corollary, a nonnegative function is constructed in the following forms ˆ = V(t)

n Z X i=1

n

X

m ˆ i w2i (t, x)dx +

n

1 XX 1 − µ i=1 j=1

Z

t

t−τ j (t)

Z

X

  2ηˆ ˆ m ˆ i |ci j |2θi j G j i j + ∆i j w2j (s, x)dxds.

Similar to the proof in Theorem 1, one has n Z X i=1

X

vi (t, x)ui (t, x)dx + $ ˆ

" n Z X ˆi   Υ wi (t, x), ui (t, x) ≤ ∗ X i=1

ˆ i is defined in (22). Therefore, where Υ n Z X i=1

0

tp

Z

X

i=1

X

v2i (t, x)dx

#" # m ˆ i − pi + $p ˆ i qi wi (t, x) dx ≤ 0, ui (t, x) $q ˆ 2i − 2qi

(24)

n Z X

(25)

vi (t, x)ui (t, x)dxdt ≥ − EV(0) + $E ˆ

M

2E

n Z X

AN US

ˆ −2 LV(t)

(23)

i=1

0

tp

Z

X

v2i (t, x)dxdt,

which means that stochastic neural networks (1) are output-strictly passive according to Definition 1.

3.2. Robust Passivity Analysis

It should be pointed out that parameter uncertainties generally exist in various neural networks [25]. Therefore, in this subsection, we turn to robust stochastic passivity of stochastic neural networks (1). The quantities D_{il}, a_i, b_{ij}, c_{ij}, p_i, and q_i, i, j = 1, 2, ..., n, l = 1, 2, ..., m, in stochastic neural networks (1) may be intervalized as follows:

D_I := { D = [D_{il}]_{n×m} : 0 ≤ D̲ ≤ D ≤ D̄, i.e., 0 ≤ D̲_{il} ≤ D_{il} ≤ D̄_{il}, i = 1, 2, ..., n, l = 1, 2, ..., m },
A_I := { A = diag{a_i} : 0 ≤ A̲ ≤ A ≤ Ā, i.e., 0 ≤ a̲_i ≤ a_i ≤ ā_i, i = 1, 2, ..., n },
B_I := { B = [b_{ij}]_{n×n} : B̲ ≤ B ≤ B̄, i.e., b̲_{ij} ≤ b_{ij} ≤ b̄_{ij}, i, j = 1, 2, ..., n },
C_I := { C = [c_{ij}]_{n×n} : C̲ ≤ C ≤ C̄, i.e., c̲_{ij} ≤ c_{ij} ≤ c̄_{ij}, i, j = 1, 2, ..., n },
P_I := { P = diag{p_i} : P̲ ≤ P ≤ P̄, i.e., p̲_i ≤ p_i ≤ p̄_i, i = 1, 2, ..., n },
Q_I := { Q = diag{q_i} : Q̲ ≤ Q ≤ Q̄, i.e., q̲_i ≤ q_i ≤ q̄_i, i = 1, 2, ..., n }.   (26)
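The interval bounds in (26) enter the robust criteria below only through a few scalar reductions (the entrywise maximum moduli b*_{ij}, c*_{ij} and the quantity Θ_i). A small helper extracting them; the function name and layout are illustrative, not from the paper, and the sample bounds are those of the robust part of Example 1.

```python
import numpy as np

# Reductions of interval bounds used by the robust criteria (names hypothetical).
def interval_reductions(B_lo, B_hi, C_lo, C_hi, p_lo, p_hi, m_w):
    b_star = np.maximum(np.abs(B_lo), np.abs(B_hi))   # b*_ij = max{|b_lo|, |b_hi|}
    c_star = np.maximum(np.abs(C_lo), np.abs(C_hi))   # c*_ij = max{|c_lo|, |c_hi|}
    Theta = np.maximum(np.abs(m_w - p_lo), np.abs(m_w - p_hi))  # Θ_i of Theorem 2
    return b_star, c_star, Theta

B_lo = np.array([[0.5, -0.9], [0.8, 0.4]]); B_hi = np.array([[0.7, -0.7], [1.0, 0.6]])
C_lo = np.array([[-0.9, 0.9], [1.0, 0.4]]); C_hi = np.array([[-0.7, 1.1], [1.2, 0.6]])
b_star, c_star, Theta = interval_reductions(
    B_lo, B_hi, C_lo, C_hi,
    p_lo=np.array([0.9, 0.9]), p_hi=np.array([1.2, 1.2]), m_w=np.ones(2))
print(float(b_star[0, 0]), float(c_star[1, 0]), round(float(Theta[0]), 4))  # → 0.7 1.2 0.2
```

Only these reduced quantities, together with the lower bounds D̲, a̲, q̲, appear in conditions (27) and (33), which is why the criteria need to be checked once rather than for every admissible parameter combination.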

Definition 2. Stochastic reaction-diffusion neural networks (1) with parameter ranges defined in (26) are robust passive if stochastic reaction-diffusion neural networks (1) are passive for all D ∈ D_I, A ∈ A_I, B ∈ B_I, C ∈ C_I, P ∈ P_I, and Q ∈ Q_I.

Theorem 2. Given constants τ, µ, H_i, F_i, G_i, Λ_{ij}, and ∆_{ij}, i, j = 1, 2, ..., n, suppose that Assumptions 1-4 are satisfied. Stochastic neural networks (1) with parameter ranges defined in (26) are robust input-strictly passive if


there exist positive constants m̌_i, ϖ̌, and nonnegative constants α̌_{ij}, β̌_{ij}, θ̌_{ij}, η̌_{ij}, i, j = 1, 2, ..., n, such that ϖ̌ − 2q̲_i < 0 and

− Σ_{l=1}^{m} m̌_i D̲_{il} π² / (2k_l²) − 2 m̌_i a̲_i H_i + Σ_{j=1}^{n} m̌_i (b*_{ij})^{2(1−α̌_{ij})} F_j^{2(1−β̌_{ij})} + Σ_{j=1}^{n} m̌_j (b*_{ji})^{2α̌_{ji}} F_i^{2β̌_{ji}} + Σ_{j=1}^{n} m̌_i (c*_{ij})^{2(1−θ̌_{ij})} G_j^{2(1−η̌_{ij})} + Σ_{j=1}^{n} m̌_j Λ_{ji} + (1/(1−µ)) Σ_{j=1}^{n} ( m̌_j (c*_{ji})^{2θ̌_{ji}} G_i^{2η̌_{ji}} + ∆_{ji} ) − Θ_i² / (ϖ̌ − 2q̲_i) ≤ 0,   (27)

in which b*_{ij} = max{ |b̲_{ij}|, |b̄_{ij}| }, c*_{ij} = max{ |c̲_{ij}|, |c̄_{ij}| }, and Θ_i = max{ |m̌_i − p̲_i|, |m̌_i − p̄_i| }.

Proof. To prove this theorem, a nonnegative function is constructed as follows:

V̌(t) = Σ_{i=1}^{n} ∫_X m̌_i w_i²(t, x) dx + (1/(1−µ)) Σ_{i=1}^{n} Σ_{j=1}^{n} ∫_{t−τ_j(t)}^{t} ∫_X m̌_i ( (c*_{ij})^{2θ̌_{ij}} G_j^{2η̌_{ij}} + ∆_{ij} ) w_j²(s, x) dx ds.   (28)

Calculating the time derivative of the nonnegative function (28) along the trajectories of stochastic neural networks (1), and combining with Assumptions 1-4, gives

LV̌(t) ≤ Σ_{i=1}^{n} ∫_X 2 m̌_i w_i(t, x) Σ_{l=1}^{m} ∂/∂x_l ( D_{il} ∂w_i(t, x)/∂x_l ) dx − Σ_{i=1}^{n} ∫_X 2 m̌_i a̲_i H_i w_i²(t, x) dx
+ Σ_{i=1}^{n} Σ_{j=1}^{n} ∫_X 2 m̌_i b*_{ij} F_j |w_i(t, x)| |w_j(t, x)| dx
+ Σ_{i=1}^{n} Σ_{j=1}^{n} ∫_X 2 m̌_i c*_{ij} G_j |w_i(t, x)| |w_j(t − τ_j(t), x)| dx
+ Σ_{i=1}^{n} ∫_X 2 m̌_i w_i(t, x) u_i(t, x) dx
+ Σ_{i=1}^{n} Σ_{j=1}^{n} ∫_X m̌_i Λ_{ij} w_j²(t, x) dx + Σ_{i=1}^{n} Σ_{j=1}^{n} ∫_X m̌_i ∆_{ij} w_j²(t − τ_j(t), x) dx
+ (1/(1−µ)) Σ_{i=1}^{n} Σ_{j=1}^{n} ∫_X m̌_i ( (c*_{ij})^{2θ̌_{ij}} G_j^{2η̌_{ij}} + ∆_{ij} ) w_j²(t, x) dx
− Σ_{i=1}^{n} Σ_{j=1}^{n} ∫_X m̌_i ( (c*_{ij})^{2θ̌_{ij}} G_j^{2η̌_{ij}} + ∆_{ij} ) w_j²(t − τ_j(t), x) dx.   (29)

n Z X i=1

X

vi (t, x)ui (t, x)dx + $ ˇ

n Z X i=1

8

X

u2i (t, x)dx

(29)

ACCEPTED MANUSCRIPT

#" # m ˇ i − pi wi (t, x) dx, $ ˇ − 2qi ui (t, x)

i=1

where

ˇi =− Υ +

m X m ˇ i Dil π2 l=1

n X

2kl2

− 2m ˇ i ai Hi + 2βˇ ji

m ˇ j (b?ji )2αˇ ji Fi

+

j=1

+

n X

n X

n X

2(1−βˇ i j )

m ˇ i (b?i j )2(1−αˇ i j ) F j

j=1

2(1−ηˇ i j )

ˇ

m ˇ i (c?i j )2(1−θi j )G j

j=1

m ˇ j Λ ji +

j=1

1 1−µ

n X j=1

(30)

CR IP T

" n Z X ˇi   Υ wi (t, x), ui (t, x) ≤ ? X

  2ηˇ ˇ m ˇ j (c?ji )2θ ji Gi ji + ∆ ji .

" ˇ Υ Recall that $ ˇ − 2qi ≤ $ ˇ − 2q < 0, i = 1, 2, . . . , n, based on Lemma 2, to guarantee i i ? 2 ˇ i − (mˇ i −pi ) ≤ 0. From (27), for i = 1, 2, . . . , n, require Υ

# m ˇ i − pi ≤ 0, we only $ ˇ − 2qi

AN US

$−2q ˇ i

(31)

Θ2i ˇ i − pi )2 ˇi − ˇ i − (m ≤Υ ≤ 0. Υ $ ˇ − 2qi $ ˇ − 2q

(32)

i

The following proof is similar to that of Theorem 1, hence, it is omitted here.

Remark 4. Note that if in Theorem 2 we let h_i(w) = w, f_i(·) = g_i(·), d_{ij}(t, ·, ·) = 0, and α̌_{ij} = β̌_{ij} = θ̌_{ij} = η̌_{ij} = 0.5, i, j = 1, 2, ..., n, then the criterion in [13, Theorem 4.1] can be obtained.

Corresponding to Corollary 1, robust output strict passivity of stochastic neural networks (1) is investigated in the following corollary.

Corollary 2. Given constants τ, µ, H_i, F_i, G_i, Λ_{ij}, and ∆_{ij}, i, j = 1, 2, ..., n, suppose that Assumptions 1-4 are satisfied. Stochastic neural networks (1) with parameter ranges defined in (26) are robust output-strictly passive if there exist positive constants ḿ_i, ϖ́, and nonnegative constants ά_{ij}, β́_{ij}, θ́_{ij}, ή_{ij}, i, j = 1, 2, ..., n, such that ϖ́ (q*_i)² − 2q̲_i < 0 and

PT



CE

+

l=1

n X

2kl2

− 2m ´ i a i Hi + 2β´ ji

m ´ j (b?ji )2α´ ji Fi

j=1

+

n X

AC

j=1

+

n X

n X

2(1−β´ i j )

m ´ i (b?i j )2(1−α´ i j ) F j

j=1

´

2(1−η´ i j )

m ´ i (c?i j )2(1−θi j )G j

j=1

m ´ j Λ ji +

1 1−µ

n X j=1

  2η´ ´ m ´ j (c?ji )2θ ji Gi ji + ∆ ji

´2 Θ i ≤ 0, + $(p ´ ?i )2 − ? 2 $(q ´ i ) − 2q

(33)

i

where b*_{ij} = max{ |b̲_{ij}|, |b̄_{ij}| }, c*_{ij} = max{ |c̲_{ij}|, |c̄_{ij}| }, p*_i = max{ |p̲_i|, |p̄_i| }, q*_i = max{ |q̲_i|, |q̄_i| }, and Θ́_i = max{ |ḿ_i + ϖ́ p̲_i q̲_i − p̲_i|, |ḿ_i + ϖ́ p̲_i q̄_i − p̲_i|, |ḿ_i + ϖ́ p̄_i q̲_i − p̲_i|, |ḿ_i + ϖ́ p̄_i q̄_i − p̲_i|, |ḿ_i + ϖ́ p̲_i q̲_i − p̄_i|, |ḿ_i + ϖ́ p̲_i q̄_i − p̄_i|, |ḿ_i + ϖ́ p̄_i q̲_i − p̄_i|, |ḿ_i + ϖ́ p̄_i q̄_i − p̄_i| }.

Proof. To prove this corollary, a nonnegative function is constructed in the following form:

V́(t) = Σ_{i=1}^{n} ∫_X ḿ_i w_i²(t, x) dx + (1/(1−µ)) Σ_{i=1}^{n} Σ_{j=1}^{n} ∫_{t−τ_j(t)}^{t} ∫_X ḿ_i ( (c*_{ij})^{2θ́_{ij}} G_j^{2ή_{ij}} + ∆_{ij} ) w_j²(s, x) dx ds.   (34)


Figure 1: State trajectories of w1 (t, x) in Example 1.

Similar to the proof in Theorem 1, one has

LV́(t) − 2 Σ_{i=1}^{n} ∫_X v_i(t, x) u_i(t, x) dx + ϖ́ Σ_{i=1}^{n} ∫_X v_i²(t, x) dx ≤ Σ_{i=1}^{n} ∫_X ( w_i(t, x), u_i(t, x) ) [ Ύ_i, ḿ_i − p_i + ϖ́ p_i q_i ; *, ϖ́ q_i² − 2q_i ] ( w_i(t, x), u_i(t, x) )^T dx ≤ 0,   (35)

where

Ύ_i = − Σ_{l=1}^{m} ḿ_i D̲_{il} π² / (2k_l²) − 2 ḿ_i a̲_i H_i + Σ_{j=1}^{n} ḿ_i (b*_{ij})^{2(1−ά_{ij})} F_j^{2(1−β́_{ij})} + Σ_{j=1}^{n} ḿ_j (b*_{ji})^{2ά_{ji}} F_i^{2β́_{ji}} + Σ_{j=1}^{n} ḿ_i (c*_{ij})^{2(1−θ́_{ij})} G_j^{2(1−ή_{ij})} + Σ_{j=1}^{n} ḿ_j Λ_{ji} + (1/(1−µ)) Σ_{j=1}^{n} ( ḿ_j (c*_{ji})^{2θ́_{ji}} G_i^{2ή_{ji}} + ∆_{ji} ) + ϖ́ p_i².   (36)


The following proof is similar to that of Theorem 2, so it is omitted here.


4. Numerical Example

In this section, a numerical example is performed to illustrate the efficiency of the proposed theoretical criteria.

Example 1. Consider the following stochastic reaction-diffusion neural networks:

dw_i(t, x) = { D_i ∂²w_i(t, x)/∂x² − a_i h_i(w_i(t, x)) + Σ_{j=1}^{2} b_{ij} f_j(w_j(t, x)) + Σ_{j=1}^{2} c_{ij} g_j(w_j(t − τ_j(t), x)) + u_i(t, x) } dt + Σ_{j=1}^{2} d_{ij}(t, w_j(t, x), w_j(t − τ_j(t), x)) dB_j(t),
v_i(t, x) = p_i w_i(t, x) + q_i u_i(t, x),   (37)


Figure 2: State trajectories of w2 (t, x) in Example 1.


Figure 3: State trajectories of e1 (t) and e2 (t) in Example 1.


where D_1 = D_2 = 2, a_1 = 1, a_2 = 0.8, b_{11} = 0.6, b_{12} = −0.8, b_{21} = 0.9, b_{22} = 0.5, c_{11} = −0.8, c_{12} = 1, c_{21} = 1.1, c_{22} = 0.5, X = { x : |x| ≤ 1 }, f_i(w) = g_i(w) = tanh(w), τ_i(t) = 0.5 + 0.2 sin(t), p_1 = p_2 = 1, q_1 = 0.9, q_2 = 0.8, and d_{ij}(t, w_j(t, x), w_j(t − τ_j(t), x)) = 0.1 w_j(t, x) + 0.1 w_j(t − τ_j(t), x), i, j = 1, 2.

Due to the existence of stochastic noise, the criteria proposed in [13] are invalid for ascertaining passivity of stochastic neural networks (37). Choosing m_i = ϖ = 1 and α_{ij} = β_{ij} = θ_{ij} = η_{ij} = 0.5, i, j = 1, 2, one has

[ Υ_1   m_1 − p_1 ]   [ −4.3446   0    ]
[ *     ϖ − 2q_1  ] = [ 0         −0.8 ],

[ Υ_2   m_2 − p_2 ]   [ −4.8446   0    ]
[ *     ϖ − 2q_2  ] = [ 0         −0.6 ].   (38)


Figure 4: State trajectories of w1 (t, x) in Example 1 with parameter uncertainties.


Figure 5: State trajectories of w2 (t, x) in Example 1 with parameter uncertainties.


Based on Theorem 1, stochastic reaction-diffusion neural networks (37) are input-strictly passive. It is easy to verify that condition (21) is also satisfied with m̂_i = ϖ̂ = 1 and α̂_{ij} = β̂_{ij} = θ̂_{ij} = η̂_{ij} = 0.5, i, j = 1, 2. Hence, stochastic reaction-diffusion neural networks (37) are output-strictly passive. Figs. 1 and 2 show the state trajectories of w_1(t, x) and w_2(t, x) with input u_i(t, x) = sin(πtx), i = 1, 2, respectively. Meanwhile, in view of (20) and (25) along with Definition 1, define e_1(t) = E Σ_{i=1}^{2} ∫_0^{t} ∫_X v_i(s, x) u_i(s, x) dx ds − 0.5 E Σ_{i=1}^{2} ∫_0^{t} ∫_X u_i²(s, x) dx ds and e_2(t) = E Σ_{i=1}^{2} ∫_0^{t} ∫_X v_i(s, x) u_i(s, x) dx ds − 0.5 E Σ_{i=1}^{2} ∫_0^{t} ∫_X v_i²(s, x) dx ds. Fig. 3 shows the trajectories of e_1(t) and e_2(t).

Now we turn to robust passivity of stochastic reaction-diffusion neural networks (37) with parameter uncertainties as follows: 1.9 ≤ D_i ≤ 3, 0.9 ≤ a_1 ≤ 2, 0.75 ≤ a_2 ≤ 2, 0.5 ≤ b_{11} ≤ 0.7, −0.9 ≤ b_{12} ≤ −0.7, 0.8 ≤ b_{21} ≤ 1, 0.4 ≤ b_{22} ≤ 0.6, −0.9 ≤ c_{11} ≤ −0.7, 0.9 ≤ c_{12} ≤ 1.1, 1 ≤ c_{21} ≤ 1.2, 0.4 ≤ c_{22} ≤ 0.6, 0.9 ≤ p_i ≤ 1.2, 0.8 ≤ q_1 ≤ 1, 0.7 ≤ q_2 ≤ 1, i = 1, 2. Choosing m̌_i = ϖ̌ = 1 and α̌_{ij} = β̌_{ij} = θ̌_{ij} = η̌_{ij} = 0.5, i, j = 1, 2, one has

1 − 2 × 0.8 = −0.6 < 0,  1 − 2 × 0.7 = −0.4 < 0,

−(1.9π²)/2 − 2 × 0.9 + 0.7 + 0.9 + 0.7 + 1 + 0.9 + 1.1 + 0.1 + 0.1 + 2.3/0.8 + 0.04/0.6 ≈ −2.7345 ≤ 0,


Figure 6: State trajectories of e1 (t) and e2 (t) in Example 1 with parameter uncertainties.

−(1.9π²)/2 − 2 × 0.75 + 1 + 0.6 + 0.9 + 0.6 + 1.2 + 0.6 + 0.1 + 0.1 + 1.9/0.8 + 0.04/0.4 ≈ −3.3011 ≤ 0.
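Both robust checks are straightforward to recompute term by term. The sketch below assumes m̌_i = ϖ̌ = 1, all tuning exponents 0.5, and Λ_{ij} = ∆_{ij} = 0.1 (the noise bounds used for (38)); both sums come out negative, which is all that conditions (27) require.

```python
import numpy as np

# Term-by-term recomputation of the two robust checks of condition (27)
# (m̌_i = ϖ̌ = 1, exponents 0.5); b*, c* are entrywise max moduli of the bounds.
D_lo = 1.9; a_lo = np.array([0.9, 0.75]); H = np.ones(2)
b_star = np.array([[0.7, 0.9], [1.0, 0.6]])
c_star = np.array([[0.9, 1.1], [1.2, 0.6]])
Lam = Del = 0.1 * np.ones((2, 2))
mu = 0.2
q_lo = np.array([0.8, 0.7]); Theta = 0.2 * np.ones(2)   # Θ_i = max(|1-0.9|, |1-1.2|)

def robust_check(i):
    val = -D_lo * np.pi**2 / 2 - 2 * a_lo[i] * H[i]
    val += b_star[i, :].sum() + b_star[:, i].sum()   # Σ_j b*_ij F_j + Σ_j b*_ji F_i
    val += c_star[i, :].sum() + Lam[:, i].sum()      # Σ_j c*_ij G_j + Σ_j Λ_ji
    val += (c_star[:, i].sum() + Del[:, i].sum()) / (1 - mu)
    val -= Theta[i]**2 / (1 - 2 * q_lo[i])           # -Θ_i²/(ϖ̌ - 2q̲_i), positive
    return val

print(round(robust_check(0), 4), round(robust_check(1), 4))  # both negative
```

The same helper, with the hatted weights and the extra ϖ́(p*_i)² term added, covers condition (33) of Corollary 2.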


Based on Theorem 2, stochastic reaction-diffusion neural networks (37) with parameter uncertainties are robust input-strictly passive. Meanwhile, it is easy to verify that condition (33) is satisfied with ḿ_i = ϖ́ = 1 and ά_{ij} = β́_{ij} = θ́_{ij} = ή_{ij} = 0.5, i, j = 1, 2. Hence, stochastic reaction-diffusion neural networks (37) with parameter uncertainties are robust output-strictly passive. Figs. 4 and 5 show the state trajectories of w_1(t, x) and w_2(t, x) with parameter uncertainties and input u_i(t, x) = sin(πtx), i = 1, 2, respectively. Similarly, define e_1(t) = E Σ_{i=1}^{2} ∫_0^{t} ∫_X v_i(s, x) u_i(s, x) dx ds − 0.5 E Σ_{i=1}^{2} ∫_0^{t} ∫_X u_i²(s, x) dx ds and e_2(t) = E Σ_{i=1}^{2} ∫_0^{t} ∫_X v_i(s, x) u_i(s, x) dx ds − 0.5 E Σ_{i=1}^{2} ∫_0^{t} ∫_X v_i²(s, x) dx ds. Fig. 6 shows the trajectories of e_1(t) and e_2(t) with parameter uncertainties.

5. Conclusions


Passivity theory has significant applications in many scientific fields, including stability, signal processing, fuzzy control, and chaos control and synchronization [11]. In this study, stochastic passivity and robust stochastic passivity for a class of stochastic reaction-diffusion neural networks with Dirichlet boundary conditions and time-varying delays have been investigated by using inequality techniques and theories of stochastic analysis. An illustrative example has been provided to substantiate the correctness of the obtained theoretical criteria.

References

AC

[1] L. O. Chua, L. Yang, Cellular neural networks: Applications, IEEE Trans. Circuits Syst. 35 (10) (1988) 1273-1290.
[2] Z. G. Zeng, J. Wang, Associative memories based on continuous-time cellular neural networks designed using space-invariant cloning templates, Neural Netw. 22 (5) (2009) 651-657.
[3] Z. G. Zeng, J. Wang, Design and analysis of high-capacity associative memories based on a class of discrete-time recurrent neural networks, IEEE Trans. Syst., Man Cybern. B, Cybern. 38 (6) (2008) 1525-1536.
[4] Z. G. Zeng, J. Wang, X. X. Liao, Global exponential stability of a general class of recurrent neural networks with time-varying delays, IEEE Trans. Circuits Syst. I: Fundam. Theory Appl. 50 (10) (2003) 1353-1358.
[5] Z. G. Zeng, W. X. Zheng, Multistability of two kinds of recurrent neural networks with activation functions symmetrical about the origin on the phase plane, IEEE Trans. Neural Netw. Learn. Syst. 24 (11) (2013) 1749-1762.
[6] Z. G. Wu, J. H. Park, H. Su, J. Chu, Stochastic stability analysis of piecewise homogeneous Markovian jump neural networks with mixed time-delays, J. Frankl. Inst. 349 (6) (2012) 2136-2150.
[7] Q. X. Zhu, J. D. Cao, Stability analysis of Markovian jump stochastic BAM neural networks with impulse control and mixed time delays, IEEE Trans. Neural Netw. Learn. Syst. 23 (3) (2012) 467-479.
[8] Y. Sheng, Y. Shen, M. F. Zhu, Delay-dependent global exponential stability for delayed recurrent neural networks, IEEE Trans. Neural Netw. Learn. Syst., doi: 10.1109/TNNLS.2016.2608879.



[9] Q. C. Liu, J. H. Qin, C. B. Yu, Event-based agreement protocols for complex networks with time delays under pinning control, J. Frankl. Inst. 353 (15) (2016) 3999–4015.
[10] Z. G. Wu, P. Shi, H. Y. Su, J. Chu, Passivity analysis for discrete-time stochastic Markovian jump neural networks with mixed time delays, IEEE Trans. Neural Netw. 22 (10) (2011) 1566–1575.
[11] C. G. Li, X. F. Liao, Passivity analysis of neural networks with time delay, IEEE Trans. Circuits Syst. II, Exp. Briefs 52 (8) (2005) 471–475.
[12] Z. G. Wu, J. H. Park, H. Y. Su, J. Chu, New results on exponential passivity of neural networks with time-varying delays, Nonlinear Anal., RWA 13 (4) (2012) 1593–1599.
[13] J. L. Wang, H. N. Wu, L. Guo, Passivity and stability analysis of reaction-diffusion neural networks with Dirichlet boundary conditions, IEEE Trans. Neural Netw. 22 (12) (2011) 2105–2116.
[14] S. Y. Xu, W. X. Zheng, Y. Zou, Passivity analysis of neural networks with time-varying delays, IEEE Trans. Circuits Syst. II, Exp. Briefs 56 (4) (2009) 325–329.
[15] H. Y. Li, H. J. Gao, P. Shi, New passivity analysis for neural networks with discrete and distributed delays, IEEE Trans. Neural Netw. 21 (11) (2010) 1842–1847.
[16] Z. Y. Guo, J. Wang, Z. Yan, Passivity and passification of memristor-based recurrent neural networks with time-varying delays, IEEE Trans. Neural Netw. Learn. Syst. 25 (11) (2014) 2099–2109.
[17] R. Samidurai, S. Rajavel, Q. X. Zhu, R. Raja, H. Zhou, Robust passivity analysis for neutral-type neural networks with mixed and leakage delays, Neurocomputing 175 (2016) 635–643.
[18] J. L. Wang, H. N. Wu, T. W. Huang, Passivity-based synchronization of a class of complex dynamical networks with time-varying delay, Automatica 56 (2015) 105–112.
[19] J. L. Wang, H. N. Wu, T. W. Huang, S. Y. Ren, Passivity and synchronization of linearly coupled reaction-diffusion neural networks with adaptive coupling, IEEE Trans. Cybern. 45 (9) (2015) 1942–1952.
[20] B. Y. Zhang, W. X. Zheng, S. Y. Xu, Passivity analysis and passive control of fuzzy systems with time-varying delays, Fuzzy Sets Syst. 174 (1) (2011) 83–98.
[21] P. Balasubramaniam, G. Nagamani, R. Rakkiyappan, Passivity analysis for neural networks of neutral type with Markovian jumping parameters and time delay in the leakage term, Commun. Nonlinear Sci. Numer. Simulat. 16 (11) (2011) 4422–4437.
[22] S. P. Wen, Z. G. Zeng, T. W. Huang, Y. R. Chen, Passivity analysis of memristor-based recurrent neural networks with time-varying delays, J. Frankl. Inst. 350 (8) (2013) 2354–2370.
[23] Z. S. Wang, H. G. Zhang, P. Li, An LMI approach to stability analysis of reaction–diffusion Cohen–Grossberg neural networks concerning Dirichlet boundary conditions and distributed delays, IEEE Trans. Syst., Man Cybern. B, Cybern. 40 (6) (2010) 1596–1606.
[24] Q. Ma, G. Feng, S. Y. Xu, Delay-dependent stability criteria for reaction–diffusion neural networks with time-varying delays, IEEE Trans. Cybern. 43 (6) (2013) 1913–1920.
[25] J. P. Zhou, S. Y. Xu, B. Y. Zhang, Y. Zou, H. Shen, Robust exponential stability of uncertain stochastic neural networks with distributed delays and reaction-diffusions, IEEE Trans. Neural Netw. Learn. Syst. 23 (9) (2012) 1407–1416.
[26] W. H. Chen, S. X. Luo, W. X. Zheng, Impulsive synchronization of reaction-diffusion neural networks with mixed delays and its application to image encryption, IEEE Trans. Neural Netw. Learn. Syst. 27 (12) (2016) 2696–2710.
[27] C. Hu, H. J. Jiang, Z. D. Teng, Impulsive control and synchronization for delayed neural networks with reaction–diffusion terms, IEEE Trans. Neural Netw. 21 (1) (2010) 67–81.
[28] Z. S. Wang, H. G. Zhang, Global asymptotic stability of reaction–diffusion Cohen–Grossberg neural networks with continuously distributed delays, IEEE Trans. Neural Netw. 21 (1) (2010) 39–49.
[29] X. S. Yang, J. D. Cao, Z. C. Yang, Synchronization of coupled reaction-diffusion neural networks with time-varying delays via pinning-impulsive controller, SIAM J. Control Optim. 51 (5) (2013) 3486–3510.
[30] W. L. He, F. Qian, J. D. Cao, Pinning-controlled synchronization of delayed neural networks with distributed-delay coupling via impulsive control, Neural Netw. 85 (2017) 1–9.
[31] C. X. Huang, X. S. Yang, Y. G. He, Stability analysis of stochastic reaction-diffusion Cohen-Grossberg neural networks with time-varying delays, Discrete Dyn. Nat. Soc. 2009 (2009) 1–19.
[32] Q. Ma, S. Y. Xu, Y. Zou, G. D. Shi, Synchronization of stochastic chaotic neural networks with reaction-diffusion terms, Nonlinear Dyn. 67 (3) (2012) 2183–2196.
[33] C. Hu, J. Yu, H. J. Jiang, Z. D. Teng, Exponential synchronization for reaction–diffusion networks with mixed delays in terms of p-norm via intermittent driving, Neural Netw. 31 (2012) 1–11.
[34] Q. X. Zhu, J. Cao, Exponential stability analysis of stochastic reaction-diffusion Cohen–Grossberg neural networks with mixed delays, Neurocomputing 74 (17) (2011) 3084–3091.
[35] Q. T. Gan, Adaptive synchronization of stochastic neural networks with mixed time delays and reaction–diffusion terms, Nonlinear Dyn. 69 (4) (2012) 2207–2219.
[36] Q. T. Gan, Exponential synchronization of stochastic Cohen–Grossberg neural networks with mixed time-varying delays and reaction–diffusion via periodically intermittent control, Neural Netw. 31 (2012) 12–21.
[37] X. R. Mao, C. G. Yuan, Stochastic Differential Equations with Markovian Switching, Imperial College Press, London, 2006.
[38] J. L. Wang, H. N. Wu, Passivity of delayed reaction–diffusion networks with application to a food web model, Appl. Math. Comput. 219 (24) (2013) 11311–11326.
[39] S. I. Niculescu, R. Lozano, On the passivity of linear delay systems, IEEE Trans. Autom. Control 46 (3) (2001) 460–464.
[40] J. Yao, H. O. Wang, Z. H. Guan, W. S. Xu, Passive stability and synchronization of complex spatio-temporal switching networks with time delays, Automatica 45 (7) (2009) 1721–1728.
