Finite-time stability for memristor based switched neural networks with time-varying delays via average dwell time approach


Accepted Manuscript

Finite-time stability for memristor based switched neural networks with time-varying delays via average dwell time approach

M. Syed Ali, S. Saravanan

PII: S0925-2312(17)31628-4
DOI: 10.1016/j.neucom.2017.10.003
Reference: NEUCOM 18982

To appear in: Neurocomputing

Received date: 20 February 2017
Revised date: 24 August 2017
Accepted date: 3 October 2017

Please cite this article as: M. Syed Ali, S. Saravanan, Finite-time stability for memristor based switched neural networks with time-varying delays via average dwell time approach, Neurocomputing (2017), doi: 10.1016/j.neucom.2017.10.003

This is a PDF file of an unedited manuscript that has been accepted for publication. As a service to our customers we are providing this early version of the manuscript. The manuscript will undergo copyediting, typesetting, and review of the resulting proof before it is published in its final form. Please note that during the production process errors may be discovered which could affect the content, and all legal disclaimers that apply to the journal pertain.


M. Syed Ali∗, S. Saravanan†

Department of Mathematics, Thiruvalluvar University, Vellore - 632 115, Tamilnadu, India

Abstract: In this paper, we investigate the finite-time stability of a class of memristor-based switched neural networks with time-varying delays. Proper Lyapunov-Krasovskii functionals (LKFs) are constructed, and the average dwell time technique and the mode-dependent average dwell time technique, together with a free-matrix-based integral inequality and Jensen's inequality, are used to estimate the upper bound of the derivative of the LKF. Several sufficient conditions are derived that ensure the finite-time stability of memristor-based switched neural networks with discrete and distributed delays, in the sense of feasible solutions. The finite-time stability conditions are presented in terms of linear matrix inequalities (LMIs), which can be solved easily with Matlab tools. Finally, numerical examples are provided to verify the effectiveness and benefit of the proposed criteria.

Key Words: Average dwell time approach, Finite-time stability, Lyapunov-Krasovskii functional, Memristor, Switched neural networks.

1 Introduction

The memristor (a contraction of memory resistor) was postulated by Chua in 1971 [1], and a prototype of the memristor was first physically realized by a Hewlett-Packard laboratory team working on nanotechnology in 2008 [2]. It is expected that memristive neural networks will help us build brain-like machines implementing the synapses of biological brains, because conventional neural networks have had only limited success in this respect; it is therefore necessary and important to develop memristive neural networks. The fourth basic circuit element was named the memristor to distinguish it from the other three elements: the resistor, the capacitor and the inductor. According to Chua's theory, the memristor has an important and distinctive ability. The memristor not only shares many properties of resistors, including the same unit of measurement; it is a two-terminal element whose characteristic lies in its variable resistance, called memristance. Obviously, the memristor's distinctive ability is that its memristance depends on how much electric charge has passed through it in a given direction; in other words, it memorizes the quantity of electric charge that has passed. Research on and applications of the memristor have become increasingly prominent in many fields, such as new-generation computers and powerful brain-like neural computers. Since its physical discovery, the memristor has undoubtedly attracted global attention [3]-[5].

∗ Corresponding author. E-mail addresses: [email protected] (M. Syed Ali), [email protected] (S. Saravanan).


The memristor is a nonlinear circuit element, and its value, called memristance (or memductance), is not unique. This is because the memristance depends on the magnitude and polarity of the voltage applied to the device and on the length of time the voltage has been applied. When the voltage is turned off, the memristor remembers its most recent value until the next time it is turned on. Memristor-based neural networks can be viewed as a special case of switched networks in which the switching rule depends on the network state, but their analysis differs from that of general switched systems because of these special characteristics. Studying memristor-based neural networks, whose right-hand sides are discontinuous, is therefore not an easy task. A general class of uncertain memristor-based neural networks with time delay was formulated and studied in [6]. Exponential stability for a class of memristor-based neural networks with time-varying delays was investigated in [7]. Finite-time stability of fractional-order complex-valued memristive neural networks with time delays was addressed in [8]. The study of time-delay systems plays a paramount role, because the existence of delays is a main source of instability, oscillation and poor performance. Thus, in recent years genuine efforts have been devoted to delayed memristor neural networks (see [9]-[15] and the references therein). In addition to the many potential applications, stability results for neural networks and dynamical systems are established in [16]-[24].


In practical applications, however, one often requires fast, or even finite-time, convergence; the concept of finite-time stability therefore arises naturally. Recently, finite-time stability and synchronization of neural networks have been studied in [25]-[28], whereas the systems in [29] are without delays, and the systems in [31, 32] and [33]-[36] have continuous right-hand sides. So far, few authors have studied the finite-time stability of memristor-based neural networks. This is because memristors switch according to the network state, and this switching behavior brings difficulties to the finite-time stability analysis of the system. Moreover, as shown in [33], it is difficult to find a Lyapunov functional satisfying the derivative condition for finite-time stability of a system with delays, because delayed systems almost always have more complex dynamics than systems without delays. Recently, various sufficient conditions for exponential synchronization of memristor-based Cohen-Grossberg neural networks with discrete and distributed delays were given in [37].


There are many types of stability, including stochastic stability, asymptotic stability, pinning stability, globally exponential stability, exponential stability, finite-time stability, etc. Unfortunately, in most of the literature only asymptotic or exponential stability, whose convergence time may be arbitrarily large, has been established for memristor neural networks [38, 39]. Finite-time stability (or finite-time boundedness) differs from exponential or asymptotic stability: it describes systems whose state remains within a prescribed bound over a finite time interval. Thus, the convergence time under finite-time stability can be compressed, promising a quick response. Moreover, finite-time stabilization can be achieved by many control approaches, such as adaptive feedback control, sliding mode control, linear feedback control, and so on. More recently, the finite-time stability and finite-time synchronization problems of complex networks, of continuous and discontinuous neural networks, and of other nonlinear systems were investigated, and stronger results were reported in [40]-[43]. Nevertheless, very few papers have been published on the finite-time stability of memristor-based neural networks, and further investigation is needed. Motivated by this, we study the finite-time stability of memristor-based switched neural networks with time-varying delays using the average dwell time approach and the mode-dependent average dwell time approach, and derive finite-time stability conditions using free weighting matrices. The main contributions of this paper are the following three aspects: the memristor-based switched neural networks with time-varying delays are introduced; the main results are derived using Lyapunov-Krasovskii stability theory and the LMI technique, together with a new free-matrix-based integral inequality combined with other inequalities, to obtain stability criteria; and the finite-time stability analysis is carried out using the average and mode-dependent average dwell time techniques. Finally, two numerical examples are given to illustrate the potential of the obtained conditions.

Notation: \(\mathbb{R}^n\) denotes the n-dimensional Euclidean space and \(\mathbb{R}^{m\times n}\) is the set of all m × n real matrices. The superscript T denotes matrix transposition, and for symmetric matrices A and B, A ≥ B (respectively, A > B) means that A − B is positive semi-definite (respectively, positive definite). ‖·‖ denotes the Euclidean norm in \(\mathbb{R}^n\). For a square matrix Q, λmax(Q) (respectively, λmin(Q)) denotes the largest (respectively, smallest) eigenvalue of Q. The asterisk ∗ in a symmetric matrix denotes a term induced by symmetry, and diag(·) stands for a diagonal matrix.

2 Problem Formulation and Preliminaries

According to the feature of the memristor and the current-voltage characteristic given in [11], we propose a simplified mathematical model of the memristance as follows:

\[
M(u(t))=\begin{cases}M', & \dot{u}(t)\ge 0,\\ M'', & \dot{u}(t)<0,\end{cases} \qquad (1)
\]

where M(u(t)) is the memristance of the memristor, u(t) is the voltage across the memristor, and \(\dot{u}(t)\) is the derivative of u(t) with respect to time t. From this representation we can see that the memristance switches according to the voltage applied to the memristor.
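As a quick illustration of the two-valued memristance in (1), the following Python sketch switches between two resistance values M' and M'' according to the sign of the voltage derivative. The numerical values (100 Ω and 16 kΩ) are illustrative placeholders, not taken from the paper:

```python
def memristance(u_dot, m_prime=100.0, m_double_prime=16000.0):
    """Two-state memristance model (1): the device takes the value M'
    when the voltage derivative is nonnegative and M'' otherwise."""
    return m_prime if u_dot >= 0 else m_double_prime
```

The state-dependence of this value is what later makes the network coefficients piecewise right continuous functions of the state.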


Using Kirchhoff's current law, the equation of the ith neuron can be written in the following form:

\[
\frac{dy_i(t)}{dt}=\frac{1}{C_i}\Bigg[-\bigg(\sum_{j=1}^{n}\mathrm{sign}_{ij}(M_{1ij}+M_{2ij}+M_{3ij})+\frac{1}{R_i}\bigg)y_i(t)+\sum_{j=1}^{n}\mathrm{sign}_{ij}\,g_j(y_j(t))M_{1ij}+\sum_{j=1}^{n}\mathrm{sign}_{ij}\,g_j(y_j(t-\tau_j(t)))M_{2ij}+\sum_{j=1}^{n}\mathrm{sign}_{ij}\int_{t-\rho_j(t)}^{t}g_j(y_j(s))\,ds\,M_{3ij}\Bigg]+I_i(t), \qquad (2)
\]


where \(C_i\) represents the capacitance; \(y_i(t)\) is the voltage of \(C_i\); \(\tau_j(t)\) is the discrete time-varying delay, assumed to be a differentiable function, and \(\rho_j(t)\) is the distributed time-varying delay; for all t ≥ 0 they satisfy \(0\le\tau_j(t)\le\tau\), \(\dot{\tau}_j(t)\le h\);

\[
\mathrm{sign}_{ij}=\begin{cases}1, & i\neq j,\\ -1, & i=j,\end{cases}\qquad i=1,2,\ldots,n,
\]

is the sign function; \(g_j(y_j(t))\) and \(g_j(y_j(t-\tau_j(t)))\) denote the activation functions of \(y_j(t)\) and \(y_j(t-\tau_j(t))\); \(M_{1ij}\), \(M_{2ij}\) and \(M_{3ij}\) denote the memductances of the functions \(g_j(y_j(t))\), \(g_j(y_j(t-\tau_j(t)))\) and \(\int_{t-\rho_j(t)}^{t}g_j(y_j(s))\,ds\), respectively; \(I_i(t)\) describes the external input. The overall system (2) can be rewritten as follows:

\[
\dot{y}_i(t)=-a_i(t)y_i(t)+\sum_{j=1}^{n}b_{ij}(t)g_j(y_j(t))+\sum_{j=1}^{n}c_{ij}(t)g_j(y_j(t-\tau_j(t)))+\sum_{j=1}^{n}d_{ij}(t)\int_{t-\rho_j(t)}^{t}g_j(y_j(s))\,ds+I_i(t),
\]

where

\[
a_i(t)=\sum_{j=1}^{n}\Big(\frac{1}{C_i}\mathrm{sign}_{ij}M_{1ij}+\frac{1}{C_i}\mathrm{sign}_{ij}M_{2ij}+\frac{1}{C_i}\mathrm{sign}_{ij}M_{3ij}\Big)+\frac{1}{R_iC_i},
\]


\[
b_{ij}(t)=\frac{1}{C_i}\mathrm{sign}_{ij}M_{1ij},\qquad c_{ij}(t)=\frac{1}{C_i}\mathrm{sign}_{ij}M_{2ij},\qquad d_{ij}(t)=\frac{1}{C_i}\mathrm{sign}_{ij}M_{3ij}.
\]

For convenience, the memristor neural network can be written in vector form as follows:

\[
\dot{y}(t)=-A(t)y(t)+B(t)g(y(t))+C(t)g(y(t-\tau(t)))+D(t)\int_{t-\rho(t)}^{t}g(y(s))\,ds+I(t), \qquad (3)
\]

where

\[
\begin{aligned}
y(t)&=[y_1(t),y_2(t),\ldots,y_n(t)]^T,\\
g(y(t))&=[g_1(y_1(t)),g_2(y_2(t)),\ldots,g_n(y_n(t))]^T,\\
g(y(t-\tau(t)))&=[g_1(y_1(t-\tau_1(t))),g_2(y_2(t-\tau_2(t))),\ldots,g_n(y_n(t-\tau_n(t)))]^T,\\
\int_{t-\rho(t)}^{t}g(y(s))\,ds&=\Big[\int_{t-\rho_1(t)}^{t}g_1(y_1(s))\,ds,\int_{t-\rho_2(t)}^{t}g_2(y_2(s))\,ds,\ldots,\int_{t-\rho_n(t)}^{t}g_n(y_n(s))\,ds\Big]^T,\\
A(t)&=\mathrm{diag}\{a_1(t),a_2(t),\ldots,a_n(t)\},\quad B(t)=(b_{ij}(t))_{n\times n},\quad C(t)=(c_{ij}(t))_{n\times n},\quad D(t)=(d_{ij}(t))_{n\times n},
\end{aligned}
\]

\((b_{ij}(t))\), \((c_{ij}(t))\), \((d_{ij}(t))\) are piecewise right continuous and can be described as

\[
b_{ij}(t)=\begin{cases}b'_{ij}, & \mathrm{sign}_{ij}\dfrac{df_j(x_j(t))}{dt}-\dfrac{dx_i(t)}{dt}\ge 0,\\[4pt] b''_{ij}, & \mathrm{sign}_{ij}\dfrac{df_j(x_j(t))}{dt}-\dfrac{dx_i(t)}{dt}<0,\end{cases}
\]
\[
c_{ij}(t)=\begin{cases}c'_{ij}, & \mathrm{sign}_{ij}\dfrac{df_j(x_j(t-\tau_j(t)))}{dt}-\dfrac{dx_i(t)}{dt}\ge 0,\\[4pt] c''_{ij}, & \mathrm{sign}_{ij}\dfrac{df_j(x_j(t-\tau_j(t)))}{dt}-\dfrac{dx_i(t)}{dt}<0,\end{cases}
\]
\[
d_{ij}(t)=\begin{cases}d'_{ij}, & \mathrm{sign}_{ij}\dfrac{df_j(x_j(t-\tau_j(t)))}{dt}-\dfrac{dx_i(t)}{dt}\ge 0,\\[4pt] d''_{ij}, & \mathrm{sign}_{ij}\dfrac{df_j(x_j(t-\tau_j(t)))}{dt}-\dfrac{dx_i(t)}{dt}<0,\end{cases}
\]


where \(b'_{ij}\), \(b''_{ij}\), \(c'_{ij}\), \(c''_{ij}\), \(d'_{ij}\) and \(d''_{ij}\) are constants. Accordingly,

\[
a_i(t)=\frac{1}{R_iC_i}+\sum_{j=1}^{n}\big(b_{ij}(t)+c_{ij}(t)+d_{ij}(t)\big),
\]

so the connection weights \(a_i(t)\) are also piecewise right continuous.

2.1 Memristor Switched Neural Networks

Consider the memristor-based switched neural network built from system (3):

\[
\dot{y}(t)=-A_{\sigma(t)}(t)y(t)+B_{\sigma(t)}(t)g(y(t))+C_{\sigma(t)}(t)g(y(t-\tau(t)))+D_{\sigma(t)}(t)\int_{t-\rho(t)}^{t}g(y(s))\,ds+I(t), \qquad (4)
\]

where τ(t) and ρ(t) are time-varying differentiable functions satisfying \(0\le\tau(t)\le\bar{\tau}\), \(\dot{\tau}(t)\le h\), \(\rho(t)\le\bar{\rho}\), \(H=\max[\bar{\tau},\bar{\rho}]\), and \(\sigma(\cdot):[0,+\infty)\to\mathcal{N}=\{1,2,\ldots,m\}\) is a piecewise right continuous constant function, called the switching law. Corresponding to the switching signal σ(t), we have the switching sequence

\[
\Sigma=\{x_0;(i_0,t_0),\ldots,(i_k,t_k),\ldots\mid i_k\in\mathcal{N},\ k=0,1,\ldots\},
\]


where \(t_0\) is the initial time, \(x(t_0)\) is the initial state, and when \(t\in[t_k,t_{k+1})\), \(\sigma(t)=i_k\), i.e. the \(i_k\)th subsystem is active. Throughout this paper, we assume that the state of the switched neural network does not jump at the switching instants, that is, the trajectory x(t) is everywhere continuous. Moreover, the switching signal σ(t) has a finite number of switchings on any finite time interval.
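To make the switched delayed dynamics concrete, here is a minimal forward-Euler sketch of a scalar instance of (4) with two modes and a constant initial function. All numerical values (mode parameters, delay, switching period) are illustrative assumptions, not taken from the paper:

```python
import math

def simulate(T=5.0, dt=0.001, tau=0.5, switch_period=1.0):
    """Forward-Euler simulation of the scalar switched delayed system
    x'(t) = -a_i x(t) + b_i tanh(x(t)) + c_i tanh(x(t - tau)),
    with two modes i in {0, 1} toggled every `switch_period` seconds.
    A list serves as the history buffer for the delayed state."""
    modes = [(2.0, 0.5, 0.3), (3.0, 0.4, 0.2)]   # illustrative (a_i, b_i, c_i)
    n_hist = int(tau / dt)
    hist = [1.0] * n_hist                        # constant initial function phi = 1
    x = 1.0
    for k in range(int(T / dt)):
        t = k * dt
        a, b, c = modes[int(t // switch_period) % 2]
        x_delay = hist[0]                        # approximately x(t - tau)
        x = x + dt * (-a * x + b * math.tanh(x) + c * math.tanh(x_delay))
        hist.pop(0)                              # slide the delay window
        hist.append(x)
    return x
```

Both illustrative modes here are contractive, so the trajectory decays toward zero regardless of the dwell time; the dwell time restrictions of the following definitions matter when individual modes are not uniformly stable.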

Assumption 1. The activation function g is bounded, g(0) = 0, and there exist constants \(\delta_j^+\), \(\delta_j^-\) (j = 1, 2, ..., n) such that for all \(x_1,x_2\in\mathbb{R}\), \(x_1\neq x_2\),

\[
\delta_j^-\le\frac{g_j(x_1)-g_j(x_2)}{x_1-x_2}\le\delta_j^+.
\]


For presentation convenience, in the following we denote

\[
X_1=\mathrm{diag}\big\{\delta_1^-\delta_1^+,\ \delta_2^-\delta_2^+,\ \ldots,\ \delta_n^-\delta_n^+\big\},\qquad
X_2=\mathrm{diag}\Big\{\frac{\delta_1^-+\delta_1^+}{2},\ \frac{\delta_2^-+\delta_2^+}{2},\ \ldots,\ \frac{\delta_n^-+\delta_n^+}{2}\Big\}.
\]

Now, let \(y^*=[y_1^*,y_2^*,\ldots,y_n^*]^T\) be an equilibrium point of (2) and set \(x(t)=y(t)-y^*\). It is easy to see that (4) can be transformed into

\[
\dot{x}(t)=-A_{\sigma(t)}(t)x(t)+B_{\sigma(t)}(t)f(x(t))+C_{\sigma(t)}(t)f(x(t-\tau(t)))+D_{\sigma(t)}(t)\int_{t-\rho(t)}^{t}f(x(s))\,ds, \qquad (5)
\]


where \(x(t)=[x_1(t),x_2(t),\ldots,x_n(t)]^T\) is the state vector of the transformed system and \(f(x(t))=[f_1(x_1(t)),f_2(x_2(t)),\ldots,f_n(x_n(t))]^T\) with \(f_j(x_j(t))=g_j(x_j(t)+y_j^*)-g_j(y_j^*)\) and \(f_j(0)=0\), for j = 1, 2, ..., n. Note that \(f_j(\cdot)\) satisfies

\[
\delta_j^-\le\frac{f_j(x_1)-f_j(x_2)}{x_1-x_2}\le\delta_j^+,
\]

for \(x_1,x_2\in\mathbb{R}\), \(x_1\neq x_2\).

Definition 2.1. [30] (Finite-time stability) For a given time constant T, the switched neural network (5) is said to be finite-time stable with respect to \((c_1,c_2,T,R,\sigma(t))\) if

\[
\sup_{-H\le t_0\le 0}\{x^T(t_0)Rx(t_0),\ \dot{x}^T(t_0)R\dot{x}(t_0)\}\le c_1\ \Rightarrow\ x^T(t)Rx(t)<c_2,\quad t\in(0,T],
\]

where \(c_2>c_1>0\), R is a positive definite matrix and σ(t) is a switching signal.

Definition 2.2. [46] For any \(T_2>T_1\ge 0\), let \(N_\sigma(T_1,T_2)\) denote the number of switchings of σ(t) on the interval \((T_1,T_2)\). If

\[
N_\sigma(T_1,T_2)\le N_0+\frac{T_2-T_1}{\tau_a}
\]

holds for given \(N_0\ge 0\) and \(\tau_a>0\), then the constant \(\tau_a\) is called the average dwell time and \(N_0\) the chatter bound. Without loss of generality, we choose \(N_0=0\) throughout this paper.

Remark 1. Definition 2.2 means that if there exists a positive number \(\tau_a\) such that a switching signal has the average dwell time property, then the average time between any two consecutive switchings is no smaller than the common constant \(\tau_a\) for all system modes.


Definition 2.3. [15] For any \(T_1<T_2\), let \(N_{\sigma j}(T_1,T_2)\), \(j\in\mathcal{N}\), denote the number of times the jth subsystem is activated over the interval \((T_1,T_2)\), and let \(T_j(T_1,T_2)\) denote the total running time of the jth subsystem over \((T_1,T_2)\). We say that the switching signal σ(t) has a mode-dependent average dwell time \(\tau_{aj}\) if there exist positive numbers \(N_{0j}\) and \(\tau_{aj}\) such that

\[
N_{\sigma j}(T_1,T_2)\le N_{0j}+\frac{T_j(T_1,T_2)}{\tau_{aj}}.
\]

Furthermore, for sufficiently large \(T_2\), if there exist positive numbers \(\kappa_j^+\), \(\kappa_j^-\) such that

\[
\kappa_j^-\le\frac{T_j(T_1,T_2)}{T_2-T_1}\le\kappa_j^+,
\]

we say that the switching signal σ(t) has mode-dependent average dwell time \(\tau_{aj}\) and running time ratios \(\kappa_j^+\), \(\kappa_j^-\).

Remark 2. Definition 2.3 constructs a novel set of switching signals via the mode-dependent average dwell time technique: if there exist positive numbers \(\tau_{ai}\), \(i\in\mathcal{N}\), such that a switching signal has the mode-dependent average dwell time property, we only require that the average time among the intervals associated with the ith subsystem is larger than \(\tau_{ai}\) (note that these intervals need not be adjacent).

Lemma 2.4. [44] (Jensen's inequality) For any constant matrix \(M\in\mathbb{R}^{m\times m}\), \(M=M^T>0\), and any scalars a and b such that the integrals concerned are well defined, the following inequalities hold:

\[
(b-a)\int_a^b x^T(s)Mx(s)\,ds\ \ge\ \Big(\int_a^b x(s)\,ds\Big)^T M\Big(\int_a^b x(s)\,ds\Big),
\]
\[
\frac{(b-a)^2}{2}\int_a^b\!\!\int_\theta^b x^T(s)Mx(s)\,ds\,d\theta\ \ge\ \Big(\int_a^b\!\!\int_\theta^b x(s)\,ds\,d\theta\Big)^T M\Big(\int_a^b\!\!\int_\theta^b x(s)\,ds\,d\theta\Big).
\]
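The first (single-integral) inequality of Lemma 2.4 can be sanity-checked numerically: with scalar M = 1 and a Riemann-sum discretization, the gap (b−a)∫x² ds − (∫x ds)² is nonnegative by the Cauchy-Schwarz inequality. The helper below is our own illustration, not code from the paper:

```python
def jensen_gap(x_samples, a, b):
    """Discretized Jensen gap (b-a) * int x^2 ds - (int x ds)^2 with M = 1.
    Nonnegative for any sample vector, by Cauchy-Schwarz."""
    n = len(x_samples)
    ds = (b - a) / n
    int_sq = sum(v * v for v in x_samples) * ds   # approximates int x(s)^2 ds
    int_x = sum(x_samples) * ds                   # approximates int x(s) ds
    return (b - a) * int_sq - int_x ** 2
```

Equality holds exactly when x is constant, which is a useful check that the discretization has not introduced a spurious bias.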


Lemma 2.5. [45] (Free-matrix-based integral inequality) Let x be a differentiable function \([\alpha,\beta]\to\mathbb{R}^n\). For symmetric matrices \(\bar{R}\in\mathbb{R}^{n\times n}\) and \(\bar{Z}_1,\bar{Z}_3\in\mathbb{R}^{3n\times 3n}\), and any matrices \(\bar{Z}_2\in\mathbb{R}^{3n\times 3n}\) and \(\bar{N}_1,\bar{N}_2\in\mathbb{R}^{3n\times n}\) satisfying

\[
\Pi=\begin{bmatrix}\bar{Z}_1 & \bar{Z}_2 & \bar{N}_1\\ * & \bar{Z}_3 & \bar{N}_2\\ * & * & \bar{R}\end{bmatrix}\ge 0, \qquad (6)
\]

the following inequality holds:

\[
\int_\alpha^\beta \dot{x}^T(s)\bar{R}\dot{x}(s)\,ds\le\varpi^T\Omega\varpi, \qquad (7)
\]

where

\[
\varpi=\Big[x^T(\beta)\ \ x^T(\alpha)\ \ \frac{1}{\beta-\alpha}\int_\alpha^\beta x^T(s)\,ds\Big]^T,\qquad
\Omega=(\beta-\alpha)\Big(\bar{Z}_1+\frac{1}{3}\bar{Z}_3\Big)+\mathrm{sym}(\bar{N}_1\Pi_1+\bar{N}_2\Pi_2),
\]

\(\Pi_1=\bar{e}_1-\bar{e}_2\), \(\Pi_2=2\bar{e}_3-\bar{e}_1-\bar{e}_2\), \(\bar{e}_1=[I\ 0\ 0]\), \(\bar{e}_2=[0\ I\ 0]\), \(\bar{e}_3=[0\ 0\ I]\).

Lemma 2.6. [47] Assume that the function h(s) satisfies Assumption 1. Then for \(u\ge v\) the following inequality holds:

\[
\int_v^u\big(h(s)-h(v)\big)\,ds\le(u-v)\big(h(u)-h(v)\big).
\]


3 Main Results

In this section, we derive sufficient LMI conditions for the finite-time stability of memristor-based switched neural networks. We write σ(t) = i to mean that the ith subsystem is activated, \(\forall i\in\mathcal{N}\); then (5) becomes

\[
\left\{\begin{aligned}
\dot{x}(t)&=-A_i(t)x(t)+B_i(t)f(x(t))+C_i(t)f(x(t-\tau(t)))+D_i(t)\int_{t-\rho(t)}^{t}f(x(s))\,ds,\\
x(\theta)&=\phi(\theta),\quad\theta\in[-H,0],
\end{aligned}\right. \qquad (8)
\]

where φ(θ) is a continuous vector-valued initial function. We define the following vectors:

\[
\xi^T(t)=\mathrm{col}\Big[x^T(t)\ \ x^T(t-\tau(t))\ \ x^T(t-\bar{\tau})\ \ f^T(x(t))\ \ f^T(x(t-\tau(t)))\ \ f^T(x(t-\bar{\tau}))\ \ v_1^T(t)\ \ v_2^T(t)\ \ \int_{t-\rho(t)}^{t}x^T(s)\,ds\Big],
\]
\[
e_k=\mathrm{col}\big[0_{(k-1)n\times n},\ I_n,\ 0_{(9-k)n\times n}\big],\quad k=1,2,\ldots,9,
\]
\[
v_1^T(t)=\int_{t-\tau(t)}^{t}\frac{x^T(s)}{\tau(t)}\,ds,\qquad v_2^T(t)=\int_{t-\bar{\tau}}^{t-\tau(t)}\frac{x^T(s)}{\bar{\tau}-\tau(t)}\,ds.
\]


Theorem 3.1. For given scalars \(\bar{\tau}\), \(\bar{\rho}\), h, α, β, µ > 1, \(c_1\), \(c_2\) and T, and the diagonal matrices \(\Delta_1=\mathrm{diag}\{\delta_1^-,\delta_2^-,\ldots,\delta_n^-\}\) and \(\Delta_2=\mathrm{diag}\{\delta_1^+,\delta_2^+,\ldots,\delta_n^+\}\), the memristive switched neural network (8) is finite-time stable for any switching signal σ(t) with average dwell time if there exist symmetric positive definite matrices \(P_i\), \(Q_{1i}\), \(Q_{2i}\), \(Z_{1i}\), \(Z_{2i}\), \(Y_{1i}\), \(Y_{2i}\) and \(W_i\), any matrices \(U_1\), \(U_2\), \(U_3\), \(V_1\), \(V_2\), \(V_3\), \(W_1\), \(W_2\), \(W_3\), \(\bar{N}_1\), \(\bar{N}_2\), \(\bar{N}_3\), \(\bar{N}_4\), \(\bar{N}_5\), \(\bar{N}_6\) of appropriate dimensions, and diagonal matrices \(\Gamma_{1i}\), \(\Gamma_{2i}\) and \(\Gamma_{3i}\), such that the following LMIs hold:

\[
M_0+\bar{\tau}M_1<0, \qquad (9)
\]
\[
M_0+\bar{\tau}M_2<0, \qquad (10)
\]
\[
\Upsilon_1=\begin{bmatrix}U_1 & U_2 & \bar{N}_1\\ * & U_3 & \bar{N}_2\\ * & * & Z_{1i}\end{bmatrix}\ge 0, \qquad (11)
\]
\[
\Upsilon_2=\begin{bmatrix}V_1 & V_2 & \bar{N}_3\\ * & V_3 & \bar{N}_4\\ * & * & Z_{1i}\end{bmatrix}\ge 0, \qquad (12)
\]
\[
\Upsilon_3=\begin{bmatrix}W_1 & W_2 & \bar{N}_5\\ * & W_3 & \bar{N}_6\\ * & * & Y_{1i}\end{bmatrix}\ge 0, \qquad (13)
\]

\[
\lambda_1c_2e^{-\beta T}>\Lambda c_1. \qquad (14)
\]

The network is then finite-time stable with respect to \((c_1,c_2,T,R,\sigma(t))\), where µ > 1 satisfies

\[
P_i<\mu P_j,\ Q_{1i}<\mu Q_{1j},\ Q_{2i}<\mu Q_{2j},\ Z_{1i}<\mu Z_{1j},\ Z_{2i}<\mu Z_{2j},\ Y_{1i}<\mu Y_{1j},\ Y_{2i}<\mu Y_{2j},\ W_i<\mu W_j,\quad\forall i,j\in\mathcal{N}, \qquad (15)
\]

under the following average dwell time scheme:

\[
\tau_a>\tau_a^*=\frac{T\ln\mu}{\ln(c_2e^{-\beta T})-\ln(\Lambda c_1)}, \qquad (16)
\]
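For concrete numbers, the minimal average dwell time (16) is a one-line computation. In the hedged sketch below we normalize \(\lambda_1=1\) so that conditions (14) and (16) use the same constants; all input values are illustrative:

```python
import math

def minimal_adt(mu, beta, T, Lam, c1, c2):
    """tau_a* = T ln(mu) / (ln(c2 e^{-beta T}) - ln(Lam c1)) as in (16).
    Requires c2 e^{-beta T} > Lam c1 (condition (14) with lambda_1 = 1)."""
    denom = math.log(c2 * math.exp(-beta * T)) - math.log(Lam * c1)
    if denom <= 0:
        raise ValueError("finite-time stability condition violated")
    return T * math.log(mu) / denom
```

Note that µ = 1 (identical Lyapunov matrices across modes) gives \(\tau_a^*=0\), i.e. arbitrary switching, while larger µ demands slower switching.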

(16)

CR IP T

where M0 = Ψ1 + Ψ2 + Ψ3 + Ψ4 + Ψ5 + Ψ6 + Ψ7 , M1 = Φ1 + Φ2 , M2 = Φ3 + Φ4 , Ψ1 = 2e1 Pi eTd + 2αe1 Pi eT1 ,

Ψ2 = e2α¯τ [e1 e4 ](Q1i + Q2i )[e1 e4 ]T − (1 − µ)[e2 e5 ]Q1i [e2 e5 ]T − [e3 e6 ]Q2i [e3 e6 ]T ,

AN US

Ψ3 = 4α(e4 − ∆1 e1 )L1 eT1 + 2(e4 − ∆1 e1 )L1 eTd + 4α(∆2 e1 − e4 )L2 eT1 + 2(∆2 e1 − e4 )L2 eTd , o n ¯1 (e1 − e2 )T + [e1 e2 e7 ]N ¯2 (2e7 − e1 − e2 )T Ψ4 = τ¯ed Z1 eTd + τ¯e1 Z2 eT1 + e−2α¯τ Sym [e1 e2 e7 ]N n o ¯3 (e2 − e3 )T + [e2 e3 e8 ]N ¯4 (2e8 − e2 − e3 )T , + e−2α¯τ Sym [e2 e3 e8 ]N

1 τ¯2 ed Y1i eTd − 2e2α¯τ (e1 − e7 )Y1i (e1 − e7 )T − 2e2α¯τ (e2 − e8 )Y1i (e2 − e8 )T − e9 Y2i eT9 , 2 ρ¯ Ψ6 = µeT1 Wi eT1 , Ψ7 = −e1 X1 Γ1i eT1 + e1 X2 Γ1i eT4 − e4 Γ1i eT4 − e2 X1 Γ2i eT2 + e2 X2 Γ2i eT5 − e5 Γ2i eT5 Ψ5 =

Φ4 ed

M

Φ3

ED

Φ1

− e3 X1 Γ3i eT3 + e3 X2 Γ3i eT6 − e6 Γ3i eT6 , 1 = e−2α¯τ [e1 e2 e7 ] − (U1 + U3 )[e1 e2 e7 ]T − e−2α¯τ e7 Z2i eT7 , Φ2 = 2αe1 Wi eT1 + 2e1 Wi eTd , 3 1 = e2α¯τ [e2 e3 e8 ](V1 + V3 )[e2 e3 e8 ]T − e2α¯τ e8 Z2i eT8 , 3 n o 1 ¯5 (e1 − e2 )T + [e1 e2 e7 ]N ¯6 (2e7 − e1 − e2 )T , = e2α¯τ τ¯[e1 e2 e7 ](W1 + W3 )[e1 e2 e7 ] + e−2α¯τ Sym [e1 e2 e7 ]N 3 = [−Ai 0 0 Bi Ci 0 0 0 Di ]T ,

and where

\[
\begin{aligned}
&\lambda_1=\min_{i\in\mathcal N}\lambda_{\min}(\bar P_i),\quad \lambda_2=\max_{i\in\mathcal N}\lambda_{\max}(\bar P_i),\quad \lambda_3=\max_{i\in\mathcal N}\lambda_{\max}(\bar Q_{1i}),\quad \lambda_4=\max_{i\in\mathcal N}\lambda_{\max}(\bar Q_{2i}),\\
&\lambda_5=\lambda_{\max}(\bar L_1),\quad \lambda_6=\lambda_{\max}(\bar L_2),\quad \lambda_7=\max_{i\in\mathcal N}\lambda_{\max}(\bar Z_{1i}),\quad \lambda_8=\max_{i\in\mathcal N}\lambda_{\max}(\bar Z_{2i}),\\
&\lambda_9=\max_{i\in\mathcal N}\lambda_{\max}(\bar Y_{1i}),\quad \lambda_{10}=\max_{i\in\mathcal N}\lambda_{\max}(\bar Y_{2i}),\quad \lambda_{11}=\max_{i\in\mathcal N}\lambda_{\max}(\bar W_i).
\end{aligned}
\]


Proof. Consider the following Lyapunov-Krasovskii functional:

\[
V(x_t,t)=\sum_{k=1}^{6}V_k(x_t,t), \qquad (17)
\]

where

\[
\begin{aligned}
V_1(x_t,t)={}&e^{2\alpha t}x^T(t)P_ix(t),\\
V_2(x_t,t)={}&e^{2\alpha\bar\tau}\int_{t-\tau(t)}^{t}e^{2\alpha s}\eta^T(s)Q_{1i}\eta(s)\,ds+e^{2\alpha\bar\tau}\int_{t-\bar\tau}^{t}e^{2\alpha s}\eta^T(s)Q_{2i}\eta(s)\,ds,\\
V_3(x_t,t)={}&2\sum_{j=1}^{n}l_{1j}e^{2\alpha t}\int_{0}^{x_j}\big(f_j(s)-\delta_j^-s\big)\,ds+2\sum_{j=1}^{n}l_{2j}e^{2\alpha t}\int_{0}^{x_j}\big(\delta_j^+s-f_j(s)\big)\,ds,\\
V_4(x_t,t)={}&\int_{-\bar\tau}^{0}\int_{t+\theta}^{t}e^{2\alpha s}\dot{x}^T(s)Z_{1i}\dot{x}(s)\,ds\,d\theta+\int_{-\bar\tau}^{0}\int_{t+\theta}^{t}e^{2\alpha s}x^T(s)Z_{2i}x(s)\,ds\,d\theta,\\
V_5(x_t,t)={}&\int_{-\bar\tau}^{0}\int_{\theta}^{0}\int_{t+u}^{t}e^{2\alpha s}\dot{x}^T(s)Y_{1i}\dot{x}(s)\,ds\,du\,d\theta+\int_{-\bar\rho}^{0}\int_{t+\theta}^{t}e^{2\alpha s}f^T(x(s))Y_{2i}f(x(s))\,ds\,d\theta,\\
V_6(x_t,t)={}&\tau(t)e^{2\alpha t}x^T(t)W_ix(t),
\end{aligned}
\]


and \(\eta(t)=[x^T(t)\ f^T(x(t))]^T\). Calculating the time derivatives along (8),

\[
\dot V_1(x_t,t)\le 2\alpha e^{2\alpha t}x^T(t)P_ix(t)+2e^{2\alpha t}x^T(t)P_i\dot{x}(t)=e^{2\alpha t}\xi^T(t)\Psi_1\xi(t), \qquad (18)
\]
\[
\begin{aligned}
\dot V_2(x_t,t)\le{}&e^{2\alpha t}e^{2\alpha\bar\tau}\eta^T(t)(Q_{1i}+Q_{2i})\eta(t)-e^{2\alpha t}(1-h)\eta^T(t-\tau(t))Q_{1i}\eta(t-\tau(t))\\
&-e^{2\alpha t}\eta^T(t-\bar\tau)Q_{2i}\eta(t-\bar\tau)=e^{2\alpha t}\xi^T(t)\Psi_2\xi(t), \qquad (19)
\end{aligned}
\]
\[
\begin{aligned}
\dot V_3(x_t,t)={}&4\alpha e^{2\alpha t}\sum_{j=1}^{n}l_{1j}\int_0^{x_j}\big(f_j(s)-\delta_j^-s\big)\,ds+2e^{2\alpha t}\big[f(x(t))-\Delta_1x(t)\big]^TL_1\dot x(t)\\
&+4\alpha e^{2\alpha t}\sum_{j=1}^{n}l_{2j}\int_0^{x_j}\big(\delta_j^+s-f_j(s)\big)\,ds+2e^{2\alpha t}\big[\Delta_2x(t)-f(x(t))\big]^TL_2\dot x(t). \qquad (20)
\end{aligned}
\]

Using Lemma 2.6, we can get

\[
\begin{aligned}
\dot V_3(x_t,t)\le{}&4\alpha e^{2\alpha t}\big[f(x(t))-\Delta_1x(t)\big]^TL_1x(t)+2e^{2\alpha t}\big[f(x(t))-\Delta_1x(t)\big]^TL_1\dot x(t)\\
&+4\alpha e^{2\alpha t}\big[\Delta_2x(t)-f(x(t))\big]^TL_2x(t)+2e^{2\alpha t}\big[\Delta_2x(t)-f(x(t))\big]^TL_2\dot x(t)=e^{2\alpha t}\xi^T(t)\Psi_3\xi(t). \qquad (21)
\end{aligned}
\]

\[
\begin{aligned}
\dot V_4(x_t,t)={}&\bar\tau e^{2\alpha t}\dot x^T(t)Z_{1i}\dot x(t)-\int_{t-\bar\tau}^{t}e^{2\alpha s}\dot x^T(s)Z_{1i}\dot x(s)\,ds+\bar\tau e^{2\alpha t}x^T(t)Z_{2i}x(t)-\int_{t-\bar\tau}^{t}e^{2\alpha s}x^T(s)Z_{2i}x(s)\,ds\\
\le{}&e^{2\alpha t}\xi^T(t)\big[\bar\tau e_dZ_{1i}e_d^T+\bar\tau e_1Z_{2i}e_1^T\big]\xi(t)-e^{2\alpha(t-\bar\tau)}\int_{t-\bar\tau}^{t}\dot x^T(s)Z_{1i}\dot x(s)\,ds-e^{2\alpha(t-\bar\tau)}\int_{t-\bar\tau}^{t}x^T(s)Z_{2i}x(s)\,ds. \qquad (22)
\end{aligned}
\]

By Lemma 2.5, if (11) and (12) hold, the first integral term in (22) can be estimated as follows:

\[
-\int_{t-\bar\tau}^{t}\dot x^T(s)Z_{1i}\dot x(s)\,ds=-\int_{t-\tau(t)}^{t}\dot x^T(s)Z_{1i}\dot x(s)\,ds-\int_{t-\bar\tau}^{t-\tau(t)}\dot x^T(s)Z_{1i}\dot x(s)\,ds
\]
\[
\begin{aligned}
\le{}&\xi^T(t)\Big\{\tau(t)[e_1\,e_2\,e_7]\big(U_1+\tfrac13U_3\big)[e_1\,e_2\,e_7]^T+\mathrm{Sym}\big\{[e_1\,e_2\,e_7]\bar N_1(e_1-e_2)^T+[e_1\,e_2\,e_7]\bar N_2(2e_7-e_1-e_2)^T\big\}\\
&+(\bar\tau-\tau(t))[e_2\,e_3\,e_8]\big(V_1+\tfrac13V_3\big)[e_2\,e_3\,e_8]^T+\mathrm{Sym}\big\{[e_2\,e_3\,e_8]\bar N_3(e_2-e_3)^T+[e_2\,e_3\,e_8]\bar N_4(2e_8-e_2-e_3)^T\big\}\Big\}\xi(t). \qquad (23)
\end{aligned}
\]


Using Lemma 2.4, the second integral term in (22) yields

\[
\begin{aligned}
-\int_{t-\bar\tau}^{t}x^T(s)Z_{2i}x(s)\,ds={}&-\int_{t-\tau(t)}^{t}x^T(s)Z_{2i}x(s)\,ds-\int_{t-\bar\tau}^{t-\tau(t)}x^T(s)Z_{2i}x(s)\,ds\\
\le{}&-\frac{1}{\tau(t)}\Big(\int_{t-\tau(t)}^{t}x(s)\,ds\Big)^TZ_{2i}\Big(\int_{t-\tau(t)}^{t}x(s)\,ds\Big)-\frac{1}{\bar\tau-\tau(t)}\Big(\int_{t-\bar\tau}^{t-\tau(t)}x(s)\,ds\Big)^TZ_{2i}\Big(\int_{t-\bar\tau}^{t-\tau(t)}x(s)\,ds\Big)\\
={}&\xi^T(t)\big[-\tau(t)e_7Z_{2i}e_7^T-(\bar\tau-\tau(t))e_8Z_{2i}e_8^T\big]\xi(t).
\end{aligned}
\]

From (11) and (12) we have

\[
\dot V_4(x_t,t)\le e^{2\alpha t}\xi^T(t)\big\{\Psi_4+\tau(t)\Phi_1+(\bar\tau-\tau(t))\Phi_3\big\}\xi(t). \qquad (24)
\]

n o V˙ 4 (xt , t) ≤ e2αt ξ T (t) Ψ4 + τ (t)Φ1 + (¯ τ − τ (t))Φ3 ξ(t).

AN US

The time derivative of V5 (xt , t) is

τ 2 2αt T e x˙ (t)Y1i x(t) ˙ − V˙ 5 (xt , t) = 2

Z

t

t−¯ τ

Z

t

e2αs x˙ T (s)Y1i x(s)dsdθ ˙

θ

+ ρ¯e2αt f T (x(t))Y2i f (x(t)) −

Z

+ ρ¯e

M

τ¯2 2αt T e x˙ (t)Y1i x(t) ˙ − e2α(t−¯τ ) ≤ 2 2αt T

(24)

f (x(t))Y2i f (x(t)) − e

t

e2αs f T (x(s))Y2i f (x(s))ds,

t−ρ¯ Z t

t−¯ τ

Z

2α(t−ρ) ¯

t

x˙ T (s)Y1i x(s)dsdθ ˙

θ

Z

t

f T (x(s))Y2i f (x(s))ds.

(25)

t−ρ¯

Z

t

t−¯ τ

Z

θ

t

x˙ T (s)Y1i x(s)dsdθ ˙ ≤−

Z

t

t−τ (t)

PT



ED

Using Lemmas 2.4 and 2.5 first integral terms in (25) using convex combination approach [48] yields, Z

t

θ

AC

CE

− (¯ τ − τ (t))

≤−

2 τ 2 (t)

Z

x˙ T (s)Y1i x(s)dsdθ ˙ −

Z

t

Z

t−τ (t)

t−¯ τ

t−τ (t)

x˙ T (s)Y1i x(s)dsdθ ˙

θ

x˙ T (s)Y1i x(s)ds ˙

t−τ (t)

t

t−τ (t)

2 − (¯ τ − τ (t))2

Z

Z

Z

t

x˙ T (s)dsdθ

θ t

t−τ (t)

Z

θ

t−τ (t)

T

Y1i

Z

t

t−τ (t)

x˙ T (s)dsdθ

T

Z

t

z(s)dsdθ ˙

θ

Y1i

Z

t

t−τ (t)

Z



t−τ (t)

x(s)dsdθ ˙

θ

   1 + (¯ τ − τ (t))ξ T (t) τ (t)[e1 e2 e7 ] W1 + W3 [e1 e2 e7 ]T 3  n ¯5 (e1 − e2 )T + [e1 e2 e7 ]N ¯6 (2e7 − e1 − e2 )T ξ(t), + Sym [e1 e2 e7 ]N

 T  Y1i x(t) − v1 (t) − 2 x(t − τ (t)) − v2 (t) Y1i x(t − τ (t)) − v2 (t)    1 + (¯ τ − τ (t))ξ T (t) τ¯[e1 e2 e7 ] W1 + W3 [e1 e2 e7 ] 3 n o ¯5 (e1 − e2 )T + [e1 e2 e7 ]N ¯6 (2e7 − e1 − e2 )T ξ(t), + Sym [e1 e2 e7 ]N

≤ 2 x(t) − v1 (t)

T



(26)


By Lemma 2.4,

\[
-\int_{t-\bar\rho}^{t}f^T(x(s))Y_{2i}f(x(s))\,ds\le-\frac{1}{\bar\rho}\Big(\int_{t-\rho(t)}^{t}f(x(s))\,ds\Big)^TY_{2i}\Big(\int_{t-\rho(t)}^{t}f(x(s))\,ds\Big). \qquad (27)
\]

Thus, we have

\[
\dot V_5(x_t,t)\le e^{2\alpha t}\xi^T(t)\big[\Psi_5+(\bar\tau-\tau(t))\Phi_4\big]\xi(t). \qquad (28)
\]

\(\dot V_6(x_t,t)\) is calculated as

\[
\begin{aligned}
\dot V_6(x_t,t)&=\dot\tau(t)e^{2\alpha t}x^T(t)W_ix(t)+\tau(t)\big[2\alpha e^{2\alpha t}x^T(t)W_ix(t)+2e^{2\alpha t}x^T(t)W_i\dot x(t)\big]\\
&\le he^{2\alpha t}x^T(t)W_ix(t)+\tau(t)\big[2\alpha e^{2\alpha t}x^T(t)W_ix(t)+2e^{2\alpha t}x^T(t)W_i\dot x(t)\big]\\
&=e^{2\alpha t}\xi^T(t)\big[\Psi_6+\tau(t)\Phi_2\big]\xi(t).
\end{aligned}
\]

By Assumption 1,

\[
\big[f_j(x_j(t))-\delta_j^-x_j(t)\big]\big[f_j(x_j(t))-\delta_j^+x_j(t)\big]\le 0,\qquad j=1,2,\ldots,n,
\]
\[
\big[f_j(x_j(t-\tau(t)))-\delta_j^-x_j(t-\tau(t))\big]\big[f_j(x_j(t-\tau(t)))-\delta_j^+x_j(t-\tau(t))\big]\le 0,\qquad j=1,2,\ldots,n. \qquad (29)
\]

These can be written compactly as

\[
\begin{bmatrix}x(t)\\ f(x(t))\end{bmatrix}^T\begin{bmatrix}X_1 & -X_2\\ * & I\end{bmatrix}\begin{bmatrix}x(t)\\ f(x(t))\end{bmatrix}\le 0,\qquad
\begin{bmatrix}x(t-\tau(t))\\ f(x(t-\tau(t)))\end{bmatrix}^T\begin{bmatrix}X_1 & -X_2\\ * & I\end{bmatrix}\begin{bmatrix}x(t-\tau(t))\\ f(x(t-\tau(t)))\end{bmatrix}\le 0.
\]

Then for any positive diagonal matrices \(\Gamma_{1i}=\mathrm{diag}\{\gamma_{1i},\gamma_{2i},\ldots,\gamma_{ni}\}\), \(\Gamma_{2i}=\mathrm{diag}\{\hat\gamma_{1i},\hat\gamma_{2i},\ldots,\hat\gamma_{ni}\}\) and \(\Gamma_{3i}\), the following inequalities hold:

\[
\begin{bmatrix}x(t)\\ f(x(t))\end{bmatrix}^T\begin{bmatrix}X_1\Gamma_{1i} & -X_2\Gamma_{1i}\\ * & \Gamma_{1i}\end{bmatrix}\begin{bmatrix}x(t)\\ f(x(t))\end{bmatrix}\le 0, \qquad (30)
\]
\[
\begin{bmatrix}x(t-\tau(t))\\ f(x(t-\tau(t)))\end{bmatrix}^T\begin{bmatrix}X_1\Gamma_{2i} & -X_2\Gamma_{2i}\\ * & \Gamma_{2i}\end{bmatrix}\begin{bmatrix}x(t-\tau(t))\\ f(x(t-\tau(t)))\end{bmatrix}\le 0, \qquad (31)
\]
\[
\begin{bmatrix}x(t-\bar\tau)\\ f(x(t-\bar\tau))\end{bmatrix}^T\begin{bmatrix}X_1\Gamma_{3i} & -X_2\Gamma_{3i}\\ * & \Gamma_{3i}\end{bmatrix}\begin{bmatrix}x(t-\bar\tau)\\ f(x(t-\bar\tau))\end{bmatrix}\le 0. \qquad (32)
\]

Thus, adding the terms on the right-hand sides of the above estimates yields

\[
\dot V(x_t,t)\le e^{2\alpha t}\xi^T(t)\big[M_0+\tau(t)M_1+(\bar\tau-\tau(t))M_2\big]\xi(t), \qquad (33)
\]

where \(M_0\), \(M_1\) and \(M_2\) are defined in (9) and (10). If (9)-(13) hold, then \(\dot V(x_t,t)<0\) for any \(\xi(t)\neq 0\). From (9) and (10), for any constant β we have

\[
\dot V(x_t,t)-\beta V(x_t,t)<0. \qquad (34)
\]

Notice that

\[
\frac{d}{dt}\big(e^{-\beta t}V(x_t,t)\big)<0. \qquad (35)
\]


Integrating (35) from \(t_k\) to t,

\[
V(x_t,t)<e^{\beta(t-t_k)}V(x_{t_k},t_k), \qquad (36)
\]

for any \(t\in[t_k,t_{k+1})\), where \(t_k\) is the kth switching instant. From (15) we have

\[
V(x_{t_k},t_k)\le\mu V_{\sigma(t_{k-1})}(x_{t_k},t_k). \qquad (37)
\]

Thus,

\[
\begin{aligned}
V(x_t,t)&\le e^{\beta(t-t_k)}V_{\sigma(t_k)}(x_{t_k},t_k)\le\mu e^{\beta(t-t_k)}V_{\sigma(t_{k-1})}(x_{t_k},t_k)\le\mu e^{\beta(t-t_{k-1})}V_{\sigma(t_{k-1})}(x_{t_{k-1}},t_{k-1})\\
&\le\mu^2e^{\beta(t-t_{k-2})}V_{\sigma(t_{k-2})}(x_{t_{k-2}},t_{k-2})\le\cdots\le\mu^{N_\sigma(0,t)}e^{\beta t}V_{\sigma(0)}(x_0,0)\\
&\le\mu^{N_\sigma(0,T)}e^{\beta T}V_{\sigma(0)}(x_0,0)\le\mu^{T/\tau_a}e^{\beta T}V_{\sigma(0)}(x_0,0). \qquad (38)\text{--}(39)
\end{aligned}
\]


Define \(\bar P_i=R^{-1/2}P_iR^{-1/2}\), \(\bar Q_{1i}=R^{-1/2}Q_{1i}R^{-1/2}\), \(\bar Q_{2i}=R^{-1/2}Q_{2i}R^{-1/2}\), \(\bar Z_{1i}=R^{-1/2}Z_{1i}R^{-1/2}\), \(\bar Z_{2i}=R^{-1/2}Z_{2i}R^{-1/2}\), \(\bar L_1=R^{-1/2}L_1R^{-1/2}\), \(\bar L_2=R^{-1/2}L_2R^{-1/2}\), \(\bar Y_{1i}=R^{-1/2}Y_{1i}R^{-1/2}\), \(\bar Y_{2i}=R^{-1/2}Y_{2i}R^{-1/2}\), \(\bar W_i=R^{-1/2}W_iR^{-1/2}\). Note that

\[
\begin{aligned}
V(x_0,0)={}&\max_{i\in\mathcal N}\lambda_{\max}(\bar P_i)x^T(0)Rx(0)+\max_{i\in\mathcal N}\lambda_{\max}(\bar Q_{1i})e^{2\alpha\bar\tau}\int_{-\tau(0)}^{0}e^{2\alpha s}x^T(s)Rx(s)\,ds\\
&+\max_{i\in\mathcal N}\lambda_{\max}(\bar Q_{2i})e^{2\alpha\bar\tau}\int_{-\bar\tau}^{0}e^{2\alpha s}x^T(s)Rx(s)\,ds\\
&+2\lambda_{\max}(\bar L_1)\big[\max_j(|\delta_j^-,\delta_j^+|^2)-\Delta_1\big]x^T(0)Rx(0)+2\lambda_{\max}(\bar L_2)\big[\Delta_2-\max_j(|\delta_j^-,\delta_j^+|^2)\big]x^T(0)Rx(0)\\
&+\max_{i\in\mathcal N}\lambda_{\max}(\bar Z_{1i})e^{2\alpha\bar\tau}\int_{-\bar\tau}^{0}\int_{\theta}^{0}\dot x^T(s)R\dot x(s)\,ds\,d\theta+\max_{i\in\mathcal N}\lambda_{\max}(\bar Z_{2i})e^{2\alpha\bar\tau}\int_{-\bar\tau}^{0}\int_{\theta}^{0}x^T(s)Rx(s)\,ds\,d\theta\\
&+\max_{i\in\mathcal N}\lambda_{\max}(\bar Y_{1i})e^{2\alpha\bar\tau}\int_{-\bar\tau}^{0}\int_{\theta}^{0}\int_{u}^{0}\dot x^T(s)R\dot x(s)\,ds\,du\,d\theta\\
&+\max_{i\in\mathcal N}\lambda_{\max}(\bar Y_{2i})\max_j(|\delta_j^-,\delta_j^+|^2)e^{2\alpha\bar\rho}\int_{-\rho(0)}^{0}\int_{\theta}^{0}x^T(s)Rx(s)\,ds\,d\theta\\
&+\tau(0)\max_{i\in\mathcal N}\lambda_{\max}(\bar W_i)x^T(0)Rx(0),
\end{aligned}
\]


so that

\[
\begin{aligned}
V(x_0,0)\le{}&\Big[\lambda_2+\bar\tau e^{2\alpha\bar\tau}\lambda_3+\bar\tau e^{2\alpha\bar\tau}\lambda_4+(\Delta_2-\Delta_1)\lambda_5+(\Delta_2-\Delta_1)\lambda_6+\bar\tau^2e^{2\alpha\bar\tau}\lambda_7\\
&+\bar\tau^2e^{2\alpha\bar\tau}\lambda_8+\frac12\bar\tau^3e^{2\alpha\bar\tau}\lambda_9+\bar\rho^2e^{2\alpha\bar\rho}\Delta^2\lambda_{10}+\bar\tau\lambda_{11}\Big]\sup_{-H\le s\le0}\{x^T(s)Rx(s),\ \dot x^T(s)R\dot x(s)\}, \qquad (40)
\end{aligned}
\]

where \(\Delta=\max_j(|\delta_j^-|,|\delta_j^+|)\). Hence

\[
V(x_0,0)\le\Lambda c_1, \qquad (41)
\]

where

\[
\Lambda=\lambda_2+\bar\tau e^{2\alpha\bar\tau}\lambda_3+\bar\tau e^{2\alpha\bar\tau}\lambda_4+(\Delta_2-\Delta_1)\lambda_5+(\Delta_2-\Delta_1)\lambda_6+\bar\tau^2e^{2\alpha\bar\tau}\lambda_7+\bar\tau^2e^{2\alpha\bar\tau}\lambda_8+\frac12\bar\tau^3e^{2\alpha\bar\tau}\lambda_9+\bar\rho^2e^{2\alpha\bar\rho}\Delta^2\lambda_{10}+\bar\tau\lambda_{11}.
\]

Then

\[
V(x_t,t)\le\mu^{T/\tau_a}e^{\beta T}\Lambda c_1=e^{(\beta+\ln\mu/\tau_a)T}\Lambda c_1. \qquad (42)
\]

On the other hand,

\[
V(x_t,t)\ge\lambda_{\min}(\bar P_i)x^T(t)Rx(t)=\lambda_1x^T(t)Rx(t). \qquad (43)
\]

From (42) and (43), we obtain

\[
x^T(t)Rx(t)\le\frac{\Lambda c_1}{\lambda_1}e^{(\beta+\ln\mu/\tau_a)T}. \qquad (44)
\]

When µ > 1, from (14), \(\ln(\lambda_1c_2)-\ln(\Lambda c_1)-\beta T>0\), and

\[
\frac{T}{\tau_a}<\frac{\ln(\lambda_1c_2)-\ln(\Lambda c_1)-\beta T}{\ln\mu}=\frac{\ln\big(\lambda_1c_2e^{-\beta T}/(\Lambda c_1)\big)}{\ln\mu}. \qquad (45)
\]


Substituting (45) into (44) gives

\[
x^T(t)Rx(t)<c_2. \qquad (46)
\]

By Definition 2.1, the system is finite-time stable. This completes the proof.


Now, we study the stability of memristor switched neural networks by using the mode-dependent average dwell time method.

Theorem 3.2. For given scalars \(\bar{\tau}\), \(\bar{\rho}\), h, α, \(\beta_i\), \(\mu_i>1\), \(c_1\), \(c_2\) and T, and the diagonal matrices \(\Delta_1=\mathrm{diag}\{\delta_1^-,\delta_2^-,\ldots,\delta_n^-\}\) and \(\Delta_2=\mathrm{diag}\{\delta_1^+,\delta_2^+,\ldots,\delta_n^+\}\), the memristive switched neural network (8) is finite-time stable for any switching signal σ(t) with mode-dependent average dwell time \(\tau_{aj}\) and running time ratios \(\kappa_j^+\), \(\kappa_j^-\) satisfying, for \(i,j\in\mathcal{N}\), \(i\neq j\),

\[
\sum_{j\in\mathcal{N}}\Big(\beta_j-\frac{\ln\mu_j}{\tau_{aj}}\Big)\bar{\kappa}>0, \qquad (47)
\]

where

\[
\bar{\kappa}=\begin{cases}\kappa_j^+, & \text{if }\ \beta_j-\dfrac{\ln\mu_j}{\tau_{aj}}<0,\\[4pt] \kappa_j^-, & \text{else},\end{cases} \qquad (48)
\]


if there exist symmetric positive definite matrices Pi, Q1i, Q2i, Z1i, Z2i, Y1i, Y2i and Wi, any matrices U1, U2, U3, V1, V2, V3, W1, W2, W3, N̄1, N̄2, N̄3, N̄4, N̄5, N̄6 of appropriate dimensions, and diagonal matrices Γ1i, Γ2i and Γ3i, such that the following LMIs hold:

M0 + τ̄ M1 < 0,    (49)

M0 + τ̄ M2 < 0,    (50)

Υ1 = [U1 U2 N̄1; ∗ U3 N̄2; ∗ ∗ Z1i] ≥ 0,    (51)

Υ2 = [V1 V2 N̄3; ∗ V3 N̄4; ∗ ∗ Z1i] ≥ 0,    (52)

Υ3 = [W1 W2 N̄5; ∗ W3 N̄6; ∗ ∗ Y1i] ≥ 0,    (53)

λ1 c2 e^{−βi T} > Λc1,    (54)

then the network is finite-time stable with respect to (c1, c2, T, R, σ(t)), where µi > 1 satisfies

Pi < µi Pj,  Q1i < µi Q1j,  Q2i < µi Q2j,  Z1i < µi Z1j,  Z2i < µi Z2j,  Y1i < µi Y1j,  Y2i < µi Y2j,  Wi < µi Wj,  ∀ i, j ∈ N.    (55)

Then, under the following mode-dependent average dwell time scheme:

τai > τai∗ = T ln µi / (ln(c2 e^{−βi T}) − ln(Λc1)),    (56)

where M0 = Ψ1 + Ψ2 + Ψ3 + Ψ4 + Ψ5 + Ψ6 + Ψ7, M1 = Φ1 + Φ2, M2 = Φ3 + Φ4, and

Ψ1 = 2 e1 Pi edT + 2α e1 Pi e1T,
Ψ2 = e^{2ατ̄} [e1 e4](Q1i + Q2i)[e1 e4]T − (1 − µ)[e2 e5]Q1i[e2 e5]T − [e3 e6]Q2i[e3 e6]T,
Ψ3 = 4α(e4 − X1 e1)L1 e1T + 2(e4 − X1 e1)L1 edT + 4α(X2 e1 − e4)L2 e1T + 2(X2 e1 − e4)L2 edT,
Ψ4 = τ̄ ed Z1 edT + τ̄ e1 Z2 e1T + Sym{[e1 e2 e7]N̄1(e1 − e2)T + [e1 e2 e7]N̄2(2e7 − e1 − e2)T} + Sym{[e2 e3 e8]N̄3(e2 − e3)T + [e2 e3 e8]N̄4(2e8 − e2 − e3)T},
Ψ5 = (τ̄²/2) ed Y1i edT − 2(e1 − e7)Y1i(e1 − e7)T − 2(e2 − e8)Y1i(e2 − e8)T − (1/ρ̄) e9 Y2i e9T,
Ψ6 = µ e1 Wi e1T,
Ψ7 = −e1 X1 Γ1i e1T + e1 X2 Γ1i e4T − e4 Γ1i e4T − e2 X1 Γ2i e2T + e2 X2 Γ2i e5T − e5 Γ2i e5T − e3 X1 Γ3i e3T + e3 X2 Γ3i e6T − e6 Γ3i e6T,
Φ1 = −(1/3)[e1 e2 e7](U1 + U3)[e1 e2 e7]T − e7 Z2i e7T,
Φ2 = 2α e1 Wi e1T + 2 e1 Wi edT,
Φ3 = (1/3)[e2 e3 e8](V1 + V3)[e2 e3 e8]T − e8 Z2i e8T,
Φ4 = (1/3)[e1 e2 e7](W1 + W3)[e1 e2 e7]T + Sym{[e1 e2 e7]N̄5(e1 − e2)T + [e1 e2 e7]N̄6(2e7 − e1 − e2)T},
ed = [−Ai 0 0 Bi Ci 0 0 0 Di]T.

Proof. Using the same LKF and following similar lines to the proof of Theorem 3.1, we obtain

d/dt (e^{−βi t} V(xt, i)) < 0.    (57)

For any T > 0, let t0 = 0 and denote t0, t1, t2, ..., ti, ..., t_{Nσ(0,T)} as the switching times on the interval [0, T], where

Nσ(0, T) = ∑_{i=1}^{m} Nσi(0, T).

Integrating (57) over t ∈ [ti, ti+1], we get

V(xt, σ(t)) ≤ e^{βσ(ti)(t − ti)} V(xti, σ(ti)).    (58)

From (55) we obtain

V(xti, σ(ti)) ≤ µ V(xti−, σ(ti−)), ∀(σ(ti) = i, σ(ti−) = j) ∈ N × N, i ≠ j.    (59)

Combining (58) and (59), we can get

V(xt, σ(t)) ≤ exp{−β_{σ(t_{Nσ(0,t)})}(t − t_{Nσ(0,t)})} V(x_{t_{Nσ(0,t)}}, σ(t_{Nσ(0,t)}))
≤ µ_{σ(t_{Nσ(0,t)})} exp{−β_{σ(t_{Nσ(0,t)})}(t − t_{Nσ(0,t)})} V(x_{t_{Nσ(0,t)}−}, σ(t_{Nσ(0,t)−1}))
≤ µ_{σ(t_{Nσ(0,T)})} exp{−β_{σ(t_{Nσ(0,T)})}(t − t_{Nσ(0,T)}) − β_{σ(t_{Nσ(0,T)−1})}(t_{Nσ(0,T)} − t_{Nσ(0,T)−1})} V(x_{t_{Nσ(0,T)−1}}, σ(t_{Nσ(0,T)−1}))
≤ [∏_{l=0}^{Nσ(0,T)−1} µ_{σ(t_{l+1})}] exp{∑_{l=0}^{Nσ(0,T)−1} (β_{σ(t_{l+1})} − β_{σ(t_l)}) t_{l+1} − β_{σ(t_{Nσ(0,T)})} T + β_{σ(t_0)} t_0} V(x_{t_0}, σ(t_0))
≤ exp{∑_{i=1}^{N} [(Ti(0,T)/τai) ln µi − βi Ti(0,T)]} V(x_{t_0}, σ(t_0))
≤ exp{∑_{i=1}^{N} (ln µi/τai − βi) T} V(x_{t_0}, σ(t_0)).    (60)

Define P̄i = R^{−1/2} Pi R^{−1/2}, Q̄1i = R^{−1/2} Q1i R^{−1/2}, Q̄2i = R^{−1/2} Q2i R^{−1/2}, Z̄1i = R^{−1/2} Z1i R^{−1/2}, Z̄2i = R^{−1/2} Z2i R^{−1/2}, Ȳ1i = R^{−1/2} Y1i R^{−1/2}, Ȳ2i = R^{−1/2} Y2i R^{−1/2}, L̄1i = R^{−1/2} L1i R^{−1/2}, L̄2i = R^{−1/2} L2i R^{−1/2}, W̄i = R^{−1/2} Wi R^{−1/2}. Note that

Vσ(0)(x0, 0) = max_{i∈N} λmax(P̄i) x^T(0)Rx(0) + max_{i∈N} λmax(Q̄1i) e^{2ατ̄} ∫_{−τ(0)}^{0} e^{2αs} x^T(s)Rx(s) ds + max_{i∈N} λmax(Q̄2i) e^{2ατ̄} ∫_{−τ̄}^{0} e^{2αs} x^T(s)Rx(s) ds + 2 max_{i∈N} λmax(L̄1i)[max(|δi−, δi+|² − ∆1)] x^T(0)Rx(0) + 2 max_{i∈N} λmax(L̄2i)[max(∆2 − |δi−, δi+|²)] x^T(0)Rx(0) + max_{i∈N} λmax(Z̄1i) e^{2ατ̄} ∫_{−τ̄}^{0} ∫_{θ}^{0} ẋ^T(s)Rẋ(s) ds dθ + max_{i∈N} λmax(Z̄2i) e^{2ατ̄} ∫_{−τ̄}^{0} ∫_{θ}^{0} x^T(s)Rx(s) ds dθ + max_{i∈N} λmax(Ȳ1i) e^{2ατ̄} ∫_{−τ̄}^{0} ∫_{u}^{0} ∫_{θ}^{0} ẋ^T(s)Rẋ(s) ds dθ du + max_{i∈N} λmax(Ȳ2i) max(|δi−, δi+|²) e^{2αρ̄} ∫_{−ρ(0)}^{0} ∫_{θ}^{0} x^T(s)Rx(s) ds dθ + τ(0) max_{i∈N} λmax(W̄i) x^T(0)Rx(0)

≤ [ max_{i∈N} λmax(P̄i) + τ̄ e^{2ατ̄} max_{i∈N} λmax(Q̄1i) + τ̄ e^{2ατ̄} max_{i∈N} λmax(Q̄2i) + max_{i∈N} λmax(L̄1i)[max(|δi−, δi+|² − ∆1)] + max_{i∈N} λmax(L̄2i)[max(∆2 − |δi−, δi+|²)] + τ̄² e^{2ατ̄} ( max_{i∈N} λmax(Z̄1i) + max_{i∈N} λmax(Z̄2i) ) + (τ̄³/2) e^{2ατ̄} max_{i∈N} λmax(Ȳ1i) + ρ̄² e^{2αρ̄} max_{i∈N} λmax(Ȳ2i)[max(|δi−, δi+|)]² + τ̄ max_{i∈N} λmax(W̄i) ] × sup_{−H≤s≤0} {x^T(s)Rx(s), ẋ^T(s)Rẋ(s)}

≤ [λ2 + τ̄ e^{2ατ̄} λ3 + τ̄ e^{2ατ̄} λ4 + (∆² − ∆1)λ5 + (∆2 − ∆²)λ6 + τ̄² e^{2ατ̄} λ7 + τ̄² e^{2ατ̄} λ8 + (τ̄³/2) e^{2ατ̄} λ9 + ρ̄² e^{2αρ̄} ∆² λ10 + τ̄ λ11] × sup_{−H≤s≤0} {x^T(s)Rx(s), ẋ^T(s)Rẋ(s)},

where ∆ = max(|δi−, δi+|). Hence

Vσ(0)(x0, 0) ≤ [λ2 + τ̄ e^{2ατ̄} λ3 + τ̄ e^{2ατ̄} λ4 + (∆² − ∆1)λ5 + (∆2 − ∆²)λ6 + τ̄² e^{2ατ̄} λ7 + τ̄² e^{2ατ̄} λ8 + (τ̄³/2) e^{2ατ̄} λ9 + ρ̄² e^{2αρ̄} ∆² λ10 + τ̄ λ11] c1 = Λc1,    (61)

where

Λ = λ2 + τ̄ e^{2ατ̄} λ3 + τ̄ e^{2ατ̄} λ4 + (∆² − ∆1)λ5 + (∆2 − ∆²)λ6 + τ̄² e^{2ατ̄} λ7 + τ̄² e^{2ατ̄} λ8 + (τ̄³/2) e^{2ατ̄} λ9 + ρ̄² e^{2αρ̄} λ10 + τ̄ λ11.    (62)

Thus,

Vσ(t)(xt, t) ≤ µ^{T/τa} e^{βT} Λc1 = e^{(β + ln µ/τa)T} Λc1.    (63)

On the other hand,

Vσ(t)(xt, t) ≥ λmin(P̄i) x^T(t)Rx(t) = λ1 x^T(t)Rx(t).    (64)

From (63) and (64), we obtain

x^T(t)Rx(t) ≤ (Λc1/λ1) e^{(β + ln µ/τa)T}.    (65)

When µ > 1, from (55) we have

ln(λ1 c2) − ln[Λc1] − βT > 0,

and

T/τa < (ln(λ1 c2) − ln[Λc1] − βT)/ln µ = ln(λ1 c2 e^{−βT}/(Λc1))/ln µ.    (66)

Substituting (66) into (65), we get

x^T(t)Rx(t) < c2.    (67)

Thus, by Definition 2.1, the considered system is finite-time stable. This completes the proof.
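The mode-dependent scheme (56) is easy to evaluate numerically, one mode at a time. The sketch below mirrors τai > T ln µi / [ln(c2 e^{−βi T}) − ln(Λc1)] with illustrative scalars (not the values of Example 4.2); note that a negative βi, as allowed in Table 3, enlarges the margin and shrinks the minimal dwell time:

```python
import math

def min_mode_dwell_times(Lambda, c1, c2, betas, mus, T):
    """Per-mode minimal average dwell time tau_ai* following scheme (56):
    tau_ai > T*ln(mu_i) / (ln(c2*exp(-beta_i*T)) - ln(Lambda*c1))."""
    out = []
    for beta_i, mu_i in zip(betas, mus):
        denom = math.log(c2 * math.exp(-beta_i * T)) - math.log(Lambda * c1)
        if denom <= 0:
            raise ValueError("condition (54)-type margin fails for this mode")
        out.append(T * math.log(mu_i) / denom)
    return out

# Illustrative two-mode example (placeholder scalars, not the paper's)
taus = min_mode_dwell_times(Lambda=1.1, c1=1.0, c2=30.0,
                            betas=[0.2, -0.1], mus=[1.05, 1.05], T=2.0)
```

Here the mode with βi = −0.1 tolerates the faster switching, which is the qualitative behaviour reported for the mode-dependent bounds.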

4 Numerical Examples

In this section, numerical examples are presented to demonstrate the effectiveness of the results derived in this paper.

Example 4.1. Consider the following memristor-based switched neural network with time-varying delays:

ẋ(t) = −Aσ(t)(t)x(t) + Bσ(t)(t)f(x(t)) + Cσ(t)(t)f(x(t − τ(t))) + Dσ(t)(t) ∫_{t−ρ(t)}^{t} f(x(s)) ds,    (68)


b111 (t) =

b112 (t) =

b121 (t) =

dfj (xj (t)) dt

signij

dfj (xj (t)) dt

signij

dfj (xj (t−τj (t))) dt



dxi (t) dt

≥ 0,

dxi (t) dt



< 0, dxi (t) dt

≥ 0,

 0.4,   1.5,

signij

dfj (xj (t)) dt



dxi (t) dt

df (x (t−τ (t))) signij j j dt j

< 0,

dxi (t) dt



≥ 0,

 1.7, sign dfj (xj (t−τj (t))) − dxi (t) < 0, ij dt dt  df (x (t−τ (t))) j j j i (t)  −0.4, signij − dxdt ≥ 0, dt  −0.2,   −1.1,

signij

dfj (xj (t−τj (t))) dt

signij

 −1.2,   −0.01,

dfj (xj (t−τj (t))) dt

df (x (t−τ (t))) signij j j dt j



dxi (t) dt

< 0,



dxi (t) dt

≥ 0,



df (x (t−τ (t))) signij j j dt j

dxi (t) dt



< 0,

dxi (t) dt

≥ 0,

 −0.02, sign dfj (xj (t−τj (t))) − dxi (t) < 0, ij dt dt  df (x (t−τ (t))) dx (t) j j j i  0.3, signij − dt ≥ 0, dt


c111 (t) =



 5, sign dfj (xj (t−τj (t))) − dxi (t) < 0, ij dt dt   0.6, signij dfj (xj (t)) − dxi (t) ≥ 0, dt dt


b122 (t) =

signij


a2 (t) =

 4,   4,

 b111 (t) b112 (t) B1 (t) = , b121 (t) b122 (t)  1  d11 (t) d112 (t) D1 (t) = , d121 (t) d122 (t)


a1 (t) =

  5,




The parameters of first subsystem are  1  a1 (t) 0 A1 (t) = , 0 a12 (t)  1  c11 (t) c112 (t) C1 (t) = , c121 (t) c122 (t)

c112 (t) =


c121 (t) =

c122 (t) =

d111 (t) =

d112 (t) =

d121 (t) =

 0.6, sign dfj (xj (t−τj (t))) − dxi (t) < 0, ij dt dt   −0.4, signij dfj (xj (t−τj (t))) − dxi (t) ≥ 0, dt dt  −0.3, sign dfj (xj (t−τj (t))) − dxi (t) < 0, ij dt dt   0.1, signij dfj (xj (t−τj (t))) − dxi (t) ≥ 0.2, dt dt  0.2, sign dfj (xj (t−τj (t))) − dxi (t) < 0, ij dt dt  df (x (t−τ (t))) j j j i (t)  −0.8, signij − dxdt ≥ 0, dt  −0.7, sign dfj (xj (t−τj (t))) − dxi (t) < 0, ij dt dt   0.6, signij dfj (xj (t−τj (t))) − dxi (t) ≥ 0, dt dt  0.7,   0.8,  0.9,

signij

dfj (xj (t−τj (t))) dt



dxi (t) dt

< 0,

signij

dfj (xj (t−τj (t))) dt



dxi (t) dt

≥ 0,

df (x (t−τ (t))) signij j j dt j



dxi (t) dt

< 0,


 −1.0,

signij

dfj (xj (t−τj (t))) dt

signij

dfj (xj (t−τj (t))) dt

The parameters of second subsystem are  2  a1 (t) 0 A2 (t) = , 0 a22 (t)  2  c11 (t) c212 (t) C2 (t) = , 2 2 c21 (t) c22 (t)

b211 (t) =

signij

dfj (xj (t)) dt

signij

dfj (xj (t)) dt

b221 (t) =

=


c211 (t) =


c212 (t) =

c221 (t) =

c222 (t) =

d211 (t) =



< 0,

dxi (t) dt

≥ 0,



dxi (t) dt



< 0,

dxi (t) dt

≥ 0,

 6, sign dfj (xj (t−τj (t))) − dxi (t) < 0, ij dt dt  df (x (t)) dx (t) j j i  0.7, signij − dt ≥ 0, dt  0.5,   1.7,

signij

dfj (xj (t)) dt

signij



dxi (t) dt

dfj (xj (t−τj (t))) dt



< 0,

dxi (t) dt

≥ 0,

 1.9, sign dfj (xj (t−τj (t))) − dxi (t) < 0, ij dt dt   −0.6, signij dfj (xj (t−τj (t))) − dxi (t) ≥ 0, dt dt  −0.4,   −1.3,


b222 (t)



df (x (t−τ (t))) signij j j dt j


b212 (t) =

dxi (t) dt

≥ 0,


a22 (t) =

 5,   5,



dxi (t) dt

 b211 (t) b212 (t) , b221 (t) b222 (t)  2  d11 (t) d212 (t) D2 (t) = , 2 2 d21 (t) d22 (t)

B2 (t) =


a21 (t) =

  6,




d122 (t) =

  −0.9,

 −1.5,   −0.1,

signij

dfj (xj (t−τj (t))) dt



dxi (t) dt

< 0,

signij

dfj (xj (t−τj (t))) dt

dxi (t) dt

≥ 0,

signij

dfj (xj (t−τj (t))) dt



signij

dfj (xj (t−τj (t))) dt



dxi (t) dt



dxi (t) dt

< 0, ≥ 0,

 −0.02, sign dfj (xj (t−τj (t))) − dxi (t) < 0, ij dt dt   0.5, signij dfj (xj (t−τj (t))) − dxi (t) ≥ 0, dt dt  0.7, sign dfj (xj (t−τj (t))) − dxi (t) < 0, ij dt dt  df (x (t−τ (t))) j i (t)  −0.6, signij j j − dxdt ≥ 0, dt  −0.4, sign dfj (xj (t−τj (t))) − dxi (t) < 0, ij dt dt  df (x (t−τ (t))) dx (t) j j j i  0.3, signij − dt ≥ 0, dt  0.5, sign dfj (xj (t−τj (t))) − dxi (t) < 0, ij dt dt   −0.9, signij dfj (xj (t−τj (t))) − dxi (t) ≥ 0, dt dt  −0.8,

signij

dfj (xj (t−τj (t))) dt



dxi (t) dt

< 0,


d221 (t) =

d222 (t) =

X1 =



0 0

µ = 1.5,

0 0



,

dfj (xj (t−τj (t))) dt

dxi (t) dt



≥ 0,

 0.6, sign dfj (xj (t−τj (t))) − dxi (t) < 0, ij dt dt   0.7 signij dfj (xj (t−τj (t))) − dxi (t) ≥ 0, dt dt

 0.8, sign dfj (xj (t−τj (t))) − dxi (t) < 0, ij dt dt   −1.2, signij dfj (xj (t−τj (t))) − dxi (t) ≥ 0, dt dt  −1.4,

X2 =

T = 6,

signij



0.4 0

c1 = 1.7,

signij

0 0.8



dfj (xj (t−τj (t))) dt

, τ¯ = 2.7,

C1 = C2 = 1,




d212 (t) =

  0.5,

dxi (t) dt

ρ¯ = 1.32,

< 0,

α = 0.005,

β = 0.5,

R1 = R2 = 500.


and the feasible solutions to Theorem 3.1 are

P1 = [21.2997 3.6080; 3.6080 −18.6773],  Q11 = [13.8433 1.0937; 1.0937 19.1034],  Q21 = [18.4975 −0.9870; −0.9870 19.0447],
Z11 = [7.8133 −1.9561; −1.9561 6.2355],  Z21 = [4.2250 −1.5970; −1.5970 7.5304],  Y11 = [7.9668 −0.7951; −0.7951 6.6715],
Y21 = [54.3823 28.6859; 28.6859 23.7376],  W1 = [347.8765 −25.1818; −25.1818 715.7500],
P2 = [−23.0712 14.8090; 14.8090 −28.1483],  Q12 = [18.0006 2.5504; 2.5504 24.9713],  Q22 = [19.1460 −0.8689; −0.8689 22.5692],
Z12 = [−6.4039 0.6161; 0.6161 1.7146],  Z22 = 10^5 × [2.6536 −0.7364; −0.7364 4.3162],  Y12 = [8.9618 −0.0196; −0.0196 8.0413],
Y22 = [57.2794 −8.4789; −8.4789 41.1902],  W2 = 10^6 × [−1.3281 0.3685; 0.3685 −2.1570],  c2 = 20.7.


By Theorem 3.1, system (68) is finite-time stable for any switching signal with average dwell time satisfying τa > 0.8109. The state trajectories of the switched neural network are shown in Fig. 1, Fig. 2 and Fig. 3.

Figure 1: Switching signal


Figure 2: State responses of the first subsystem of the considered neural network for Example 4.1 (x1, x2 versus t/sec)

Figure 3: State responses of the second subsystem of the considered neural network for Example 4.1 (x1, x2 versus t/sec)
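The reported finite-time behaviour can also be checked by direct simulation. The sketch below is a forward-Euler integration of a two-mode system of the form (68) with Dσ(t) = 0 and tanh activations; the matrices and the constant delay are illustrative placeholders (not the paper's memristive parameters), and the periodic switching signal has dwell time 1.0, above the bound τa > 0.8109:

```python
import numpy as np

def simulate(T=6.0, dt=1e-3, dwell=1.0):
    """Forward-Euler sketch of (68) with D_sigma = 0 and f = tanh.
    All numeric entries are placeholders standing in for the memristive
    parameters; the periodic switching respects tau_a = 1.0 > 0.8109."""
    A = [np.diag([5.0, 4.0]), np.diag([6.0, 5.0])]             # A_1, A_2
    B = [np.array([[0.4, 1.7], [-0.2, -1.2]]),
         np.array([[0.5, 1.9], [-0.4, -1.5]])]                 # B_sigma (one memristance branch)
    C = [np.array([[0.6, -0.3], [0.2, -0.7]]),
         np.array([[0.7, -0.4], [0.5, -0.8]])]                 # C_sigma
    n = int(T / dt)
    hist = int(0.5 / dt)                                        # constant stand-in for tau(t)
    x = np.zeros((n + 1, 2))
    x[0] = [1.0, -0.8]
    for k in range(n):
        sigma = int((k * dt) // dwell) % 2                      # periodic switching signal
        xd = x[max(k - hist, 0)]                                # delayed state (constant history)
        dx = -A[sigma] @ x[k] + B[sigma] @ np.tanh(x[k]) + C[sigma] @ np.tanh(xd)
        x[k + 1] = x[k] + dt * dx
    return x

traj = simulate()
energy = np.sum(traj**2, axis=1)   # x(t)^T R x(t) with R = I
```

For this run the weighted energy x(t)ᵀx(t) stays far below c2 = 20.7 on [0, T], consistent with finite-time stability with respect to (c1, c2, T, R, σ(t)).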

Example 4.2. Consider the memristor-based switched neural network (8) with Dσ(t) = 0 and time-varying delays; each subsystem has two memristors:


ẋ(t) = −Aσ(t)(t)x(t) + Bσ(t)(t)f(x(t)) + Cσ(t)(t)f(x(t − τ(t))).    (69)

For this system, we apply the mode-dependent average dwell time approach. Consider the known matrices A^k_{σ(t)}(t), B^k_{σ(t)}(t) and C^k_{σ(t)}(t), k = 1, 2, 3, 4, and let σ(t) = 1, 2. The memristive parameters are determined as follows:

A11 = [1.4 0 0; 0 1.21 0; 0 0 2.2],  B11 = B12 = [−0.5 0.1 0.1; 0.005 −0.3 0.1; 0.2 0.3 −0.5],

Table 1: Feasibility of the LMIs for different τ̄ in Example 4.1.

τ̄             0.6        0.7        0.8        0.9        1.0        1.1          2.7
[49] (τ)       Feasible   Feasible   Feasible   Feasible   Feasible   Infeasible   Infeasible
Theorem 3.1    Feasible   Feasible   Feasible   Feasible   Feasible   Feasible     Feasible


Table 2: Feasibility of the LMIs for different ρ̄ in Example 4.1.

ρ̄             0.7        0.8        0.9        1.0        1.1        1.3
[49] (η)       Feasible   Feasible   Feasible   Feasible   Feasible   Infeasible
Theorem 3.1    Feasible   Feasible   Feasible   Feasible   Feasible   Feasible

The remaining parameters of Example 4.2 are

C11 = C13 = [−0.1 0.3 0.1; 0.5 −0.005 0.1; 0.2 0.3 −0.5],  A21 = A31 = [1.4 0 0; 0 2.205 0; 0 0 2.2],
C12 = C14 = [−0.1 0.3 0.1; 0.5 −1 0.1; 0.2 0.3 −0.5],  B13 = B14 = [−0.5 0.1 0.1; 1 −0.3 0.1; 0.2 0.3 −0.5],
C21 = C23 = [−0.1 0.3 0.1; 0.5 −0.005 0.1; 0.2 0.3 −0.5],  A41 = [1.4 0 0; 0 3.2 0; 0 0 2.2],
A12 = [1.705 0 0; 0 2.8 0; 0 0 2.705],  B21 = B22 = [−0.005 0.1 0.1; 0.5 −0.3 0.1; 0.2 0.3 −0.5],
B23 = B24 = [−0.1 0.3 0.1; 0.1 −0.3 0.5; 0.5 0.005 −0.2],  A22 = [1.705 0 0; 0 2.8 0; 0 0 3.7],
C22 = C24 = [−0.1 0.3 0.1; 0.1 −0.3 0.5; 0.5 1 −0.2],  A32 = [2.7 0 0; 0 2.8 0; 0 0 2.705],
C24 = [−1 0.1 0.1; 0.5 −0.3 0.1; 0.2 0.3 −0.5],  A42 = [2.7 0 0; 0 2.8 0; 0 0 3.7],  τ̄ = 0.2,  h = 0.8.

By Assumption 1, we take δ1− = δ2− = 0 and δ1+ = δ2+ = 1. For the first subsystem, C1 = C2 = 0.01 and R1 = R2 = 100; for the second subsystem, C1 = C2 = 0.01 and R1 = R2 = 20000.

It can then be seen that the considered system is stable. As stated in Remark 5, we can examine the impact of different values of βσ(t) and µσ(t) on the minimum of τai. A more detailed comparison for different values of βσ(t), µσ(t) is shown in Table 3, which indicates that a higher bound on the derivative of the signal transmission delay gives rise to less conservative results. The system (69) is stable for any switching signal σ(t) with mode-dependent average dwell time satisfying τa1 ≥ 0.030, τa2 ≥ 0.021.

Remark 3. According to Theorem 2 in [15], the parameters µ and β are actually the special case of the mode-dependent quantities with µ = max µi and β = max βi in Theorem 3.2. Therefore, by fully using the mode-dependent information µi ≤ µ and βi ≤ β, the conservativeness of the admissible switching signals is reduced. Taking Dσ(t) = 0 in system (68) and comparing with


Table 3: Minimum τaσ(t) (σ(t) = 1, 2) for different values of βσ(t), µσ(t) (σ(t) = 1, 2) in Example 4.2.

(µ1 = µ2), (β1, β2)    (1.01), (1.2, −1)    (1.05), (1.3, −0.9)    (1.1), (1.4, −0.8)
[15]                   (0.05, 0.05)         -                      -
Theorem 3.2            (0.030, 0.021)       (0.036, 0.028)         (0.04, 0.035)

Figure 4: State responses of the first subsystem of the considered neural network for Example 4.2 (x1, x2, x3 versus t/sec)

[15], we obtain a less conservative result.


Remark 4. It can be seen from Theorem 3.1 that the parameter µ is mode-independent, whereas the parameter µi prescribed in Theorem 3.2 is mode-dependent. Therefore, from (15), (16) and (55), (56) we can conclude that τai∗ ≤ τa∗, and the mode-dependent features reduce the conservativeness present in Theorem 3.1.


Remark 5. In this paper, from (68) with Dσ(t) = 0: in [15] the researchers used the well-known Jensen's inequality and the Newton-Leibniz formula to handle the integral term ∫_{t−τ̄}^{t} ẋ^T(s)Z1i ẋ(s)ds. Recently, [45] proposed a new type of inequality based on a free-matrix approach. Compared with Jensen's inequality, the free-matrix method used in our paper gives less conservative results and


Figure 5: State responses of the second subsystem of the considered neural network for Example 4.2 (x1, x2, x3 versus t/sec)


the mode-dependent average dwell time in [15] is Ta1 ≥ 0.05, Ta2 ≥ 0.05, whereas we obtain the minimal mode-dependent average dwell time τa1 ≥ 0.030, τa2 ≥ 0.021 for the switching signal σ(t).


Remark 6. The finite-time stability conditions in Theorem 3.1 and Theorem 3.2 can be easily verified by solving a finite number of LMIs, (9)-(16) and (47)-(55), numerically using the free-weighting matrix method [45]. For the average dwell time approach, one may refer to [15].

Remark 7. In general, if the number of decision variables and/or the size of the LMIs increases, then the computational burden also increases; however, larger LMIs can yield better system performance. In this paper, the proposed criteria employ the free-weighting matrix method; as a result, some additional computational complexity arises in the proposed mode-dependent average dwell time approach.
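As Remark 6 notes, checking the criteria amounts to numerically testing a finite set of LMIs. Outside Matlab's LMI toolbox, the verification step for a given candidate solution reduces to eigenvalue tests on symmetric matrices; the sketch below does this for a toy single-mode stability LMI AᵀP + PA < 0 with made-up matrices (the paper's LMIs (9)-(16) and (47)-(55) are larger block matrices handled the same way):

```python
import numpy as np

# Toy single-mode stability LMI, A^T P + P A < 0, checked for a candidate P.
# Both matrices are placeholders, not taken from the paper.
A = np.array([[-5.0, 0.4], [-0.2, -4.0]])
P = np.eye(2)                       # candidate decision variable (P > 0 by construction)

lmi = A.T @ P + P @ A               # symmetric here, so eigvalsh applies directly
eigs = np.linalg.eigvalsh(lmi)
feasible = bool(eigs.max() < 0)     # negative definite <=> the strict LMI holds
```

An LMI solver automates the search for such a P; the eigenvalue check above is only the a posteriori verification step.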


Remark 8. In order to illustrate that the finite-time stability criterion proposed in this paper is less conservative than existing results, we consider the neural network with the same parameters as above except for the upper bounds of the time-varying delays. For the specific parameter ρ̄ = 1.32, Table 1 shows the feasibility of the LMIs proposed in [49] and in Theorem 3.1; for the specific parameter τ̄ = 2.7, Table 2 shows the feasibility of the LMIs proposed in [49] and in Theorem 3.1. This demonstrates that our result is less conservative than the one given in [49].

Remark 9. The selection of some parameters, such as c1, c2, α, β, µ and so on, has an impact on the feasibility of the related LMIs. The positive scalars α and β are exponential convergence rates and are treated as given values.

5 Conclusion


In this article, we investigated the finite-time stability of memristor-based switched neural networks with discrete and distributed delays, where both time-varying delays depend on the network mode. By utilizing an appropriate Lyapunov-Krasovskii functional, we showed that the system is finite-time stable if a set of linear matrix inequalities is feasible. The average dwell time and mode-dependent average dwell time approaches have been developed to establish sufficient conditions for the switched neural networks to be finite-time stable. Simulation examples are provided to show the usefulness of the proposed method. The idea and approach developed in this paper can be further generalized to deal with the corresponding problems for BAM neural networks, Cohen-Grossberg neural networks, etc.


References

[1] L.O. Chua, Memristor-The missing circuit element, IEEE Trans. on Circuit Theory 18 (1971) 507-519.

[2] D.B. Strukov, G.S. Snider, D.R. Stewart, R.S. Williams, The missing memristor found, Nature 453 (2008) 80-83.

[3] F. Corinto, A. Ascoli, M. Gilli, Nonlinear dynamics of memristor oscillators, IEEE Trans. Circuits Syst. I 58 (2011) 1323-1336. [4] A.L. Wu, Z.G. Zeng, X. Zhu, J. Zhang, Exponential synchronization of memristor-based recurrent neural networks with time delays, Neurocomputing 74 (2011) 3043-3050.


[5] M.H. Jiang, S. Wang, J. Mei, Y. Shen, Finite-time synchronization control of a class of memristor-based recurrent neural networks, Neural Netw. 63 (2015) 133-140. [6] X. Wang, C. Li, T. Huang, Delay-dependent robust stability and stabilization of uncertain memristive delay neural networks, Neurocomputing 140 (2014) 155-161.


[7] X. Wang, C. Li, T. Huang, S.K. Duan, Global exponential stability of a class of memristive neural networks with time-varying delays, Neural Comput. Appl. 24 (2014) 1707-1715. [8] R. Rakkiyappan, G. Velmurugan, J. Cao, Finite-time stability analysis of fractional-order complex-valued memristor-based neural networks with time delays, Nonlinear Dyn. 78 (2014) 2823-2836. [9] Z. Meng, Z. Xiang, Passivity analysis of memristor-based recurrent neural networks with mixed time-varying delays, Neurocomputing 165 (2015) 270-279.


[10] K. Zhong, S. Zhu, Q.Yang, Dissipativity results for memristor-based recurrent neural networks with mixed delays, Intelligent Control Inform. Process. (ICICIP), 2015 Sixth International Conference on (2016) 10.1109/ICICIP.2015.7388205. [11] J. Hu, J. Wang, Global uniform asymptotic stability of memristor-based recurrent neural networks with time delays , Neural Netw. (IJCNN), The 2010 Internat. Joint Conference on (2010) 1-8. [12] J. Chen, Z. Zeng, P. Jiang, Global exponential almost periodicity of a delayed memristor-based neural networks, Neural Netw. 60 (2014) 33-43.


[13] X. Wang, C. Li, T. Huang, Delay-dependent robust stability and stabilization of uncertain memristive delay neural networks, Neurocomputing 140 (2014) 155-161.


[14] S. Wen, Z. Zeng, T. Huang, Exponential stability analysis of memristor-based recurrent neural networks with time-varying delays, Neurocomputing 97 (2012) 233-240. [15] Y. Xin, Y. Li, Z. Cheng, X. Huang, Global exponential stability for switched memristive neural networks with time-varying delays, Neural Netw. 80 (2016) 34-42.


[16] O.A. Arqub, The reproducing kernel algorithm for handling differential algebraic systems of ordinary differential equations, Mathematical Methods in the Applied Sciences 39 (2016) 45494562.


[17] O.A. Arqub, Approximate solutions of DASs with nonclassical boundary conditions using novel reproducing kernel algorithm, Fundamenta Informaticae 146 (2016) 231-254.


[18] O.A. Arquba, Z.A. Hammour, Numerical solution of systems of second-order boundary value problems using continuous genetic algorithm, Information Sciences 279 (2014) 396-415. [19] M.S. Mahmoud, Y. Xia, LMI-based exponential stability criterion for bidirectional associative memory neural networks, Neurocomputing 74 (2010) 284-290. [20] M.S. Mahmoud, A. Ismail, Improved results on robust exponential stability criteria for neutraltype delayed neural networks, Applied Mathematics and Computation 217 (2010) 3011-3019. [21] M.S. Mahmoud, Y. Xia, Improved exponential stability analysis for delayed recurrent neural networks, Journal of the Franklin Institute 348 (2012) 201-211. [22] J. Qiu, K. Lu, P. Shi, M.S. Mahmoud, Robust exponential stability for discrete-time interval BAM neural networks with delays and Markovian jump parameters, International Journal of Adaptive-Control and Signal Processing 24 (2010) 760-785.


[23] J. Liu, S. Vazquez, L. Wu, A. Marquez, H. Gao, L.G. Franquelo, Extended state observer-based sliding-mode control for three-phase power converters, IEEE Transactions on Industrial Electronics 64 (2017) 22-31. [24] J. Liu, W. Luo, X. Yang, L. Wu, Robust model-based fault diagnosis for PEM fuel cell air-feed system, IEEE Transactions on Industrial Electronics 63 (2016) 3261-3270.


[25] Y. Hong, Z.P. Jiang, Finite-time stabilizability and instabilizability of delayed memristive neural networks with nonlinear discontinuous controller, IEEE Trans. Neural Netw. Learn. Syst. 26 (2015) 2914-2924. [26] Z. Cai, L. Huang, M. Zhu, D. Wang, Finite-time stabilization control of memristor-based neural networks, Nonlinear Anal: Hybrid Syst. 20 (2016) 37-54. [27] A. Abdurahman, H. Jiang, Z. Teng, Finite-time synchronization for memristor-based neural networks with time-varying delays, Neural Netw. 69 (2015) 20-28.


[28] L. Wang, Y. Shen, G. Zhang, Finite-time stabilization and adaptive control of memristor-based delayed neural networks, IEEE Trans. Neural Netw. Learn. Syst. (2016) 1-12. [29] X. Chen, L. Huang, Finite time stability of periodic solution for Hopfield neural networks with discontinuous activations, Neurocomputing 103 (2013) 43-49. [30] Y. Orlov, Finite time stability and robust control synthesis of uncertain switched systems. SIAM Journal on Control and Optimization, Siam J. Control Optim. 43 (2006) 1253-1271.


[31] R. Yang, Y. Wang, Finite-time stability and stabilization of a class of nonlinear time-delay systems, SIAM J. Control Optim. 50 (2012) 3113-3131.


[32] X. Zhang, G. Feng, Y. Sun, Finite-time stabilization by state feedback control for a class of time-varying nonlinear systems, Automatica 48 (2012) 499-504. [33] E. Moulay, M. Dambrine, N. Yeganefar, W. Perruquetti, Finite-time stability and stabilization of time-delay systems, Syst. Control Lett. 57 (2008) 561-566.


[34] D. Efimov, A. Polyakov, E. Fridman, W. Perruquetti, J.P. Richard, Comments on finite-time stability of time-delay systems, Automatica 50 (2014) 1944-1947.


[35] S.P. Bhat, D.S. Bernstein, Finite-time stability of continuous autonomous systems, SIAM J. Control Optim. 38 (2000) 751-766. [36] W.M. Haddad, S.G. Nersesov, L. Du, Finite-time stability for time-varying nonlinear dynamical systems, Proc. Amer. Control Conf. (2008) 4135-4139.


[37] X. Yang, Exponential synchronization of memristive Cohen-Grossberg neural networks with mixed delays, Cogn. Neurodyn. 8 (2014) 239-249. [38] A.L. Wu, Z.G. Zeng, Exponential stabilization of memristive neural networks with time delays, IEEE Trans. Neural Netw. Learn. Syst. 23 (2012) 1919-1929.

[39] Z.Y. Guo, J. Wang, Z. Yan, Global exponential dissipativity and stabilization of memristor-based recurrent neural networks with time-varying delays, Neural Netw. 48 (2013) 158-172. [40] X.Y. Liu, Ju.H. Park, N. Jiang, J. Cao, Nonsmooth finite-time stabilization of neural networks with discontinuous activations, Neural Netw. 52 (2014) 25-32. [41] X.Y. Liu, D.W.C. Ho, W.W. Yu, J. Cao, A new switching design to finite-time stabilization of nonlinear systems with applications to neural networks, Neural Netw. 57 (2014) 94-102.


[42] X. Yang, J. Cao, Finite-time stochastic synchronization of complex networks, Applied Math. Model. 34 (2010) 3631-3641. [43] X. Zhang, G. Feng, Y. Sun, Finite-time stabilization by state feedback control for a class of time-varying nonlinear systems, Automatica 48 (2012) 499-504.


[44] K. Gu, V. L. Kharitonov, J. Chen, Stability of time delay systems, Birkhuser, Boston, 2003. [45] H.B. Zeng, Y. He, M. Wu, J. She, New results on stability analysis for systems with discrete distributed delay, Automatica, 60 (2015) 189-192. [46] D. Liberzon, Finite time stability and robust control synthesis of uncertain switched systems. SIAM Journal on Control and Optimization, Siam J. Control Optim. 43 (2006) 1253-1271. [47] Y. He, M.D. Ji, C.K. Zhang, M. Wu, Global exponential stability of neural networks with timevarying delay based on free-matrix-based integral inequality, Neural Netw. 77 (2016) 80-86.


[48] P. Park, J. W. Ko, C. Jeong, Reciprocally convex approach to stability of systems with timevarying delays, Automatica 47 (2011) 235-238.


[49] Z. Meng, Z. Xiang, Stability analysis of stochastic memristor-based recurrent neural networks with mixed time-varying delays, Neural Comput. Appl. 28 (2017) 1787-1799.



M. Syed Ali graduated from the Department of Mathematics of Gobi Arts and Science College affiliated to Bharathiar University, Coimbatore in 2002. He received his post-graduation in Mathematics from Sri Ramakrishna Mission Vidyalaya College of Arts and Science affiliated to Bharathiar University, Coimbatore, Tamil Nadu, India, in 2005. He was awarded Master of Philosophy in 2006 in the field of Mathematics with specialized area of Numerical Analysis from Gandhigram Rural University Gandhigram, India. He was conferred with Doctor of Philosophy in 2010 in the field of Mathematics specialized in the area of Fuzzy Neural Networks in Gandhigram Rural University, Gandhigram, India. He was selected as a Post-Doctoral Fellow in the year 2010 for promoting his research in the field of Mathematics at Bharathidasan University, Trichy, Tamil Nadu and also worked there from November 2010 to February 2011. Since March 2011 he is working as an Assistant Professor in Department of Mathematics, Thiruvalluvar University, Vellore, Tamil Nadu, India. He was awarded Young Scientist Award 2016 by The Academy of Sciences, Chennai. He has published more than 70 research papers in various SCI journals holding impact factors. He has also published research articles in national journals and international conference proceedings. He also serves as a reviewer for several SCI journals. His research interests include stochastic differential equations, dynamical systems, fuzzy neural networks, complex networks and cryptography.


S. Saravanan was born in Periyakilambadi, India, in 1990. He received the B.Sc. degree from the Department of Mathematics, Government Arts College, Thiruvannamalai, affiliated to Thiruvalluvar University, Vellore, Tamil Nadu, India, in 2011, and his post-graduate degree from Madras Christian College, University of Madras, Chennai, Tamil Nadu, India, in 2013. Currently, he is working towards the Ph.D. degree under the supervision of Dr. M. Syed Ali, Assistant Professor in the Department of Mathematics, Thiruvalluvar University, Vellore, Tamil Nadu, India. His current research interests include finite-time control of neural networks, H∞ control, Markovian jump systems and switched neural networks.