Passivity and passification of memristive recurrent neural networks with multi-proportional delays and impulse

Applied Mathematics and Computation 369 (2020) 124838

Yuxiao Wang a, Yuting Cao b, Zhenyuan Guo b, Shiping Wen a,∗

a School of Computer Science and Engineering, University of Electronic Science and Technology of China, Chengdu 611731, China
b College of Mathematics and Econometrics, Hunan University, Changsha 410082, China

Article history: Received 31 July 2019; Revised 2 October 2019; Accepted 13 October 2019

Keywords: Memristive recurrent neural network; Passivity; Passification; Multi-proportional delay; Impulse

Abstract: This paper studies the passivity and passification of memristive recurrent neural networks (MRNNs) with multi-proportional delays and impulse. To prepare for the passivity analysis, the MRNN model is transformed into a general recurrent neural network (RNN) model by means of non-smooth analysis. Using suitable Lyapunov–Krasovskii functionals constructed in this paper together with standard matrix-inequality techniques, a novel sufficient condition for passivity is derived. In addition, this condition relaxes the usual requirement that all the symmetric matrices involved be positive definite. The final results are presented as linear matrix inequalities (LMIs), which are easily verified with the LMI toolbox. Several numerical examples demonstrate the effectiveness and correctness of the derived criteria. © 2019 Elsevier Inc. All rights reserved.

1. Introduction

As fundamental circuit components, the resistor, inductor, and capacitor are widely used to describe the relationships among circuit variables. However, the relationship between electric charge and magnetic flux remained a missing link, which was not filled until Chua proposed the memristor in 1971 [1]. The memristor possesses distinctive properties that not only close this theoretical gap but also have great practical significance. Through long-term research and exploration, many scientists have explored its prospective applications in nanoelectronic computing technologies [2], neuromorphic systems [3–14], biological computation [15,16], and so on. Because the memristor behaves much like a human synapse, memristor-based synapses have been proposed [17], and many scholars have used memristors to construct new neural network models for a more faithful emulation of the human brain. In circuit terms, the analog device embodying the synapse is replaced with a memristor, which is the key difference between general neural networks (NNs) and memristive neural networks (MNNs). Among the most widespread neural networks, MRNNs are popular in neural computing for their strong optimization and computational abilities [18–20]. Various aspects of such systems have been studied in depth; see [21–30]. The dynamics of a system is one of the most important of these aspects. As an important branch of dynamics, passivity plays a key role in guaranteeing that dynamic systems maintain internal stability, and passivity theory provides a ready-made framework for analyzing system stability. Methods for analyzing stability are various; see, e.g., [31]. The passification process is unique and is

∗ Corresponding author. E-mail address: [email protected] (S. Wen).

https://doi.org/10.1016/j.amc.2019.124838
0096-3003/© 2019 Elsevier Inc. All rights reserved.


Y. Wang, Y. Cao and Z. Guo et al. / Applied Mathematics and Computation 369 (2020) 124838

actually the addition of a proper controller that renders the system passive. Guo et al. [32] comprehensively analyzed the passivity and passification of MRNNs for the first time. Subsequently, different methods were used to reduce the conservatism of the derived results, such as tightening the bounds on the corresponding variables and constructing more appropriate functionals; less conservative passivity criteria were established through different methods in [33–38]. To make a system passive, various controllers have been designed; the state-feedback controller is frequently used in the analysis of MRNNs. Besides, impulses are also widely added to MRNNs to render the system passive. The impulse itself accounts for the temporal asymmetry of changes in synaptic plasticity, that is, the change in the connection strength between neurons when the synapse receives pre-synaptic and post-synaptic action potentials. It also serves several purposes, one of which is to speed up the system's convergence to a steady state. The accompanying problem is that the system trajectory changes instantaneously while the system is running, so impulsive systems are more complicated and less stable. Under this condition, it is indispensable to study and optimize the passivity of systems with impulses. To date, the dynamical behaviors of MRNNs with impulses have been discussed extensively. In [39], the stability of a memristive neural network model subject to time-varying impulses was studied, and several related results appear in [40–42]. Delays commonly arise during signal transmission and cause instability, periodic oscillation, and other undesirable behaviors in neural network systems [43,44]. How to maintain stability under various delays has therefore been widely researched in recent years.

Some work has studied MRNNs with leakage delays [45,46], distributed delays [47–49], and time-varying delays [50–56]. Wang et al. [57] studied the passivity and passification of MRNNs subject to time-varying and leakage delays. Besides, multiple proportional delays are also common in transmission and have a great impact on dynamic systems; they are special cases of time-varying delays in which τ(t) takes the form (1 − p)t, so that the delayed argument is pt. A key difference is that general delays are bounded while proportional delays are unbounded, so the corresponding processing methods differ. In [58], MNNs with multi-proportional delays were considered and anti-synchronization control of this system was discussed. MRNNs with multiple proportional delays, however, have seen little progress so far. Inspired by the discussions above, this article studies, for the first time, the passivity and passification of MRNNs with multiple proportional delays and impulse. The approach to the dynamic analysis is standard: constructing new Lyapunov–Krasovskii functionals and applying matrix-inequality techniques. The appropriate matrices and simulation results can be obtained easily through MATLAB. The remainder of the paper is organized as follows. In the next section, some preliminaries and the MRNN model with multi-proportional delays are given. Section 3 discusses the passivity and passification of this system, and Section 4 presents two numerical examples with the corresponding simulations. Conclusions are stated in the last section.

Notations: For a matrix S, ST denotes its transpose and S > 0 denotes that S is positive definite. Block-diagonal matrices are written diag{ · }. In a symmetric matrix, the notation ∗ is shorthand for a symmetric block.
C denotes a set of continuous real-valued functions. The zero matrix is denoted 0 and the identity matrix I. The m-dimensional Euclidean space and the set of all m × n real matrices are denoted Rm and Rm×n, respectively. ‖ · ‖ denotes the Euclidean vector norm or the induced matrix norm. For real numbers or matrices A¯ and A, co{A¯, A} denotes the closure of the convex hull they generate.

2. Preliminaries

In this section, some preparation, lemmas, and definitions needed for the analysis of the MRNN model with multi-proportional delays and impulse are presented. The model is stated as follows:

ẋi(t) = −pi xi(t) + Σnj=1 υij(xj(t)) hj(xj(t)) + Σnj=1 γij(xj(ξj t)) hj(xj(ξj t)) + Σnj=1 λij(xj(ξ̂j t)) hj(xj(ξ̂j t)) + ωi(t), i = 1, 2, . . . , n, t ≠ tg,
yi(t) = hi(xi(t)),
Δx(tg) = Eg(x(tg−)), g ∈ N,
xi(t) = δi(t), t ∈ [ρ, 1],   (1)

where xi(t) denotes the state of the ith neuron at time t and pi > 0 denotes the self-feedback connection weight of each neuron cell. hi( · ) is the nonlinear activation function and satisfies the zero initial condition. The input of system (1) is ωi(t) and yi(t) is its output. υij, γij, and λij are the connection weights. ξj and ξ̂j denote the proportional delay factors, which satisfy 0 < ξj, ξ̂j ≤ 1, and ξj t = t − (1 − ξj)t, ξ̂j t = t − (1 − ξ̂j)t denote the multiple proportional delays. The impulse times tg satisfy 0 < t0 < t1 < . . . < tg < tg+1 < . . . with limg→+∞ tg = +∞. The instantaneous state change of xi(t) at time tg is Δxi(tg) = xi(tg+) − xi(tg−), with xi(tg+) = xi(tg) and xi(tg−) = limt→tg− xi(t). Eg(x(tg−)) = νig x(tg−) describes the magnitude of the state jump at time tg. For system (1), δi(t) is the zero initial condition, differentiable on [ρ, 1], where ρ = min1≤j≤n(ξj, ξ̂j).
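A small numerical note (ours, not from the paper) makes the unboundedness of proportional delays concrete: the delayed argument ξt equals t − τ(t) with τ(t) = (1 − ξ)t, so the delay itself grows linearly in t.

```python
# The proportional delay tau(t) = (1 - xi) * t satisfies x(xi*t) = x(t - tau(t))
# and grows without bound, unlike a constant or bounded time-varying delay.
# Here xi = 0.02 is the value used later in Example 1.
xi = 0.02

def tau(t):
    """Delay induced by the proportional factor xi."""
    return (1.0 - xi) * t

for t in [1.0, 10.0, 100.0]:
    print(t, tau(t))
# tau is unbounded in t, so techniques for bounded delays do not apply directly.
```

This is why the processing methods for bounded time-varying delays cannot be reused here.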


For convenience, and utilizing the memristor's unique current–voltage characteristic, the connection weights υij(xj(t)), γij(xj(ξj t)), λij(xj(ξ̂j t)) in the memristive neural network switch as follows:

υij(xj(t)) = υij∗ if D+hi(xi(t)) − D+xj(t) < 0; υij# if D+hi(xi(t)) − D+xj(t) > 0; υij(t−) if D+hi(xi(t)) − D+xj(t) = 0;

γij(xj(ξj t)) = γij∗ if D+hi(xi(ξt)) − D+xj(t) < 0; γij# if D+hi(xi(ξt)) − D+xj(t) > 0; γij(t−) if D+hi(xi(ξt)) − D+xj(t) = 0;

λij(xj(ξ̂j t)) = λij∗ if D+hi(xi(ξ̂t)) − D+xj(t) > 0; λij# if D+hi(xi(ξ̂t)) − D+xj(t) < 0; λij(t−) if D+hi(xi(ξ̂t)) − D+xj(t) = 0;

where υij∗, υij#, γij∗, γij#, λij∗, λij# are all constants.
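The three-branch switching rule above can be sketched as a small helper (the function name is ours, not the paper's); it selects one of two constant weights, or keeps the left limit, according to the sign of D+hi − D+xj. Note that for λij the paper swaps the roles of the two sign conditions.

```python
# Hedged sketch of the memristive weight-switching rule for one (i, j) pair.
def switched_weight(dh_minus_dx, w_star, w_sharp, w_prev):
    """Return the connection weight selected by the sign of D+h_i - D+x_j."""
    if dh_minus_dx < 0:
        return w_star
    elif dh_minus_dx > 0:
        return w_sharp
    return w_prev   # on the switching surface, keep the left limit w(t-)

# With upsilon_11 from Example 1: upsilon* = -2.1, upsilon# = -2.4.
print(switched_weight(-0.3, -2.1, -2.4, -2.1))   # -> -2.1
```

The state-dependent selection is exactly what makes the right-hand side of (1) discontinuous and motivates the Filippov framework below.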

Definition 1 [59]. Let Q ⊆ Rm. A set-valued map u → V(u) from Q to Rm assigns to each point u of Q a nonempty set V(u) ⊆ Rm. V is upper semi-continuous with nonempty values at u0 ∈ Q if, for any open set N containing V(u0), there exists a neighborhood M of u0 such that V(M) ⊆ N. If V(u) is closed (convex, compact) for each u ∈ Q, then V has closed (convex, compact) images.

Definition 2 [60]. Let du/dt = v(t, u) be a differential equation in which v(t, u) is discontinuous in u. Its set-valued map is defined as

V(t, u) = ∩ε>0 ∩μ(N)=0 co[v(B(u, ε)\N)],

where co[K] denotes the closure of the convex hull of the set K ⊆ Rm, B(u, ε) = {y : ‖y − u‖ ≤ ε, u, y ∈ Rm, ε > 0}, and μ(N) denotes the Lebesgue measure of the set N ⊂ Rm. For system (1) and t ∈ [0, T], a Filippov solution x(t, x0) is absolutely continuous and satisfies the differential inclusion

dx/dt ∈ V(t, x), for t ∈ [0, T].

Under Assumption 1 below, with V(t, x) the set-valued map of Definition 2, the solution x(t) of system (1) in the sense of Filippov can be extended to infinity.

Assumption 1. For given positive constants ki and any β1, β2 ∈ R with β1 ≠ β2, the nonlinear functions hi( · ) satisfy

0 ≤ (hi(β1) − hi(β2)) / (β1 − β2) ≤ ki,  i = 1, 2, . . . , n.   (2)
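The sector condition (2) can be verified numerically for the activation used later in Example 1 (this check is ours, not the paper's): h(x) = 0.5 tanh(x) has difference quotients in [0, 0.5], so (2) holds with ki = 0.5.

```python
import math

# Numerical check that h(x) = 0.5*tanh(x) satisfies sector condition (2)
# with k = 0.5: every difference quotient lies in [0, max h'] = [0, 0.5].
def h(x):
    return 0.5 * math.tanh(x)

k = 0.5
pts = [x / 10.0 for x in range(-50, 51)]
ok = all(
    0.0 <= (h(b1) - h(b2)) / (b1 - b2) <= k + 1e-12
    for b1 in pts for b2 in pts if b1 != b2
)
print(ok)   # -> True
```

The bound follows because h is monotone increasing with derivative 0.5 sech²(x) ∈ (0, 0.5].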

Through the above discussion, system (1) satisfies

ẋi(t) ∈ −pi xi(t) + Σnj=1 co{υij∗, υij#} hj(xj(t)) + Σnj=1 co{γij∗, γij#} hj(xj(ξj t)) + Σnj=1 co{λij∗, λij#} hj(xj(ξ̂j t)) + ωi(t), i = 1, 2, . . . , n, t ≠ tg,   (3)

which means that there exist υ̃ij(xi(t)) ∈ co{υij∗, υij#}, γ̃ij(xj(ξj t)) ∈ co{γij∗, γij#}, and λ̃ij(xj(ξ̂j t)) ∈ co{λij∗, λij#} such that

ẋi(t) = −pi xi(t) + Σnj=1 υ̃ij(xi(t)) hj(xj(t)) + Σnj=1 γ̃ij(xj(ξj t)) hj(xj(ξj t)) + Σnj=1 λ̃ij(xj(ξ̂j t)) hj(xj(ξ̂j t)) + ωi(t), i = 1, 2, . . . , n, t ≠ tg.   (4)

In these equations, co{υij∗, υij#} = [ῡij, υij], co{γij∗, γij#} = [γ̄ij, γij], and co{λij∗, λij#} = [λ̄ij, λij]; applying these conditions, inclusion (3) can be written in the form

ẋi(t) ∈ −pi xi(t) + Σnj=1 co{Υ∗, Υ#}ij hj(xj(t)) + Σnj=1 co{Γ∗, Γ#}ij hj(xj(ξj t)) + Σnj=1 co{Λ∗, Λ#}ij hj(xj(ξ̂j t)) + ωi(t), i = 1, 2, . . . , n, t ≠ tg,   (5)


where Υ∗ = (υij∗)n×n, Υ# = (υij#)n×n, Γ∗ = (γij∗)n×n, Γ# = (γij#)n×n, Λ∗ = (λij∗)n×n, Λ# = (λij#)n×n. There exist Υ ∈ co{Υ∗, Υ#}, Γ ∈ co{Γ∗, Γ#}, Λ ∈ co{Λ∗, Λ#}; therefore, system (1) can be expressed as:

ẋ(t) = −P x(t) + Υ h(x(t)) + Γ h(x(ξt)) + Λ h(x(ξ̂t)) + ω(t), t ≥ 1, t ≠ tg,
y(t) = h(x(t)),
Δx(tg) = Eg(x(tg−)), g ∈ N,
x(t) = δ(t), t ∈ [ρ, 1],   (6)

where x(t) = (x1(t), x2(t), . . . , xn(t))T ∈ Rn, P = diag(p1, p2, . . . , pn) with pi > 0, i = 1, 2, . . . , n, Υ = (υij)n×n, Γ = (γij)n×n, Λ = (λij)n×n, ω(t) = (ω1(t), ω2(t), . . . , ωn(t))T, h(x(t)) = (h1(x1(t)), h2(x2(t)), . . . , hn(xn(t)))T ∈ Rn, h(x(ξt)) = (h1(x1(ξ1t)), h2(x2(ξ2t)), . . . , hn(xn(ξnt)))T, h(x(ξ̂t)) = (h1(x1(ξ̂1t)), h2(x2(ξ̂2t)), . . . , hn(xn(ξ̂nt)))T, and δ(t) = (δ1(t), δ2(t), . . . , δn(t))T ∈ C([ρ, 1], Rn). In this article, the nonlinear activation function h( · ) is assumed to be continuous and bounded, and to satisfy Assumption 1. We now present a definition that is useful in the analysis of our results.

Definition 3 [61]. Under zero initial conditions, system (1) is passive if there exists a positive real number ι such that for all tf ≥ 1 the inequality

2 ∫1tf h(α)T ω(α) dα ≥ −ι ∫1tf ω(α)T ω(α) dα   (7)

is satisfied.

3. Main results

3.1. Passivity analysis

In this section, we analyze the passivity of system (6) through a novel Lyapunov–Krasovskii functional (LKF) in which not all of the symmetric matrices are required to be positive definite.

Theorem 1. Under Assumption 1, with Δx(tg) = νig x(tg−) and νig² ≤ 1, system (6) is passive if there exist matrices R = (rij)n×n, N1 > 0, N3 > 0, N2 = N2T, M1 > 0, M3 > 0, M2 = M2T, diagonal matrices K = diag(k1, k2, . . . , kn), Ξ̂ = diag(ξ̂1, ξ̂2, . . . , ξ̂n), Ξ = diag(ξ1, ξ2, . . . , ξn) with ki > 0, ξ̂i > 0, ξi > 0, and scalars ι > 0, z > 0, zi > 0, i = 1, 2, such that

φ1 := [ M1 − K²z1   M2 ; ∗   M3 + z1 ] > 0,   (8)

φ2 := [ N1 − K²z2   N2 ; ∗   N3 + z2 ] > 0,   (9)

Ω :=
⎡ ψ1   ψ2   RΓ    RΛ     0      0      R  ⎤
⎢ ∗    ψ3   Γ     Λ      0      0      0  ⎥
⎢ ∗    ∗   −M3    0     −M2     0      0  ⎥
⎢ ∗    ∗    ∗   −Ξ̂N3    0    −Ξ̂N2    0  ⎥
⎢ ∗    ∗    ∗     ∗    −M1     0      0  ⎥
⎢ ∗    ∗    ∗     ∗     ∗    −Ξ̂N1    0  ⎥
⎣ ∗    ∗    ∗     ∗     ∗     ∗     −ιI ⎦ < 0,   (10)

where ψ1 = −(PR + RP) + M1 + N1 + zK², ψ2 = RΥ − P + M2 + N2, ψ3 = 2Υ + M3 + N3 − zI; then system (6) is passive.

Proof. For system (6), the following LKF is constructed:

V(t) = xT(t)Rx(t) + 2 Σni=1 ∫0xi(t) hi(s) ds + Σni=1 ∫ξit t κiT(ρ) M κi(ρ) dρ + Σni=1 ∫ξ̂it t κiT(θ) N κi(θ) dθ,   (11)

where M = [ M1 M2 ; ∗ M3 ], N = [ N1 N2 ; ∗ N3 ], κ(t) = [ x(t) ; h(x(t)) ].

We first prove that V(t) is positive definite. Applying Assumption 1, the inequality

xT(t) K²z x(t) − hT(x(t)) z h(x(t)) ≥ 0   (12)

holds for any scalar z > 0. It is then not difficult to deduce that, for any scalars z1 > 0 and z2 > 0,

Σni=1 ∫ξit t κiT(s) [ K²z1  0 ; 0  −z1 ] κi(s) ds + Σni=1 ∫ξ̂it t κiT(s) [ K²z2  0 ; 0  −z2 ] κi(s) ds > 0.   (13)


Assumption 1 also guarantees that

2 Σni=1 ∫0xi(t) hi(θ) dθ > 0.   (14)

From (12)–(14), the following inequality is deduced:

V(t) ≥ xT(t)Rx(t) + 2 Σni=1 ∫0xi(t) hi(s) ds + Σni=1 ∫ξit t κiT(ρ) M κi(ρ) dρ + Σni=1 ∫ξ̂it t κiT(θ) N κi(θ) dθ
      − Σni=1 ∫ξit t κiT(s) [ K²z1  0 ; 0  −z1 ] κi(s) ds − Σni=1 ∫ξ̂it t κiT(s) [ K²z2  0 ; 0  −z2 ] κi(s) ds
    = xT(t)Rx(t) + 2 Σni=1 ∫0xi(t) hi(s) ds + Σni=1 ∫ξit t κiT(ρ) φ1 κi(ρ) dρ + Σni=1 ∫ξ̂it t κiT(θ) φ2 κi(θ) dθ.   (15)

Recalling conditions (8) and (9), φi > δi I, i = 1, 2, for sufficiently small positive scalars δi, and the following inequality holds for any x(t) ≠ 0:

V(t) ≥ (1 − ξ) t δ1 |x(t)|² + (1 − ξ̂) t δ2 |x(t)|² ≥ 2t δ |x(t)|² > 0,   (16)

where t > 0 and δ = min{δ1, δ2}. Next, taking the impulse and the related conditions into account, V(tg) can be expressed as

V(tg) = νig² xT(tg−) R x(tg−) + 2 Σni=1 ∫0νig xi(tg−) hi(s) ds + Σni=1 ∫ξitg− tg− κiT(ρ) M κi(ρ) dρ + Σni=1 ∫ξ̂itg− tg− κiT(θ) N κi(θ) dθ.   (17)

Since νig² ≤ 1 and ∫0νig xi(tg−) hi(s) ds ≤ ∫0xi(tg−) hi(s) ds, the following inequality is established:

V(tg) ≤ V(tg−) = xT(tg−) R x(tg−) + 2 Σni=1 ∫0xi(tg−) hi(s) ds + Σni=1 ∫ξitg− tg− κiT(ρ) M κi(ρ) dρ + Σni=1 ∫ξ̂itg− tg− κiT(θ) N κi(θ) dθ.   (18)

Based on inequality (18), integrating the upper right Dini derivative D+V(t) from t0 = 1 to T (tg ≤ T < tg+1, g = 0, 1, 2, . . . ) gives

∫t0T D+V(u) du = ∫t0t1 D+V(θ) dθ + ∫t1t2 D+V(θ) dθ + . . . + ∫tg−1tg D+V(θ) dθ + ∫tgT D+V(θ) dθ
 = V(t1−) − V(t1) + . . . + V(tg−1−) − V(tg−1) + V(tg−) − V(tg) + V(T) − V(t0)
 ≥ V(T) − V(t0) = V(T) > 0.   (19)

Now, derive the upper right Dini-derivative of V(t), the result can be obtained as follow:

D+V (t ) − 2hT (t )ω (t ) − ιωT (t )ω (t ) = 2xT (t )R(−P x(t ) + ϒ h(x(t )) +  h(x(ξ t )) + h(x(ξˆt )) + ω (t )) + 2hT (x(t ))(−P x(t ) + ϒ h(x(t )) +  h(x(ξ t ) + h(x(ξˆt )) + ω (t )) + κ T (t )Mκ (t ) − κ T (ξ t )Mκ (ξ t ) ˆ κ (ξˆt ) − 2hT (t )ω (t ) − ιωT (t )ω (t ). + κ T (t )Nκ (t ) − κ T (ξˆt )N 

(20)

By applying Assumption 1

D+V (t ) − 2hT (t )ω (t ) − ιωT (t )ω (t ) ≤ 2xT (t )R(−P x(t ) + ϒ h(x(t )) +  h(x(ξ t )) + h(x(ξˆt )) + ω (t )) + 2hT (x(t ))(−P x(t ) + ϒ h(x(t )) +  h(x(ξ t )) + h(x(ξˆt )) + ω (t )) + κ T (t )Mκ (t ) − κ T (ξ t )Mκ (ξ t ) ˆ κ (ξˆt ) − 2hT (t )ω (t ) − ιωT (t )ω (t ) +κ T (t )Nκ (t ) − κ T (ξˆt )N  +xT (t )K 2 zx(t ) − hT (x(t ))zh(x(t )) = ζ T (t )ζ (t ), where

ζ (t ) = [xT (t ) hT (t ) hT (x(ξ t )) hT (x(ξˆt )) xT (ξ t ) xT (ξˆt ) ωT (t )]T . Applying (8), (9) and (21), for any tf ≥ 0 it is easily to obtain



tf 0

[D+V (xt ) − 2hT (t )ω (t ) − ιωT (t )ω (t )]dt ≤ 0,

(22)


which ensures that

∫0tf [2hT(t)ω(t) + ιωT(t)ω(t)] dt ≥ V(xtf) − V(x0) ≥ 0.   (23)

Then Definition 3 holds, which implies that system (6) is passive. □

Remark 1. In this paper, proportional delays are considered, and the analysis accounts for more terms affecting the system than [62], where time-delayed inputs are not considered; this may help reduce conservativeness. From the definitions of the matrices in Theorem 1 and its proof, N2 and M2 are not required to be positive definite; they only need to satisfy N2T = N2 and M2T = M2. In related work, however, the matrices of the Lyapunov functionals all need to be positive definite, so the present definition of these two matrices yields more relaxed conditions and lower conservatism.

Remark 2. The matrices Υ, Γ, Λ are the memristor-based connection weights of MRNNs (6). Owing to the state-transition characteristic of the memristor, the connection weights change with the state of each subsystem. In contrast, if the connection weights were constant matrices, there would be no difference between MRNNs and general RNNs.

Consider system (6) without impulse, and construct the same LKF as in Theorem 1; the LKF can be proved positive definite under Assumption 1. Following the same proof process as in Theorem 1, a corollary can be obtained:

Corollary 1. MRNNs (6) without impulse are passive under Assumption 1 if there exist R = (rij)n×n, N1 > 0, N3 > 0, N2T = N2, M1 > 0, M3 > 0, M2T = M2, diagonal matrices K = diag(k1, . . . , kn), Ξ̂ = diag(ξ̂1, . . . , ξ̂n), Ξ = diag(ξ1, . . . , ξn) with ki > 0, ξ̂i > 0, ξi > 0, and scalars ι > 0, z > 0, zi > 0, i = 1, 2, such that

φi > 0, i = 1, 2,   (24)

Ω < 0.   (25)
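The passivity inequality of Definition 3 can also be checked numerically on a trivial example (this sketch is ours, not the paper's method): a single node ẋ = −px + ω with output y = h(x) = 0.5 tanh(x), forward Euler integration, and Riemann-sum quadrature; ι = 2.0 is an assumed test bound.

```python
import math

# Illustrative numerical check of a passivity-style inequality:
# accumulate 2 * int(y * omega) and int(omega^2) along a simulated run
# of x' = -p*x + omega(t), y = 0.5*tanh(x), with omega(t) = sin(t).
p, dt, T = 6.0, 1e-3, 10.0
x, t = 0.0, 0.0
lhs = 0.0   # 2 * integral of y * omega
rhs = 0.0   # integral of omega^2
while t < T:
    w = math.sin(t)
    y = 0.5 * math.tanh(x)
    lhs += 2.0 * y * w * dt
    rhs += w * w * dt
    x += dt * (-p * x + w)   # forward Euler step of the node dynamics
    t += dt

iota = 2.0   # assumed bound for this toy check
print(lhs >= -iota * rhs)
```

For this strongly damped node the supply integral stays well above the bound, mirroring what Theorem 1 guarantees for the full system.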

3.2. Passification analysis

In this section, passification is investigated. The passification problem is to design an appropriate controller such that the resulting closed-loop system is passive. Consider MRNNs (6) with control input as follows:

ẋ(t) = −P x(t) + Υ h(x(t)) + Γ h(x(ξt)) + Λ h(x(ξ̂t)) + ω(t) + E v(t), t ≥ 1, t ≠ tg,
Δx(tg) = Eg(x(tg−)), g ∈ N,
y(t) = h(x(t)),
x(t) = δ(t), t ∈ [ρ, 1],   (26)

in which E denotes a constant matrix of suitable dimensions and v(t) ∈ Rm is the control input vector. The state-feedback law is

v(t) = W x(t).   (27)

Then system (26) is equivalent to the following equations:

ẋ(t) = −(P − EW) x(t) + Υ h(x(t)) + Γ h(x(ξt)) + Λ h(x(ξ̂t)) + ω(t), t ≥ 1, t ≠ tg,
Δx(tg) = Eg(x(tg−)), g ∈ N,
y(t) = h(x(t)),
x(t) = δ(t), t ∈ [ρ, 1],   (28)

where W = Σni=1 Wi.

Theorem 2. Under Assumption 1, MRNNs (28) are passive if there exist matrices R̄ = (r̄ij)n×n, M̄1 > 0, M3 > 0, M̄2 = M̄2T, N̄1 > 0, N3 > 0, N̄2 = N̄2T, W̄, diagonal matrices K̄ = diag(k̄1, k̄2, . . . , k̄n), Ξ̂ = diag(ξ̂1, ξ̂2, . . . , ξ̂n), Ξ = diag(ξ1, ξ2, . . . , ξn) with k̄i > 0, ξ̂i > 0, ξi > 0, and scalars ι > 0, z > 0, zi > 0, i = 1, 2, such that

φ̄1 := [ M̄1 − K̄²z1   M̄2 ; ∗   M3 + z1 ] > 0,   (29)

φ̄2 := [ N̄1 − K̄²z2   N̄2 ; ∗   N3 + z2 ] > 0,   (30)

Ω̄ :=
⎡ ψ̄1   ψ̄2   Γ     Λ      0      0      I  ⎤
⎢ ∗    ψ3   Γ     Λ      0      0      0  ⎥
⎢ ∗    ∗   −M3    0     −M̄2     0      0  ⎥
⎢ ∗    ∗    ∗   −Ξ̂N3    0    −Ξ̂N̄2    0  ⎥
⎢ ∗    ∗    ∗     ∗    −M̄1     0      0  ⎥
⎢ ∗    ∗    ∗     ∗     ∗    −Ξ̂N̄1    0  ⎥
⎣ ∗    ∗    ∗     ∗     ∗     ∗     −ιI ⎦ < 0,   (31)

where ψ̄1 = −(R̄P + PR̄) + (EW̄ + W̄TET) + M̄1 + N̄1 + zK̄, ψ̄2 = Υ − R̄P + EW̄ + M̄2 + N̄2, ψ3 = 2Υ + M3 + N3 − zI; then system (28) is passive. The verification is easily carried out with MATLAB.

Proof. The same LKF as in Theorem 1 is constructed:

V(t) = xT(t)Rx(t) + 2 Σni=1 ∫0xi(t) hi(s) ds + Σni=1 ∫ξit t κiT(ρ) M κi(ρ) dρ + Σni=1 ∫ξ̂it t κiT(θ) N κi(θ) dθ.   (32)

First, replacing −P in system (6) with −(P − EW) and repeating the same proof, system (28) is passive under conditions (8) and (9). Now calculate D+V(t); considering Assumption 1 and the relevant quantities of Definition 3, the following inequality is obtained:

V̇(t) − 2hT(t)ω(t) − ιωT(t)ω(t)
 ≤ 2xT(t)R(−(P − EW)x(t) + Υh(x(t)) + Γh(x(ξt)) + Λh(x(ξ̂t)) + ω(t))
 + 2hT(x(t))(−(P − EW)x(t) + Υh(x(t)) + Γh(x(ξt)) + Λh(x(ξ̂t)) + ω(t))
 + κT(t)Mκ(t) − κT(ξt)Mκ(ξt) + κT(t)Nκ(t) − κT(ξ̂t)Ξ̂Nκ(ξ̂t)
 − 2hT(t)ω(t) − ιωT(t)ω(t) + xT(t)K²z x(t) − hT(x(t)) z h(x(t)) = ζT(t) Π ζ(t),   (33)

where

Π =
⎡ ψ1   ψ2   RΓ    RΛ     0      0      R  ⎤
⎢ ∗    ψ3   Γ     Λ      0      0      0  ⎥
⎢ ∗    ∗   −M3    0     −M2     0      0  ⎥
⎢ ∗    ∗    ∗   −Ξ̂N3    0    −Ξ̂N2    0  ⎥
⎢ ∗    ∗    ∗     ∗    −M1     0      0  ⎥
⎢ ∗    ∗    ∗     ∗     ∗    −Ξ̂N1    0  ⎥
⎣ ∗    ∗    ∗     ∗     ∗     ∗     −ιI ⎦ < 0,   (34)

and ψ1 = −(P R + RP ) + REW + W ER + M1 + N1 + zK 2 , ψ2 = Rϒ − P + M2 + N2 + EW . The inequality (34) is a quadratic matrix inequality but not LMI as for that ψ is quadratic. A congruence transformation is performed to (34) by diag{R−1 , I, I, I, R−1 , R−1 , I}, ¯ = W R−1 , M¯ 1 = R−1 M−1 R−1 , N¯ 2 = R−1 N2 , N¯1 = R−1 N1 R−1 , K¯ = and the related matrix variables are changed: R¯ = R−1 , W 1 −1 2 −1 −1 R K R , M¯ 2 = R M2 . Then by integrating the function (33), system (28) satisfies Definition 3, such that system (28) is passive.  Remark 3. It can be seen that (10) and (31) are LMIs which subject to the scalar ι. Then the scalar ι can be included as an variable to reduce the passivity performance bound. The minimum passivity performance bound can be found through solving convex optimization problem stated as follow: (Here is an example of (10) and the results are implemented by using the matlab toolbox.) Minimize ι subject to (10) with R = (ri j )n×n , N1 > 0, N3 > 0, N2T = N2 , M1 > 0, M3 > 0, M2T = M2 , diagonal matrices K¯ = ˆ = diag(ξˆ1 , ξˆ2 , . . . , ξˆn ),  = diag(ξ1 , ξ2 , . . . , ξn ), with k¯ i > 0, ξˆi > 0, ξ i > 0 and z > 0. diag(k¯ 1 , k¯ 2 , . . . , k¯ n ),  Consider system (28) without impulse, under Assumption 1 and through the same proof method, the corresponding Corollary 2 is obtained: Corollary 2. Applying Assumption 1 into MRNNs (28) without impulse, it is easily to prove the passivity if there exist ¯ , diagonal matrices K¯ = diag(k¯ 1 , k¯ 2 , . . . , k¯ n ), these matrices: R¯ = (ri j )n×n , M¯ 1 > 0, M3 > 0, M¯ 2 = M¯ 2T , N¯ 1 > 0, N3 > 0, N¯ 2 = N¯ 2T , W ˆ = diag(ξˆ1 , ξˆ2 , . . . , ξˆn ),  = diag(ξ1 , ξ2 , . . . , ξn ), with k¯ i > 0, ξˆi > 0, ξ i > 0, and scalars ι > 0, z > 0, zi > 0, i = 1, 2, such that  ψ¯ i > 0,  < 0.


4. Numerical simulations

In this section, to demonstrate the effectiveness of the theoretical results, two experiments are presented.

Example 1. Consider the two-dimensional MRNN model with multi-proportional delays and impulse:

ẋ(t) = −P x(t) + Υ h(x(t)) + Γ h(x(ξt)) + Λ h(x(ξ̂t)) + ω(t),

where

P = [ 6  0 ; 0  5 ],  Υ = [ −2.1  0.3 ; 0.3  0.7 ],  Γ = [ −1.3  0.2 ; 0.7  0.8 ],  Λ = [ −2.1  1.3 ; 0.8  0.5 ].

Feasible solutions are easily obtained with the LMI toolbox in MATLAB; the resulting matrices are

R = [ 5.9987  0.5556 ; 0.5556  10.6191 ],

N = ⎡  9.8483   0.8507   3.8661  −1.0617 ⎤
    ⎢  0.8507  10.5911  −1.0617   2.6196 ⎥
    ⎢  3.8661  −1.0617  11.7617   0.1998 ⎥
    ⎣ −1.0617   2.6196   0.1998  12.1228 ⎦,

M = ⎡ 11.3144   1.0495   5.1920  −1.3812 ⎤
    ⎢  1.0495  12.2494  −1.3812   3.6117 ⎥
    ⎢  5.1920  −1.3812  12.5524   0.3392 ⎥
    ⎣ −1.3812   3.6117   0.3392  13.6682 ⎦,

so system (6) is passive. Let hi(x) = 0.5 tanh(x), i = 1, 2, ξ = 0.02, ξ̂ = 0.03, ω(t) = [0; 0], Eg = 0.76, and write F̃0ij = D+hi(xi(t)) − D+xj(t), F̃1ij = D+hi(xi(ξt)) − D+xj(t), F̃2ij = D+hi(xi(ξ̂t)) − D+xj(t).

υ11(x1(t)) = −2.1 if F̃011 < 0; −2.4 if F̃011 > 0; υ11(t−) if F̃011 = 0.
υ12(x2(t)) = 0.3 if F̃021 < 0; 0.5 if F̃021 > 0; υ12(t−) if F̃021 = 0.
υ21(x1(t)) = 0.3 if F̃012 < 0; 0.7 if F̃012 > 0; υ21(t−) if F̃012 = 0.
υ22(x2(t)) = −2.1 if F̃022 < 0; −2.4 if F̃022 > 0; υ22(t−) if F̃022 = 0.
γ11(x1(t)) = −1.3 if F̃111 < 0; −1.5 if F̃111 > 0; γ11(t−) if F̃111 = 0.
γ12(x2(t)) = 0.2 if F̃121 < 0; 0.5 if F̃121 > 0; γ12(t−) if F̃121 = 0.
γ21(x1(t)) = 0.7 if F̃112 < 0; −0.3 if F̃112 > 0; γ21(t−) if F̃112 = 0.
γ22(x2(t)) = 0.8 if F̃122 < 0; 1.0 if F̃122 > 0; γ22(t−) if F̃122 = 0.
λ11(x1(t)) = −2.1 if F̃211 < 0; −1.9 if F̃211 > 0; λ11(t−) if F̃211 = 0.
λ12(x2(t)) = 1.3 if F̃221 < 0; 1.2 if F̃221 > 0; λ12(t−) if F̃221 = 0.
λ21(x1(t)) = 0.8 if F̃212 < 0; 0.7 if F̃212 > 0; λ21(t−) if F̃212 = 0.
λ22(x2(t)) = 0.5 if F̃222 < 0; 1.0 if F̃222 > 0; λ22(t−) if F̃222 = 0.

The initial condition is x0 = [5; 4]. Fig. 1(a) shows the corresponding state trajectories of the MRNN with both proportional delays and impulse; Fig. 1(b) illustrates the state trajectories of the MRNN with proportional delays but without impulse.

Example 2. This example considers model (6) with the control input added, as in (28). The matrices are the same as in Example 1, and E is assigned as E = [ 1  0 ; 0  2 ]. The feasible solutions obtained by the LMI toolbox in MATLAB are

R̄ = [ 57.9748  −5.3965 ; −5.3965  57.9836 ],  W̄ = [ 175.1108  9.9366 ; −9.9366  5.4213 ],

M̄ = ⎡ 55.4596    0.7109    1.0010   −2.5168 ⎤
    ⎢  0.7109   31.0685    2.5168  −26.8627 ⎥
    ⎢  1.0010   −2.5168   63.0495   −0.1844 ⎥
    ⎣ −2.5168  −26.8627   −0.1844   60.6918 ⎦,

Fig. 1. State trajectories of MRNNs (6) with ω(t) = [0; 0]T. (a) State x(t) of system (6). (b) State x(t) of system (6) without impulse.
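The qualitative behavior in Fig. 1 can be reproduced with a simple forward-Euler sketch (ours; the paper's figures were produced in MATLAB). The proportional delays are handled by reading the stored trajectory directly at ξt and ξ̂t. Assumptions of ours: the upper switching weights are used throughout, and the impulse instants t_imp below are illustrative, with jump gain Eg = 0.76 taken from Example 1.

```python
import math

# Forward-Euler simulation sketch of the Example 1 system.
P = [[6.0, 0.0], [0.0, 5.0]]
U = [[-2.1, 0.3], [0.3, 0.7]]      # Upsilon
G = [[-1.3, 0.2], [0.7, 0.8]]      # Gamma
Lm = [[-2.1, 1.3], [0.8, 0.5]]     # Lambda
xi_f, xih_f, dt, T = 0.02, 0.03, 1e-3, 5.0
h = lambda v: 0.5 * math.tanh(v)

x0 = [5.0, 4.0]
n_steps = int(T / dt)
hist = [list(x0)]                  # hist[k] approximates x(k * dt)

def state_at(time):
    """Stored state at the (delayed) time instant."""
    k = min(int(time / dt), len(hist) - 1)
    return hist[k]

t_imp = [1.5, 2.5, 3.5]            # assumed impulse instants (illustrative)
for k in range(1, n_steps + 1):
    t = k * dt
    x = hist[-1]
    xd, xdh = state_at(xi_f * t), state_at(xih_f * t)
    new = []
    for i in range(2):
        rhs = -P[i][i] * x[i]
        for j in range(2):
            rhs += U[i][j] * h(x[j]) + G[i][j] * h(xd[j]) + Lm[i][j] * h(xdh[j])
        new.append(x[i] + dt * rhs)
    if any(abs(t - s) < dt / 2 for s in t_imp):
        new = [0.76 * v for v in new]   # impulse x(tg+) = 0.76 x(tg-)
    hist.append(new)

print(hist[-1])   # the trajectories settle to small bounded values
```

Because ξ and ξ̂ are small, the delayed arguments ξt, ξ̂t stay close to the start of the run, so early history keeps influencing the dynamics; the strong self-feedback −P still drives the state into a small bounded region, matching the settling seen in Fig. 1.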

Fig. 2. State trajectories of MRNNs (28). (a) State x(t) of MRNNs (28). (b) State x(t) of MRNNs (28) without impulse.





N̄ = ⎡ 47.9935    0.6175    0.5120   −2.4399 ⎤
    ⎢  0.6175   27.4633   −2.4399  −20.9349 ⎥
    ⎢  0.5120   −2.4399   54.3947   −1.0600 ⎥
    ⎣ −2.4399  −20.9349   −1.0600   50.9543 ⎦,

such that W = W̄ R̄−1 = [ 3.0630  0.4564 ; −0.1641  0.0782 ].

Under the same conditions as in Example 1, Fig. 2(a) shows the corresponding state trajectories of system (28), and Fig. 2(b) shows the state trajectories of (6) with the controller but without impulse. Figs. 1 and 2 show that systems (6) and (28) are stable, confirming the corresponding theoretical results. Comparing panels (a) and (b) of Figs. 1 and 2, it can easily be seen that the system with impulse reaches the steady state more quickly than the system without impulse; the function of the impulse is thus simulated successfully.

Remark 4. In this paper, the passivity and passification of MRNNs with multi-proportional delays are mainly investigated. For handling proportional delays, much work remains to be carried out. First, on the topic of this paper, more effective methods can be used to reduce conservativeness and improve our results. Moreover, considering other models, such as a network-based framework, would broaden the applicability of this analytical method. Utilizing event-triggered or sampled-data controllers as in [63,64], or other controllers, is also meaningful for processes with proportional delays. All of these are worthy next steps.


5. Conclusion

This paper studied the passivity and passification of memristive recurrent neural networks subject to multi-proportional delays and impulse. Unlike the delays considered in previous research, the proportional delays here are unbounded, and handling the resulting unbounded coefficients is itself a challenge that differs from general time-varying delay processing. Impulse effects were also taken into account, which effectively drive the system to its stable state more quickly. Relaxed passivity criteria with reduced conservativeness were obtained without requiring all the symmetric matrices to be positive definite. Finally, two numerical examples verified the effectiveness of the newly proposed criteria. In future work, free-weighting matrices and other methods may further reduce the conservatism of the derived results, the analysis can be extended to more neural network models, and designing controllers for processes with proportional delays is another promising topic.

References

[1] L. Chua, Memristor-the missing circuit element, IEEE Trans. Circuit Theory 18 (5) (1971) 507–519.
[2] M. Teimoory, A. Amirsoleimani, A. Ahmadi, S. Alirezaee, S. Salimpour, M. Ahmadi, Memristor-based linear feedback shift register based on material implication logic, in: 2015 European Conference on Circuit Theory and Design (ECCTD), IEEE, 2015, pp. 1–4.
[3] W. Niu, Z. Feng, M. Zeng, B. Feng, Y. Min, C. Cheng, J. Zhou, Forecasting reservoir monthly runoff via ensemble empirical mode decomposition and extreme learning machine optimized by an improved gravitational search algorithm, Appl. Soft Comput. 82 (105589) (2019) 1–11.
[4] W. Niu, Z. Feng, C. Cheng, J. Zhou, Forecasting daily runoff by extreme learning machine based on quantum-behaved particle swarm optimization, J. Hydrol. Eng.-ASCE 23 (3) (2018) 1–15.
[5] W. Niu, Z. Feng, Y. Min, B. Feng, C. Cheng, J. Zhou, Comparison of multiple linear regression, artificial neural network, extreme learning machine and support vector machine in deriving hydropower reservoir operation rule, Water 11 (1) (2019) 88–100.
[6] Z. Feng, W. Niu, C. Cheng, China's large-scale hydropower system: operation characteristics, modeling challenge and dimensionality reduction possibilities, Renew. Energy 136 (2019) 805–818.
[7] Z. Feng, W. Niu, R. Zhang, S. Wang, J. Zhou, C. Cheng, Operation rule derivation of hydropower reservoir by k-means clustering method and extreme learning machine based on particle swarm optimization, J. Hydrol. 576 (2019) 229–238.
[8] S.H. Jo, T. Chang, I. Ebong, B.B. Bhadviya, P. Mazumder, W. Lu, Nanoscale memristor device as synapse in neuromorphic systems, Nano Lett. 10 (4) (2010) 1297–1301.
[9] S. Wen, W. Liu, Y. Yang, Z. Zeng, T. Huang, Generating realistic videos from keyframes with concatenated GANs, IEEE Trans. Circuits Syst. Video Technol. 99 (2018) 1–11.
[10] G. Ren, Y. Cao, S. Wen, Z. Zeng, T. Huang, A modified Elman neural network with a new learning rate, Neurocomputing 286 (2018) 11–18.
[11] M. Dong, S. Wen, Z. Zeng, Z. Yan, T. Huang, Sparse fully convolutional network for face labeling, Neurocomputing 331 (2019) 465–472.
[12] Z. Li, M. Dong, S. Wen, X. Hu, P. Zhou, Z. Zeng, CLU-CNNs: object detection for medical images, Neurocomputing 350 (2019) 53–59.
[13] S. Wen, M.Z.Q. Chen, X. Yu, Z. Zeng, T. Huang, Fuzzy control for uncertain vehicle active suspension systems via dynamic sliding-mode approach, IEEE Trans. Syst. Man Cybern.: Syst. 47 (2017) 24–32.
[14] S. Wen, T. Huang, X. Yu, M.Z.Q. Chen, Z. Zeng, Aperiodic sampled-data sliding-mode control of fuzzy systems with communication delays via the event-triggered method, IEEE Trans. Fuzzy Syst. 24 (2016) 1048–1057.
[15] Q. Zou, P. Xing, L. Wei, B. Liu, Gene2vec: gene subsequence embedding for prediction of mammalian N6-methyladenosine sites from mRNA, RNA 25 (2) (2019) 205–218.
[16] L. Wei, Y. Ding, R. Su, J. Tang, Q. Zou, Prediction of human protein subcellular localization using deep learning, J. Parallel Distrib. Comput. 117 (2018) 212–217.
[17] S.P. Adhikari, C. Yang, H. Kim, L.O. Chua, Memristor bridge synapse-based neural network and its learning, IEEE Trans. Neural Netw. Learn. Syst. 23 (9) (2012) 1426–1435.
[18] S. Wen, S. Xiao, Y. Yang, Z. Yan, Z. Zeng, T. Huang, Adjusting the learning rate of memristor-based multilayer neural networks via fuzzy method, IEEE Trans. Comput. Aided Des. Integr. Circuits Syst. 38 (6) (2019) 1084–1094.
[19] S. Wen, R. Hu, Y. Yang, Z. Zeng, T. Huang, Y.-D. Song, Memristor-based echo state network with online least mean square, IEEE Trans. Syst. Man Cybern.: Syst. 99 (2018) 1–10.
[20] S. Wen, H. Wei, Y. Yang, Z. Guo, Z. Zeng, T. Huang, Y. Chen, Memristive LSTM networks for sentiment analysis, IEEE Trans. Syst. Man Cybern.: Syst. 99 (2019) 1–11.
[21] J. Cheng, J.H. Park, J. Cao, W. Qi, Hidden Markov model-based nonfragile state estimation of switched neural network with probabilistic quantized outputs, IEEE Trans. Cybern. (2019).
[22] J. Cheng, J.H. Park, X. Zhao, J. Cao, W. Qi, Static output feedback control of switched systems with quantization: a nonhomogeneous sojourn probability approach, Int. J. Robust Nonlinear Control (2019).
[23] J. Cheng, Y. Zhan, Nonstationary l1−l∞ filtering for Markov switching repeated scalar nonlinear systems with randomly occurring nonlinearities, Appl. Math. Comput. 365 (124714) (2020).
[24] H. Shen, T. Wang, J. Cao, G. Lu, Y. Song, T. Huang, Nonfragile dissipative synchronization for Markovian memristive neural networks: a gain-scheduled control scheme, IEEE Trans. Neural Netw. Learn. Syst. 30 (6) (2018) 1841–1853.
[25] H. Shen, S. Huo, H. Yan, J.H. Park, V. Sreeram, Distributed dissipative state estimation for Markov jump genetic regulatory networks subject to round-robin scheduling, IEEE Trans. Neural Netw. Learn. Syst. (2019).
[26] X. Hu, J. Xia, Y. Wei, B. Meng, H. Shen, Passivity-based state synchronization for semi-Markov jump coupled chaotic neural networks with randomly occurring time delays, Appl. Math. Comput. 361 (2019) 32–41.
[27] Y. Men, X. Huang, Z. Wang, H. Shen, B. Chen, Quantized asynchronous dissipative state estimation of jumping neural networks subject to occurring randomly sensor saturations, Neurocomputing 291 (2018) 207–214.
[28] M. Dai, J. Xia, H. Xia, H. Shen, Event-triggered passive synchronization for Markov jump neural networks subject to randomly occurring gain variations, Neurocomputing 331 (2019) 403–411.
[29] M. Dai, Z. Huang, J. Xia, B. Meng, J. Wang, H. Shen, Non-fragile extended dissipativity-based state feedback control for 2-D Markov jump delayed systems, Appl. Math. Comput. 362 (124571) (2019).
[30] L. Shen, X. Yang, J. Wang, J. Xia, Passive gain-scheduling filtering for jumping linear parameter varying systems with fading channels based on the hidden Markov model, Proc. Inst. Mech. Eng. Part I 233 (1) (2019) 67–79.
[31] L. Fang, L. Ma, S. Ding, D. Zhao, Finite-time stabilization for a class of high-order stochastic nonlinear systems with an output constraint, Appl. Math. Comput. 358 (2019) 63–79.
[32] Z. Guo, J. Wang, Z. Yan, Passivity and passification of memristor-based recurrent neural networks with time-varying delays, IEEE Trans. Neural Netw. Learn. Syst. 25 (11) (2014) 2099–2109.
[33] R. Rakkiyappan, A. Chandrasekar, J. Cao, Passivity and passification of memristor-based recurrent neural networks with additive time-varying delays, IEEE Trans. Neural Netw. Learn. Syst. 26 (9) (2014) 2043–2057.


[34] R. Rakkiyappan, K. Sivaranjani, G. Velmurugan, Passivity and passification of memristor-based complex-valued recurrent neural networks with interval time-varying delays, Neurocomputing 144 (2014) 391–407.
[35] H. Wang, S. Duan, T. Huang, L. Wang, C. Li, Exponential stability of complex-valued memristive recurrent neural networks, IEEE Trans. Neural Netw. Learn. Syst. 28 (3) (2016) 766–771.
[36] H. Liu, Z. Wang, B. Shen, F.E. Alsaadi, State estimation for discrete-time memristive recurrent neural networks with stochastic time-delays, Int. J. Gen. Syst. 45 (5) (2016) 633–647.
[37] A. Wu, S. Wen, Z. Zeng, Synchronization control of a class of memristor-based recurrent neural networks, Inf. Sci. 183 (1) (2012) 106–116.
[38] S. Wen, Z. Zeng, T. Huang, Y. Chen, Passivity analysis of memristor-based recurrent neural networks with time-varying delays, J. Franklin Inst. 350 (8) (2013) 2354–2370.
[39] J. Qi, C. Li, T. Huang, Stability of delayed memristive neural networks with time-varying impulses, Cogn. Neurodyn. 8 (5) (2014) 429–436.
[40] W. Yang, W. Yu, J. Cao, F.E. Alsaadi, T. Hayat, Global exponential stability and lag synchronization for delayed memristive fuzzy Cohen–Grossberg BAM neural networks with impulses, Neural Netw. 98 (2018) 122–153.
[41] K. Mathiyalagan, J.H. Park, R. Sakthivel, Synchronization for delayed memristive BAM neural networks using impulsive control with random nonlinearities, Appl. Math. Comput. 259 (2015) 967–979.
[42] Y. Zhou, C. Li, L. Chen, T. Huang, Global exponential stability of memristive Cohen–Grossberg neural networks with mixed delays and impulse time window, Neurocomputing 275 (2018) 2384–2391.
[43] D. Yang, X. Li, J. Qiu, Output tracking control of delayed switched systems via state-dependent switching and dynamic output feedback, Nonlinear Anal. 32 (2019) 294–305.
[44] X. Yang, X. Li, Q. Xi, P. Duan, Review of stability and stabilization for impulsive delayed systems, Math. Biosci. Eng. 15 (2018) 1495–1515.
[45] R. Li, J. Cao, Stability analysis of reaction-diffusion uncertain memristive neural networks with time-varying delays and leakage term, Appl. Math. Comput. 278 (2016) 54–69.
[46] J. Xiao, S. Zhong, Y. Li, New passivity criteria for memristive uncertain neural networks with leakage and time-varying delays, ISA Trans. 59 (2015) 133–148.
[47] X. Wang, C. Li, T. Huang, L. Chen, Dual-stage impulsive control for synchronization of memristive chaotic neural networks with discrete and continuously distributed delays, Neurocomputing 149 (2015) 621–628.
[48] A. Wu, Z. Zeng, Lagrange stability of memristive neural networks with discrete and distributed delays, IEEE Trans. Neural Netw. Learn. Syst. 25 (4) (2013) 690–703.
[49] S. Wen, X. Xie, Z. Yan, T. Huang, Z. Zeng, General memristor with applications in multilayer neural networks, Neural Netw. 103 (2018) 142–148.
[50] Y. Cao, Y. Cao, S. Wen, Z. Zeng, T. Huang, Passivity analysis of reaction-diffusion memristor-based neural networks with and without time-varying delays, Neural Netw. 109 (2019) 159–167.
[51] X. Xie, S. Wen, Z. Zeng, T. Huang, Memristor-based circuit implementation of pulse-coupled neural network with dynamical threshold generator, Neurocomputing 284 (2018) 10–16.
[52] Z. Guo, S. Gong, S. Wen, T. Huang, Event based synchronization control for memristive neural networks with time-varying delay, IEEE Trans. Cybern. 49 (2019) 3268–3277.
[53] Z. Guo, L. Liu, J. Wang, Event based synchronization control for memristive neural networks with time-varying delay, IEEE Trans. Neural Netw. Learn. Syst. 30 (2019) 2052–2066.
[54] Z. Guo, S. Gong, T. Huang, Finite-time synchronization of inertial memristive neural networks with time delay via delay-dependent control, Neurocomputing 108 (2018) 260–271.
[55] Z. Guo, S. Gong, S. Yang, T. Huang, Global exponential synchronization of multiple coupled inertial memristive neural networks with time-varying delay via nonlinear coupling, Neural Netw. 108 (2018) 260–271.
[56] S. Wang, Z. Guo, S. Wen, T. Huang, Finite/fixed-time synchronization of delayed memristive reaction-diffusion neural networks, Neurocomputing (2019).
[57] S. Wang, Y. Cao, T. Huang, S. Wen, Passivity and passification of memristive neural networks with leakage term and time-varying delays, Appl. Math. Comput. 361 (2019) 294–310.
[58] W. Wang, L. Li, H. Peng, J. Kurths, J. Xiao, Y. Yang, Anti-synchronization control of memristive neural networks with multiple proportional delays, Neural Process. Lett. 43 (1) (2016) 269–283.
[59] J.-P. Aubin, H. Frankowska, Set-Valued Analysis, Springer Science & Business Media, 2009.
[60] A.F. Filippov, Differential Equations with Discontinuous Righthand Sides: Control Systems, vol. 18, Springer Science & Business Media, 2013.
[61] E. Fridman, U. Shaked, On delay-dependent passivity, IEEE Trans. Autom. Control 47 (4) (2002) 664–669.
[62] L. Zhou, Delay-dependent and delay-independent passivity of a class of recurrent neural networks with impulse and multi-proportional delays, Neurocomputing 308 (2018) 235–244.
[63] J. Wang, T. Ru, J. Xia, Y. Wei, Z. Wang, Finite-time synchronization for complex dynamic networks with semi-Markov switching topologies: an H∞ event-triggered control scheme, Appl. Math. Comput. 356 (2019) 235–251.
[64] Z. Wang, L. Shen, J. Xia, H. Shen, J. Wang, Finite-time non-fragile l2−l∞ control for jumping stochastic systems subject to input constraints via an event-triggered mechanism, J. Franklin Inst. 355 (14) (2018) 6371–6389.