Author's Accepted Manuscript

Passivity and passification of memristor-based complex-valued recurrent neural networks with interval time-varying delays
R. Rakkiyappan, K. Sivaranjani, G. Velmurugan

To appear in: Neurocomputing
www.elsevier.com/locate/neucom
PII: S0925-2312(14)00625-0
DOI: http://dx.doi.org/10.1016/j.neucom.2014.04.034
Reference: NEUCOM14197
Received date: 20 November 2013; Revised date: 8 April 2014; Accepted date: 23 April 2014

Cite this article as: R. Rakkiyappan, K. Sivaranjani, G. Velmurugan, Passivity and passification of memristor-based complex-valued recurrent neural networks with interval time-varying delays, Neurocomputing, http://dx.doi.org/10.1016/j.neucom.2014.04.034
Passivity and passification of memristor-based complex-valued recurrent neural networks with interval time-varying delays

R. Rakkiyappan, K. Sivaranjani, G. Velmurugan
Department of Mathematics, Bharathiar University, Coimbatore - 641 046, Tamilnadu, India.

Abstract: In this paper, we investigate the passivity and passification of memristor-based complex-valued recurrent neural networks (MCVNNs) with interval time-varying delays. By constructing a proper Lyapunov-Krasovskii functional and using the characteristic function method, passivity conditions are derived in terms of linear matrix inequalities (LMIs). Then, based on the derived passivity condition, a state feedback controller is designed which ensures that the MCVNNs with interval time-varying delays are passive. Finally, numerical examples are given to illustrate the effectiveness of the proposed theoretical results.

Keywords: Passivity, Passification, Complex-valued neural networks, Memristor, Linear matrix inequality (LMI).
1 Introduction
In circuit theory there are four fundamental circuit variables: current, voltage, charge and flux, which give six possible pairwise combinations. Of these six combinations, five were well defined; the exception was the relationship between charge and flux. In 1971, Chua postulated this missing circuit element and named it the memristor [1]. On May 1, 2008, thirty-seven years later, Stanley Williams and his group at HP Labs realized the memristor in device form. The memristor, a contraction of "memory resistor", is a two-terminal passive device [2]. Memristors can be used as non-volatile memory, allowing greater data density than hard drives. The synapse is a crucial element in biological neural networks, but a simple electronic equivalent of it was long absent. Because of the many similarities between the memristor and the synapse in the human brain, many applications of memristors have been identified [3]-[5]. In the electronic circuit implementation of neural networks, time delay plays an important role due to the finite switching speed of amplifiers. Time delay is often a main source of oscillation, instability and poor performance. On the other hand,
Corresponding author at: Department of Mathematics, Bharathiar University, Coimbatore - 641 046, Tamilnadu, India. Email: [email protected] (R. Rakkiyappan).
the interval time-varying delay has a strong application background and commonly arises in many practical systems. Thus, the investigation of systems with interval time-varying delays has attracted considerable attention. Many applications of neural networks depend upon the stability properties of the network [6], [7]. Thus, many researchers have become interested in the dynamical behavior of memristor-based recurrent neural networks with time delay. In the existing literature [8]-[12], the authors investigated stability properties such as multistability, global exponential stability, synchronization, anti-synchronization, and exponential synchronization of memristor-based recurrent neural networks with time delay. Passivity theory is closely related to circuit analysis methods and provides a powerful tool for analyzing the stability of systems. The main idea of passivity theory is that the passive properties of a system can keep the system internally stable. The passification problem is also called the passive control problem; its objective is to design a controller such that the resulting closed-loop system is passive. Because of this feature, passivity and passification problems have been an active area of research in the past decades. Due to the occurrence of time delay, many researchers have focused on the passivity problem of delayed neural networks. In [13], the authors studied passivity and passification for stochastic Takagi-Sugeno fuzzy systems with mixed time-varying delays. Passivity and passification of time-delay systems were investigated in [14]. In [15], the authors studied the passivity analysis of stochastic time-delay neural networks via the Lyapunov-Krasovskii functional method, using weighting matrices. In addition, the problem of passivity analysis for various neural networks has also been widely investigated; see references [16]-[25].
In recent years, the stability analysis of complex-valued neural networks has played an important role because of its widespread applications in many fields. In complex-valued neural networks (CVNNs), the inputs, connection weights and activation functions are all complex-valued. In real-valued neural networks the activation functions are usually chosen to be smooth and bounded. In CVNNs, however, if we choose an activation function that is both analytic and bounded, then by Liouville's theorem it must reduce to a constant [28]. Therefore the choice of activation function is the main challenge in CVNNs. Different types of activation functions require different approaches to study the corresponding networks, and these approaches are quite different from those used for real-valued recurrent neural networks. CVNNs have more complicated properties than real-valued networks in both theoretical and practical applications. In [29], the global stability of complex-valued neural networks with both leakage time delay and discrete time delay is discussed. In [30]-[32], stability properties of complex-valued neural networks were studied. However, to the best of our knowledge,
there are no results addressing the passivity and passification analysis of memristor-based complex-valued recurrent neural networks with interval time-varying delays, which motivates the present study. Motivated by the above discussions, the objective of this paper is to study the passivity and passification of memristor-based complex-valued recurrent neural networks with interval time-varying delays. Some new passivity conditions are derived in the form of LMIs by employing the Lyapunov-Krasovskii functional method. A state feedback controller is designed which ensures that the given system is passive. Finally, numerical examples are given to illustrate the effectiveness of the proposed results. The rest of this paper is organized as follows: the model description and preliminaries are given in Section 2. In Section 3, some new passivity conditions are derived and the passification of the given system is discussed. Numerical examples are presented in Section 4. Finally, the conclusion is presented in Section 5.

Notations: Throughout this paper, $z(t) = x(t) + iy(t)$ denotes a complex-valued function, where $x(t), y(t) \in \mathbb{R}^n$. The superscripts $*$ and $T$ denote the complex conjugate transpose and the transpose of a matrix, respectively. For any matrix $P$, $P > 0$ ($P < 0$) means that $P$ is a positive definite (negative definite) matrix. The symbol $*$ in a matrix represents the elements below the main diagonal of a symmetric matrix. The solutions of all the systems considered in the following are intended in Filippov's sense. $co\{\tilde{\Pi}, \hat{\Pi}\}$ denotes the closure of the convex hull of $\mathbb{C}^n$ generated by the complex numbers $\tilde{\Pi}$ and $\hat{\Pi}$.
2 Model description and preliminaries
The memristor-based recurrent neural network can be implemented by very-large-scale integration circuits, with the connection weights implemented by memristors. By Kirchhoff's current law, a general class of memristor-based recurrent neural networks can be written in the form

$$C_p \frac{dx_p(t)}{dt} = -\left[\sum_{q=1}^{n}\big(M_{pq}(x_q(t)) + N_{pq}(x_q(t))\big) + \frac{1}{R_p}\right]x_p(t) + \sum_{q=1}^{n}\mathrm{sign}_{pq}\, f_q(x_q(t))\, M_{pq}(x_q(t)) + \sum_{q=1}^{n}\mathrm{sign}_{pq}\, f_q(x_q(t-\tau_p(t)))\, N_{pq}(x_q(t)) + I_p, \qquad (1)$$

where

$$\mathrm{sign}_{pq} = \begin{cases} 1, & p \neq q, \\ -1, & p = q, \end{cases} \qquad (2)$$
$M_{pq}$ and $N_{pq}$ are the memductances of the memristors, $\tau_p(t)$ is the time-varying delay, $x_p(t)$ is the voltage of the capacitor $C_p$, $f_q(x_q(t))$ and $f_q(x_q(t-\tau_p(t)))$ are the activation functions of $x_q(t)$ without and with time-varying delay, respectively, $R_p$ is the resistor parallel to the capacitor $C_p$, and $I_p$ is an external input or bias, where $p, q = 1, 2, \cdots, n$. Therefore

$$\frac{dx_p(t)}{dt} = -d_p x_p(t) + \sum_{q=1}^{n} a_{pq}(x_q(t)) f_q(x_q(t)) + \sum_{q=1}^{n} b_{pq}(x_q(t)) f_q(x_q(t-\tau_p(t))) + u_p, \qquad (3)$$

where

$$d_p = \frac{1}{C_p}\left[\sum_{q=1}^{n}\big(M_{pq}+N_{pq}\big) + \frac{1}{R_p}\right], \quad a_{pq}(x_q(t)) = \frac{\mathrm{sign}_{pq}\, M_{pq}}{C_p}, \quad b_{pq}(x_q(t)) = \frac{\mathrm{sign}_{pq}\, N_{pq}}{C_p}, \quad u_p = \frac{I_p}{C_p}.$$
In this paper, we consider the complex-valued memristor-based recurrent neural network model with interval time-varying delays described as follows:

$$\frac{dz(t)}{dt} = -Dz(t) + A(z)f(z(t)) + B(z)f(z(t-\tau(t))) + v(t), \qquad y(t) = f(z(t)), \qquad (4)$$

where $z(t) = (z_1(t), z_2(t), \cdots, z_n(t))^T$ is the state vector; $D = \mathrm{diag}\{d_1, d_2, \cdots, d_n\}$ is a positive diagonal matrix; $A = [a_{pq}]_{n\times n}$ and $B = [b_{pq}]_{n\times n}$ are the feedback connection weight matrix and the delayed feedback connection weight matrix, respectively; $y(t) = f(z(t)) = (f_1(z_1(t)), f_2(z_2(t)), \cdots, f_n(z_n(t)))^T$ represents the output of the neural network; $v(t) = (v_1(t), v_2(t), \cdots, v_n(t))^T \in \mathbb{C}^n$ is an exogenous input vector; and $\tau(t)$ is a continuous time-varying function satisfying

$$0 \le \tau_1 \le \tau(t) \le \tau_2, \qquad \dot{\tau}(t) \le \mu. \qquad (5)$$
Based on the features of the memristor, the complex-valued connection weights $a_{pq}(z(t))$ and $b_{pq}(z(t))$ are defined as follows:

$$a_{pq}(x_p(t)) = \begin{cases} \hat{a}_{pq}, & |x_p(t)| < \pi_p, \\ \check{a}_{pq}, & |x_p(t)| > \pi_p, \end{cases} \qquad a_{pq}(y_p(t)) = \begin{cases} \hat{a}_{pq}, & |y_p(t)| < \pi_p, \\ \check{a}_{pq}, & |y_p(t)| > \pi_p, \end{cases}$$

$$b_{pq}(x_p(t)) = \begin{cases} \hat{b}_{pq}, & |x_p(t)| < \pi_p, \\ \check{b}_{pq}, & |x_p(t)| > \pi_p, \end{cases} \qquad b_{pq}(y_p(t)) = \begin{cases} \hat{b}_{pq}, & |y_p(t)| < \pi_p, \\ \check{b}_{pq}, & |y_p(t)| > \pi_p, \end{cases}$$

for $p, q = 1, 2, \cdots, n$, where the switching jumps $\pi_p > 0$ and $\hat{d}, \check{d}, \hat{a}_{pq}, \check{a}_{pq}, \hat{b}_{pq}, \check{b}_{pq}$ are constants. The initial condition of model (4) is $z_p(s) = \phi_p(s)$, $s \in [-\tau_2, 0]$, where $\phi \in C([-\tau_2, 0], D)$. Denote by $z(t, \phi)$ the solution of model (4) with initial condition $(t_0, \phi)$; that is, $z(t, \phi)$ is continuous, satisfies equation (4), and $z_p(s, \phi) = \phi_p(s)$, $s \in [-\tau_2, 0]$. Let $x_p(t) = \mathrm{Re}(z_p(t))$, $y_p(t) = \mathrm{Im}(z_p(t))$, $a_{1pq} = \mathrm{Re}(a_{pq})$, $a_{2pq} = \mathrm{Im}(a_{pq})$, $b_{1pq} = \mathrm{Re}(b_{pq})$, $b_{2pq} = \mathrm{Im}(b_{pq})$, $v_{1p} = \mathrm{Re}(v_p)$, $v_{2p} = \mathrm{Im}(v_p)$, $\phi_{1p}(s) = \mathrm{Re}(\phi_p(s))$ and $\phi_{2p}(s) = \mathrm{Im}(\phi_p(s))$. Let $C([-\tau_2, 0], D)$ be the space of continuous functions mapping $[-\tau_2, 0]$ into $D \subset \mathbb{C}^n$ with the norm defined by $\|\phi\| = \max_{1\le i\le n}\{\sup_{s\in[-\tau_2,0]} |\phi_i(s)|\}$. Obviously, $C([-\tau_2, 0], D)$ is a Banach space. Separating the neural network (4) into real and imaginary parts, we get

$$\dot{x}_p(t) = -d_p x_p(t) + \sum_{q=1}^{n} a_{1pq}(x_q(t)) f_q(x_q(t)) - \sum_{q=1}^{n} a_{2pq}(y_q(t)) f_q(y_q(t)) + \sum_{q=1}^{n} b_{1pq}(x_q(t)) f_q(x_q(t-\tau(t))) - \sum_{q=1}^{n} b_{2pq}(y_q(t)) f_q(y_q(t-\tau(t))) + v_{1p}, \qquad (6)$$

$$\dot{y}_p(t) = -d_p y_p(t) + \sum_{q=1}^{n} a_{2pq}(y_q(t)) f_q(x_q(t)) + \sum_{q=1}^{n} a_{1pq}(x_q(t)) f_q(y_q(t)) + \sum_{q=1}^{n} b_{2pq}(y_q(t)) f_q(x_q(t-\tau(t))) + \sum_{q=1}^{n} b_{1pq}(x_q(t)) f_q(y_q(t-\tau(t))) + v_{2p}. \qquad (7)$$
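The state-dependent switching of the connection weights above admits a direct numerical sketch. The following Python fragment evaluates $a_{pq}(x_p(t))$ row by row; the matrices and thresholds are hypothetical illustrative values, not taken from the paper's examples.

```python
import numpy as np

# Hypothetical 2-neuron sketch of the state-dependent memristive weights:
# row p switches between \hat a_pq and \check a_pq according to whether
# |x_p| is below or above the switching jump pi_p, mirroring the piecewise
# definition above. All numeric values are illustrative only.

A_HAT = np.array([[2.0, -0.1],
                  [-0.3, 1.5]])      # \hat a_pq  (used when |x_p| < pi_p)
A_CHECK = np.array([[1.8, -0.2],
                    [-0.25, 1.4]])   # \check a_pq (used when |x_p| > pi_p)
PI = np.array([1.0, 1.0])            # switching jumps pi_p > 0

def switched_A(x):
    r"""Return A(x): row p uses \hat a_pq when |x_p| < pi_p, else \check a_pq."""
    below = (np.abs(x) < PI)[:, None]   # one switching decision per row p
    return np.where(below, A_HAT, A_CHECK)

x = np.array([0.5, 1.5])     # |x_0| < pi_0, |x_1| > pi_1
A = switched_A(x)            # row 0 from A_HAT, row 1 from A_CHECK
```

This state dependence is exactly what forces the differential-inclusion treatment that follows: the right-hand side of (6)-(7) is discontinuous in the state.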
The corresponding initial conditions of the neural networks (6) and (7) are given by $x_p(s) = \phi_{1p}(s)$ and $y_p(s) = \phi_{2p}(s)$, $s \in [-\tau_2, 0]$, $p = 1, 2, \cdots, n$. It follows from (6) and (7), using the theory of differential inclusions and set-valued maps, that for any $p = 1, 2, \cdots, n$,

$$\dot{x}_p(t) \in -d_p x_p(t) + \sum_{q=1}^{n} co\{\hat{a}_{1pq}, \check{a}_{1pq}\} f_q(x_q(t)) - \sum_{q=1}^{n} co\{\hat{a}_{2pq}, \check{a}_{2pq}\} f_q(y_q(t)) + \sum_{q=1}^{n} co\{\hat{b}_{1pq}, \check{b}_{1pq}\} f_q(x_q(t-\tau(t))) - \sum_{q=1}^{n} co\{\hat{b}_{2pq}, \check{b}_{2pq}\} f_q(y_q(t-\tau(t))) + v_{1p}, \qquad (8)$$

$$\dot{y}_p(t) \in -d_p y_p(t) + \sum_{q=1}^{n} co\{\hat{a}_{2pq}, \check{a}_{2pq}\} f_q(x_q(t)) + \sum_{q=1}^{n} co\{\hat{a}_{1pq}, \check{a}_{1pq}\} f_q(y_q(t)) + \sum_{q=1}^{n} co\{\hat{b}_{2pq}, \check{b}_{2pq}\} f_q(x_q(t-\tau(t))) + \sum_{q=1}^{n} co\{\hat{b}_{1pq}, \check{b}_{1pq}\} f_q(y_q(t-\tau(t))) + v_{2p}, \qquad (9)$$
or equivalently, for $p, q = 1, 2, \cdots, n$ and $t \ge 0$, there exist measurable functions $\Lambda_{1pq}(t) \in co\{\hat{a}_{1pq}, \check{a}_{1pq}\}$, $\Lambda_{2pq}(t) \in co\{\hat{a}_{2pq}, \check{a}_{2pq}\}$, $\Upsilon_{1pq}(t) \in co\{\hat{b}_{1pq}, \check{b}_{1pq}\}$, $\Upsilon_{2pq}(t) \in co\{\hat{b}_{2pq}, \check{b}_{2pq}\}$ such that

$$\dot{x}_p(t) = -d_p x_p(t) + \sum_{q=1}^{n} \Lambda_{1pq}(t) f_q(x_q(t)) - \sum_{q=1}^{n} \Lambda_{2pq}(t) f_q(y_q(t)) + \sum_{q=1}^{n} \Upsilon_{1pq}(t) f_q(x_q(t-\tau(t))) - \sum_{q=1}^{n} \Upsilon_{2pq}(t) f_q(y_q(t-\tau(t))) + v_{1p}, \qquad (10)$$

$$\dot{y}_p(t) = -d_p y_p(t) + \sum_{q=1}^{n} \Lambda_{2pq}(t) f_q(x_q(t)) + \sum_{q=1}^{n} \Lambda_{1pq}(t) f_q(y_q(t)) + \sum_{q=1}^{n} \Upsilon_{2pq}(t) f_q(x_q(t-\tau(t))) + \sum_{q=1}^{n} \Upsilon_{1pq}(t) f_q(y_q(t-\tau(t))) + v_{2p}. \qquad (11)$$
The parameters $\Lambda_{1pq}(t), \Lambda_{2pq}(t), \Upsilon_{1pq}(t), \Upsilon_{2pq}(t)$ $(p, q = 1, 2, \cdots, n)$ in (10) and (11) depend on the initial conditions of the neural networks (6) and (7) and on the time $t$. Clearly, for $p, q = 1, 2, \cdots, n$,

$$co\{\hat{a}_{1pq}, \check{a}_{1pq}\} = [\underline{a}_{1pq}, \overline{a}_{1pq}], \quad co\{\hat{a}_{2pq}, \check{a}_{2pq}\} = [\underline{a}_{2pq}, \overline{a}_{2pq}], \quad co\{\hat{b}_{1pq}, \check{b}_{1pq}\} = [\underline{b}_{1pq}, \overline{b}_{1pq}], \quad co\{\hat{b}_{2pq}, \check{b}_{2pq}\} = [\underline{b}_{2pq}, \overline{b}_{2pq}].$$

Since the inter-neuron connections are implemented by memristors whose memductance depends on the voltage applied across them, $a_{pq}$ are functions of $f_q(z_q(t)) - z_p(t)$, and $b_{pq}$ are functions of $f_q(z_q(t-\tau_{pq}(t))) - z_p(t)$. For convenience, we denote $a_{pq}(t) = a_{pq}(f_q(z_q(t)) - z_p(t))$ and $b_{pq}(t) = b_{pq}(f_q(z_q(t-\tau_{pq}(t))) - z_p(t))$, $p, q = 1, 2, \cdots, n$. Then the number of possible combinations of $A(z)$ and $B(z)$ is $2^{2n^2}$. We order the $2^{2n^2}$ cases in an arbitrary way as $(A_1, B_1), (A_2, B_2), \cdots, (A_{2^{2n^2}}, B_{2^{2n^2}})$. Therefore, at any fixed time $t \ge 0$, the pair $(A(z), B(z))$ must take one of these $2^{2n^2}$ forms; that is, there exists some $p_0 \in \{1, 2, \cdots, 2^{2n^2}\}$ such that $A(z) = A_{p_0}$ and $B(z) = B_{p_0}$. Hence,
at any time $t$, MCVNNs (4) has the following form:

$$\frac{dz(t)}{dt} = -Dz(t) + A_{p_0} f(z(t)) + B_{p_0} f(z(t-\tau(t))) + v(t), \qquad y(t) = f(z(t)). \qquad (12)$$
Next, for $p = 1, 2, \cdots, 2^{2n^2}$, we define the characteristic function of $A_p$ and $B_p$ at any fixed time $t$ as follows:

$$\mu_p(t) = \begin{cases} 1, & A(z) = A_p \text{ and } B(z) = B_p, \\ 0, & \text{otherwise}. \end{cases} \qquad (13)$$

Obviously, $\sum_{p=1}^{2^{2n^2}} \mu_p(t) = 1$. So MCVNNs (4) can be expressed as

$$\frac{dz(t)}{dt} = \sum_{p=1}^{2^{2n^2}} \mu_p(t)\big[-Dz(t) + A_p f(z(t)) + B_p f(z(t-\tau(t))) + v(t)\big] = -Dz(t) + A(t)f(z(t)) + B(t)f(z(t-\tau(t))) + v(t), \qquad y(t) = f(z(t)), \qquad (14)$$

where

$$A(t) = \sum_{p=1}^{2^{2n^2}} \mu_p(t) A_p \quad \text{and} \quad B(t) = \sum_{p=1}^{2^{2n^2}} \mu_p(t) B_p. \qquad (15)$$
Definition 1: MCVNNs (4) is said to be passive if there exists a scalar $\gamma$ such that

$$2\int_0^{T} y^*(t)v(t)\,dt \ge -\gamma \int_0^{T} v^*(t)v(t)\,dt \qquad (16)$$

for all $T \ge 0$ and all solutions of (4) with $\phi = 0$.

Definition 2: Let $E \subset \mathbb{C}^n$; $z \mapsto F(z)$ is called a set-valued map from $E$ to $\mathbb{C}^n$ if to each point $z$ of the set $E$ there corresponds a nonempty set $F(z) \subset \mathbb{C}^n$. A set-valued map $F$ with nonempty values is said to be upper semi-continuous at $z_0 \in E$ if, for any open set $N$ containing $F(z_0)$, there exists a neighborhood $M$ of $z_0$ such that $F(M) \subset N$. $F(z)$ is said to have a closed (convex, compact) image if, for each $z \in E$, $F(z)$ is closed (convex, compact).

Definition 3: For the system $dz/dt = F(z)$, $z \in \mathbb{C}^n$, with discontinuous right-hand side, a set-valued map is defined as $\Phi(z) = \bigcap_{\delta>0} \bigcap_{\mu(N)=0} co[F(B(z,\delta) \setminus N)]$, where $co(E)$ is the closure of the convex hull of the set $E$, $B(z,\delta) = \{z_1 : \|z_1 - z\| \le \delta\}$, and $\mu(N)$ is the Lebesgue measure of the set $N$.
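Definition 1 can be probed numerically on a toy system. The sketch below, a hypothetical scalar model $\dot z = -z + v$, $y = z$ with $z(0) = 0$ (so $\phi = 0$) and not one of the paper's examples, forward-Euler integrates the supply rate and checks that inequality (16) holds for an arbitrary $\gamma > 0$:

```python
import numpy as np

# Toy check of Definition 1 for dz/dt = -z + v(t), y = z, z(0) = 0.
# This simple complex-valued system is passive, so the accumulated supply
# 2*Re(integral of y* v) should stay >= -gamma * integral of |v|^2 for any
# gamma > 0. System and inputs are illustrative assumptions only.

def passivity_margin(T=10.0, dt=1e-3, gamma=0.1, seed=0):
    rng = np.random.default_rng(seed)
    n = int(T / dt)
    z = 0.0 + 0.0j
    supply, penalty = 0.0, 0.0
    for _ in range(n):
        v = complex(rng.standard_normal(), rng.standard_normal())
        y = z                                  # output y(t) = z(t)
        supply += 2 * (np.conj(y) * v).real * dt
        penalty += abs(v) ** 2 * dt
        z += (-z + v) * dt                     # forward-Euler step
    return supply + gamma * penalty            # >= 0 iff (16) holds here

margin = passivity_margin()
```

A nonnegative `margin` for every admissible input is exactly what (16) demands; for the MCVNNs of this paper the same check would be run on trajectories of (4).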
Lemma 1: [29] For any constant matrix $M \in \mathbb{R}^{n\times n}$ with $M > 0$, and any function $u(s) : [a, b] \to \mathbb{R}^n$ with scalars $a < b$, the following inequalities are satisfied:

$$\text{(i)} \quad \left(\int_a^b u(s)\,ds\right)^T M \left(\int_a^b u(s)\,ds\right) \le (b-a) \int_a^b u^T(s) M u(s)\,ds,$$

$$\text{(ii)} \quad \left(\int_a^b \int_{t+\theta}^{t} u(s)\,ds\,d\theta\right)^T M \left(\int_a^b \int_{t+\theta}^{t} u(s)\,ds\,d\theta\right) \le \frac{b^2 - a^2}{2} \int_a^b \int_{t+\theta}^{t} u^T(s) M u(s)\,ds\,d\theta.$$
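Lemma 1(i) is a Jensen-type integral inequality and can be sanity-checked by discretizing the integrals. A minimal NumPy check with a random positive definite $M$ and a smooth test signal (all data illustrative):

```python
import numpy as np

# Discretized check of Lemma 1(i): for M > 0 and u : [a, b] -> R^n,
#   (int_a^b u)^T M (int_a^b u)  <=  (b - a) * int_a^b u^T M u.
# Integrals are approximated by Riemann sums on a fine grid.

def jensen_gap(seed=0, n=3, a=0.0, b=2.0, m=20000):
    rng = np.random.default_rng(seed)
    X = rng.standard_normal((n, n))
    M = X @ X.T + n * np.eye(n)              # symmetric positive definite
    s = np.linspace(a, b, m)
    u = np.stack([np.sin((k + 1) * s) for k in range(n)])  # test signal u(s)
    ds = s[1] - s[0]
    iu = u.sum(axis=1) * ds                  # int_a^b u(s) ds  (vector)
    lhs = iu @ M @ iu
    rhs = (b - a) * ds * np.einsum('it,ij,jt->', u, M, u)  # (b-a)*int u^T M u
    return rhs - lhs                         # nonnegative when the lemma holds

gap = jensen_gap()
```

The gap is strictly positive here because the test signal is non-constant; equality in the lemma requires $u$ constant almost everywhere.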
Lemma 2: [29] For any constant Hermitian matrix $N \in \mathbb{C}^{n\times n}$ with $N > 0$, and any function $u(s) : [a, b] \to \mathbb{C}^n$ with scalars $a < b$, the following inequalities are satisfied:

$$\text{(i)} \quad \left(\int_a^b u(s)\,ds\right)^* N \left(\int_a^b u(s)\,ds\right) \le (b-a) \int_a^b u^*(s) N u(s)\,ds,$$

$$\text{(ii)} \quad \left(\int_a^b \int_{t+\theta}^{t} u(s)\,ds\,d\theta\right)^* N \left(\int_a^b \int_{t+\theta}^{t} u(s)\,ds\,d\theta\right) \le \frac{b^2 - a^2}{2} \int_a^b \int_{t+\theta}^{t} u^*(s) N u(s)\,ds\,d\theta.$$
Lemma 3: [32] Given a Hermitian matrix $Q$, $Q < 0$ is equivalent to

$$\begin{bmatrix} Q^R & -Q^I \\ Q^I & Q^R \end{bmatrix} < 0,$$

where $Q^R = \mathrm{Re}(Q)$ and $Q^I = \mathrm{Im}(Q)$.

Assumption 1: For any $p \in \{1, 2, \cdots, n\}$, $f_p(0) = 0$ and there exist constants $l_p^-$ and $l_p^+$ such that

$$l_p^- \le \frac{f_p(\alpha_1) - f_p(\alpha_2)}{\alpha_1 - \alpha_2} \le l_p^+$$

for all $\alpha_1 \neq \alpha_2$.

Assumption 2: Let $\mathrm{Re}(z)$ and $\mathrm{Im}(z)$ be the real and imaginary parts of a complex number $z$, respectively, and suppose $f_p(z)$ can be expressed as $f_p(z) = f_p^R(\mathrm{Re}(z)) + i f_p^I(\mathrm{Im}(z))$, where $f_p^R(\cdot), f_p^I(\cdot) : \mathbb{R} \to \mathbb{R}$ for all $p = 1, 2, \cdots, n$. Then, for $\alpha_1, \alpha_2 \in \mathbb{R}$ with $\alpha_1 \neq \alpha_2$,

$$l_p^{R-} \le \frac{f_p^R(\alpha_1) - f_p^R(\alpha_2)}{\alpha_1 - \alpha_2} \le l_p^{R+}, \qquad l_p^{I-} \le \frac{f_p^I(\alpha_1) - f_p^I(\alpha_2)}{\alpha_1 - \alpha_2} \le l_p^{I+}$$

for all $p = 1, 2, \cdots, n$.

Remark 1: Under Assumption 1, by the existence theorem for Filippov solutions, there is at least a local solution $x(t) = (x_1(t), x_2(t), \cdots, x_n(t))^T$ of the proposed system, which is essentially bounded. Because the neuron activation functions $f_i$, $i = 1, 2, \cdots, n$, of the proposed system are bounded and satisfy Lipschitz conditions, the local solution $x(t)$ can be defined on the interval $[0, t_0]$, $t_0 > 0$, in the sense of Filippov.
3 Passivity and passification analysis

Passivity analysis: In this section, by using the Lyapunov functional method and the LMI technique, we derive some sufficient conditions for passivity.

Theorem 1: Suppose that Assumption 1 holds. MCVNNs (4) is passive if there exist positive definite Hermitian matrices $P_1, Q_1, Q_2, Q_3, Q_4, Q_5, Q_6$, positive diagonal matrices $G = \mathrm{diag}\{g_1, g_2, \cdots, g_n\}$, $H = \mathrm{diag}\{h_1, h_2, \cdots, h_n\}$, an appropriately dimensioned matrix $R_1$ and a scalar $\gamma > 0$ such that for $p = 1, 2, \cdots, 2^{2n^2}$,

$$\Phi = \begin{bmatrix}
\Phi_{1,1} & \Phi_{1,2}^{(p)} & \Phi_{1,3}^{(p)} & R_1 & 0 & 0 & 0 & 0 & 0 & 0 & \Phi_{1,11} & \Phi_{1,12} \\
* & \Phi_{2,2} & 0 & -I & 0 & 0 & 0 & 0 & 0 & 0 & \Phi_{2,11}^{(p)} & 0 \\
* & * & \Phi_{3,3} & 0 & 0 & 0 & \Phi_{3,7} & 0 & 0 & 0 & \Phi_{3,11}^{(p)} & 0 \\
* & * & * & -\gamma I & 0 & 0 & 0 & 0 & 0 & 0 & \Phi_{4,11} & 0 \\
* & * & * & * & \Phi_{5,5} & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
* & * & * & * & * & \Phi_{6,6} & 0 & 0 & 0 & 0 & 0 & 0 \\
* & * & * & * & * & * & \Phi_{7,7} & 0 & 0 & 0 & 0 & 0 \\
* & * & * & * & * & * & * & -Q_4 & 0 & 0 & 0 & 0 \\
* & * & * & * & * & * & * & * & -Q_5 & 0 & 0 & 0 \\
* & * & * & * & * & * & * & * & * & -Q_5 & 0 & 0 \\
* & * & * & * & * & * & * & * & * & * & \Phi_{11,11} & 0 \\
* & * & * & * & * & * & * & * & * & * & * & -Q_6
\end{bmatrix} < 0, \qquad (17)$$

where $\Phi_{1,1} = -(DR_1 + R_1^T D) + Q_1 + Q_2 + Q_3 + \tau_1^2 Q_4 + \tau_{12}^2 Q_5 - (\tau_2 - \tau_1)^2 Q_6 - 2L_1 G$ (with $\tau_{12} = \tau_2 - \tau_1$), $\Phi_{1,2}^{(p)} = R_1 A_p + 2L_2 G$, $\Phi_{1,3}^{(p)} = R_1 B_p$, $\Phi_{1,11} = -R_1 + P_1 - R_1 D$, $\Phi_{1,12} = \frac{(\tau_1-\tau_2)^2}{2} Q_6 + \frac{\tau_2-\tau_1}{2} Q_6$, $\Phi_{2,2} = -2G$, $\Phi_{2,11}^{(p)} = R_1 A_p$, $\Phi_{3,3} = -2H$, $\Phi_{3,7} = 2L_2 H$, $\Phi_{3,11}^{(p)} = R_1 B_p$, $\Phi_{4,11} = R_1$, $\Phi_{5,5} = -Q_1 - 2L_1 H$, $\Phi_{6,6} = -Q_2 - 2H$, $\Phi_{7,7} = -(1-\mu)Q_3 - 2L_1 H$ and $\Phi_{11,11} = -R_1 - R_1^T + \left(\frac{\tau_2^2 - \tau_1^2}{2}\right)^2 Q_6$.
Proof: Consider the Lyapunov-Krasovskii functional

$$V(t) = V_1 + V_2 + V_3 + V_4, \qquad (18)$$

where

$$V_1 = z^*(t) P_1 z(t),$$

$$V_2 = \int_{t-\tau_1}^{t} z^*(s) Q_1 z(s)\,ds + \int_{t-\tau_2}^{t} z^*(s) Q_2 z(s)\,ds + \int_{t-\tau(t)}^{t} z^*(s) Q_3 z(s)\,ds,$$

$$V_3 = \int_{-\tau_1}^{0} \int_{t+s}^{t} \tau_1\, z^*(\theta) Q_4 z(\theta)\,d\theta\,ds + \int_{-\tau_2}^{-\tau_1} \int_{t+s}^{t} \tau_{12}\, z^*(\theta) Q_5 z(\theta)\,d\theta\,ds,$$

$$V_4 = \frac{\tau_1^2 - \tau_2^2}{2} \int_{-\tau_2}^{-\tau_1} \int_{\theta}^{0} \int_{t+\lambda}^{t} \dot{z}^*(s) Q_6 \dot{z}(s)\,ds\,d\lambda\,d\theta.$$
Calculating the upper right Dini derivative of $V_1$, $V_2$, $V_3$ and $V_4$ along the solutions of (14), we have

$$D^+ V_1 = 2 z^*(t) P_1 \dot{z}(t),$$

$$D^+ V_2 = z^*(t)(Q_1 + Q_2 + Q_3) z(t) - z^*(t-\tau_1) Q_1 z(t-\tau_1) - z^*(t-\tau_2) Q_2 z(t-\tau_2) - (1-\mu) z^*(t-\tau(t)) Q_3 z(t-\tau(t)),$$

$$D^+ V_3 = \tau_1^2 z^*(t) Q_4 z(t) + \tau_{12}^2 z^*(t) Q_5 z(t) - \int_{t-\tau_1}^{t} \tau_1\, z^*(\zeta) Q_4 z(\zeta)\,d\zeta - \int_{t-\tau_2}^{t-\tau_1} \tau_{12}\, z^*(\zeta) Q_5 z(\zeta)\,d\zeta,$$

$$\begin{aligned}
D^+ V_4 &\le \left(\frac{\tau_1^2 - \tau_2^2}{2}\right)^2 \dot{z}^*(t) Q_6 \dot{z}(t) - \left((\tau_2-\tau_1) z^*(t) - \int_{t-\tau_2}^{t-\tau_1} z^*(s)\,ds\right) Q_6 \left((\tau_2-\tau_1) z(t) - \int_{t-\tau_2}^{t-\tau_1} z(s)\,ds\right) \\
&= \left(\frac{\tau_1^2 - \tau_2^2}{2}\right)^2 \dot{z}^*(t) Q_6 \dot{z}(t) - (\tau_2-\tau_1)^2 z^*(t) Q_6 z(t) + (\tau_2-\tau_1) z^*(t) Q_6 \int_{t-\tau_2}^{t-\tau_1} z(s)\,ds \\
&\quad + (\tau_2-\tau_1) \int_{t-\tau_2}^{t-\tau_1} z^*(s)\,ds\, Q_6\, z(t) - \int_{t-\tau_2}^{t-\tau_1} z^*(s)\,ds\, Q_6 \int_{t-\tau_2}^{t-\tau_1} z(s)\,ds.
\end{aligned}$$
From Assumption 1, we have

$$\big(f_p(z_p(t)) - l_p^- z_p(t)\big)^* \big(f_p(z_p(t)) - l_p^+ z_p(t)\big) \le 0, \quad p = 1, 2, \cdots, n,$$

which is equivalent to

$$\begin{bmatrix} z(t) \\ f(z(t)) \end{bmatrix}^* \begin{bmatrix} l_p^- l_p^+ e_p e_p^T & -\dfrac{l_p^- + l_p^+}{2} e_p e_p^T \\ -\dfrac{l_p^- + l_p^+}{2} e_p e_p^T & e_p e_p^T \end{bmatrix} \begin{bmatrix} z(t) \\ f(z(t)) \end{bmatrix} \le 0, \quad p = 1, 2, \cdots, n,$$

where $e_r$ denotes the unit column vector with a one in its $r$th row and zeros elsewhere. Let $G = \mathrm{diag}\{g_1, g_2, \cdots, g_n\}$; then

$$\sum_{p=1}^{n} g_p \begin{bmatrix} z(t) \\ f(z(t)) \end{bmatrix}^* \begin{bmatrix} l_p^- l_p^+ e_p e_p^T & -\dfrac{l_p^- + l_p^+}{2} e_p e_p^T \\ -\dfrac{l_p^- + l_p^+}{2} e_p e_p^T & e_p e_p^T \end{bmatrix} \begin{bmatrix} z(t) \\ f(z(t)) \end{bmatrix} \le 0,$$

that is,

$$\begin{bmatrix} z(t) \\ f(z(t)) \end{bmatrix}^* \begin{bmatrix} L_1 G & -L_2 G \\ -L_2 G & G \end{bmatrix} \begin{bmatrix} z(t) \\ f(z(t)) \end{bmatrix} \le 0,$$

where $L_1 = \mathrm{diag}(l_1^- l_1^+, l_2^- l_2^+, \cdots, l_n^- l_n^+)$ and $L_2 = \mathrm{diag}\!\left(\dfrac{l_1^- + l_1^+}{2}, \dfrac{l_2^- + l_2^+}{2}, \cdots, \dfrac{l_n^- + l_n^+}{2}\right)$ are of appropriate dimensions. Similarly, one has

$$\begin{bmatrix} z(t-\tau(t)) \\ f(z(t-\tau(t))) \end{bmatrix}^* \begin{bmatrix} L_1 H & -L_2 H \\ -L_2 H & H \end{bmatrix} \begin{bmatrix} z(t-\tau(t)) \\ f(z(t-\tau(t))) \end{bmatrix} \le 0.$$

We can also see that the following identity holds for any matrix $R_1$ of appropriate dimension:

$$\begin{aligned}
0 &= 2\big[z^*(t) R_1 + \dot{z}^*(t) R_1\big]\big[-\dot{z}(t) - Dz(t) + A(t) f(z(t)) + B(t) f(z(t-\tau(t))) + v(t)\big] \\
&= -2 z^*(t) R_1 \dot{z}(t) - 2 z^*(t) R_1 D z(t) + 2 z^*(t) R_1 A(t) f(z(t)) + 2 z^*(t) R_1 B(t) f(z(t-\tau(t))) + 2 z^*(t) R_1 v(t) \\
&\quad - 2 \dot{z}^*(t) R_1 \dot{z}(t) - 2 \dot{z}^*(t) R_1 D z(t) + 2 \dot{z}^*(t) R_1 A(t) f(z(t)) + 2 \dot{z}^*(t) R_1 B(t) f(z(t-\tau(t))) + 2 \dot{z}^*(t) R_1 v(t).
\end{aligned}$$
Therefore, adding the above bounds on $D^+V_1$-$D^+V_4$, the Jensen bounds of Lemma 2 applied to the integral terms of $D^+V_3$, the two sector inequalities, and the identity above to $D^+ V(t) - 2 y^*(t) v(t) - \gamma v^*(t) v(t)$, we obtain

$$D^+ V(t) - 2 y^*(t) v(t) - \gamma v^*(t) v(t) \le \xi^*(t)\, \Omega\, \xi(t),$$

where

$$\xi^*(t) = \left[ z^*(t),\ f^*(z(t)),\ f^*(z(t-\tau(t))),\ v^*(t),\ z^*(t-\tau_1),\ z^*(t-\tau_2),\ z^*(t-\tau(t)),\ \int_{t-\tau_1}^{t} z^*(\zeta)\,d\zeta,\ \int_{t-\tau(t)}^{t-\tau_1} z^*(\zeta)\,d\zeta,\ \int_{t-\tau_2}^{t-\tau(t)} z^*(\zeta)\,d\zeta,\ \dot{z}^*(t),\ \int_{t-\tau_2}^{t-\tau_1} z^*(s)\,ds \right]$$

and $\Omega$ has the same block structure as $\Phi$ in (17), with entries $\Omega_{i,j}$ obtained from $\Phi_{i,j}^{(p)}$ by replacing $A_p$ and $B_p$ with $A(t)$ and $B(t)$; in particular, $\Omega_{1,2} = R_1 A(t) + 2L_2 G$, $\Omega_{1,3} = R_1 B(t)$, $\Omega_{2,11} = R_1 A(t)$ and $\Omega_{3,11} = R_1 B(t)$, while all remaining entries coincide with those of (17). Since

$$\Omega = \sum_{p=1}^{2^{2n^2}} \mu_p(t)\, \Phi$$

and each $\Phi < 0$ by the LMI (17), we have $\Omega \le 0$.
Then,

$$D^+ V(t) - 2 y^*(t) v(t) - \gamma v^*(t) v(t) \le 0. \qquad (19)$$

Integrating (19) with respect to $t$ over the period from $0$ to $T$, we have

$$2 \int_0^{T} y^*(t) v(t)\,dt \ge V(T) - V(0) - \gamma \int_0^{T} v^*(t) v(t)\,dt$$

for $\phi = 0$. Since $V(0) = 0$ and $V(T) \ge 0$, inequality (16) holds, which implies that MCVNNs (4) is passive.

Remark 2: Under Assumption 1, the activation function of MCVNNs (4) cannot be separated into real and imaginary parts. Based on Assumption 1, some sufficient conditions ensuring passivity of the proposed MCVNNs are derived in the form of complex-valued linear matrix inequalities (LMIs) by constructing a proper Lyapunov-Krasovskii functional in Theorem 1. If we assume instead that the activation function can be separated into real and imaginary parts, that is, if it satisfies Assumption 2, then sufficient criteria for passivity are derived in the following theorem.

Theorem 2: Suppose that Assumption 2 holds. MCVNNs (4) is passive if there exist positive definite Hermitian matrices $P_1 = P_1^R + iP_1^I$, $Q_k = Q_k^R + iQ_k^I$ $(k = 1, \cdots, 6)$, positive diagonal matrices $S_1$, $S_2$, an appropriately dimensioned matrix $R_1 = R_1^R + iR_1^I$ and a scalar $\gamma > 0$ such that for $p = 1, 2, \cdots, 2^{2n^2}$ the following LMI holds:

$$\begin{bmatrix} \Phi^R & -\Phi^I \\ \Phi^I & \Phi^R \end{bmatrix} < 0, \qquad (20)$$

where

$$\Phi^R = \mathrm{Re}(\Phi), \qquad (21)$$

$$\Phi^I = \mathrm{Im}(\Phi), \qquad (22)$$

and $\Phi$ is the matrix of (17) written out entrywise with $R_1 = R_1^R + iR_1^I$, $A_p = A_{1p} + iA_{2p}$ and $B_p = B_{1p} + iB_{2p}$, with the sector terms of Assumption 1 replaced using the bounds of Assumption 2: with $\Gamma = \mathrm{diag}\{\gamma_1^2, \cdots, \gamma_n^2\}$, $\gamma_p = \max\{|l_p^{R-}|, |l_p^{R+}|, |l_p^{I-}|, |l_p^{I+}|\}$, the entry $\Phi_{1,1}$ carries $+S_1\Gamma$ in place of $-2L_1G$, $\Phi_{2,2} = -S_1$, $\Phi_{3,3} = -S_2$ and $\Phi_{7,7} = -(1-\mu)Q_3 + S_2\Gamma$. In particular, $\Phi_{1,1}^R = -(DR_1^R + R_1^{R\,T} D) + Q_1^R + Q_2^R + Q_3^R + \tau_1^2 Q_4^R + \tau_{12}^2 Q_5^R - (\tau_2 - \tau_1)^2 Q_6^R + S_1 \Gamma$, $\Phi_{1,2}^{R(p)} = R_1^R A_{1p} - R_1^I A_{2p}$, $\Phi_{1,2}^{I(p)} = R_1^R A_{2p} + R_1^I A_{1p}$, $\Phi_{1,3}^{R(p)} = R_1^R B_{1p} - R_1^I B_{2p}$, $\Phi_{1,3}^{I(p)} = R_1^R B_{2p} + R_1^I B_{1p}$ and $\Phi_{11,11}^R = -R_1^R - R_1^{R\,T} + \left(\frac{\tau_2^2 - \tau_1^2}{2}\right)^2 Q_6^R$, the remaining blocks being obtained in the same entrywise manner.
Proof: Consider the Lyapunov-Krasovskii functional

$$V(t) = V_1 + V_2 + V_3 + V_4, \qquad (23)$$

where $V_1$, $V_2$, $V_3$ and $V_4$ are defined as in Theorem 1. If $f_p(\cdot)$ satisfies Assumption 2, then

$$l_p^{R-} \le \frac{f_p^R(x_p(t))}{x_p(t)} \le l_p^{R+}, \qquad l_p^{I-} \le \frac{f_p^I(y_p(t))}{y_p(t)} \le l_p^{I+}$$

for all $p = 1, 2, \cdots, n$. Thus,

$$|f_p^R(x_p(t))| \le \gamma_p^R |x_p(t)|, \qquad |f_p^I(y_p(t))| \le \gamma_p^I |y_p(t)|, \qquad (24)$$

where $\gamma_p^R = \max\{|l_p^{R-}|, |l_p^{R+}|\}$ and $\gamma_p^I = \max\{|l_p^{I-}|, |l_p^{I+}|\}$ for all $p = 1, 2, \cdots, n$. By (24), we have

$$r_p f_p^*(z_p(t)) f_p(z_p(t)) \le r_p \gamma_p^2 z_p^* z_p, \qquad (25)$$

where $\gamma_p = \max\{\gamma_p^R, \gamma_p^I\}$ and $r_p$ is a positive constant for all $p = 1, 2, \cdots, n$. If $f_p(\cdot)$ satisfies Assumption 1, then $|f_p(z_p(t))| \le F_p |z_p(t)|$; thus,

$$r_p f_p^*(z_p(t)) f_p(z_p(t)) \le r_p F_p^2 z_p^* z_p, \qquad (26)$$

where $r_p$ is a positive constant for all $p = 1, 2, \cdots, n$. The vector forms of (25) and (26) are

$$f^*(z(t)) S_1 f(z(t)) \le z^*(t) S_1 \Gamma z(t), \qquad (27)$$

where $S_1 = \mathrm{diag}\{s_1, s_2, \cdots, s_n\}$. Also, we have

$$f^*(z(t-\tau(t))) S_2 f(z(t-\tau(t))) \le z^*(t-\tau(t)) S_2 \Gamma z(t-\tau(t)), \qquad (28)$$

where $S_2 = \mathrm{diag}\{s_1, s_2, \cdots, s_n\}$. Proceeding as in the proof of Theorem 1 and using (27) and (28), we obtain

$$D^+ V(t) - 2 y^*(t) v(t) - \gamma v^*(t) v(t) = \xi^*(t)\, \Phi\, \xi(t), \qquad (29)$$

where $\xi(t)$ and $\Phi$ are given in Theorem 1. Separating the real and imaginary parts of $\Phi$ and applying Lemma 3, $\Phi < 0$ is equivalent to

$$\begin{bmatrix} \Phi^R & -\Phi^I \\ \Phi^I & \Phi^R \end{bmatrix} < 0.$$

Thus, it follows from (29) that

$$D^+ V(t) - 2 y^*(t) v(t) - \gamma v^*(t) v(t) \le 0. \qquad (30)$$

Integrating (30) with respect to $t$ from $0$ to $T$ gives

$$2 \int_0^{T} y^*(t) v(t)\,dt \ge V(T) - V(0) - \gamma \int_0^{T} v^*(t) v(t)\,dt$$
for $\phi = 0$. Since $V(0) = 0$ and $V(T) \ge 0$, inequality (16) holds, which implies that MCVNNs (4) is passive.

Remark 3: In the existing literature there are many interesting results on the passivity analysis of neural networks; see [15]-[27] and the references therein. In [15], the authors studied the passivity analysis of stochastic time-delay neural networks and presented sufficient conditions in terms of LMIs. Some new passivity conditions were obtained for neural networks with time delays based on a proper Lyapunov-Krasovskii functional method in [16]. Very recently, the passivity analysis of memristor-based neural networks with time-varying delays was considered and several sufficient conditions were derived in the form of LMIs in [26]. In [27], the authors formulated the passivity problem for memristive neural networks with different memductance functions and presented some sufficient criteria in terms of LMIs. In this paper, we consider memristor-based complex-valued neural networks with interval time-varying delays and analyze the passivity and passification of the proposed system. Some passivity conditions are derived in terms of both complex-valued and real-valued LMIs. These results differ from the existing ones and can be solved easily with the YALMIP toolbox in MATLAB.

Passification analysis: In this section, we consider the passification problem, that is, the design of a state feedback control law that renders MCVNNs (4) passive in the sense of Definition 1. Extending system (4), we consider an MCVNN with control input as follows:

$$\frac{dz(t)}{dt} = -Dz(t) + A(z)f(z(t)) + B(z)f(z(t-\tau(t))) + v(t) + Eu(t), \qquad y(t) = f(z(t)), \qquad (31)$$
where $u(t) \in \mathbb{C}^m$ is the control input vector and $E$ is a constant matrix of appropriate dimension. In this paper, the state feedback control law is taken as

$$u(t) = \sum_{q=1}^{2^{2n^2}} \mu_q K_q z(t). \qquad (32)$$

We select $u(t) = K_q z(t)$. Then MCVNN (31) can be represented as

$$\begin{cases}
\dfrac{dz(t)}{dt} = \displaystyle\sum_{p=1}^{2^{2n^2}} \sum_{q=1}^{2^{2n^2}} \mu_p(t)\mu_q(t)\big[(-D + EK_q)z(t) + A_p f(z(t)) + B_p f(z(t-\tau(t))) + v(t)\big] \\[1ex]
\phantom{\dfrac{dz(t)}{dt}} = \displaystyle\sum_{p=1}^{2^{2n^2}} \mu_p(t)\big[(-D + EK_p)z(t) + A_p f(z(t)) + B_p f(z(t-\tau(t))) + v(t)\big] \\[1ex]
\phantom{\dfrac{dz(t)}{dt}} = (-D + EK(t))z(t) + A(t)f(z(t)) + B(t)f(z(t-\tau(t))) + v(t), \\[1ex]
y(t) = f(z(t)),
\end{cases} \qquad (33)$$

where $K(t) = \sum_{p=1}^{2^{2n^2}} \mu_p(t) K_p$.
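The conditions of Theorems 1-3 are LMI feasibility problems of the kind that Remark 3 suggests solving with the YALMIP toolbox in MATLAB. As a minimal stand-in for such a feasibility check, the NumPy sketch below verifies a hand-picked certificate $(P, \gamma)$ for the classical $\gamma$-passivity LMI of a toy LTI comparison system; the system, $P$ and $\gamma$ are illustrative assumptions, not the conditions (17) or (34) of the paper.

```python
import numpy as np

# Toy gamma-passivity LMI for dx/dt = A x + v, y = x, with storage x^T P x:
#     [[A^T P + P A,  P - I ],
#      [ P - I,      -gamma*I]] <= 0,   with P > 0, gamma > 0.
# Feasibility is checked by the sign of the largest eigenvalue; in practice
# an SDP solver (e.g. via YALMIP) searches for P and gamma instead.

A = np.diag([-1.0, -2.0])      # Hurwitz system matrix (illustrative)
P = np.eye(2)                  # candidate storage-function matrix, P > 0
gamma = 1.0

lmi = np.block([[A.T @ P + P @ A, P - np.eye(2)],
                [P - np.eye(2), -gamma * np.eye(2)]])
max_eig = np.linalg.eigvalsh(lmi).max()   # feasibility <=> max_eig <= 0
```

The same sign-of-eigenvalue test, applied to the block matrix (34) with decision variables returned by a solver, is how feasibility of Theorem 3 would be confirmed numerically.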
Theorem 3: Suppose that Assumption 1 holds. MCVNNs (31) is passive if there exist positive definite Hermitian matrices P̄_1, Q̄_1, Q̄_2, Q̄_3, Q̄_4, Q̄_5, Q̄_6, positive diagonal matrices Ḡ = diag{g_1, g_2, ..., g_n}, H̄ = diag{h_1, h_2, ..., h_n}, matrices K̄_p, any appropriately dimensioned matrix R̄_1 and a scalar γ > 0 such that for p = 1, 2, ..., 2^{2n^2}, the following LMI holds:

Φ̄^(p) = [Φ̄^(p)_{i,j}]_{12×12} < 0,     (34)

where the nonzero blocks of the Hermitian matrix Φ̄^(p) are
Φ̄^(p)_{1,1} = −(R̄_1^T D + DR̄_1 − EK̄_p − K̄_p^T E^T) + Q̄_1 + Q̄_2 + Q̄_3 + τ_2^2 Q̄_4 + τ_{12}^2 Q̄_5 − (τ_2 − τ_1)^2 Q̄_6 − 2L_1 Ḡ,
Φ̄^(p)_{1,2} = A_p R̄_1 + 2L_2 Ḡ,  Φ̄^(p)_{1,3} = B_p R̄_1,  Φ̄_{1,4} = I,  Φ̄^(p)_{1,11} = −R̄_1 + P̄_1 − DR̄_1 + EK̄_p,
Φ̄_{1,12} = ((τ_1 − τ_2)^2/2 + (τ_2 − τ_1)/2) Q̄_6,  Φ̄_{2,2} = −2Ḡ,  Φ̄_{2,4} = −R̄_1,  Φ̄^(p)_{2,11} = A_p R̄_1,
Φ̄_{3,3} = −2H̄,  Φ̄_{3,7} = 2L_2 H̄,  Φ̄^(p)_{3,11} = B_p R̄_1,  Φ̄_{4,4} = −γI,  Φ̄_{4,11} = R̄_1,
Φ̄_{5,5} = −Q̄_1 − 2L_1 H̄,  Φ̄_{6,6} = −Q̄_2 − 2H̄,  Φ̄_{7,7} = −(1 − μ)Q̄_3 − 2L_1 H̄,
Φ̄_{8,8} = −Q̄_4,  Φ̄_{9,9} = −Q̄_5,  Φ̄_{10,10} = −Q̄_5,  Φ̄_{11,11} = −R̄_1 − R̄_1^T + ((τ_2^2 − τ_1^2)/2) Q̄_6,  Φ̄_{12,12} = −Q̄_6.
Moreover, if the above condition is feasible, the gain matrices of the desired controller in (32) can be computed as

K_p = K̄_p R̄_1^{−1},  p = 1, 2, ..., 2^{2n^2}.

Proof: Consider the Lyapunov-Krasovskii functional V(t) = V_1 + V_2 + V_3 + V_4, where V_1, V_2, V_3 and V_4 are defined in (18). Similarly to the proof of the above theorem, taking the upper right Dini derivative of V(t) along the solutions of (31), we have

D^+V(t) − 2y^*(t)v(t) − γv^*(t)v(t) ≤ Σ_{p=1}^{2^{2n^2}} μ_p(t) ξ^*(t) Φ^(p) ξ(t),     (35)

where Φ^(p) = [Φ^(p)_{i,j}]_{12×12} with nonzero blocks
Φ^(p)_{1,1} = −(DR_1 + R_1^T D − K_p^T E^T R_1 − R_1 EK_p) + Q_1 + Q_2 + Q_3 + τ_2^2 Q_4 + τ_{12}^2 Q_5 − (τ_2 − τ_1)^2 Q_6 − 2L_1 G,
Φ^(p)_{1,2} = R_1 A_p + 2L_2 G,  Φ^(p)_{1,3} = R_1 B_p,  Φ_{1,4} = R_1,  Φ^(p)_{1,11} = −R_1 + P_1 − R_1 D + R_1 EK_p,
Φ_{1,12} = ((τ_1 − τ_2)^2/2 + (τ_2 − τ_1)/2) Q_6,  Φ_{2,2} = −2G,  Φ_{2,4} = −I,  Φ^(p)_{2,11} = R_1 A_p,
Φ_{3,3} = −2H,  Φ_{3,7} = 2L_2 H,  Φ^(p)_{3,11} = R_1 B_p,  Φ_{4,4} = −γI,  Φ_{4,11} = R_1,
Φ_{5,5} = −Q_1 − 2L_1 H,  Φ_{6,6} = −Q_2 − 2H,  Φ_{7,7} = −(1 − μ)Q_3 − 2L_1 H,
Φ_{8,8} = −Q_4,  Φ_{9,9} = −Q_5,  Φ_{10,10} = −Q_5,  Φ_{11,11} = −R_1 − R_1^T + ((τ_2^2 − τ_1^2)/2) Q_6,  Φ_{12,12} = −Q_6,
and

ξ(t) = [ z^*(t), f^*(z(t)), f^*(z(t − τ(t))), v^*(t), z^*(t − τ_1), z^*(t − τ_2), z^*(t − τ(t)), ∫_{t−τ_1}^{t} z^*(ζ)dζ, ∫_{t−τ(t)}^{t−τ_1} z^*(ζ)dζ, ∫_{t−τ_2}^{t−τ(t)} z^*(ζ)dζ, ż^*(t), ∫_{t−τ_2}^{t−τ_1} z^*(s)ds ]^*.

From the proof of Theorem 1, we know that we need D^+V(t) − 2y^*(t)v(t) − γv^*(t)v(t) ≤ 0 to prove the passivity of MCVNNs (31). From (35), it suffices that the following inequalities hold for
p = 1, 2, ..., 2^{2n^2}:

Φ^(p) = [Φ^(p)_{i,j}]_{12×12} < 0.     (36)

Matrix inequalities (36) are not LMIs but quadratic matrix inequalities (QMIs), since Φ^(p)_{1,1} is quadratic in the unknowns. In order to apply the convex optimization technique, the QMIs should be converted into LMIs. Performing a congruence transformation on the matrix in (36) with

diag{R_1^{−1}, R_1^{−1}, R_1^{−1}, I, R_1^{−1}, R_1^{−1}, R_1^{−1}, R_1^{−1}, R_1^{−1}, R_1^{−1}, R_1^{−1}, R_1^{−1}},

together with the change of matrix variables defined by R̄_1 = R_1^{−1}, P̄_1 = R_1^{−1}P_1R_1^{−1}, Q̄_1 = R_1^{−1}Q_1R_1^{−1}, Q̄_2 = R_1^{−1}Q_2R_1^{−1}, Q̄_3 = R_1^{−1}Q_3R_1^{−1}, Q̄_4 = R_1^{−1}Q_4R_1^{−1}, Q̄_5 = R_1^{−1}Q_5R_1^{−1}, Q̄_6 = R_1^{−1}Q_6R_1^{−1}, H̄ = R_1^{−1}HR_1^{−1} and Ḡ = R_1^{−1}GR_1^{−1}, we obtain LMI (34) in Theorem 3.
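The mechanism behind this step can be checked numerically. The sketch below uses random illustrative matrices (not the paper's variables): a congruence transformation M → T^T M T with nonsingular T preserves definiteness, which is why pre- and post-multiplying (36) by the block-diagonal matrix of R_1^{−1} turns the QMI into an LMI, and the controller gain is then recovered from the variable change K̄_p = K_p R_1, i.e. K_p = K̄_p R_1^{−1}.

```python
# Numerical sketch with illustrative random matrices, not the paper's data.
import numpy as np

rng = np.random.default_rng(0)
M = -np.eye(3) + 0.1 * rng.standard_normal((3, 3))
M = (M + M.T) / 2                                  # symmetric, negative definite
T = rng.standard_normal((3, 3)) + 3 * np.eye(3)    # nonsingular transform

# Congruence preserves the sign of the eigenvalues (Sylvester's law of inertia).
assert (np.linalg.eigvalsh(M) < 0).all()
assert (np.linalg.eigvalsh(T.T @ M @ T) < 0).all()

# Gain recovery: given the LMI variables K_bar and R1, K_p = K_bar @ inv(R1),
# which is consistent with the change of variables K_bar = K_p @ R1.
R1 = rng.standard_normal((3, 3)) + 3 * np.eye(3)
K_bar = rng.standard_normal((1, 3))
K_p = K_bar @ np.linalg.inv(R1)
assert np.allclose(K_p @ R1, K_bar)
```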
Theorem 4: Suppose that Assumption 2 holds. MCVNNs (31) is passive if there exist positive definite Hermitian matrices P̄_1 = P̄_1^R + iP̄_1^I, Q̄_1 = Q̄_1^R + iQ̄_1^I, Q̄_2 = Q̄_2^R + iQ̄_2^I, Q̄_3 = Q̄_3^R + iQ̄_3^I, Q̄_4 = Q̄_4^R + iQ̄_4^I, Q̄_5 = Q̄_5^R + iQ̄_5^I, Q̄_6 = Q̄_6^R + iQ̄_6^I, positive diagonal matrices Ḡ = diag{g_1, g_2, ..., g_n}, H̄ = diag{h_1, h_2, ..., h_n}, any appropriately dimensioned matrix R̄_1 = R̄_1^R + iR̄_1^I, matrices K̄_p = K̄_p^R + iK̄_p^I and a scalar γ > 0 such that for p = 1, 2, ..., 2^{2n^2}, the following LMI holds:

[ Φ^R   −Φ^I ]
[ Φ^I    Φ^R ] < 0,

where Φ^R = [Φ̄^(p)R_{i,j}]_{12×12}     (37)
and Φ^I = [Φ̄^(p)I_{i,j}]_{12×12}     (38)
are the real and imaginary parts of the matrix Φ̄^(p), with nonzero blocks
Φ̄^(p)R_{1,1} = −(DR̄_1^R + R̄_1^{RT}D − (E^R K̄_p^R − E^I K̄_p^I) − (K̄_p^{RT}E^{RT} − K̄_p^{IT}E^{IT})) + Q̄_1^R + Q̄_2^R + Q̄_3^R + τ_2^2 Q̄_4^R + τ_{12}^2 Q̄_5^R − (τ_2 − τ_1)^2 Q̄_6^R + S̄_1 Γ,
Φ̄^(p)R_{1,2} = R̄_1^R A_{1p} − R̄_1^I A_{2p},  Φ̄^(p)R_{1,3} = R̄_1^R B_{1p} − R̄_1^I B_{2p},  Φ̄^R_{1,4} = I,
Φ̄^(p)R_{1,11} = −R̄_1^R + P̄_1^R − DR̄_1^R + E^R K̄_p^R − E^I K̄_p^I,  Φ̄^R_{1,12} = ((τ_1 − τ_2)^2/2 + (τ_2 − τ_1)/2) Q̄_6^R,
Φ̄^(p)R_{2,1} = A_{1p}^T R̄_1^{RT} − A_{2p}^T R̄_1^{IT},  Φ̄^R_{2,2} = −S̄_1,  Φ̄^R_{2,4} = −R̄_1^R,  Φ̄^(p)R_{2,11} = R̄_1^R A_{1p} − R̄_1^I A_{2p},
Φ̄^(p)R_{3,1} = B_{1p}^T R̄_1^{RT} − B_{2p}^T R̄_1^{IT},  Φ̄^R_{3,3} = −S̄_2,  Φ̄^(p)R_{3,11} = R̄_1^R B_{1p} − R̄_1^I B_{2p},
Φ̄^R_{4,4} = −γI,  Φ̄^R_{4,11} = R̄_1^R,  Φ̄^R_{5,5} = −Q̄_1^R,  Φ̄^R_{6,6} = −Q̄_2^R,  Φ̄^R_{7,7} = −(1 − μ)Q̄_3^R + S̄_2 Γ,
Φ̄^R_{8,8} = −Q̄_4^R,  Φ̄^R_{9,9} = −Q̄_5^R,  Φ̄^R_{10,10} = −Q̄_5^R,
Φ̄^(p)R_{11,1} = −R̄_1^{RT} + P̄_1^R − R̄_1^{RT}D^T + K̄_p^{RT}E^{RT} − K̄_p^{IT}E^{IT},  Φ̄^(p)R_{11,2} = A_{1p}^T R̄_1^{RT} − A_{2p}^T R̄_1^{IT},
Φ̄^(p)R_{11,3} = B_{1p}^T R̄_1^{RT} − B_{2p}^T R̄_1^{IT},  Φ̄^R_{11,4} = R̄_1^{RT},  Φ̄^R_{11,11} = −R̄_1^R − R̄_1^{RT} + ((τ_2^2 − τ_1^2)/2) Q̄_6^R,
Φ̄^R_{12,1} = ((τ_1 − τ_2)^2/2 + (τ_2 − τ_1)/2) Q̄_6^{RT},  Φ̄^R_{12,12} = −Q̄_6^R,
and
Φ̄^(p)I_{1,1} = −(DR̄_1^I + R̄_1^{IT}D − (E^I K̄_p^R + E^R K̄_p^I) − (K̄_p^{IT}E^{RT} + K̄_p^{RT}E^{IT})) + Q̄_1^I + Q̄_2^I + Q̄_3^I + τ_2^2 Q̄_4^I + τ_{12}^2 Q̄_5^I − (τ_2 − τ_1)^2 Q̄_6^I,
Φ̄^(p)I_{1,2} = R̄_1^R A_{2p} + R̄_1^I A_{1p},  Φ̄^(p)I_{1,3} = R̄_1^R B_{2p} + R̄_1^I B_{1p},
Φ̄^(p)I_{1,11} = −R̄_1^I + P̄_1^I − DR̄_1^I + E^R K̄_p^I + E^I K̄_p^R,  Φ̄^I_{1,12} = ((τ_1 − τ_2)^2/2 + (τ_2 − τ_1)/2) Q̄_6^I,
Φ̄^(p)I_{2,1} = −A_{1p}^T R̄_1^{IT} − A_{2p}^T R̄_1^{RT},  Φ̄^I_{2,4} = −R̄_1^I,  Φ̄^(p)I_{2,11} = R̄_1^R A_{2p} + R̄_1^I A_{1p},
Φ̄^(p)I_{3,1} = −B_{1p}^T R̄_1^{IT} − B_{2p}^T R̄_1^{RT},  Φ̄^(p)I_{3,11} = R̄_1^R B_{2p} + R̄_1^I B_{1p},  Φ̄^I_{4,11} = R̄_1^I,
Φ̄^I_{5,5} = −Q̄_1^I,  Φ̄^I_{6,6} = −Q̄_2^I,  Φ̄^I_{7,7} = −(1 − μ)Q̄_3^I,  Φ̄^I_{8,8} = −Q̄_4^I,  Φ̄^I_{9,9} = −Q̄_5^I,  Φ̄^I_{10,10} = −Q̄_5^I,
Φ̄^(p)I_{11,1} = −R̄_1^{IT} + P̄_1^I − R̄_1^{IT}D^T + K̄_p^{IT}E^{IT},  Φ̄^(p)I_{11,2} = −A_{1p}^T R̄_1^{IT} − A_{2p}^T R̄_1^{RT},
Φ̄^(p)I_{11,3} = −B_{1p}^T R̄_1^{IT} − B_{2p}^T R̄_1^{RT},  Φ̄^I_{11,4} = R̄_1^{IT},  Φ̄^I_{11,11} = −R̄_1^I − R̄_1^{IT} + ((τ_2^2 − τ_1^2)/2) Q̄_6^I,
Φ̄^I_{12,1} = ((τ_1 − τ_2)^2/2 + (τ_2 − τ_1)/2) Q̄_6^{IT},  Φ̄^I_{12,12} = −Q̄_6^I.
Moreover, if the above condition is feasible, the gain matrices of the desired controller in (32) can be computed as

K_p = K̄_p R̄_1^{−1},  p = 1, 2, ..., 2^{2n^2}.

Proof: Consider the Lyapunov-Krasovskii functional V(t) = V_1 + V_2 + V_3 + V_4, where V_1, V_2, V_3 and V_4 are defined in (18). Similarly to the proof of Theorem 3, we have

Φ̄^(p) = [Φ̄^(p)_{i,j}]_{12×12} < 0.     (39)

Separating the real and imaginary parts of Φ̄^(p), and utilizing Lemma 3, Φ̄^(p) < 0 is equivalent to

[ Φ^R   −Φ^I ]
[ Φ^I    Φ^R ] < 0.

Thus, MCVNNs (31) is passive.
Remark 4. Let a_pq(z_q(t)) and b_pq(z_q(t)) be the memristor-based complex-valued connection weights of the memristor-based complex-valued recurrent neural networks with interval time-varying delays. A memristor is a state-dependent switching device; thus the connection weights change according to the state of each subsystem. If the connection weights do not change with the state of each subsystem, then MCVNNs (3) reduce to the complex-valued recurrent neural networks in [32].
Remark 5. In [13], the authors studied passivity and passification for stochastic Takagi-Sugeno fuzzy systems with mixed time-varying delays. In [14], passivity and passification of time-delay systems were investigated. This paper is the first attempt to deal with passivity and passification of memristor-based complex-valued recurrent neural networks with interval time-varying delays.
Remark 6. In Theorem 3, (36) is a QMI and cannot be solved directly by the MATLAB YALMIP toolbox. In order to reduce (36) to an LMI, we pre- and post-multiplied (36) by the diagonal matrix diag{R_1^{−1}, R_1^{−1}, R_1^{−1}, I, R_1^{−1}, R_1^{−1}, R_1^{−1}, R_1^{−1}, R_1^{−1}, R_1^{−1}, R_1^{−1}, R_1^{−1}} and obtained (34), which can be solved easily by the YALMIP toolbox. This procedure for studying the passification problem of MCVNNs is a new technique and has not yet been used in the existing literature.
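The real-imaginary separation used in Theorem 4 can be checked numerically. The sketch below uses an illustrative random Hermitian matrix, not the paper's Φ̄^(p): a Hermitian matrix Φ = Φ^R + iΦ^I is negative definite exactly when its real expansion [[Φ^R, −Φ^I], [Φ^I, Φ^R]] is, which is what lets the complex-valued LMI be solved by a real-valued solver.

```python
# Sketch with an illustrative Hermitian matrix, not the paper's LMI blocks.
import numpy as np

rng = np.random.default_rng(1)
X = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))
Phi = -(X @ X.conj().T) - np.eye(3)     # Hermitian and negative definite
PhiR, PhiI = Phi.real, Phi.imag

# Real expansion: symmetric because PhiR is symmetric and PhiI skew-symmetric.
expansion = np.block([[PhiR, -PhiI],
                      [PhiI,  PhiR]])
assert np.allclose(expansion, expansion.T)

# Both matrices have the same definiteness (eigenvalues come in pairs).
assert (np.linalg.eigvalsh(Phi) < 0).all()
assert (np.linalg.eigvalsh(expansion) < 0).all()
```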
4 Numerical examples:
In this section, two numerical examples are given to verify the results in Section 3.
Example 1: Consider a two-neuron memristor-based complex-valued recurrent neural network model described as follows:

ż(t) = −Dz(t) + Af(z(t)) + Bf(z(t − τ(t))) + V,     (40)
where
a_11(x_1(t)) = 0.3 (|x_1(t)| < 1), 0.2 (|x_1(t)| > 1);   a_12(x_2(t)) = 0.6 (|x_2(t)| < 1), 0.5 (|x_2(t)| > 1);
a_21(x_1(t)) = −0.1 (|x_1(t)| < 1), −0.2 (|x_1(t)| > 1); a_22(x_2(t)) = 0.2 (|x_2(t)| < 1), 0.1 (|x_2(t)| > 1);
a_11(y_1(t)) = −1 (|y_1(t)| < 1), −2 (|y_1(t)| > 1);     a_12(y_2(t)) = −3 (|y_2(t)| < 1), −2 (|y_2(t)| > 1);
a_21(y_1(t)) = 0.2 (|y_1(t)| < 1), 0.1 (|y_1(t)| > 1);   a_22(y_2(t)) = 0.3 (|y_2(t)| < 1), 0.2 (|y_2(t)| > 1);
b_11(x_1(t)) = −0.2 (|x_1(t)| < 1), −0.3 (|x_1(t)| > 1); b_22(x_2(t)) = 0.5 (|x_2(t)| < 1), 0.4 (|x_2(t)| > 1);
b_11(y_1(t)) = 1 (|y_1(t)| < 1), −1 (|y_1(t)| > 1);      b_22(y_2(t)) = −0.2 (|y_2(t)| < 1), −0.3 (|y_2(t)| > 1);
and b_12(x_2(t)) = b_21(x_1(t)) = b_12(y_2(t)) = b_21(y_1(t)) = 0. From the above values we get the following matrices:

D = [1, 0; 0, 1],  A = [0.3 − i, 0.6 − 2i; −0.1 + 0.2i, 0.2 + 0.3i],
B = [−0.2 + i, 0; 0, 0.5 − 0.2i],  V = [4 cos t + i sin t; 6 sin 2t + i cos 2t].

We have μ = 0.2, τ(t) = 0.8|0.2 sin(2t)|, f(z) = tanh(z(t)), L_1 = [0.01, 0; 0, 0.01] and L_2 = [0.1, 0; 0, 0.1]. By solving the LMIs in Theorem 1 using the YALMIP toolbox, we obtain the following feasible solutions:

P_1 = [5.1994, 0.9027 + 0.9737i; 0.9027 − 0.9737i, 10.7145],  Q_1 = [0.8422, 0.2670 + 0.2424i; 0.2670 − 0.2424i, 3.2044],
Figure 1: Time responses and state trajectories of real and imaginary parts of the MCVNNs.

Q_2 = [0.4586, 0.1558 + 0.1489i; 0.1558 − 0.1489i, 1.8597],  Q_3 = [0.9218, 0.2849 + 0.2530i; 0.2849 − 0.2530i, 3.4345],
Q_4 = 10^2 × [9.0593, 0.0435 + 0.0337i; 0.0435 − 0.0337i, 9.4335],  Q_5 = 10^2 × [9.2409, 0.0296 + 0.0229i; 0.0296 − 0.0229i, 9.4955],
Q_6 = 10^2 × [10.0295, −0.0335 − 0.0260i; −0.0335 + 0.0260i, 9.7419],  R_1 = 10^2 × [2.4919 − 0.1770i, 0.6598 − 0.0707i; 0.5267 − 0.4602i, 7.8988 − 0.0477i],
G = [8.4818, 0; 0, 12.1790],  H = [6.7507, 0; 0, 6.8308],  γ = 17.7024.

Next, we introduce the state feedback control (32) into MCVNNs (40). Let E = [1 − 0.2i; 2 − 0.3i]. By using Theorem 3 and the YALMIP toolbox, we obtain the following feasible solutions:

P̄_1 = [7.1807, 2.7196 + 0.4223i; 2.7196 − 0.4223i, 14.7575],  Q̄_1 = [2.2713, 2.4258 + 0.0300i; 2.4258 − 0.0300i, 6.7273],
Figure 2: Time responses and state trajectories of real and imaginary parts of the MCVNNs.

Q̄_2 = [1.1394, 1.7618 + 0.0223i; 1.7618 − 0.0223i, 4.3418],  Q̄_3 = [2.4111, 2.7219 + 0.0348i; 2.7219 − 0.0348i, 7.3929],
Q̄_4 = [8.7060, 0.0987 + 0.0006i; 0.0987 − 0.0006i, 8.8847],  Q̄_5 = [8.7831, 0.0663 + 0.0004i; 0.0663 − 0.0004i, 8.9032],
Q̄_6 = 10^2 × [9.0907, −0.0742 + 0.0000i; −0.0742 − 0.0000i, 8.9581],  R̄_1 = [2.2321 − 0.0185i, −1.3194 + 0.0601i; −0.8184 − 0.1727i, 2.5064 − 0.1105i],
Ḡ = [7.2875, 0; 0, 8.8829],  H̄ = [4.8813, 0; 0, 3.9512],
K̄_1 = 10^2 × [−0.7925 − 1.5391i, −1.5063 − 7.2041i],  K̄_2 = 10^2 × [0.2556 + 0.5775i, 0.7808 + 2.2084i],
K̄_3 = 10^2 × [0.2555 + 0.6483i, 0.0690 + 2.5987i],  K̄_4 = 10^2 × [0.2555 + 0.3073i, 0.6071 + 2.3898i],
γ = 9.5120.

The time responses and state trajectories of the real and imaginary parts of (40) are shown in Figure 1, and it can be seen that the system is passive with the considered parameters. The time
responses and state trajectories of the real and imaginary parts of (40) in Figure 2 depict the passification of MCVNNs (40) with the designed controller.
Example 2: Consider a two-neuron memristor-based complex-valued recurrent neural network model described as follows:

ż(t) = −Dz(t) + Af(z(t)) + Bf(z(t − τ(t))) + V,     (41)

where
a_11(x_1(t)) = 0.6 (|x_1(t)| < 1), 0.5 (|x_1(t)| > 1);   a_12(x_2(t)) = 0.7 (|x_2(t)| < 1), 0.6 (|x_2(t)| > 1);
a_21(x_1(t)) = −0.5 (|x_1(t)| < 1), −0.6 (|x_1(t)| > 1); a_22(x_2(t)) = 0.5 (|x_2(t)| < 1), 0.4 (|x_2(t)| > 1);
a_11(y_1(t)) = −1.0 (|y_1(t)| < 1), −2.0 (|y_1(t)| > 1); a_12(y_2(t)) = −0.2 (|y_2(t)| < 1), −0.3 (|y_2(t)| > 1);
a_21(y_1(t)) = 0.1 (|y_1(t)| < 1), −0.1 (|y_1(t)| > 1);  a_22(y_2(t)) = 0.3 (|y_2(t)| < 1), 0.4 (|y_2(t)| > 1);
b_11(x_1(t)) = −0.4 (|x_1(t)| < 1), −0.5 (|x_1(t)| > 1); b_22(x_2(t)) = 1.0 (|x_2(t)| < 1), 2.0 (|x_2(t)| > 1);
b_11(y_1(t)) = 1 (|y_1(t)| < 1), −1 (|y_1(t)| > 1);      b_22(y_2(t)) = −0.4 (|y_2(t)| < 1), −0.5 (|y_2(t)| > 1);
and b_12(x_2(t)) = b_21(x_1(t)) = b_12(y_2(t)) = b_21(y_1(t)) = 0. From the above values we get the following matrices:

D = [1, 0; 0, 1],  A = [0.6 − i, 0.7 − 0.2i; −0.5 + 0.1i, 0.5 + 0.3i],
B = [−0.4 + i, 0; 0, 0.2 − 0.4i],  V = [4 sin t + i sin 2t; 4 cos t + i6 cos t].

We have μ = 0.2, τ(t) = 0.3|sin(6t)|, f(x) = (1 − e^{−x})/(1 + e^{−x}), f(y) = 1/(1 + e^{−y}), Γ_1 = [0.01, 0; 0, 0.01] and Γ_2 = [0.1, 0; 0, 0.1]. By Theorem 2 and using the YALMIP toolbox, we obtain the following feasible solutions:

P_1^R = [3.3797, 0.3089; 0.3089, 6.2535],  Q_1^R = [0.6133, 0.0888; 0.0888, 1.3894],  Q_2^R = [0.6133, 0.0888; 0.0888, 1.3894],
Q_3^R = [0.6708, 0.0932; 0.0932, 1.4670],  Q_4^R = [2.6799, 0.1171; 0.1171, 3.7229],  Q_5^R = [4.7181, 0.0394; 0.0394, 5.0726],
Q_6^R = [6.6987, −0.0725; −0.0725, 6.0333],  R_1^R = [2.3416, 0.4657; 0.0370, 4.4846],  P_1^I = [0, 0.0034; −0.0034, 0],
Figure 3: Time responses and state trajectories of real and imaginary parts of the MCVNNs.

Q_1^I = [0, −0.0013; 0.0013, 0],  Q_2^I = [0, −0.0013; 0.0013, 0],  Q_3^I = [0, −0.0014; 0.0014, 0],
Q_4^I = [0, −0.0015; 0.0015, 0],  Q_5^I = 10^{−3} × [0, −0.4683; 0.4683, 0],  Q_6^I = [0, −0.0021; 0.0021, 0],
R_1^I = [−0.3341, −0.0331; 0.3781, 0.0414],  S_1 = [11.8838, 0; 0, 11.5022],  S_2 = [10.6873, 0; 0, 7.4410],  γ = 11.4573.
Next, we introduce the state feedback control (32) into MCVNNs (41). Let E = [2 − 0.2i; −3 − 0.3i]. By Theorem 4 and using the YALMIP toolbox, we obtain the following feasible solutions:

P̄_1^R = [5.5892, −2.8353; −2.8353, 10.9299],  Q̄_1^R = [2.2082, −1.7605; −1.7605, 4.1734],  Q̄_2^R = [2.2082, −1.7605; −1.7605, 4.1734],
Figure 4: Time responses and state trajectories of real and imaginary parts of the MCVNNs.

Q̄_3^R = [2.3833, −1.9843; −1.9843, 4.5905],  Q̄_4^R = [5.6888, −0.0672; −0.0672, 5.7625],  Q̄_5^R = [5.7283, −0.0451; −0.0451, 5.7778],
Q̄_6^R = [5.8324, 0.0303; 0.0303, 5.7861],  R̄_1^R = [1.9343, 0.8297; 0.5087, 2.8457],  P̄_1^I = [0, 0.5817; −0.5817, 0],
Q̄_1^I = [0, 0.0013; −0.0013, 0],  Q̄_2^I = 10^{−3} × [0, 0.0013; −0.0013, 0],  Q̄_3^I = [0, 0.0015; −0.0015, 0],
Q̄_4^I = 10^{−4} × [0, 0.3952; −0.3952, 0],  Q̄_5^I = 10^{−4} × [0, 0.2650; −0.2650, 0],  Q̄_6^I = 10^{−3} × [0, −0.3123; 0.3123, 0],
R̄_1^I = [−0.1987, −0.2797; 0.2892, 0.1617],
K̄_1^R = 10^3 × [1.6463, −0.3176],  K̄_2^R = [−491.3978, −132.9669],  K̄_3^R = [−700.3565, 137.6205],  K̄_4^R = [−456.0774, 314.8922],
K̄_1^I = 10^3 × [1.0663, −1.3988],  K̄_2^I = [−367.4737, 460.5156],  K̄_3^I = [−350.5686, 471.0685],  K̄_4^I = [−348.4758, 466.8793],
S̄_1 = [8.1328, 0; 0, 8.9517],  S̄_2 = [8.2400, 0; 0, 6.5540],  γ = 7.0303.
Thus, we can construct a state feedback controller with feedback gains K̄_1, K̄_2, K̄_3, K̄_4 to make MCVNNs (41) passive. The time responses and state trajectories of the real and imaginary parts of (41) are depicted in Figures 3 and 4, where Figure 3 corresponds to the passivity of the MCVNNs and Figure 4 corresponds to the passification of the MCVNNs, with 21 initial conditions in each.
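The simulated trajectories behind such figures can be sketched with a simple forward-Euler integration of a delayed complex-valued network of form (40)/(41). The snippet below is an illustrative sketch in the style of Example 1, not the exact setup used to generate the figures: only one memristive weight (a hypothetical `a11`) is switched by the state, the remaining entries are held at fixed Example-1-style values, and the delay is handled by a discrete history buffer.

```python
# Illustrative simulation sketch; parameter values are Example-1-style
# placeholders, not the exact data used for the paper's figures.
import numpy as np

def a11(x):
    # hypothetical two-valued memristive weight switching on |x1(t)|
    return 0.3 if abs(x) < 1 else 0.2

def simulate(T=2.0, h=1e-3):
    D = np.eye(2)
    B = np.array([[-0.2 + 1j, 0], [0, 0.5 - 0.2j]])
    hist = [np.array([0.5 + 0.5j, -0.5 + 0.2j])]   # state history for the delay
    t = 0.0
    while t < T:
        z = hist[-1]
        A = np.array([[a11(z[0].real) - 1j, 0.6 - 2j],
                      [-0.1 + 0.2j, 0.2 + 0.3j]])
        tau = 0.8 * abs(0.2 * np.sin(2 * t))        # interval time-varying delay
        k = min(len(hist) - 1, int(round(tau / h)))
        z_del = hist[-1 - k]                        # delayed state z(t - tau)
        v = np.array([4 * np.cos(t) + 1j * np.sin(t),
                      6 * np.sin(2 * t) + 1j * np.cos(2 * t)])
        dz = -D @ z + A @ np.tanh(z) + B @ np.tanh(z_del) + v
        hist.append(z + h * dz)                     # forward Euler step
        t += h
    return np.array(hist)

traj = simulate()
assert np.isfinite(traj).all()                      # trajectory stays bounded
```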
5 Conclusion:
In this paper, the problem of passivity and passification for a class of memristor-based complex-valued recurrent neural networks with interval time-varying delays has been investigated. Some new passivity conditions have been obtained in the form of both complex-valued and real-valued linear matrix inequalities (LMIs) by using Lyapunov-Krasovskii functionals, differential inclusion theory and the characteristic function method. All solutions of MCVNNs (3) are understood in the Filippov sense. Also, the desired state feedback controller has been designed, which guarantees that the given system is passive. Finally, two numerical examples have been presented to show the advantage of the proposed results.
References
[1] Chua, L. O., Memristor - the missing circuit element, IEEE Transactions on Circuit Theory, 18 (1971) 507-519.
[2] Strukov, D. B., Snider, G. S., Stewart, D. R. and Williams, R. S., The missing memristor found, Nature, 453 (2008) 80-83.
[3] Tour, J. M. and He, T., The fourth element, Nature, 453 (2008) 42-43.
[4] Yang, J. J., Pickett, M. D., Li, X., Ohlberg, D. A. A., Stewart, D. R. and Williams, R. S., Memristive switching mechanism for metal/oxide/metal nanodevices, Nature Nanotechnology, 3 (2008) 429-433.
[5] Thomas, A., Memristor-based neural networks, Journal of Physics, 49 (2013).
[6] Wu, A., Zhang, J. and Zeng, Z., Dynamic behaviors of a class of memristor-based Hopfield networks, Physics Letters A, 375 (2011) 1661-1665.
[7] Wu, A. and Zeng, Z., Dynamic behaviors of memristor-based recurrent neural networks with time-varying delays, Neural Networks, 36 (2012) 1-10.
[8] Bao, G. and Zeng, Z., Multistability of periodic delayed recurrent neural network with memristors, Neural Computing and Applications, 23 (2013) 1963-1967.
[9] Wang, G. and Shen, Y., Exponential synchronization of coupled memristive neural networks with time-delays, Neural Computing and Applications, doi: 10.1007/s00521-013-1349-3.
[10] Wu, A., Wen, S. and Zeng, Z., Synchronization control of a class of memristor-based recurrent neural networks, Information Sciences, 183 (2012) 106-116.
[11] Wu, A., Zeng, Z., Zhu, X. and Zhang, J., Exponential synchronization of memristor-based recurrent neural networks with time-delays, Neurocomputing, 74 (2011) 3043-3050.
[12] Wu, A. and Zeng, Z., Anti-synchronization control of a class of memristive recurrent neural networks, Communications in Nonlinear Science and Numerical Simulation, 18 (2013) 373-385.
[13] Song, Q., Zhao, Z. and Yang, J., Passivity and passification for stochastic Takagi-Sugeno fuzzy systems with mixed time-varying delays, Neurocomputing (2013), http://dx.doi.org/10.1016/j.neucom.2013.06.018.
[14] Mahmoud, M. S. and Ismail, A., Passivity and passification of time-delay systems, Journal of Mathematical Analysis and Applications, 292 (2004) 247-258.
[15] Chen, Y., Wang, H., Xue, A. and Lu, R., Passivity analysis of stochastic time-delay neural networks, Nonlinear Dynamics, 61 (2010) 71-82.
[16] Li, C. and Liao, X., Passivity analysis of neural networks with time delay, IEEE Transactions on Circuits and Systems II, 52(8) (2005) 471-475.
[17] Zhang, Z., Mou, S., Lam, J. and Gao, H., New passivity criteria for neural networks with time-varying delay, Neural Networks, 22(7) (2009) 864-868.
[18] Wu, Z., Park, J., Su, H. and Chu, J., New results on exponential passivity of neural networks with time-varying delays, Nonlinear Analysis: Real World Applications, 13(4) (2012) 1593-1599.
[19] Zeng, H., He, Y., Wu, M. and Xiao, S., Passivity analysis for neural networks with a time-varying delay, Neurocomputing, 74(5) (2011) 730-734.
[20] Wang, C., Yao, X., Wu, L. and Zeng, W. X., On passivity and passification of Markovian jump systems, Joint 48th IEEE Conference on Decision and Control and 28th Chinese Control Conference, Shanghai, P.R. China, (2009) 7145-7150.
[21] Balasubramaniam, P. and Nagamani, G., Passivity analysis of neural networks with Markovian jumping parameters and interval time-varying delays, Nonlinear Analysis: Hybrid Systems, 4 (2010) 853-864.
[22] Song, Q., Liang, J. and Wang, Z., Passivity analysis of discrete-time stochastic neural networks with time-varying delays, Neurocomputing, 72 (2009) 1782-1788.
[23] Chen, B., Li, H., Lin, C. and Zhou, Q., Passivity analysis for uncertain neural networks with discrete and distributed time-varying delays, Physics Letters A, 373 (2009) 1242-1248.
[24] Zhu, S., Shen, Y. and Chen, G., Exponential passivity of neural networks with time-varying delay and uncertainty, Physics Letters A, 375 (2010) 136-142.
[25] Fu, J., Zhang, H., Ma, T. and Zhang, Q., On passivity analysis for stochastic neural networks with interval time-varying delay, Neurocomputing, 73 (2010) 795-801.
[26] Wen, S., Zeng, Z., Huang, T. and Chen, Y., Passivity analysis of memristor-based recurrent neural networks with time-varying delays, Journal of the Franklin Institute, 350 (2013) 2354-2370.
[27] Wu, A. and Zeng, Z., Passivity analysis of memristive neural networks with different memductance functions, Communications in Nonlinear Science and Numerical Simulation, 19 (2014) 274-285.
[28] Hirose, A., Complex-valued Neural Networks, Berlin, Germany: Springer-Verlag, (2012).
[29] Chen, C. and Song, Q., Global stability of complex-valued neural networks with both leakage time delay and discrete time delay, Neurocomputing, 121 (2013) 254-264.
[30] Hu, J. and Wang, J., Global stability of complex-valued recurrent neural networks with time delays, IEEE Transactions on Neural Networks and Learning Systems, 23 (2012) 853-865.
[31] Bohner, M., Rao, V. S. H. and Sanyal, S., Global stability of complex-valued neural networks on time scales, Differential Equations and Dynamical Systems, 19 (2011) 3-11.
[32] Zhou, B. and Song, Q., Boundedness and complete stability of complex-valued neural networks with time delay, IEEE Transactions on Neural Networks and Learning Systems, 24 (2013) 1227-1238.