Neurocomputing 367 (2019) 198–206
Stability analysis for uncertain switched delayed complex-valued neural networks

Nallappan Gunasekaran, Guisheng Zhai∗
Department of Mathematical Sciences, Shibaura Institute of Technology, Saitama 337-8570, Japan

Article history: Received 22 March 2019; Revised 24 June 2019; Accepted 9 August 2019; Available online 13 August 2019. Communicated by Prof. D. Wang.

Abstract: The main concern of this paper is the stability of switched delayed complex-valued neural networks with uncertainties. Based on a suitable Lyapunov–Krasovskii functional (LKF) and a proposed lemma, delay-dependent sufficient conditions are derived that guarantee the asymptotic stability of the considered uncertain switched complex-valued neural networks. The derived sufficient conditions, expressed in terms of linear matrix inequalities (LMIs), are solved with the help of the YALMIP toolbox in MATLAB. Two numerical examples are provided to demonstrate the effectiveness of the theoretical conditions. © 2019 Elsevier B.V. All rights reserved.

Keywords: Complex-valued neural networks; Lyapunov method; Integral inequality; Linear matrix inequality; Stability

1. Introduction

Since the 1980s, neural networks (NNs) have received considerable attention among researchers due to their potential applications in various fields, including image processing, pattern recognition, associative memories, and the solution of certain optimization problems. In general, each neuron in an NN is called a node, and each node receives its inputs from other nodes or from outside sources. The received information is then processed to produce the output. Technically, each input carries a weight, assigned on the basis of its relative significance compared with the other inputs, and the node applies an activation function to the weighted sum of its inputs. An artificial neuron operates along these lines, as shown in Fig. 1: the inputs to the neuron are marked with green arrows, and the neuron transmits the output signal after some computation. The input layer plays the role of the dendrites of the neuron, and the output signal corresponds to the axon. Each input signal is associated with a weight; the inputs are multiplied by their weights and the neuron accumulates the weighted sum of all input variables. These weights are computed in the training phase of the NN through ideas called gradient descent and back propagation. An activation function is then applied to the weighted sum, which yields the output signal of the neuron. The input signals are generated by other neurons, i.e., the outputs of other

Corresponding author. E-mail address: [email protected] (G. Zhai).

https://doi.org/10.1016/j.neucom.2019.08.030 0925-2312/© 2019 Elsevier B.V. All rights reserved.

neurons, and in this way the network is composed to make predictions and computations. Researchers have even used NNs in power generation; for example, the hardware implementation of artificial NNs is described by the following differential equations [1]:

Ci dxi(t)/dt = −(1/Ri) xi(t) + Σ_{j=1}^{n} w¹ij fj(xj(t)) + si(t),  i = 1, 2, ..., n.

An analog RC (resistance–capacitance) network circuit, as shown in Fig. 2, can be formulated as a nonlinear system. In this regard, Marcus and Westervelt [2] introduced Hopfield NNs with time delay as given below:

dxi(t)/dt = −ci xi(t) + Σ_{j=1}^{n} w¹ij fj(xj(t)) + Σ_{j=1}^{n} w²ij fj(xj(t − τ)) + Ji,  i = 1, 2, ..., n.   (1)

Different types of investigations relating to the stability of the delayed NNs (1) have been carried out in the literature based on Lyapunov stability theory. To date, Hopfield NNs and cellular NNs with delays have proved their significance in applications. Recently, the existence and uniqueness of equilibria of various types of delayed NNs have been investigated in [3–9]. It is envisaged that investigations of NNs include not only the analysis of equilibria but also other dynamical characteristics such as bifurcation, chaos and periodic oscillatory behavior. Notice that numerous results in the literature have mainly focused on


Fig. 1. Mathematical model of connections between neurons.

Fig. 2. Circuit for neuron i in the analog implementation of Hopfield NNs.
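The delayed Hopfield model (1) can be explored with a simple forward-Euler integration. This is a minimal sketch: the activation f = tanh, the decay rates c, the weights W1, W2, the delay and the initial history below are illustrative choices, not values from the paper.

```python
import numpy as np

# Forward-Euler simulation of the delayed Hopfield model (1).
# All numerical parameters here are made-up example values.
def simulate_delayed_hopfield(c, W1, W2, J, tau, f=np.tanh, dt=0.01, T=10.0):
    n = len(c)
    d = max(int(round(tau / dt)), 1)   # delay expressed in integration steps
    steps = int(round(T / dt))
    x = np.zeros((steps + d + 1, n))
    x[: d + 1] = 1.0                   # constant initial history on [-tau, 0]
    for k in range(d, d + steps):
        dx = -c * x[k] + W1 @ f(x[k]) + W2 @ f(x[k - d]) + J
        x[k + 1] = x[k] + dt * dx
    return x[d:]

c = np.array([1.0, 1.2])
W1 = np.array([[0.1, -0.2], [0.05, 0.1]])
W2 = np.array([[0.05, 0.0], [0.0, 0.05]])
traj = simulate_delayed_hopfield(c, W1, W2, np.zeros(2), tau=0.5)
print(np.linalg.norm(traj[-1]))   # small: the state decays for these weights
```

For these weak couplings the linear decay −c·x dominates, so the trajectory contracts toward the origin; larger delayed weights can produce oscillations instead.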

real-valued NNs rather than complex-valued NNs. Hence, a main objective of this paper is to investigate complex-valued NNs. Complex-valued neural networks (CVNNs), whose states, connection weights, outputs, and activation functions are complex-valued, have prominent advantages in handling complicated real-life problems, and they have attracted much attention from researchers because of their broad applications in filtering, machine learning, phase-shift-keying modulation, and multi-state associative memory (see [10–16] and the references therein). Compared with real-valued neural networks, CVNNs show significant advantages in complex signal processing: complex-valued signals conveniently describe phase advance and retardation, which facilitates information processing and can reduce its complexity. As is well known, the practical applications of a delayed neural network depend heavily on its dynamical behavior, and it is therefore fundamental and important to examine the dynamics of CVNNs [17,18]. A basis for ensuring that an NN is asymptotically stable is provided by Lyapunov functionals, and several stability criteria have been proposed using complex-valued Lyapunov functional techniques in [19–21]. Switching theory plays a prominent role in control theory [22–28]; a switching signal selects, at each instant, which member of a family of subsystems is active. Moreover, the models in [29–31] can be viewed as special instances of the switched NNs studied in this work, since switched structures are nonlinear systems built from a collection of continuous-time subsystems together with a rule that orchestrates the switching between them.

Moreover, in the hardware implementation of neural networks on very large scale integrated circuits, the system parameters are likely subject to certain variations as a result of the tolerances of the electronic components used. These are the main reasons why parameter uncertainties are encountered in neural systems. Consequently, it is of practical interest to consider


parameter uncertainties in the modeling of neural systems. When both parameter uncertainties and time delays appear in neural networks, the stability problem becomes more complex and challenging than those addressed in [32–34]. For CVNNs, the impact of parameter uncertainties has been considered in [19–21]; the asymptotic stability of delayed CVNNs with parameter uncertainties was examined in [21], where sufficient stability conditions were given in terms of the norms of the system parameters. Unfortunately, most of the aforementioned works focus on CVNNs with parameter uncertainties alone, and there is no pertinent research concerning the stability analysis of delayed switched CVNNs with parameter uncertainties. Some traditional analysis methods cannot simply be extended and applied to switched CVNNs; to be more explicit, existing criteria are often expressed in terms of the absolute values of the real and imaginary parts of the synaptic weights, which leads to a large computational burden and considerable conservatism. As a matter of fact, neural networks are typically implemented by very large scale integration or digital circuits.

Motivated by the above observations, this paper presents stability criteria for switched CVNNs based on a switching-signal approach, in which the CVNNs are expressed as subsystems. Stability conditions are derived for the switched subsystems that ensure the asymptotic stability of the overall CVNNs. By constructing a suitable LKF and using a standard integral inequality, delay-dependent stability criteria are obtained for the considered CVNNs. The obtained results are formulated as solvable LMIs that can easily be handled by standard software packages. Two numerical examples validate the derived sufficient conditions and the advantages of the theoretical results.

Notations. Cⁿ and C^{m×n} denote the set of n-dimensional complex vectors and the set of m × n complex matrices, respectively. The superscripts "T" and "∗" denote matrix transposition and complex conjugate transposition, respectively; "i" denotes the imaginary unit, that is, i = √−1. For any matrix P, P > 0 (P < 0) means that P is a positive (negative) definite matrix. For a complex number z = x + iy, the notation |z| = √(x² + y²) stands for the modulus of z, so that |z| = √(z∗z). If A ∈ C^{n×n}, ‖A‖ denotes its operator norm, i.e., ‖A‖ = sup{‖Ax‖ : ‖x‖ = 1} = √(λmax(A∗A)). The notation "★" always denotes the conjugate transpose of a block in a Hermitian matrix.

2. Preliminaries

In this paper, we consider the following uncertain switched complex-valued delayed neural network:

ẏ(t) = −C̄r(t) y(t) + Ār(t) g(y(t)) + B̄r(t) g(y(t − τ(t))).   (2)

Here y(·) = (y1(·), y2(·), ..., yn(·))ᵀ ∈ Cⁿ denotes the state vector; g(y(·)) = (g1(y1(·)), g2(y2(·)), ..., gn(yn(·)))ᵀ ∈ Cⁿ denotes the complex-valued neuron activation function; C̄r(t) = Cr(t) + ΔCr(t), Ār(t) = Ar(t) + ΔAr(t), B̄r(t) = Br(t) + ΔBr(t); Cr(t) is a positive diagonal matrix; Ar(t) and Br(t) represent the connection weight matrix and the delayed connection weight matrix, respectively; ΔCr(t), ΔAr(t) and ΔBr(t) are unknown matrices representing time-varying parametric uncertainties in the system model; and r(t): [0, +∞) → N = {1, 2, ..., m} is the switching signal, usually modeled as a piecewise constant function. τ(t) is the time-varying delay, which satisfies

0 ≤ τ(t) ≤ τ,   τ̇(t) ≤ μ,   (3)

where τ and μ are known positive constants. The initial condition for system (2) is

y(s) = φ(s),  ∀ s ∈ [−τ, 0],   (4)

where φ(s) ∈ Cⁿ is continuous on [−τ, 0]. In this paper we write [0, +∞) = T1 ∪ T2 ∪ ··· ∪ Tm, where Tk represents the set of running times of the kth subsystem; that is, when t ∈ Tk the kth subsystem is activated. An indicator function φk(t) is defined by

φk(t) = 1 if t ∈ Tk, and φk(t) = 0 otherwise,

so that Σ_{k=1}^{m} φk(t) = 1, with k ∈ N. Thus, the model (2) can be described by

ẏ(t) = Σ_{k=1}^{m} φk(t) [ −C̄k y(t) + Āk g(y(t)) + B̄k g(y(t − τ(t))) ].   (5)

The prototypical architecture of the switched CVNNs is shown in Fig. 3.

Fig. 3. Architecture of the switched CVNNs.

Assumption 1. For any i ∈ {1, 2, ..., n}, gi(·) is continuous and bounded, and there exists a positive diagonal matrix L = diag{l1, l2, ..., ln} such that

|gi(κ1) − gi(κ2)| ≤ li |κ1 − κ2|,  ∀ κ1, κ2 ∈ C,   (6)

where the li are positive constants.

Assumption 2 [35]. The parametric uncertainties ΔCk, ΔAk and ΔBk are time-varying and unknown, but norm bounded, and have the following form:

[ΔCk  ΔAk  ΔBk] = E F(t) [HkC  HkA  HkB],   (7)

in which E, HkC, HkA and HkB are known matrices with appropriate dimensions. The uncertain matrix F(t) satisfies

Fᵀ(t) F(t) ≤ I.   (8)

Remark 1. Parameter uncertainties of the form (7) and (8) have been widely used in the robust stability analysis of NNs; for more details, see [33,36,37] and the references therein. Many practical systems possess parameter uncertainties that can be either exactly modeled or over-bounded by (8). Observe that the unknown matrix in (7) is allowed to be time-dependent, i.e., F(t), as long as (8) is satisfied. In fact, it is almost impossible to obtain an exact mathematical model of a dynamical system, owing to modeling errors, measurement errors, process uncertainties and so on, which motivates the present study to formulate the system (2) with the uncertainties ΔCk, ΔAk and ΔBk. There is therefore considerable interest in developing effective techniques and applicable results for uncertain switched CVNNs. Besides, the time delay induced by the structure of the NNs needs to be considered as well, under the assumption that the time-varying delay is not restricted to be zero, which is more reasonable in practical situations.

The following lemmas are useful in the derivation of the main results.

Lemma 1 [38]. For any constant Hermitian matrix M ∈ C^{n×n} with M = M∗ > 0 and a function y: [a, b] → Cⁿ with scalars a < b such that the integrations concerned are well defined,

(b − a) ∫_a^b y∗(s) M y(s) ds ≥ ( ∫_a^b y(s) ds )∗ M ( ∫_a^b y(s) ds ).

Lemma 2. For any Hermitian matrix W = [ W1  W2 ; ★  W3 ] > 0 with W1, W2, W3 ∈ C^{n×n} and a function y: [0, τ] → Cⁿ such that the integrations concerned are well defined,

−τ(t) ∫_{t−τ(t)}^{t} ξ∗(s) W ξ(s) ds ≤ Λ∗(t) Ω Λ(t),   (9)

where

ξ(s) = [ y∗(s)  ẏ∗(s) ]∗,
Λ(t) = [ y∗(t)  y∗(t − τ(t))  ∫_{t−τ(t)}^{t} y∗(s) ds ]∗,
Ω = [ −W3  W3  −W2∗ ; ★  −W3  W2∗ ; ★  ★  −W1 ].

Proof. By using Lemma 1, we have

−τ(t) ∫_{t−τ(t)}^{t} ξ∗(s) W ξ(s) ds ≤ −( ∫_{t−τ(t)}^{t} ξ(s) ds )∗ W ( ∫_{t−τ(t)}^{t} ξ(s) ds )
  = −[ ∫_{t−τ(t)}^{t} y(s) ds ; y(t) − y(t − τ(t)) ]∗ [ W1  W2 ; ★  W3 ] [ ∫_{t−τ(t)}^{t} y(s) ds ; y(t) − y(t − τ(t)) ]
  = Λ∗(t) Ω Λ(t),

which completes the proof. □

Lemma 3 [35]. For given matrices W = W∗, U and V with appropriate dimensions,

W + U F(t) V + (U F(t) V)∗ < 0

for all F(t) satisfying F∗(t) F(t) ≤ I, if and only if there exists a scalar ε > 0 such that

W + ε⁻¹ U U∗ + ε V∗ V < 0.

3. Main results

In this section, we give the design method for the switched CVNNs without uncertainty. Obviously, when ΔCk = ΔAk = ΔBk = 0, the switched delayed CVNNs (5) reduce to

ẏ(t) = Σ_{k=1}^{m} φk(t) [ −Ck y(t) + Ak g(y(t)) + Bk g(y(t − τ(t))) ].   (10)
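Before turning to the main results, the Jensen-type integral inequality of Lemma 1 can be sanity-checked on a discretized complex-valued function. The grid and test data below are arbitrary; on a uniform grid the discretized inequality holds exactly, by the Cauchy–Schwarz inequality in the M-weighted inner product.

```python
import numpy as np

# Discretized check of Lemma 1:
#   (b - a) * int y*(s) M y(s) ds  >=  (int y ds)* M (int y ds)
rng = np.random.default_rng(0)
n, N = 3, 400
a, b = 0.0, 2.0
h = (b - a) / N

X = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
M = X.conj().T @ X + n * np.eye(n)           # Hermitian, M > 0

y = rng.standard_normal((n, N)) + 1j * rng.standard_normal((n, N))  # samples of y(s)

# left side: (b - a) * sum_k y_k* M y_k * h
lhs = ((b - a) * h * np.einsum('ik,ij,jk->', y.conj(), M, y)).real
# right side: v* M v with v the Riemann sum of y(s) ds
v = h * y.sum(axis=1)
rhs = (v.conj() @ M @ v).real
assert lhs >= rhs
print(lhs >= rhs)   # prints True
```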

Theorem 1. Under Assumption 1, for given scalars τ > 0 and μ > 0, the system (10) is asymptotically stable if there exist a positive Hermitian matrix P ∈ C^{n×n}, matrices Q1, Q2, Q3, M1, M2, M3 ∈ C^{n×n}, and diagonal matrices Γi > 0 (i = 1, 2),

such that the following LMIs hold for every k ∈ N:

Σk = [ (1,1)  M3     (1,3)    (1,4)   −M2∗   −τCk∗M3 ;
       ★      (2,2)  0        (2,4)   M2∗    0 ;
       ★      ★      Q3 − Γ1  0       0      τAk∗M3 ;
       ★      ★      ★        (4,4)   0      τBk∗M3 ;
       ★      ★      ★        ★       −M1    0 ;
       ★      ★      ★        ★       ★      −M3 ] < 0,   (11)

Q = [ Q1  Q2 ; ★  Q3 ] > 0,   M = [ M1  M2 ; ★  M3 ] > 0,   (12)

where

(1,1) = −PCk − (PCk)∗ + Q1 + τ²M1 − τ²M2Ck − M3 + L∗Γ1L,
(1,3) = PAk + Q2 + τ²M2Ak,
(1,4) = PBk + τ²M2Bk,
(2,2) = −(1 − μ)Q1 − M3 + L∗Γ2L,
(2,4) = −(1 − μ)Q2,
(4,4) = −(1 − μ)Q3 − Γ2.
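The last row and column of (11) act as a Schur-complement slack for the τ²M3 term. The underlying equivalence — a Hermitian block matrix [[A, B], [B∗, C]] is negative definite iff C < 0 and A − B C⁻¹ B∗ < 0 — can be checked numerically; all matrices below are arbitrary random test data.

```python
import numpy as np

# Schur-complement equivalence on random Hermitian blocks.
rng = np.random.default_rng(1)
n = 4

def herm(X):
    return (X + X.conj().T) / 2

A = herm(rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))) - 6 * np.eye(n)
C = herm(rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))) - 6 * np.eye(n)
B = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))

block = np.block([[A, B], [B.conj().T, C]])
block_neg = np.linalg.eigvalsh(block).max() < 0
schur_neg = (np.linalg.eigvalsh(C).max() < 0 and
             np.linalg.eigvalsh(A - B @ np.linalg.solve(C, B.conj().T)).max() < 0)
assert block_neg == schur_neg   # the two tests always agree
print(block_neg, schur_neg)
```

This is the same algebra that lets Theorem 1 trade the quadratic τ²M3 term for one extra row/column in the LMI.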

Proof. Construct the following LKF:

V(t) = Σ_{i=1}^{3} Vi(t),   (13)

where, with η1(s) = [ y∗(s)  g∗(y(s)) ]∗ and η2(s) = [ y∗(s)  ẏ∗(s) ]∗,

V1(t) = y∗(t) P y(t),
V2(t) = ∫_{t−τ(t)}^{t} η1∗(s) Q η1(s) ds,
V3(t) = τ ∫_{−τ}^{0} ∫_{t+θ}^{t} η2∗(s) M η2(s) ds dθ.

Taking the derivative of V(t) along the solutions of (10), we have

V̇1(t) = 2y∗(t) P ẏ(t) = 2 Σ_{k=1}^{m} φk(t) y∗(t) P [ −Ck y(t) + Ak g(y(t)) + Bk g(y(t − τ(t))) ],   (14)

V̇2(t) = η1∗(t) Q η1(t) − (1 − τ̇(t)) η1∗(t − τ(t)) Q η1(t − τ(t))
      ≤ η1∗(t) Q η1(t) − (1 − μ) η1∗(t − τ(t)) Q η1(t − τ(t)),   (15)

V̇3(t) = τ² η2∗(t) M η2(t) − τ ∫_{t−τ}^{t} η2∗(s) M η2(s) ds
      ≤ τ² η2∗(t) M η2(t) − τ(t) ∫_{t−τ(t)}^{t} η2∗(s) M η2(s) ds.   (16)

Applying Lemma 2 to estimate the integral term in V̇3(t) yields

−τ(t) ∫_{t−τ(t)}^{t} η2∗(s) M η2(s) ds ≤ Λ∗(t) [ −M3  M3  −M2∗ ; ★  −M3  M2∗ ; ★  ★  −M1 ] Λ(t).   (17)

In addition, for positive diagonal matrices Γ1 and Γ2, from Assumption 1 we have

0 ≤ y∗(t) [L∗Γ1L] y(t) − g∗(y(t)) Γ1 g(y(t)),   (18)
0 ≤ y∗(t − τ(t)) [L∗Γ2L] y(t − τ(t)) − g∗(y(t − τ(t))) Γ2 g(y(t − τ(t))).   (19)

Combining (14)–(19), we have

V̇(t) ≤ Σ_{k=1}^{m} φk(t) { ζ∗(t) Π̂k ζ(t) },   (20)

where

ζ(t) = [ y∗(t)  y∗(t − τ(t))  g∗(y(t))  g∗(y(t − τ(t)))  ∫_{t−τ(t)}^{t} y∗(s) ds ]∗,

Π̂k = [ (1,1)  M3     (1,3)  (1,4)       −M2∗ ;
       ★      (2,2)  0      −(1 − μ)Q2  M2∗ ;
       ★      ★      (3,3)  (3,4)       0 ;
       ★      ★      ★      (4,4)       0 ;
       ★      ★      ★      ★           −M1 ],

with

(1,1) = −PCk − (PCk)∗ + Q1 + τ²M1 − τ²M2Ck + τ²Ck∗M3Ck − M3 + L∗Γ1L,
(1,3) = PAk + Q2 + τ²M2Ak − τ²Ck∗M3Ak,
(1,4) = PBk + τ²M2Bk − τ²Ck∗M3Bk,
(2,2) = −(1 − μ)Q1 − M3 + L∗Γ2L,
(3,3) = Q3 + τ²Ak∗M3Ak − Γ1,
(3,4) = τ²Ak∗M3Bk,
(4,4) = −(1 − μ)Q3 + τ²Bk∗M3Bk − Γ2.

If Π̂k < 0, then V̇(t) < 0 for any ζ(t) ≠ 0. Applying the Schur complement shows that (11) implies Π̂k < 0. From the Lyapunov–Krasovskii stability theorem, the switched CVNNs (10) is asymptotically stable if the LMIs (11) are true. This completes the proof. □

Using Assumption 2, we now extend Theorem 1 to the uncertain case and obtain stability criteria for the switched CVNNs (5) with time-varying delay.

Theorem 2. Under Assumptions 1 and 2, for given scalars τ > 0 and μ > 0, the system (5) is asymptotically stable if there exist a positive Hermitian matrix P ∈ C^{n×n}, matrices Q1, Q2, Q3, M1, M2, M3 ∈ C^{n×n}, diagonal matrices Γi > 0 (i = 1, 2), and a scalar ε > 0 such that the following LMIs hold for every k ∈ N:

[ Σk + ε Zk Zk∗   K ; ★   −εI ] < 0,   (21)

Q = [ Q1  Q2 ; ★  Q3 ] > 0,   M = [ M1  M2 ; ★  M3 ] > 0,   (22)

where

Zk = [ −(HkC)∗  0  (HkA)∗  (HkB)∗  0  0 ]∗,
K = [ (PE + τ²M2E)∗  0  0  0  0  τ(M3E)∗ ]∗.
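The bound behind Lemma 3 can be illustrated numerically: whenever W + ε⁻¹UU∗ + εV∗V < 0 for some ε > 0, the perturbed matrix W + UFV + (UFV)∗ stays negative definite for every contraction F. All matrices below are arbitrary test data; W is constructed so that the ε-inequality holds by design.

```python
import numpy as np

# Random-sample illustration of Lemma 3 (Xie, 1996).
rng = np.random.default_rng(2)
n, p, q = 4, 2, 3
eps = 0.7

U = rng.standard_normal((n, p)) + 1j * rng.standard_normal((n, p))
V = rng.standard_normal((q, n)) + 1j * rng.standard_normal((q, n))
bound = U @ U.conj().T / eps + eps * V.conj().T @ V
W = -bound - 2 * np.eye(n)                 # Hermitian; W + bound = -2I < 0

assert np.linalg.eigvalsh(W + bound).max() < 0
for _ in range(50):
    F = rng.standard_normal((p, q)) + 1j * rng.standard_normal((p, q))
    F /= max(np.linalg.norm(F, 2), 1.0)    # enforce F* F <= I
    pert = U @ F @ V
    assert np.linalg.eigvalsh(W + pert + pert.conj().T).max() < 0
print("Lemma 3 bound verified on 50 random contractions")
```

The completion-of-squares argument guarantees UFV + (UFV)∗ ⪯ ε⁻¹UU∗ + εV∗V for every contraction F, so each sampled perturbation is dominated by the ε-term.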

Proof. Replacing Ck, Ak and Bk in (11) with Ck + EF(t)HkC, Ak + EF(t)HkA and Bk + EF(t)HkB, we obtain

Σk + K F(t) Zk∗ + ( K F(t) Zk∗ )∗ < 0.   (23)

It follows from Lemma 3 that (23) holds for all admissible F(t) if and only if there exists ε > 0 such that

Σk + ε⁻¹ K K∗ + ε Zk Zk∗ < 0.   (24)

By using the Schur complement in (24), we obtain (21). This completes the proof. □

Remark 2. In the proof of Theorem 1, we use a new integral inequality, dependent on Jensen's inequality, to estimate the derivative of the single integral term in V̇3(t); this gives additional freedom and reduces the computational complexity. Better stability criteria may be obtained if future research extends the approach to LKFs with more general structures. It is also expected that the methodology proposed in this paper can be extended to fractional-order CVNNs.

Remark 3. Theorem 2 gives an LMI-based sufficient condition for the stability of the CVNNs. The new integral inequality gives freedom in deriving the stability criterion, but introduces some additional variables; the tradeoff between conservativeness and complexity can be considered in further research. In addition, the subsystems of the switched delayed neural network (5) are all stable, whereas switched systems with unstable subsystems are certainly encountered in real plants; switched systems with both stable and unstable subsystems can therefore be studied in the future. As far as the LMI approach is concerned, Theorem 1 establishes stability for switched NNs with bounded delay 0 ≤ τ(t) ≤ τ. In [39], the time-varying delay τ(t) was replaced by τ when estimating the upper bound, so the estimate of τ(t) in this paper is more precise. Additionally, in this paper the activation functions are not assumed to be both bounded and analytic: according to Liouville's theorem, if g(y(t)) is analytic and bounded for all y ∈ C, then g(y(t)) is a constant. Clearly, a bounded analytic function is a trivial case and is normally not appropriate as an activation function. Moreover, Theorem 1 in [39] assumes that the neuron activation functions are bounded via a Lipschitz condition, so the activation functions there are not analytic; this leads to restrictions in choosing activation functions. Compared with the existing work in [39], we thus investigate a broader class of CVNNs.

4. Simulation examples

In this section, we give numerical examples to illustrate the feasibility and capability of the proposed design techniques. Simulations were performed using the YALMIP toolbox [40] in MATLAB 2018a.

Example 4.1. Consider the following switched CVNNs:

ẏ(t) = Σ_{k=1}^{2} φk(t) [ −Ck y(t) + Ak g(y(t)) + Bk g(y(t − τ(t))) ],   (25)

where

C1 = [ 1.5  0 ; 0  1.5 ],  A1 = [ 0.6+0.2i  1.1+2i ; 0.9−0.7i  1.2+0.8i ],  B1 = [ 0.1−0.3i  0.1−0.5i ; 0.2+0.2i  0.3−0.7i ],
C2 = [ 2  0 ; 0  2 ],  A2 = [ 0.3+0.4i  0.2+2i ; 0.9−0.7i  1.2+0.8i ],  B2 = [ 0.1−0.2i  0.2−0.2i ; 0.2+0.2i  0.3−0.1i ].

Let L = diag{1/2, 1/2}, τ(t) = 0.4 sin(t) and μ = 0.3. Then the LMIs in (11) have the following feasible solution:

P = [ 70.5002  −16.0007−6.7322i ; −16.0007+6.7322i  91.5283 ],

Q = [ 0.6460  −0.3451−0.1234i  0.0005  −0.0899−0.2146i ;
      −0.3451+0.1234i  0.7421  −0.0899+0.2146i  −0.0393 ;
      0.0005  −0.0899−0.2146i  1.0700  −0.1860−0.4062i ;
      −0.0899+0.2146i  −0.0393  −0.1860+0.4062i  1.1574 ],

M = [ 90.0617  −14.2266−6.6228i  −8.0943  −7.4255−3.2566i ;
      −14.2266+6.6228i  88.9396  −7.4255+3.2566i  −10.1463 ;
      −8.0943  −7.4255−3.2566i  52.9899  −0.0943−0.8167i ;
      −7.4255+3.2566i  −10.1463  −0.0943+0.8167i  67.0008 ].
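As a consistency check, the feasible solution reported above should consist of Hermitian positive definite matrices. This can be verified directly from the printed (rounded) values:

```python
import numpy as np

# P, Q, M copied from the feasible solution of Example 4.1 (rounded values).
P = np.array([[70.5002, -16.0007 - 6.7322j],
              [-16.0007 + 6.7322j, 91.5283]])
Q = np.array([
    [0.6460, -0.3451 - 0.1234j, 0.0005, -0.0899 - 0.2146j],
    [-0.3451 + 0.1234j, 0.7421, -0.0899 + 0.2146j, -0.0393],
    [0.0005, -0.0899 - 0.2146j, 1.0700, -0.1860 - 0.4062j],
    [-0.0899 + 0.2146j, -0.0393, -0.1860 + 0.4062j, 1.1574]])
M = np.array([
    [90.0617, -14.2266 - 6.6228j, -8.0943, -7.4255 - 3.2566j],
    [-14.2266 + 6.6228j, 88.9396, -7.4255 + 3.2566j, -10.1463],
    [-8.0943, -7.4255 - 3.2566j, 52.9899, -0.0943 - 0.8167j],
    [-7.4255 + 3.2566j, -10.1463, -0.0943 + 0.8167j, 67.0008]])

for X in (P, Q, M):
    assert np.allclose(X, X.conj().T)          # Hermitian
    assert np.linalg.eigvalsh(X).min() > 0     # positive definite
print("P, Q, M are Hermitian positive definite")
```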

The corresponding state responses of the considered switched CVNNs are shown in Figs. 4 and 5, with different initial values. One of the possible realizations of the switching signals is plotted in Fig. 6.
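A trajectory like those in Figs. 4 and 5 can be reproduced with a crude forward-Euler scheme. The paper does not spell out the activation function, the switching rule, or the initial history, so the choices below are assumptions made for illustration: g(z) = 0.5(tanh(Re z) + i tanh(Im z)), which satisfies Assumption 1 with L = diag{1/2, 1/2}; a sign-based switching rule; the delay taken as |0.4 sin t|.

```python
import numpy as np

# Forward-Euler simulation of the switched CVNN (25) in Example 4.1.
C = [np.diag([1.5, 1.5]), np.diag([2.0, 2.0])]
A = [np.array([[0.6 + 0.2j, 1.1 + 2j], [0.9 - 0.7j, 1.2 + 0.8j]]),
     np.array([[0.3 + 0.4j, 0.2 + 2j], [0.9 - 0.7j, 1.2 + 0.8j]])]
B = [np.array([[0.1 - 0.3j, 0.1 - 0.5j], [0.2 + 0.2j, 0.3 - 0.7j]]),
     np.array([[0.1 - 0.2j, 0.2 - 0.2j], [0.2 + 0.2j, 0.3 - 0.1j]])]

def g(z):                                     # assumed activation, Lipschitz 1/2
    return 0.5 * (np.tanh(z.real) + 1j * np.tanh(z.imag))

dt, T = 0.001, 10.0
steps = int(T / dt)
dmax = int(0.4 / dt)                          # delay never exceeds 0.4
y = np.zeros((steps + dmax + 1, 2), dtype=complex)
y[: dmax + 1] = np.array([2 - 1j, -1 + 2j])   # assumed constant initial history

for s in range(dmax, dmax + steps):
    t = (s - dmax) * dt
    k = 0 if np.sin(t) >= 0 else 1            # assumed switching rule
    d = max(int(abs(0.4 * np.sin(t)) / dt), 1)
    dy = -C[k] @ y[s] + A[k] @ g(y[s]) + B[k] @ g(y[s - d])
    y[s + 1] = y[s] + dt * dy

traj = y[dmax:]
print(np.abs(traj[-1]))                       # state magnitudes at t = 10
```

Because g is bounded and each Ck is positive diagonal, the trajectory remains bounded regardless of the switching realization.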

Fig. 4. Real part of state variables with different initial values in Example 4.1.

Fig. 5. Imaginary part of state variables with different initial values in Example 4.1.

Fig. 6. Switching signals in Example 4.1.

The indicator functions used in Example 4.1 are

φ1(t) = 1 for t ∈ [0, 1.3] ∪ [1.3, 2.5] ∪ [2.5, 3.7] ∪ [3.7, 9] ∪ [9, 10], and φ1(t) = 0 otherwise,
φ2(t) = 0 for t ∈ [0, 1.3] ∪ [1.3, 2.5] ∪ [2.5, 3.7] ∪ [3.7, 9] ∪ [9, 10], and φ2(t) = 1 otherwise.

Example 4.2. Consider the following uncertain switched CVNNs:

ẏ(t) = Σ_{k=1}^{2} φk(t) [ −C̄k y(t) + Āk g(y(t)) + B̄k g(y(t − τ(t))) ],   (26)

where

C1 = [ 1.5  0 ; 0  1.5 ],  A1 = [ 0.8+0.2i  1+2i ; 0.9−0.7i  1.2+0.8i ],  B1 = [ 0.1−0.3i  0.1−0.5i ; 0.2+0.2i  0.3−0.7i ],
C2 = [ 2  0 ; 0  2 ],  A2 = [ 0.5+0.5i  0.2+2i ; 0.6−0.5i  0.2+0.6i ],  B2 = [ 0.7−0.8i  0.6+0.3i ; 0.5+0.2i  0.5+0.1i ],
H1C = [ 0.3  0 ; 0  0.4 ],  H1A = [ 0.1−0.7i  0 ; 0  0.1−0.2i ],  H1B = [ 0.7  0 ; 0  0.6−0.2i ],
H2C = [ 0.3+0.2i  0 ; 0  0.2+0.8i ],  H2A = [ 0  0.8+1i ; −0.8  0 ],  H2B = [ 0.2−0.2i  0 ; 0  0.2−0.3i ].

Let E = diag{0.8, −0.6}, F(t) = diag{sin(t), −cos(t)}, L = diag{1/5, 1/5}, τ(t) = 0.3 sin(t) + 1, and μ = 0.3. Then the LMIs in (21) have the following feasible solution:

P = [ 0.0492  −0.0006+0.0001i ; −0.0006−0.0001i  0.0916 ],   ε = 0.3571,

Q = [ 0.0408  −0.0096+0.0008i  −0.0375  0.0070−0.0284i ;
      −0.0096−0.0008i  0.1468  0.0070+0.0284i  0.0703 ;
      −0.0375  0.0070−0.0284i  0.3215  −0.0765−0.0142i ;
      0.0070+0.0284i  0.0703  −0.0765+0.0142i  0.4232 ],

M = [ 0.0111  −0.0042+0.0034i  −0.0011  0.0000−0.0006i ;
      −0.0042−0.0034i  0.1130  0.0000+0.0006i  −0.0100 ;
      −0.0011  0.0000−0.0006i  0.0002  0.0000+0.0001i ;
      0.0000+0.0006i  −0.0100  0.0000−0.0001i  0.0015 ].



Fig. 7. Real part of state variables with different initial values in Example 4.2.

Fig. 8. Imaginary part of state variables with different initial values in Example 4.2.

The corresponding state responses of the considered switched CVNNs are shown in Figs. 7 and 8, with different initial values. One of the possible realizations of the switching signals is plotted in Fig. 9, where

φ1(t) = 1 for t ∈ [0, 1.3] ∪ [1.3, 2.5] ∪ [2.5, 3.7] ∪ [3.7, 5], and φ1(t) = 0 otherwise,
φ2(t) = 0 for t ∈ [0, 1.3] ∪ [1.3, 2.5] ∪ [2.5, 3.7] ∪ [3.7, 5], and φ2(t) = 1 otherwise.

Fig. 9. Switching signals in Example 4.2.

5. Conclusion

In this paper, the stability of uncertain switched delayed complex-valued neural networks has been examined. By constructing a suitable Lyapunov–Krasovskii functional and employing an integral-inequality technique, delay-dependent sufficient conditions guaranteeing the stability of the considered CVNNs have been derived; these conditions can be checked numerically with the YALMIP toolbox in MATLAB. Finally, the LMI-based stability criteria have been illustrated by means of two numerical examples that demonstrate the feasibility and effectiveness of the proposed results.

Declaration of Competing Interest

The authors declare that they have no conflict of interest.

References

[1] J.J. Hopfield, Neurons with graded response have collective computational properties like those of two-state neurons, Proc. Natl. Acad. Sci. 81 (10) (1984) 3088–3092. [2] C. Marcus, R. Westervelt, Stability of analog neural networks with delay, Phys. Rev. A 39 (1) (1989) 347–359. [3] L. Ding, Y. He, Y. Liao, M. Wu, New result for generalized neural networks with additive time-varying delays using free-matrix-based integral inequality method, Neurocomputing 238 (2017) 205–211. [4] R. Saravanakumar, G. Rajchakit, C.K. Ahn, H.R. Karimi, Exponential stability, passivity, and dissipativity analysis of generalized neural networks with mixed time-varying delays, IEEE Trans. Syst. Man Cybern.: Syst. 49 (2) (2019) 395–405. [5] S. Lakshmanan, C.P. Lim, M. Prakash, S. Nahavandi, P. Balasubramaniam, Neutral-type of delayed inertial neural networks and their stability analysis using the LMI approach, Neurocomputing 230 (2017) 243–250. [6] J. Cao, K. Yuan, H.X. Li, Global asymptotical stability of recurrent neural networks with multiple discrete delays and distributed delays, IEEE Trans. Neural Netw. 17 (6) (2006) 1646–1651. [7] Y. Liu, J.H. Park, F. Fang, Global exponential stability of delayed neural networks based on a new integral inequality, IEEE Trans. Syst. Man Cybern.: Syst. (2018) 1–8, doi:10.1109/TSMC.2018.2815560. [8] S. Lakshmanan, M. Prakash, C.P. Lim, R. Rakkiyappan, P. Balasubramaniam, S. Nahavandi, Synchronization of an inertial neural network with time-varying delays and its application to secure communication, IEEE Trans. Neural Netw. Learn. Syst. 29 (1) (2016) 195–207. [9] M. Prakash, P. Balasubramaniam, S. Lakshmanan, Synchronization of Markovian jumping inertial neural networks and its applications in image encryption, Neural Netw. 83 (2016) 86–93. [10] A. Hirose, Complex-Valued Neural Networks, 400, Springer Science & Business Media, 2012. [11] X. Wang, M. Che, Y. 
Wei, Complex-valued neural networks for the Takagi vector of complex symmetric matrices, Neurocomputing 223 (2017) 77–85. [12] S. Yang, J. Yu, C. Hu, H. Jiang, Quasi-projective synchronization of fractional-order complex-valued recurrent neural networks, Neural Netw. 104 (2018) 104–113. [13] A. Hirose, Complex-Valued Neural Networks: Advances and Applications, 18, John Wiley & Sons, 2013. [14] M. Kobayashi, Singularities of three-layered complex-valued neural networks with split activation function, IEEE Trans. Neural Netw. Learn. Syst. 29 (5) (2018) 1900–1907. [15] R. Rakkiyappan, J. Cao, G. Velmurugan, Existence and uniform stability analysis of fractional-order complex-valued neural networks with time delays, IEEE Trans. Neural Netw. Learn. Syst. 26 (1) (2015) 84–97.

206

N. Gunasekaran and G. Zhai / Neurocomputing 367 (2019) 198–206

[16] Z. Wang, X. Liu, Exponential stability of impulsive complex-valued neural networks with time delay, Math. Comput. Simul. 156 (2019) 143–157.
[17] B. Zhou, Q. Song, Boundedness and complete stability of complex-valued neural networks with time delay, IEEE Trans. Neural Netw. Learn. Syst. 24 (8) (2013) 1227–1238.
[18] Z. Zhang, C. Lin, B. Chen, Global stability criterion for delayed complex-valued recurrent neural networks, IEEE Trans. Neural Netw. Learn. Syst. 25 (9) (2014) 1704–1708.
[19] B. Hu, Q. Song, K. Li, Z. Zhao, Y. Liu, F.E. Alsaadi, Global μ-synchronization of impulsive complex-valued neural networks with leakage delay and mixed time-varying delays, Neurocomputing 307 (2018) 106–116.
[20] D. Liu, S. Zhu, E. Ye, Synchronization stability of memristor-based complex-valued neural networks with time delays, Neural Netw. 96 (2017) 115–127.
[21] Z. Zhang, X. Liu, J. Chen, R. Guo, S. Zhou, Further stability analysis for delayed complex-valued recurrent neural networks, Neurocomputing 251 (2017) 81–89.
[22] G. Zhai, B. Hu, K. Yasuda, A.N. Michel, Stability analysis of switched systems with stable and unstable subsystems: an average dwell time approach, Int. J. Syst. Sci. 32 (8) (2001) 1055–1061.
[23] G. Zhai, B. Hu, K. Yasuda, A.N. Michel, Disturbance attenuation properties of time-controlled switched systems, J. Frankl. Inst. 338 (7) (2001) 765–779.
[24] G. Zhai, H. Lin, P.J. Antsaklis, Quadratic stabilizability of switched linear systems with polytopic uncertainties, Int. J. Control 76 (7) (2003) 747–753.
[25] Z.G. Wu, P. Shi, H. Su, J. Chu, Delay-dependent stability analysis for switched neural networks with time-varying delay, IEEE Trans. Syst. Man Cybern. B (Cybern.) 41 (6) (2011) 1522–1530.
[26] G. Zhai, X. Xu, A unified approach to stability analysis of switched linear descriptor systems under arbitrary switching, Int. J. Appl. Math. Comput. Sci. 20 (2) (2010) 249–259.
[27] D. Zhai, L. An, J. Dong, Q. Zhang, Switched adaptive fuzzy tracking control for a class of switched nonlinear systems under arbitrary switching, IEEE Trans. Fuzzy Syst. 26 (2) (2018) 585–597.
[28] X. Li, R. Wang, B. Yang, W. Wang, Stability analysis for delayed neural networks via some switching methods, IEEE Trans. Syst. Man Cybern.: Syst. (99) (2018) 1–6.
[29] C.D. Zheng, Q.H. Shan, H. Zhang, Z. Wang, On stabilization of stochastic Cohen–Grossberg neural networks with mode-dependent mixed time-delays and Markovian switching, IEEE Trans. Neural Netw. Learn. Syst. 24 (5) (2013) 800–811.
[30] J. Lian, J. Wang, Passivity of switched recurrent neural networks with time-varying delays, IEEE Trans. Neural Netw. Learn. Syst. 26 (2) (2015) 357–366.
[31] S. Wen, Z. Zeng, T. Huang, Q. Meng, W. Yao, Lag synchronization of switched neural networks via neural activation function and applications in image encryption, IEEE Trans. Neural Netw. Learn. Syst. 26 (7) (2015) 1493–1502.
[32] M.S. Ali, N. Gunasekaran, M.E. Rani, Robust stability of Hopfield delayed neural networks via an augmented L-K functional, Neurocomputing 234 (2017) 198–204.
[33] Y. Li, S. Zhong, J. Cheng, K. Shi, J. Ren, New passivity criteria for uncertain neural networks with time-varying delay, Neurocomputing 171 (2016) 1003–1012.
[34] S. Senan, Robustness analysis of uncertain dynamical neural networks with multiple time delays, Neural Netw. 70 (2015) 53–60.
[35] L. Xie, Output feedback H∞ control of systems with parameter uncertainty, Int. J. Control 63 (4) (1996) 741–750.
[36] W. Xie, H. Zhu, S. Zhong, H. Chen, Y. Zhang, New results for uncertain switched neural networks with mixed delays using hybrid division method, Neurocomputing 307 (2018) 38–53.

[37] W. Shen, Z. Zeng, L. Wang, Stability analysis for uncertain switched neural networks with time-varying delay, Neural Netw. 83 (2016) 32–41.
[38] X. Chen, Q. Song, Global stability of complex-valued neural networks with both leakage time delay and discrete time delay on time scales, Neurocomputing 121 (2013) 254–264.
[39] J. Hu, J. Wang, Global stability of complex-valued recurrent neural networks with time-delays, IEEE Trans. Neural Netw. Learn. Syst. 23 (6) (2012) 853–865.
[40] J. Lofberg, YALMIP: a toolbox for modeling and optimization in MATLAB, in: Proceedings of the 2004 IEEE International Symposium on Computer Aided Control Systems Design, IEEE, 2004, pp. 284–289.

Dr. Nallappan Gunasekaran was born in 1987. He received the bachelor's degree from Mahendra Arts and Science College, Namakkal, affiliated to Periyar University, Salem, Tamil Nadu, India, in 2009, and his postgraduate degree in mathematics from Jamal Mohamed College, affiliated to Bharathidasan University, Trichy, Tamil Nadu, India, in 2012. He was awarded the Master of Philosophy degree in mathematics, with a specialization in cryptography, from Bharathidasan University, Trichy, India, in 2013, and the Ph.D. degree in mathematics from Thiruvalluvar University, Vellore, India, in 2017. He was a Junior Research Fellow with the Department of Science and Technology-Science and Engineering Research Board (DST-SERB), Government of India, New Delhi, India, and a Post-Doctoral Research Fellow at the Research Center for Wind Energy Systems, Kunsan National University, Gunsan, South Korea, from 2017 to 2018. Currently, he is a Post-Doctoral Research Fellow in the Department of Mathematical Sciences, Shibaura Institute of Technology, Saitama, Japan. His research interests include complex-valued neural networks, complex dynamical networks, control theory, sampled-data control, multi-agent systems, T–S fuzzy theory, and cryptography. Dr. Gunasekaran serves as a reviewer for various SCI journals and has authored or co-authored more than 20 research articles in SCI journals.

Dr. Guisheng Zhai received his B.S. degree from Fudan University, China, in 1988, and his M.E. and Ph.D. degrees, both in system science, from Kobe University, Japan, in 1993 and 1996, respectively. After two years of industrial experience, Dr. Zhai moved to Wakayama University, Japan, in 1998, and then to Osaka Prefecture University, Japan, in 2004. He held visiting professor positions at the University of Notre Dame from August 2001 to July 2002, and at Purdue University from March 2016 to February 2017. In April 2010, he joined the faculty of Shibaura Institute of Technology, Japan, where he is currently a full Professor of Mathematical Sciences. His research interests include large-scale and decentralized control systems, robust control, switched systems and switching control, networked control and multi-agent systems, neural networks, and signal processing. Dr. Zhai is on the editorial board of several academic journals, including IET Control Theory & Applications, International Journal of Applied Mathematics and Computer Science, Journal of Control and Decision, and Frontiers of Mechanical Engineering. He is a Senior Member of IEEE and a member of ISCIE, SICE, JSST, and JSME.