
Neural Networks 22 (2009) 329–334. doi:10.1016/j.neunet.2008.11.003


Neural networks letter

On periodic solutions of neural networks via differential inclusions

Xiaoyang Liu, Jinde Cao*

Department of Mathematics, Southeast University, Nanjing 210096, China

* Corresponding author. E-mail addresses: [email protected], [email protected] (J. Cao).


Article history: Received 25 August 2008; revised and accepted 17 November 2008.

Keywords: Neural networks; Discontinuous activation functions; Periodic solutions; Differential inclusions; Lyapunov method; Linear matrix inequality

Abstract

Discontinuous dynamical systems, especially neural networks with discontinuous activation functions, arise in a number of applications and have received considerable research attention in recent years. However, there still remain some fundamental issues to be investigated, for instance, how to define the solutions of such discontinuous systems and what conditions can guarantee the existence and stability of the solutions. In this paper, based on the concept of the Filippov solution, the dynamics of a general class of neural networks with discontinuous activation functions is investigated. Sufficient conditions are obtained to ensure the existence and stability of the unique periodic solution of the neural networks by using differential inclusions theory, the Lyapunov–Krasovskii functional method and the linear matrix inequality (LMI) technique. Two numerical examples are given to illustrate the theoretical results.

© 2008 Elsevier Ltd. All rights reserved.

1. Introduction

In general, due to the conventional definition and simple existence conditions of solutions for differential equations, most theoretical results on dynamical systems are established under smoothness or global Lipschitz assumptions on the vector field. However, in various science and engineering applications the system dynamics are discontinuous. Examples include impacting machines, dry friction, impacts in mechanical devices, systems oscillating under the effect of an earthquake, power circuits, forced vibrations, switching in electronic circuits, control synthesis of uncertain systems, and many others. Discontinuities are also intentionally designed to achieve regulation and stabilization; for example, sliding mode control uses discontinuous feedback controllers for stabilization (Cortés, 2008).

As a special kind of dynamical system, a neural network has many applications in signal processing, pattern recognition, parallel computation, complicated optimization problems and so on. Recently, there have been many results about neural networks emphasizing various types of stability, periodic oscillation, synchronization, bifurcation and chaos (Cao, 2001; Cao & Li, 2005; Huang, Ho, & Cao, 2005; J. Liang, Z. Wang, & X. Liu, 2008; J. Liang, Z. Wang, Y. Liu, et al., 2008; Liu, Chen, Cao, & Huang, 2003; Liu, Wang, & Liu, 2008; Song, Han, & Wei, 2005; Wang & Zou, 2004; Zhou, Liu, & Chen, 2004). However, all these papers are based on the assumption that the activation functions are continuous or even Lipschitzian. In fact, the authors of Forti and Nistri (2003) pointed out that neural networks with discontinuous activation functions, which frequently arise in practice, are more important. Take the classic Hopfield network as an example: under the standard assumption of high-gain amplifiers, the sigmoidal neuron activations closely approach a discontinuous function. Moreover, the analysis of the discontinuous case can reveal important traits of the dynamics, such as convergence towards the limit cycle in finite time and the capability of calculating the global minimum of the underlying energy function.

From the theoretical point of view, the basic question concerns the solution of discontinuous dynamical systems. Does the classical definition of solutions still work for discontinuous dynamical systems? How can the existence and uniqueness of such solutions be ensured? Actually, discontinuous dynamical systems have been a research topic for decades, and several types of solutions are available, such as Carathéodory, Filippov, and sample-and-hold solutions (Cortés, 2008). Recently, much research interest has focused on the notion of solutions within the Filippov framework (Filippov, 1988), which has proved a feasible mathematical approach to discontinuous dynamical systems. In 2003 and 2005, under the Filippov framework, sufficient conditions were obtained for the global stability of the unique equilibrium point of neural networks with discontinuous activations (Forti & Nistri, 2003; Forti, Nistri, & Papini, 2005), which motivated later studies on discontinuous neural networks. For example, in Huang and Cao (2008), multistability was studied for neural networks with discontinuous activation functions. In Lu and Chen (2005, 2006), the stability of an equilibrium point was discussed for such neural networks with or without delays by constructing a sequence of dynamical systems with high-slope continuous activations.


For the periodic solutions of discontinuous neural systems, the global exponential stability of periodic solutions was obtained for a delayed neural network with discontinuous activations by utilizing the relatively conservative M-matrix conditions and classical Lyapunov functional methods (Papini & Taddei, 2005). Recently, in Huang, Wang, and Zhou (in press) and Wu (in press), by using the special property of the Sobolev space W_P^{1,1} and constructing a new operator L, some new results were derived for the periodic solutions of discontinuous neural networks. Based on their previous works (Lu & Chen, 2005, 2006), Lu and Chen further obtained a series of results on the existence and uniqueness of an almost periodic solution as well as its global exponential stability (Lu & Chen, 2008), where the diagonal-dominant (M-matrix) condition was used, mainly due to the time-varying interconnection matrices.

Motivated by the above discussions, in this paper, without using the previous approximation method (Lu & Chen, 2006, 2008) or construction method (Huang et al., in press; Wu, in press), we employ differential inclusions theory to deal with the existence of periodic solutions for discontinuous neural networks in the sense of Filippov solutions. Furthermore, combining the classical Lyapunov stability theory with the high-efficiency LMI method, several results on global exponential stability are derived.

The rest of the paper is organized as follows. Section 2 gives some preliminaries. Section 3 presents a sufficient condition for the existence of periodic solutions for neural networks with discontinuous activations. In Section 4, a new linear matrix inequality (LMI) criterion ensuring the global stability of the periodic solution is given in Theorem 2. In Section 5, simulation results substantiating the theoretical analysis are reported. Conclusions are presented in Section 6, where some future research topics are discussed.

2. Preliminaries

In this paper, we consider a general class of neural networks described by

ẋ(t) = −Ax(t) + Bf(x(t)) + I(t),  (1)

where x(t) = (x_1(t), x_2(t), …, x_n(t))^T is the vector of neuron states; A = diag(a_1, a_2, …, a_n) is an n × n constant diagonal matrix with a_i > 0, i = 1, 2, …, n; B = (b_ij)_{n×n} is an n × n interconnection matrix; f(x) = (f_1(x_1), f_2(x_2), …, f_n(x_n))^T : ℝ^n → ℝ^n is a diagonal mapping, where f_i, i = 1, 2, …, n, represents the neuron input–output activation; and I(t) = (I_1(t), I_2(t), …, I_n(t))^T is a continuous ω-periodic input function. To establish our main results, it is necessary to make the following assumption for system (1):

(T) f_i, i = 1, 2, …, n, are nondecreasing, bounded and have only a finite number of jump discontinuity points in every compact set of ℝ.

Remark 1. In Refs. Huang et al. (in press), Wu (in press) and Lu and Chen (2008), the coefficient matrices A and B were assumed to be ω-periodic functions with A(t) = diag(a_1(t), a_2(t), …, a_n(t)) and B(t) = (b_ij(t))_{n×n}. Without loss of generality, throughout this paper we assume that A(t) is a constant matrix, because A(t) must lie between the matrices Ā = diag(ā_1, ā_2, …, ā_n) and A̲ = diag(a̲_1, a̲_2, …, a̲_n), where ā_i = max_{0≤t≤ω} a_i(t) and a̲_i = min_{0≤t≤ω} a_i(t), respectively. In Section 4, we will give a remark to explain in detail why B(t) can also be assumed to be a constant matrix.

Different from Assumption (T), in Ref. Huang et al. (in press) three assumptions were proposed to ensure the global existence of periodic solutions, which were difficult to test in practice. Assumption (T) is similar to the one in Ref. Wu (in press), but in the following we do not need to construct a new space or a complex operator to obtain the global existence of periodic solutions as the author of Ref. Wu (in press) does.

Some notations, definitions and lemmas are introduced as follows for further discussion. Let A ∈ ℝ^{n×n} and define ‖A‖_2 = [λ_M{A^T A}]^{1/2}, where λ_M{A} denotes the maximum of the moduli |λ_i| of all the eigenvalues λ_i of A. Given a column vector x = (x_1, x_2, …, x_n)^T, where the symbol "T" denotes the transpose, ‖x‖ denotes the Euclidean norm of x. Let C([0, ω], ℝ^n), L^1([0, ω], ℝ^n) and L^∞([0, ω], ℝ^n) denote the three Banach spaces composed of all continuous functions, Lebesgue integrable functions and essentially bounded functions, equipped with the norms max_{0≤t≤ω} |x(t)|, ∫_0^ω |x(t)| dt and ess sup_{0≤t≤ω} |x(t)|, respectively.

Definition 1. Let X be a Banach space and X* its normed dual space, with {f_n} ⊂ X* and f ∈ X*. Then f_n is said to weakly-* converge to f in X* (denoted by w*-lim_{n→∞} f_n = f) if, for all x ∈ X, lim_{n→∞} f_n(x) = f(x).

Definition 2. Suppose E ⊂ ℝ^n. Then x ↦ F(x) is called a set-valued map from E to ℝ^n if to each point x of the set E there corresponds a nonempty set F(x) ⊂ ℝ^n. A set-valued map F with nonempty values is said to be upper-semi-continuous at x_0 ∈ E if, for any open set N containing F(x_0), there exists a neighborhood M of x_0 such that F(M) ⊂ N. F(x) is said to have a closed (convex, compact) image if, for each x ∈ E, F(x) is closed (convex, compact).

Consider the following system:

dx/dt = f(x),  (2)

where f(·) is not continuous.

Definition 3. A set-valued map is defined as

φ(x) = ⋂_{δ>0} ⋂_{µ(N)=0} K[f(B(x, δ) \ N)],  (3)

where K(E) is the closure of the convex hull of the set E, B(x, δ) = {y : ‖y − x‖ ≤ δ}, and µ(N) is the Lebesgue measure of the set N. A solution in the sense of Filippov (1988) of the Cauchy problem for Eq. (2) with initial condition x(0) = x_0 is an absolutely continuous function x(t), t ∈ [0, T], which satisfies x(0) = x_0 and the differential inclusion

dx/dt ∈ φ(x)  for a.e. t ∈ [0, T].  (4)

Now we denote K[f(x)] = (K[f_1(x_1)], …, K[f_n(x_n)]), where K[f_i(x_i)] = [f_i(x_i^−), f_i(x_i^+)], i = 1, …, n. We extend the concept of the Filippov solution to the differential equation (1) as follows:

Definition 4. A solution (in the sense of Filippov) of the discontinuous system (1) with initial condition x(0) = x_0 is an absolutely continuous function x(t), t ∈ [0, ω], such that

ẋ(t) ∈ −Ax(t) + BK[f(x(t))] + I(t)  for a.e. t ∈ [0, ω].  (5)
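For instance, for the signum activation f_i(s) = sign(s) used in the examples of Section 5, which is nondecreasing, bounded and has a single jump at the origin, so that Assumption (T) holds, the interval K[f_i(x_i)] = [f_i(x_i^−), f_i(x_i^+)] reads

K[sign](x_i) = {1} for x_i > 0,  [−1, 1] for x_i = 0,  {−1} for x_i < 0;

that is, the inclusion (5) replaces the undefined value sign(0) by the whole interval [−1, 1].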


It is obvious that the set-valued map x(t) ↦ −Ax(t) + BK[f(x(t))] + I(t) has nonempty compact convex values. Furthermore, it is upper-semi-continuous (Aubin & Cellina, 1984) and hence measurable. By the measurable selection theorem (Aubin & Frankowska, 1990), if x(t) is a solution of (1) on [0, ω], then there exists a bounded measurable function η(t) ∈ K[f(x(t))] such that, for a.e. t ∈ [0, +∞),

ẋ(t) = −Ax(t) + Bη(t) + I(t),  (6)

where η(t) is measurable for almost all t ≥ 0. Let us further consider the following differential inclusion with a periodic boundary value condition:

ẋ(t) ∈ −Ax(t) + BK[f(x(t))] + I(t)  for a.e. t ∈ [0, +∞),
x(0) = x(ω).  (7)

Define x*(t) by

x*(t) = x(t − kω),  t ∈ [kω, (k+1)ω], ∀k ∈ ℕ.  (8)
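A direct check shows why (8) yields a periodic function once the boundary condition in (7) holds: x*(·) is well defined and continuous at the gluing points t = kω because x(0) = x(ω), and for t ∈ [kω, (k+1)ω],

x*(t + ω) = x(t + ω − (k+1)ω) = x(t − kω) = x*(t).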

It is easy to see that if x(t) is a solution of the differential inclusion (7), then x*(t) must be an ω-periodic solution of the discontinuous system (1). Therefore, the problem of finding a periodic solution of the discontinuous system (1) is naturally transformed into that of seeking a solution of system (7).

Definition 5. The ω-periodic solution x*(t) of system (1) is said to be globally exponentially stable if, for any solution x(t) of (1), there exist constants ε > 0 and M > 0 such that

‖x(t) − x*(t)‖ ≤ Me^{−εt}.

Lemma 1 (Douglas, 1972). Let X* be the dual space of a Banach space X and S be the closed unit ball of X*. Then S is weakly-* compact.

Lemma 2 (Dugundji & Granas, 1982). If X is a Banach space, 2^X = {C : C ⊂ X, C is nonempty, compact and convex}, and G : X → 2^X is an upper-semi-continuous set-valued map which maps bounded sets into relatively compact sets, then one of the following statements is true:
(a) the set Γ = {x ∈ X : x ∈ λG(x), λ ∈ (0, 1)} is unbounded;
(b) G(·) has a fixed point, i.e., there exists x ∈ X such that x ∈ G(x).

Lemma 3 (Clarke, 1983). If V(x) : ℝ^n → ℝ is C-regular and x(t) : [0, +∞) → ℝ^n is absolutely continuous on any compact interval of [0, +∞), then x(t) and V(x(t)) : [0, +∞) → ℝ are differentiable for a.e. t ∈ [0, +∞), and we have

(d/dt) V(x(t)) = ⟨ς, dx/dt⟩,  ∀ς ∈ ∂V(x(t)).

3. Existence of the periodic solution

In this section, we present a theorem ascertaining the global existence of an ω-periodic solution of system (1) by means of differential inclusions theory. Based on the detailed discussion in Section 2, the set-valued map x(t) ↦ −Ax(t) + BK[f(x(t))] + I(t) is upper-semi-continuous with nonempty compact convex values, so the local existence of a solution x(t) of (1) with initial condition x(0) = x_0 is obvious (Filippov, 1988). Moreover, under Assumption (T), K[f(x)] is bounded. Similarly to the proof of Property 2 in Forti and Nistri (2003) or in Wu (in press), the boundedness of x(t) can be derived; hence x(t) is defined on [0, +∞).

By Assumption (T) and because I(t) is a continuous ω-periodic function, the set-valued map Φ(t, x) := BK[f(x(t))] + I(t) is bounded, and we denote

M = sup_{x∈ℝ^n, t≥0} ‖Φ(t, x)‖.

Theorem 1. Under Assumption (T), there exists a solution of system (7), i.e., the discontinuous neural network (1) has an ω-periodic solution.

Proof. For all x ∈ C([0, ω], ℝ^n), we denote F(t, x(t)) = −Ax(t) + BK[f(x(t))] + I(t) and x(0) = x_0. Define

Hx := {y ∈ C[0, ω] | y(0) = x_0, ẏ(t) ∈ −Ax(t) + BK[f(x(t))] + I(t) for a.e. t ∈ [0, ω], and ẏ(t) is measurable}.

By the measurable selection theorem (Aubin & Frankowska, 1990), there exists a bounded measurable function v(t) ∈ L^1[0, ω] such that v(t) ∈ F(t, x(t)) for a.e. t ∈ [0, ω]. Let

y(t) = x_0 + ∫_0^t v(s) ds;

then y ∈ Hx. Therefore, for all x ∈ C[0, ω], Hx is nonempty and ẏ ∈ L^1[0, ω], so y is absolutely continuous. In order to complete the proof, we divide it into three steps (Zhang, 1995), corresponding to the hypotheses of Lemma 2.

Step (1). H maps bounded sets into relatively compact sets. In fact, let D ⊂ C[0, ω] be a bounded set, i.e., there exists L > 0 such that ‖x‖ ≤ L for all x ∈ D. Then for y ∈ Hx, with ẏ ∈ F(t, x(t)) for a.e. t ∈ [0, ω], we have

‖ẏ‖ ≤ M + L‖A‖_2  and  ‖y‖ ≤ ‖x_0‖ + ω(M + L‖A‖_2).

Hence H(D) is uniformly bounded and obviously equicontinuous. By the Arzelà–Ascoli theorem, H(D) is relatively compact in C[0, ω].

Step (2). H is upper-semi-continuous. From Step (1), we only need to prove that H is closed, i.e., for all x, x_n ∈ C[0, ω] with x_n → x and y_n ∈ Hx_n with y_n → y, we want to prove y ∈ Hx. Choosing ε = 1, there exists N > 0 such that, for n ≥ N,

‖x_n(t)‖ ≤ ‖x(t)‖ + 1.

Let u_n(t) = ẏ_n(t); then for a.e. t ∈ [0, ω] we get

‖u_n(t)‖ ≤ M + ‖A‖_2‖x_n(t)‖ ≤ M + ‖A‖_2(‖x‖ + 1).

From Lemma 1, there exists a subsequence of u_n (still denoted u_n) weakly-* converging to u. In particular, for all ϕ ∈ L^∞[0, ω], we have

∫_0^ω u_n ϕ dt → ∫_0^ω u ϕ dt,

i.e., {u_n} weakly converges to u in L^1[0, ω]. On the other hand,

y_n(t) = x_0 + ∫_0^t ẏ_n(s) ds = x_0 + ∫_0^t u_n(s) ds → x_0 + ∫_0^t u(s) ds.

So,

y(t) = x_0 + ∫_0^t u(s) ds.

By the convergence theorem (Aubin & Cellina, 1984), we have

ẏ(t) = u(t) ∈ F(t, x)  for a.e. t ∈ [0, ω].


This implies that y ∈ Hx.

Step (3). Now we prove that the set

{x ∈ C[0, ω] | x ∈ λHx, λ ∈ (0, 1)}

is bounded. For all 0 < λ < 1 and x ∈ λHx, there exists y ∈ Hx such that x = λy. Then for a.e. t ∈ [0, ω] we have ‖ẏ(t)‖ ≤ M + ‖A‖_2‖x‖. So, for all t ∈ [0, ω],

‖y(t)‖ ≤ ‖x_0‖ + ∫_0^t (M + ‖A‖_2‖x(s)‖) ds ≤ ‖x_0‖ + ωM + ‖A‖_2 ∫_0^t ‖x(s)‖ ds.

Denote

ψ(t) = ‖x_0‖ + ωM + ‖A‖_2 ∫_0^t ‖x(s)‖ ds.

Then

ψ̇(t) = ‖A‖_2‖x(t)‖ = λ‖A‖_2‖y(t)‖ ≤ ‖A‖_2‖y(t)‖ ≤ ‖A‖_2 ψ(t),

and hence

ψ(t) ≤ ψ(0)e^{t‖A‖_2} ≤ (‖x_0‖ + ωM)e^{ω‖A‖_2}.

Hence,

‖x(t)‖ ≤ ‖y(t)‖ ≤ ψ(t) ≤ (‖x_0‖ + ωM)e^{ω‖A‖_2}.

By Lemma 2, H has a fixed point x*, which shows that x* is a solution of system (7), i.e., the neural network (1) has an ω-periodic solution. The proof of Theorem 1 is completed. □

Remark 2. Apparently, our proof in this section is different from that of Wu (in press), although the main ideas are similar. Both Wu (in press) and this paper use Lemma 2 to obtain the existence condition, which is indeed a generalization of the famous Mawhin continuation principle (Gaines & Mawhin, 1977).

4. Global stability of the periodic solution

In this section, a sufficient condition is established to ensure the global exponential stability of the periodic solution of system (1). Let x*(t) be an ω-periodic solution of system (1) as described in Theorem 1, and let η*(t) ∈ K[f(x*(t))] be the output corresponding to x*(t). Let x(t) be a solution of (1) with initial condition x(0) = x_0, and let u(t) = x(t) − x*(t) be a translation of x(t). Then u(t) = (u_1(t), u_2(t), …, u_n(t))^T satisfies

du(t)/dt = −Au(t) + Bγ(t)  for a.e. t,  (9)

where γ(t) ∈ K[f*(u(t))], f*(u) = (f_1*(u_1), f_2*(u_2), …, f_n*(u_n))^T and f_i*(u_i(t)) = f_i(u_i(t) + x_i*(t)) − η_i*(t), i = 1, 2, …, n.

Theorem 2. Under Assumption (T), suppose there exist a positive diagonal matrix P = diag(P_1, P_2, …, P_n) and a small positive scalar ε ≤ min_i a_i such that the following LMI holds:

Ω := [ εE − 2A        B
        B^T      PB + B^T P ] ≤ 0,  (10)

where E denotes the n-dimensional identity matrix. Then system (1) has a unique ω-periodic solution x*(t), which is globally exponentially stable.

Proof. Consider the following Lyapunov–Krasovskii functional candidate for system (9):

V(u(t)) = e^{εt} u^T(t)u(t) + 2e^{εt} ∑_{i=1}^n P_i ∫_0^{u_i(t)} f_i*(s) ds.  (11)

Taking the derivative of V(u(t)) along the trajectories of (9), by Lemma 3 and ε ≤ min_i a_i we obtain

dV(u(t))/dt = ε e^{εt} u^T(t)u(t) + 2e^{εt} u^T(t)(−Au(t) + Bγ(t)) + 2ε e^{εt} ∑_{i=1}^n P_i ∫_0^{u_i(t)} f_i*(s) ds + 2e^{εt} γ^T(t)P(−Au(t) + Bγ(t))
  ≤ ε e^{εt} u^T(t)u(t) + 2e^{εt} u^T(t)(−Au(t) + Bγ(t)) + 2ε e^{εt} ∑_{i=1}^n P_i u_i(t)γ_i(t) + 2e^{εt} γ^T(t)P(−Au(t) + Bγ(t))
  = e^{εt} {u^T(t)(εE − 2A)u(t) + u^T(t)(2B + 2εP − 2PA)γ(t) + γ^T(t)(2PB)γ(t)}
  ≤ e^{εt} [u^T(t), γ^T(t)] Ω [u^T(t), γ^T(t)]^T
  ≤ 0.

Here the first inequality uses ∫_0^{u_i(t)} f_i*(s) ds ≤ u_i(t)γ_i(t), by the monotonicity of f_i*; the second drops the nonpositive cross term u^T(t)(2εP − 2PA)γ(t), which follows from u_i(t)γ_i(t) ≥ 0 together with ε ≤ min_i a_i; and the last is LMI (10).

So V(t) ≤ V(0), and

e^{εt} u^T(t)u(t) ≤ V(t).

Then u^T(t)u(t) ≤ V(0)e^{−εt}, and

‖u(t)‖_2 ≤ [V(0)]^{1/2} e^{−(ε/2)t},  i.e.,  ‖x(t) − x*(t)‖_2 ≤ [V(0)]^{1/2} e^{−(ε/2)t}

hold, and the proof of Theorem 2 is completed. □

Remark 3. Theorem 2 provides a sufficient condition ascertaining the global exponential stability of the periodic solution of the generalized neural network (1). The condition is easy to verify and apply in practice; it can be checked by exploiting the standard Matlab LMI toolbox. Moreover, if the input I(t) is constant, our model (1) reduces to the one discussed in Forti, Grazzini, Nistri, and Pancioni (2006) and Forti and Nistri (2003), and the periodic solution degenerates to an equilibrium point. Hence, Theorem 2 also recovers the results of Refs. Forti et al. (2006) and Forti and Nistri (2003).

Remark 4. In the earlier literature, the stability of either the equilibrium point or the periodic solution of discontinuous neural networks has been discussed, but most works utilized the famous M-matrix condition (Forti et al., 2005; Lu & Chen, 2008; Papini & Taddei, 2005; Wu, in press) or similar conditions such as the Lyapunov diagonally stable condition (Forti et al., 2006; Forti & Nistri, 2003; Huang et al., in press; Lu & Chen, 2005) to obtain the stability results. Actually, there are two popular approaches to this problem: one is the relatively conservative M-matrix method, and the other is the highly efficient LMI approach. Here, in order to conveniently apply the popular LMI approach to the stability problem, we simply assume the interconnection matrix B to be time invariant, as done in Lu and Chen (2006).
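As a complement to Remark 3, the feasibility of LMI (10) can also be checked with freely available software. The sketch below is a minimal illustration in Python using NumPy and CVXPY rather than the Matlab LMI toolbox mentioned above; the function name lmi_feasible and the lower bound 1e-6 on the entries of P are illustrative choices, not part of the paper.

    # Sketch: numerical feasibility check of LMI (10),
    #   Omega = [[eps*E - 2A, B], [B^T, PB + B^T P]] <= 0,
    # over diagonal P > 0, for given constant A, B and scalar eps.
    import numpy as np
    import cvxpy as cp

    def lmi_feasible(A, B, eps):
        """Return (feasible, P) for LMI (10) with a diagonal P > 0."""
        n = A.shape[0]
        p = cp.Variable(n)                      # diagonal entries of P
        P = cp.diag(p)
        omega = cp.bmat([[eps * np.eye(n) - 2 * A, B],
                         [B.T,                     P @ B + B.T @ P]])
        omega = (omega + omega.T) / 2           # symmetrise for the solver
        prob = cp.Problem(cp.Minimize(0),
                          [p >= 1e-6, omega << 0])
        prob.solve()
        feasible = prob.status in (cp.OPTIMAL, cp.OPTIMAL_INACCURATE)
        return feasible, (np.diag(p.value) if feasible else None)

For instance, with the data of Example 1 in Section 5 (A = diag(1, 1), B as given there, eps = 1) the problem should be reported feasible; note that the solver may return any feasible P, whereas the paper reports the particular solution P = 1.5501 E.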


5. Two illustrative examples

Example 1. Consider the second-order neural network (1) defined by the matrices A = diag(1, 1),

B = [ −0.25   0.1
       0.1   −0.25 ],

f_1(s) = f_2(s) = sign(s), and

I(t) = ( 0.5 + sin t,  cos t )^T.

Employing the LMI toolbox, the LMI (10) holds with ε = 1 and

P = [ 1.5501   0
       0      1.5501 ].

From Theorem 2, the globally exponentially stable periodic solution of (1) is obtained. The simulation result is depicted in Fig. 1.

Fig. 1. Periodic solutions of Example 1 with 20 random initial points.
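For readers who wish to reproduce Fig. 1 qualitatively, the following minimal sketch integrates Example 1 with the forward Euler method in Python; the step size h, the horizon T and the range of the random initial points are arbitrary choices, not taken from the paper. Note that np.sign(0) = 0 lies in K[sign](0) = [−1, 1], so at the discontinuity the scheme simply picks one valid Filippov selection.

    # Rough forward-Euler simulation of Example 1:
    #   x'(t) = -A x(t) + B sign(x(t)) + I(t),  I(t) = (0.5 + sin t, cos t)^T
    import numpy as np

    A = np.eye(2)
    B = np.array([[-0.25, 0.1],
                  [0.1, -0.25]])

    def I(t):
        return np.array([0.5 + np.sin(t), np.cos(t)])

    def simulate(x0, h=1e-3, T=40.0):
        xs = [np.asarray(x0, dtype=float)]
        for k in range(int(T / h)):
            x = xs[-1]
            # np.sign(0) = 0 is a valid selection from K[sign](0) = [-1, 1]
            xs.append(x + h * (-A @ x + B @ np.sign(x) + I(k * h)))
        return np.array(xs)

    rng = np.random.default_rng(0)
    runs = [simulate(rng.uniform(-2.0, 2.0, size=2)) for _ in range(20)]
    # After a transient, all 20 trajectories should settle onto the same
    # 2*pi-periodic orbit, as depicted in Fig. 1.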

=

−0.25 0.1 0

0.1 −0.25 0.2

0.15 0 −0.25

sin t

, I (t ) = −cost , f1 (s) = f2 (s) = sign(s). sin t

Then the LMI (10) has the following feasible solution:  = 

1.3333, P =

1.8926 0 0

0 1.9608 0

0 0 1.9008

.

By Theorem 2, the globally exponentially stable periodic solution of (1) can be obtained too. Simulation result is depicted in Fig. 2. 6. Conclusions and expectations In this paper, we have considered the existence and global stability of the periodic solution for neural networks with discontinuous activation functions and periodic inputs. By using the differential inclusions and LMI method (Horn & Johnson, 1985) directly, a new sufficient condition ensuring the existence and

Aubin, J. P., & Cellina, A. (1984). Differential inclusions. Berlin: Springer.
Aubin, J. P., & Frankowska, H. (1990). Set-valued analysis. Boston: Birkhäuser.
Blythe, S., Mao, X., & Liao, X. (2001). Stability of stochastic delay neural networks. Journal of the Franklin Institute, 338, 481–495.
Cao, J. (2001). A set of stability criteria for delayed cellular neural networks. IEEE Transactions on Circuits and Systems I, 48(4), 494–498.
Cao, J., & Li, X. (2005). Stability in delayed Cohen–Grossberg neural networks: LMI optimization approach. Physica D, 212(1–2), 54–65.
Clarke, F. H. (1983). Optimization and nonsmooth analysis. New York: Wiley.
Cortés, J. (2008). Discontinuous dynamical systems. IEEE Control Systems Magazine, 28(3), 36–73.
Douglas, R. G. (1972). Banach algebra techniques in operator theory. Academic Press.
Dugundji, J., & Granas, A. (1982). Fixed point theory. In Monografie Matematyczne (p. 61). Warsaw, Poland: Polish Scientific Publishers.
Filippov, A. F. (1988). Differential equations with discontinuous right-hand sides. In Mathematics and its Applications (Soviet Series). Boston: Kluwer Academic Publishers.
Forti, M., Grazzini, M., Nistri, P., & Pancioni, L. (2006). Generalized Lyapunov approach for convergence of neural networks with discontinuous or non-Lipschitz activations. Physica D, 214, 88–99.
Forti, M., & Nistri, P. (2003). Global convergence of neural networks with discontinuous neuron activations. IEEE Transactions on Circuits and Systems I, 50(11), 1421–1435.
Forti, M., Nistri, P., & Papini, D. (2005). Global exponential stability and global convergence in finite time of delayed neural networks with infinite gain. IEEE Transactions on Neural Networks, 16(6), 1449–1463.
Gaines, R. E., & Mawhin, J. L. (1977). Coincidence degree and nonlinear differential equations. Berlin: Springer-Verlag.
Horn, R. A., & Johnson, C. R. (1985). Matrix analysis. New York: Cambridge University Press.
Huang, G., & Cao, J. (2008). Multistability of neural networks with discontinuous activation function. Communications in Nonlinear Science and Numerical Simulation, 13(10), 2279–2289.
Huang, H., Ho, D. W. C., & Cao, J. (2005). Analysis of global exponential stability and periodic solutions of neural networks with time-varying delays. Neural Networks, 18, 161–170.
Huang, L. H., Wang, J. F., & Zhou, X. N. (in press). Existence and global asymptotic stability of periodic solutions for Hopfield neural networks with discontinuous activations. Nonlinear Analysis.
Liang, J., Wang, Z., & Liu, X. (2008). Exponential synchronization of stochastic delayed discrete-time complex networks. Nonlinear Dynamics, 53(1–2), 153–165.
Liang, J., Wang, Z., Liu, Y., & Liu, X. (2008). Global synchronization control of general delayed discrete-time networks with stochastic coupling and disturbances. IEEE Transactions on Systems, Man, and Cybernetics, Part B, 38(4), 1073–1083.


Liu, Z. G., Chen, A. P., Cao, J., & Huang, L. H. (2003). Existence and global exponential stability of periodic solution for BAM neural networks with periodic coefficients and time-varying delays. IEEE Transactions on Circuits and Systems I, 50(9), 1162–1173.
Liu, Y., Wang, Z., & Liu, X. (2008). On synchronization of coupled neural networks with discrete and unbounded distributed delays. International Journal of Computer Mathematics, 85(8), 1299–1313.
Lu, W. L., & Chen, T. P. (2005). Dynamical behaviors of Cohen–Grossberg neural networks with discontinuous activation functions. Neural Networks, 18(3), 231–242.
Lu, W. L., & Chen, T. P. (2006). Dynamical behaviors of delayed neural network systems with discontinuous activation functions. Neural Computation, 18, 683–708.
Lu, W. L., & Chen, T. P. (2008). Almost periodic dynamics of a class of delayed neural networks with discontinuous activations. Neural Computation, 20, 1065–1090.

Papini, D., & Taddei, V. (2005). Global exponential stability of the periodic solution of a delayed neural network with discontinuous activations. Physics Letters A, 343(1–3), 117–128.
Song, Y. L., Han, M. A., & Wei, J. J. (2005). Stability and Hopf bifurcation analysis on a simplified BAM neural network with delays. Physica D, 200, 185–204.
Wang, L., & Zou, X. F. (2004). Capacity of stable periodic solutions in discrete-time bidirectional associative memory neural networks. IEEE Transactions on Circuits and Systems II, 51(6), 315–319.
Wang, Z., Shu, H., Fang, J., & Liu, X. (2006). Robust stability for stochastic Hopfield neural networks with time delays. Nonlinear Analysis, 7(5), 1119–1128.
Wu, H. Q. (in press). Stability analysis for periodic solution of neural networks with discontinuous neuron activations. Nonlinear Analysis.
Zhang, F. B. (1995). Two existence theorems for the solutions to differential inclusions. Journal of Shandong University, 30(2), 141–148 (in Chinese).
Zhou, J., Liu, Z. R., & Chen, G. R. (2004). Dynamics of periodic delayed neural networks. Neural Networks, 17, 87–101.