The breaking of a delayed ring neural network contributes to stability: The rule and exceptions


Neural Networks 48 (2013) 148–152


T.N. Khokhlova a, M.M. Kipnis a,b,∗

a Department of Mathematics, South Ural State University, Chelyabinsk 454080, Russia
b Department of Mathematics, Chelyabinsk State Pedagogical University, Chelyabinsk 454080, Russia

Article history: Received 14 January 2013; received in revised form 10 August 2013; accepted 11 August 2013.

Keywords: Stability; Neural networks; Linear and ring neural configuration; Time delay; The stability cone

Abstract. We prove that, in our mathematical model, the breaking of a delayed ring neural network extends the stability region in the parameter space if the number of neurons is sufficiently large. If the number of neurons is small, a ''paradoxical'' region can exist in the parameter space, wherein the ring neural configuration is stable while the linear one is unstable. We study the conditions under which the paradoxical region is nonempty. We also discuss how our mathematical model relates to neurosurgical operations that sever particular connections in the brain. © 2013 Elsevier Ltd. All rights reserved.

1. Introduction

Ring neural networks are widespread in the real world. The nematode C. elegans and sea stars have ring-shaped neural subnetworks (Ruppert, Fox, & Barnes, 2004; Ware, Clark, Crossland, & Russell, 1975). Many biological networks, including some nuclei of the human brain, fall into the class of ''small-world'' networks (Sporns & Zwi, 2004); the simplest model of a small-world network is a ring with a shortcut (Zhao & Wang, 2012). Social networks are often arranged as rings. If one of the links in a ring is removed, a linear configuration results (Marchesi, Orlandi, Piazza, & Unchini, 1992).

This paper is devoted to a comparative stability analysis of ring and linear neural configurations with delayed interactions. As a basis we take a simple model of a delayed ring neural network (see Ivanov & Kipnis, 2012; Kaslik, 2009; Kaslik & Balint, 2009; Khokhlova & Kipnis, 2012; Yuan & Campbell, 2004). Our artificial neural network model is certainly far from the real world, but it may be regarded as an approximate description of it. Our question is the following: if the ring neural network is broken and becomes a linear configuration, is the stability domain expanded? In this paper we give a positive answer in the case when the ring network is sufficiently large. Generally the answer is also positive when the number of neurons is not large. However, there are exceptions: we found a ''paradoxical region'' in the parameter space wherein the neural ring is stable while the linear neural configuration is unstable.

The paper is organized as follows. In Section 2 we show how the stability region changes when the ring of neurons is broken and turns into a linear configuration. In Section 3 we prove that the stability domain of a large neural network expands when the ring is broken. In Section 4 we determine the conditions under which paradoxical points exist in the parameter space. In the Conclusion we discuss how our abstract results reflect the successes and failures of neurosurgical operations severing connections between parts of the brain.

∗ Corresponding author at: Department of Mathematics, South Ural State University, Chelyabinsk 454080, Russia. Tel.: +7 9222323936; fax: +7 3512647753. E-mail addresses: [email protected] (T.N. Khokhlova), [email protected] (M.M. Kipnis). http://dx.doi.org/10.1016/j.neunet.2013.08.001

2. Breaking of the ring

The following system of equations describes the ring network (Fig. 1) with the delay τ in the neural interactions:
\[
\begin{aligned}
&\dot x_1(t) + x_1(t) + a\,x_n(t-\tau) + b\,x_2(t-\tau) = 0,\\
&\dot x_j(t) + x_j(t) + a\,x_{j-1}(t-\tau) + b\,x_{j+1}(t-\tau) = 0,\quad 2 \le j \le n-1,\\
&\dot x_n(t) + x_n(t) + a\,x_{n-1}(t-\tau) + b\,x_1(t-\tau) = 0,\qquad n > 2.
\end{aligned}\tag{1}
\]

Fig. 1. Ring and linear neural networks.

Here \(x_j(t)\) (\(1 \le j \le n\)) is the signal of the j-th neuron at the moment t, while the coefficients a and b are the strengths of the neural interactions. System (1) is the linear approximation of the Marcus–Westervelt neural network model (Marcus & Westervelt, 1989) near an equilibrium point. This system belongs to the class of time-delay systems of the form
\[
\dot x(t) + A\,x(t) + B\,x(t-\tau) = 0,\tag{2}
\]
where the \(n \times n\) matrices A, B are simultaneously triangularizable. We call the linear system (2) (asymptotically) stable if the null solution of (2) is (asymptotically) stable. The matrix form of (1) is equation (2) with A = I, where I is the unit matrix, and the circulant matrix B given by



0 a 0 

B=  ..

. 0 b

b 0 a

.. .

0 0

.. .

··· ··· ··· .. .

0 0 0

0 0

··· ···

a 0

0 b 0

.. .

0 0 0

.. .

0 a



a 0  0

3. The breaking of the large neural ring extends the stability region The next theorem shows that the large neural ring does not lose stability when it is broken. Theorem 1. For all real a, b, τ > 0, there exists an integer n0 such that for every n > n0 the asymptotic stability of (1) implies the asymptotic stability of (4). Proof. Fix a, b, τ > 0. Consider the ellipse in R2 given by

..  . . b

(3)

u1 = τ (a + b) cos θ , j

u2 = τ (a − b) sin θ ,

0 6 θ < 2π . (8)

j

If (u1 , u2 ) (1 6 j 6 4) are the vertices of the ellipse, then 1,2

0

1,2

= 0,

3,4

= 0,

u1

= ±τ |a + b|,

If the link between, say, first and last neuron is broken, then (1) transforms to

3,4 u2

= ±τ |a − b|.

x˙ 1 (t ) + x1 (t ) + b x2 (t − τ ) = 0,

Consider the stability oval (Khokhlova & Kipnis, 2012; Khokhlova, Kipnis, & Malygina, 2011) in R2

x˙ j (t ) + xj (t ) + a xj−1 (t − τ ) + b xj+1 (t − τ ) = 0,

(4)

2 6 j 6 (n − 1), x˙ n (t ) + xn (t ) + a xn−1 (t − τ ) = 0,

x˙ (t ) + I x(t ) + D x(t − τ ) = 0,

(5)

0

0

b 0 a

.. .

0 0

.. .

··· ··· ··· .. .

0 0 0

0 0

··· ···

a 0

0 b 0

.. .

0 0 0

.. .

0 a



0 0  0

..  . . b

a 0 G= 0 

0 bc

0 0 0 b 0 a

0

(11)

(7)

With c = 1 system (7) is equivalent to system (1) for a six-neuronal ring. When c varies from 1 to 0, the coupling strength of the first and the sixth neuron is reduced, and when c = 0 the ring becomes a line, which is described by (4). A transformation of the stability region is shown in Fig. 2 with τ = 0.1.

2π j n

+ iτ (a − b) sin

2π j n

.

(12)

All the points (u′1jn , u′2jn ) defined by (12) lie on the ellipse (8). When n → ∞ the points are dense in the ellipse. So there exists a natural number n0 such that for every n > n0 there exists a ′ lies outside the number j (1 6 j 6 n), such that the point Mjn stability oval. Hence (1) is unstable for the given values of a, b, τ , n > n0 . CASE 2: all the vertices (9) are inside the stability oval or on its boundary. The eigenvalues of D ∈ Rn×n (see (6)) are µ′′jn =



 ac 0 0 . 0  b 0

π / 2 6 ω1 6 π .

u′1jn + iu′2jn = τ (a + b) cos

x˙ (t ) + I x(t ) + G x(t − τ ) = 0, 0 0 b 0 a 0

τ = −ω1 / tan ω1 ,

(6)

polynomial of the second kind. We shall show the dynamics of the stability domain in the space of parameters of the neural ring when it is broken and it becomes a line. Let us consider the following system for a six-neural network:

0 b 0 a 0 0

where ω1 is the least positive root of the equation

2π j

The characteristic polynomial ϕn (λ) of D ∈ R is given by √ √ ϕn (λ) = ( ab)n Un (λ/(2 ab)), Un (λ) being the n-th Chebyshev

b 0 a 0 0 0

(10)

−ω1 6 ω 6 ω1 ,

i(a − b) sin n , 1 6 j 6 n. According to Theorem 1 of (Khokhlova ′ et al., 2011) let us construct the points Mjn = (u′1jn , u′2jn ), such that

n×n

0

(9)

CASE 1: one of the vertices (9) is outside of the stability oval (10), 2π j (11). The eigenvalues of B ∈ Rn×n are µ′jn = (a + b) cos n +

where the n × n matrix D has the form

a 0  D=  .. . 0

u1

u1 (ω) = −τ cos ω + ω sin ω, u2 (ω) = τ sin ω + ω cos ω,

n > 2.

We call (4) the system for linear neural configuration. The matrix form of (4) is as follows:



u2

πj

2 ab cos n+1 , 1 6 j 6 n. According to Theorem 1 of (Khokhlova ′′ et al., 2011) let us construct the points Mjn = (u′′1jn , u′′2jn ) such that u′′1jn + iu′′2jn = 2τ

√ ab cos

πj n+1

.

(13)

Let (1) be asymptotically stable. √ πj CASE 2.1: ab > 0. Then u′′1jn = 2τ ab cos n+1 , u′′2jn = 0 in (13). As j

j

the points (u1 , u2 ) (j = 1, 2) (see (9)) are inside the stability oval,



′′ in view of 2 ab 6 |a + b| all the points Mjn are inside the stability oval, which gives the asymptotic stability of (4).

150

T.N. Khokhlova, M.M. Kipnis / Neural Networks 48 (2013) 148–152

Fig. 2. Dynamics of the stability domain in the process of breaking the ring. The boundaries of the stability domains for (7) in the ab-plane are shown when c varies from 1 to 0.

√ πj ab cos n+1 in (13). √ The rest of the√proof runs as for case 2.1 with 2 ab 6 |a + b| being replaced by 2 |ab| 6 |a − b|. Theorem 1 is proved. CASE 2.2: ab < 0. Then u′′1jn = 0, u′′2jn = 2τ

By Theorem 1 the stability domain is not narrowed, when the large neural ring is broken. In fact, the domain expands. Indeed, if a = 0, then (1) is unstable for large b, while (4) is stable for each b. Thus, the breaking of the neural ring expands the stability domain when the number of neurons n0 is sufficiently large. If j j a, b, τ are such that the four points (u1 , u2 ) (1 6 j 6 4) defined by (9) lie inside the stability oval, then n0 = 3 is ‘‘sufficiently large’’ (3 is the minimal number of neurons in the ring). If a, b, τ are such that some of the four points lie outside the stability oval, then, generally speaking, the closer the points to the boundary of stability oval, the greater the n0 .
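The eigenvalue formulas behind (12) and (13) can be checked numerically. The sketch below is not part of the paper: it assumes NumPy is available, and the values n = 8, a = 0.3, b = 0.5 are arbitrary illustrative choices. It builds the circulant matrix B of (3) and the tridiagonal matrix D of (6) and compares their computed spectra with the closed forms.

```python
import numpy as np

def ring_matrix(n, a, b):
    """Circulant matrix B of (3): a on the subdiagonal, b on the superdiagonal,
    plus the corner entries B[0, n-1] = a and B[n-1, 0] = b closing the ring."""
    B = np.zeros((n, n))
    for j in range(n):
        B[j, (j - 1) % n] = a
        B[j, (j + 1) % n] = b
    return B

def line_matrix(n, a, b):
    """Tridiagonal matrix D of (6): the ring with the corner entries removed."""
    D = ring_matrix(n, a, b)
    D[0, n - 1] = 0.0
    D[n - 1, 0] = 0.0
    return D

n, a, b = 8, 0.3, 0.5  # illustrative values; ab > 0, so the spectrum of D is real

j = np.arange(1, n + 1)
# Closed-form spectra: (a+b)cos(2*pi*j/n) + i(a-b)sin(2*pi*j/n) for the ring,
# 2*sqrt(ab)*cos(pi*j/(n+1)) for the line.
mu_ring = (a + b) * np.cos(2 * np.pi * j / n) + 1j * (a - b) * np.sin(2 * np.pi * j / n)
mu_line = 2 * np.sqrt(a * b) * np.cos(np.pi * j / (n + 1))

eig_B = np.linalg.eigvals(ring_matrix(n, a, b))
eig_D = np.linalg.eigvals(line_matrix(n, a, b))

# Every closed-form eigenvalue appears in the numerically computed spectrum
# (matching by nearest distance avoids ordering ambiguities of conjugate pairs).
for mu in mu_ring:
    assert np.min(np.abs(eig_B - mu)) < 1e-8
for mu in mu_line:
    assert np.min(np.abs(eig_D - mu)) < 1e-8
```

Multiplying each eigenvalue by τ then yields exactly the points \(M'_{jn}\) and \(M''_{jn}\) of (12), (13) that are tested against the stability oval in the proof.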

4. Paradoxical region in the parameter space

The principle ''the breaking of the ring neural network contributes to stability'' is confirmed by Theorem 1 for large networks. In practice, this principle is almost always true also for networks with a limited number of neurons. However, we have found violations of this principle in some parameter regions for networks with a small number of neurons. Such regions will be called paradoxical. The exact definition is as follows.

Definition 1. Let \(n \in \mathbb{Z}_+\), τ > 0. An ordered pair (a, b) of real numbers is said to be paradoxical if system (1) for the ring neural network is asymptotically stable at these parameters, while system (4) for the linear neural configuration is unstable. By the paradoxical region we mean the set of all paradoxical pairs.

In Fig. 3 the paradoxical region is significant at n = 3, empty at n = 4, and very small at n = 5, 6. Conditions for the existence of paradoxical points are given by the following theorem.

Fig. 3. Stability/instability regions in the ab-plane for the ring- and line-shaped neural networks. 1—both ring and line are stable. 2—ring is unstable, line is stable. 3—both ring and line are unstable. 4—paradoxical region: ring is stable, while line is unstable. Additionally, the paradoxical region for n = 5, 6 is shown enlarged.

Theorem 2. 1. If n is divisible by 4, then the paradoxical region is empty for any delay τ.
2. If n is even but not divisible by 4, then the paradoxical region is nonempty for any delay τ.
3. If n is odd and the delay τ is sufficiently small, then the paradoxical region is nonempty.
4. If n is odd and the delay τ is sufficiently large, then the paradoxical region is empty.

Proof. 1. Let \(n = 4k\), \(k \in \mathbb{Z}_+\). Set \(j_r = rk\), \(1 \le r \le 4\). If \(ab \ge 0\), then from (13), (12) we get \(u''_{2jn} = 0\) (\(1 \le j \le n\)), \(u'_{2 j_2 n} = u'_{2 j_4 n} = 0\), and
\[
\begin{aligned}
\max\{u'_{1 j_2 n}, u'_{1 j_4 n}\} &= \tau|a+b| \ge 2\tau\sqrt{ab}\cos\tfrac{\pi}{n+1} = \max_{1\le j\le n} u''_{1jn},\\
\min\{u'_{1 j_2 n}, u'_{1 j_4 n}\} &= -\tau|a+b| \le -2\tau\sqrt{ab}\cos\tfrac{\pi}{n+1} = \min_{1\le j\le n} u''_{1jn}.
\end{aligned}\tag{14}
\]
If \(ab < 0\), then \(u''_{1jn} = 0\) (\(1 \le j \le n\)), \(u'_{1 j_1 n} = u'_{1 j_3 n} = 0\), and
\[
\begin{aligned}
\max\{u'_{2 j_1 n}, u'_{2 j_3 n}\} &= \tau|a-b| \ge \max_{1\le j\le n} u''_{2jn},\\
\min\{u'_{2 j_1 n}, u'_{2 j_3 n}\} &= -\tau|a-b| \le \min_{1\le j\le n} u''_{2jn}.
\end{aligned}\tag{15}
\]
From (14), (15) we deduce that if all the points \(M'_{jn}\) are inside the stability oval, then all the points \(M''_{jn}\) are inside the stability oval. Hence the asymptotic stability of the ring configuration implies the asymptotic stability of the linear configuration, and the ab-plane contains no paradoxical points.

2. Let \(n = 4k + 2\), \(k \in \mathbb{Z}_+\). The task is to find a paradoxical point of the form (a, −a) with a > 0. Put \(a = -b > 0\). Then \(u'_{1jn} = u''_{1jn} = 0\). As \(\max_{1\le j\le n}\sin\frac{2\pi j}{4k+2} = -\min_{1\le j\le n}\sin\frac{2\pi j}{4k+2} = \cos\frac{\pi}{4k+2}\), formulas (12) and (13) give
\[
\begin{aligned}
\max_{1\le j\le n} u''_{2jn} &= 2\tau a\cos\tfrac{\pi}{4k+3} > 2\tau a\cos\tfrac{\pi}{4k+2} = \max_{1\le j\le n} u'_{2jn},\\
\min_{1\le j\le n} u''_{2jn} &= -2\tau a\cos\tfrac{\pi}{4k+3} < -2\tau a\cos\tfrac{\pi}{4k+2} = \min_{1\le j\le n} u'_{2jn}.
\end{aligned}\tag{16}
\]
Let \(a_1 > 0\) be defined by the requirement that system (4) is asymptotically stable at \(a = -b < a_1\) and unstable at \(a = -b > a_1\), and let \(a_2\) be the analogous point for system (1). From (16) it follows that \(a_1 < a_2\). Hence if \(a_1 < a < a_2\) and \(b = -a\), then (1) is asymptotically stable while (4) is unstable. Therefore the paradoxical region is nonempty.

3. Let \(n = 2k + 1\), \(k \in \mathbb{Z}_+\). Let a satisfy
\[
\frac{1}{2\cos\frac{\pi}{2k+2}} < a < \frac{1}{2\cos\frac{\pi}{2k+1}}.\tag{17}
\]
We prove that (a, a) is a paradoxical point for all sufficiently small τ > 0. Since \(a = b > 0\), (12) and (13) show that \(u'_{2jn} = u''_{2jn} = 0\). As \(\min_{1\le j\le n}\cos\frac{2\pi j}{2k+1} = -\cos\frac{\pi}{2k+1}\) and \(a = b > 0\), from (17) and (12), (13) we deduce
\[
\min_{1\le j\le n} u''_{1jn} = -2\tau a\cos\frac{\pi}{2k+2} < -\tau,\qquad
\min_{1\le j\le n} u'_{1jn} = -2\tau a\cos\frac{\pi}{2k+1} > -\tau.\tag{18}
\]
Suppose τ is so small that \(\max_{1\le j\le n} u'_{1jn} < \pi/2\). Then (18) implies the asymptotic stability of (1) and the instability of (4). So the paradoxical region is nonempty for sufficiently small τ.

4. Let \(n = 2k + 1\), \(k \in \mathbb{Z}_+\). We prove that the paradoxical region is empty for all sufficiently large τ.

CASE 4.1: \(ab \ge 0\). By (12), (13),
\[
|u'_{1nn}| = \tau|a+b| \ge 2\tau\sqrt{ab}\cos\frac{\pi}{n+1} = \max_{1\le j\le n}|u''_{1jn}|.\tag{19}
\]
As \(\tau \to \infty\), the stability oval (10), (11) approaches the circle of radius τ. Hence from (19) it follows that if \(M'_{nn}\) is inside the stability oval, then all the points \(M''_{jn}\) are inside the stability oval. So the domain \(ab \ge 0\) contains no paradoxical points when τ is sufficiently large.

CASE 4.2: \(ab < 0\). As \(\max_{1\le j\le n}\sin\frac{2\pi j}{2k+1} = \cos\frac{\pi}{4k+2}\), by (12) and (13) we conclude that \(u''_{1jn} = 0\) and
\[
\max_{1\le j\le n}|u'_{2jn}| = \tau|a-b|\cos\frac{\pi}{4k+2} \ge 2\tau\sqrt{|ab|}\cos\frac{\pi}{2k+2} = \max_{1\le j\le n}|u''_{2jn}|.\tag{20}
\]
The rest of the proof runs as in Case 4.1 with (19) replaced by (20). Theorem 2 is proved.

Note 1. By Theorem 2, for every odd n there exist real numbers \(\tau_0, \tau_1\) such that \(\tau < \tau_0\) implies the nonemptiness of the paradoxical region, while \(\tau > \tau_1\) implies its emptiness. In our numerical experiments \(\tau_0 = \tau_1\), but we cannot prove this. Numerical experiments also show that the paradoxical region (if any) narrows with increasing τ.
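The paradoxical point constructed in part 3 of the proof can be illustrated numerically via the stability-oval criterion of Section 3. The sketch below is not from the paper: it relies on the observation that for a = b > 0 all the points \(M'_{jn}\), \(M''_{jn}\) lie on the \(u_1\)-axis, where membership in the oval (10), (11) reduces to \(-\tau < u_1 < \omega_1/\sin\omega_1\); the values τ = 0.1, n = 3 and a = b = 0.85 (chosen between the bounds (17) for k = 1) are illustrative.

```python
import math

def omega1(tau):
    """Least root of tau = -omega/tan(omega) on (pi/2, pi), found by bisection."""
    f = lambda w: tau + w / math.tan(w)  # f > 0 just above pi/2, f -> -inf near pi
    lo, hi = math.pi / 2 + 1e-9, math.pi - 1e-9
    for _ in range(200):
        mid = (lo + hi) / 2
        if f(mid) > 0:
            lo = mid
        else:
            hi = mid
    return lo

def inside_oval_real(u1, tau):
    """Membership test for a real point (u1, 0) of the stability oval (10), (11)."""
    w1 = omega1(tau)
    return -tau < u1 < w1 / math.sin(w1)

tau, n, a = 0.1, 3, 0.85   # a = b = 0.85 lies between 1/(2cos(pi/4)) and 1/(2cos(pi/3))
b = a

# Points (12) for the ring and (13) for the line; all real since a = b > 0.
ring_pts = [2 * tau * a * math.cos(2 * math.pi * j / n) for j in range(1, n + 1)]
line_pts = [2 * tau * a * math.cos(math.pi * j / (n + 1)) for j in range(1, n + 1)]

ring_stable = all(inside_oval_real(u, tau) for u in ring_pts)
line_stable = all(inside_oval_real(u, tau) for u in line_pts)

assert ring_stable and not line_stable   # (a, a) is a paradoxical pair
```

The failure occurs exactly as in (18): the leftmost line point \(-2\tau a\cos(\pi/4) \approx -0.120\) falls outside the oval's left endpoint \(-\tau = -0.1\), while the leftmost ring point \(-2\tau a\cos(\pi/3) = -0.085\) stays inside.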

5. Conclusion

The principle ''the breaking of the ring neural network contributes to stability'' brings to mind the history of lobotomy operations intended to stabilize the human psyche by dissecting certain connections in the brain (Pressman, 1998). In our ring neural network model, a disturbance repeatedly traverses the ring, making it difficult to reach a steady state. Breaking the ring makes it impossible for the disturbance to circulate, and hence one can expect an improvement in stability. The inventors of lobotomy likewise decided to destroy a fixed connection in the brain to prevent the circulation of pathological impulses, and their efforts were successful. This story is reflected by our Theorem 1. However, the lobotomy operation has been criticized, because in a number of mentally ill people a personality change was observed as a result of the surgery. Perhaps the existence of paradoxical points reflects the problems that led to a sharp restriction of these operations.

Acknowledgments

This work was supported by grant 1.1711.2011 from the Ministry of Education of Russia. We thank the referees for insightful comments that led to an improvement of this manuscript.

References

Ivanov, S. A., & Kipnis, M. M. (2012). Stability analysis of discrete-time neural networks with delayed interactions: torus, ring, grid, line. International Journal of Pure and Applied Mathematics, 78(5), 691–709.
Kaslik, E. (2009). Dynamics of a discrete-time bidirectional ring of neurons with delay. In Proceedings of the international joint conference on neural networks (pp. 1539–1546). IEEE Computer Society Press.
Kaslik, E., & Balint, St. (2009). Complex and chaotic dynamics in a discrete-time delayed Hopfield neural network with ring architecture. Neural Networks, 22(10), 1411–1418.
Khokhlova, T. N., & Kipnis, M. M. (2012). Numerical and qualitative stability analysis of ring and linear neural networks with a large number of neurons. International Journal of Pure and Applied Mathematics, 76(3), 403–419.
Khokhlova, T. N., Kipnis, M. M., & Malygina, V. V. (2011). The stability cone for a delay differential matrix equation. Applied Mathematics Letters, 24, 742–745.
Marchesi, M., Orlandi, G., Piazza, F., & Unchini, A. (1992). Linear data-driven architectures implementing neural network models. International Journal of Neural Networks, 3(3), 101–120.
Marcus, C. M., & Westervelt, R. M. (1989). Stability of analog neural networks with delay. Physical Review A, 39, 347–359.
Pressman, J. D. (1998). Last resort: psychosurgery and the limits of medicine. Cambridge studies in the history of medicine. Cambridge University Press.
Ruppert, E. E., Fox, R. S., & Barnes, R. D. (2004). Invertebrate zoology: a functional evolutionary approach (7th ed.). Thomson, Brooks/Cole.
Sporns, O., & Zwi, J. (2004). The small world of the cerebral cortex. Neuroinformatics, 2, 145–162.
Ware, R. R., Clark, D., Crossland, K., & Russell, R. L. (1975). The nerve ring of the nematode C. elegans: sensory input and motor output. The Journal of Comparative Neurology, 162, 71–110.
Yuan, Y., & Campbell, S. A. (2004). Stability and synchronization of a ring of identical cells with delayed coupling. Journal of Dynamics and Differential Equations, 16, 709–744.
Zhao, D.-X., & Wang, J.-M. (2012). Exponential stability and spectral analysis of a delayed ring neural network with a small-world connection. Nonlinear Dynamics, 68, 77–93.