Neurocomputing 13 (1996) 1-10

Learning of limit cycles in discrete-time neural network

Toru Kumagai *, Ryoichi Hashimoto, Mitsuo Wada

National Institute of Bioscience and Human Technology, 1-1, Higashi, Tsukuba, Ibaraki, 305, Japan

Received 3 April 1995; accepted 12 July 1995

Abstract

We propose a model of a fully interconnected neural network composed of neurons that have a refractory period and integrate afferent signals. The network can memorize several different pairs of periodic input-output patterns. In this paper we describe the initial conditions that generate a given limit cycle. We also describe the impulse stimulation, and a weight matrix between stimulus and neurons, that evoke a transition from one limit cycle to another.

Keywords: Recurrent neural network; Discrete-time neural network; Binary neuron model; Limit cycles

1. Introduction

Many kinds of biological neural networks have chaotic or oscillatory dynamics. Such dynamics underlie motor pattern generation and visual and olfactory pattern recognition. However, neural networks with non-convergent dynamics have so far received relatively little attention compared with those with convergent dynamics. The Nagumo-Sato neuron model is a typical model in which a single neuron produces a complex limit cycle [2]. We have extended that model and propose a neural network that memorizes sets of complex limit cycles. We have also derived necessary and sufficient conditions for memorizing sets of limit cycles in a network [1,3]. In this paper, we show the condition for converging to a given limit cycle from a given initial state in a network that memorizes plural limit cycles. Moreover, we demonstrate the performance of the network: it can memorize complex limit cycles in a few neurons and shift between limit cycles upon activation of external inputs.

* Corresponding author. Email: [email protected]


2. Neural network model

In this paper we treat a neural network composed of discrete-time, binary-output neurons. The neurons are mutually connected; the block diagram of a neuron is shown in Fig. 1. The output of each neuron is fed back to it, giving the neuron a refractory period. Moreover, each neuron receives a stimulus from an external input. Incoming signals are weighted, integrated in a leaky way inside the connection paths, and then summed. The result is compared with a threshold. Every neuron simultaneously updates its output value as follows:

$$x_i(k+1) = \sum_{j=1}^{M} v_{ij} \sum_{r=0}^{k} T^r u_j(k-r) - \sum_{j=1}^{N} w_{ij} \sum_{r=0}^{k} T^r y_j(k-r) - h_i \qquad (1)$$

$$y_i(k) = f(x_i(k)) \qquad (2)$$

where f(x) is the function that outputs 0 if x < 0 and 1 if x ≥ 0. y_i(k) is the output value of neuron i at time k, u_j(k) is the value of external input j at time k, w_{ij} is the weight of the connection from neuron j to neuron i, and v_{ij} is the weight of the connection from external input j to neuron i. h_i is the threshold value of neuron i, and T (0 ≤ T < 1; in this paper T = 0.5) is the decay parameter. x_i(k), defined in Eq. (1), is called the state of neuron i. N is the number of neurons and M is the number of external inputs. By setting T = 0 and M = 1, our neural network model coincides with the Hopfield model [4]; the two models differ, however, in the timing of their updating rules, which is synchronous in the former and asynchronous in the latter. The model is equivalent to the Nagumo-Sato model when N = 1 and M = 1.
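For illustration, Eq. (1) can be simulated without storing the whole input and output history: unrolling the geometric sums turns it into the one-step recursion x(k+1) = T x(k) + V u(k) - W y(k) + (T - 1) h, which also accommodates an arbitrary initial internal state x(0) as used in Section 4. The following minimal Python sketch (not from the paper; names are illustrative) implements this recursion.

```python
import numpy as np

def simulate(W, V, h, u_seq, x0, T=0.5):
    """Simulate the network of Eqs. (1)-(2) in recursive form.

    Eq. (1) unrolls to the equivalent one-step recursion
        x(k+1) = T x(k) + V u(k) - W y(k) + (T - 1) h,
    which reproduces Eq. (1) exactly for x(0) = -h (empty history)
    and otherwise realizes a free initial internal state x(0).
    """
    x = np.asarray(x0, dtype=float)
    ys = []
    for u in u_seq:                      # u_seq: (steps, M) external inputs
        y = (x >= 0).astype(float)       # Eq. (2): f(x) = 1 iff x >= 0
        ys.append(y)
        x = T * x + V @ u - W @ y + (T - 1) * h
    return np.array(ys)                  # (steps, N) binary outputs
```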

Fig. 1. The neuron model.

3. Learning algorithm

The neural network described above can memorize the limit cycle Y when the following condition is satisfied [1]:

Theorem 1. There exists an initial internal state x(0) that produces Y, if and only if the following expression is satisfied:

$$\forall i\,(i = 1 \ldots N),\ \forall s\,(s = 0 \ldots ql - 1):\quad \underline{L}_i^s \le U_i^s - h_i < \overline{L}_i^s \qquad (3)$$

where


$$\overline{L}_i^s = \begin{cases} \min_r \{\, L_i^{rq+s} \mid y_{i,rq+s} = 0 \,\} & \text{if such an } r \text{ exists} \\ +\infty & \text{otherwise} \end{cases} \qquad (4)$$

$$\underline{L}_i^s = \begin{cases} \max_r \{\, L_i^{rq+s} \mid y_{i,rq+s} = 1 \,\} & \text{if such an } r \text{ exists} \\ -\infty & \text{otherwise} \end{cases} \qquad (5)$$

$$U = \begin{pmatrix} u_{1,0} & u_{1,1} & \cdots & u_{1,q-1} \\ u_{2,0} & u_{2,1} & \cdots & u_{2,q-1} \\ \vdots & \vdots & & \vdots \\ u_{M,0} & u_{M,1} & \cdots & u_{M,q-1} \end{pmatrix} \qquad (6)$$

$$Y = \begin{pmatrix} y_{1,0} & y_{1,1} & \cdots & y_{1,p-1} \\ y_{2,0} & y_{2,1} & \cdots & y_{2,p-1} \\ \vdots & \vdots & & \vdots \\ y_{N,0} & y_{N,1} & \cdots & y_{N,p-1} \end{pmatrix} \qquad (7)$$

$$W = \begin{pmatrix} w_{1,1} & w_{1,2} & \cdots & w_{1,N} \\ w_{2,1} & w_{2,2} & \cdots & w_{2,N} \\ \vdots & \vdots & & \vdots \\ w_{N,1} & w_{N,2} & \cdots & w_{N,N} \end{pmatrix}, \qquad V = \begin{pmatrix} v_{1,1} & \cdots & v_{1,M} \\ v_{2,1} & \cdots & v_{2,M} \\ \vdots & & \vdots \\ v_{N,1} & \cdots & v_{N,M} \end{pmatrix}, \qquad t_d = (T^{d-1} \; \cdots \; T \; 1)^T$$

where V is an (N × M) matrix whose elements are v_{ij}, W is an (N × N) matrix whose elements are w_{ij}, and Y is an (N × p) matrix whose elements are y_i(k). q is the period of the external input, p is the period of the output sequence, and l is a positive integer such that p = lq. e_i^d is the ith column of the identity matrix of order d, and I^c is obtained by rotating the columns of the identity matrix to the left c times. U_i^s and L_i^{rq+s} denote the steady-state leaky sums of the external-input term and of the feedback term of Eq. (1), respectively (cf. Eqs. (8) and (9)). The theorem above shows whether a network has initial states that produce the limit cycle Y. Network parameters for which a network produces Y are derived as a solution of the simultaneous inequalities Eq. (3). If Eq. (3) has no solution, the network can never produce Y. One of the most important characteristics of the proposed


neural network is that it can memorize plural limit cycles in a single network. For instance, if the network satisfies Eq. (3) for limit cycles Y_1 and Y_2, it has initial states that produce Y_1 and initial states that produce Y_2. The initial state decides which limit cycle appears.
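To make the procedure concrete, the simultaneous inequalities Eq. (3) can be solved numerically as a linear feasibility problem in (w_i, h_i), one neuron at a time, since the steady-state condition (9) is linear in the parameters. The sketch below is our illustration, not the authors' algorithm; it assumes no external input, a small margin eps for the strict inequalities, and a bound on the weights, and cycle_traces and fit_weights are hypothetical names.

```python
import numpy as np
from scipy.optimize import linprog

def cycle_traces(Y, T=0.5):
    """Steady-state leaky traces c(s) of a periodic output Y (N x p):
    c_j(s) = sum_{r=0}^{p-1} T^r y_j((s-1-r) mod p) / (1 - T^p),
    the quantity multiplying w_ij in Eq. (1) once the cycle is locked in."""
    N, p = Y.shape
    C = np.zeros((p, N))
    for s in range(p):
        for r in range(p):
            C[s] += (T ** r) * Y[:, (s - 1 - r) % p]
    return C / (1.0 - T ** p)

def fit_weights(cycles, T=0.5, eps=0.1, bound=10.0):
    """Find (W, h) giving every requested cycle the correct output signs."""
    N = cycles[0].shape[0]
    W = np.zeros((N, N)); h = np.zeros(N)
    for i in range(N):
        A, b = [], []
        for Y in cycles:
            C = cycle_traces(Y, T)
            for s in range(Y.shape[1]):
                row = np.append(C[s], 1.0)     # coefficients of (w_i, h_i)
                if Y[i, s] == 1:               # need x_i(s) = -w_i.c - h_i >= eps
                    A.append(row); b.append(-eps)
                else:                          # need x_i(s) <= -eps
                    A.append(-row); b.append(-eps)
        res = linprog(np.zeros(N + 1), A_ub=np.array(A), b_ub=np.array(b),
                      bounds=[(-bound, bound)] * (N + 1))
        if not res.success:
            raise ValueError(f"no solution of Eq. (3) for neuron {i}")
        W[i], h[i] = res.x[:N], res.x[N]
    return W, h
```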

4. Relationship between initial states and the generated limit cycle

4.1 Condition for producing the limit cycle Y from the initial state x(0)

Theorem 1 gives the condition under which the network can produce a set of limit cycles, but it does not give the initial state that produces a particular one. The initial state decides which limit cycle appears when the network memorizes plural limit cycles. In the following, we derive a condition for producing the limit cycle Y from an initial state x(0). Supposing that the simultaneous inequalities Eq. (3) are satisfied and the network starts to produce the limit cycle Y from the initial state x(0), x(k) is derived from Eq. (1) as:

$$x_i(rq + s + mp) = T^{mp} x_i(rq + s) + (1 - T^{mp})\left(U_i^s - L_i^{rq+s} - h_i\right) \qquad (8)$$

Letting m → ∞,

$$x_i(mp + rq + s) = U_i^s - L_i^{rq+s} - h_i \qquad (9)$$

Eqs. (8) and (9) show that the network keeps producing the limit cycle Y once it has generated one period of the sequence. Supposing that the initial state is x(0) and the network generates the limit cycle Y from the pattern y(n), the state x(k) is:

$$x_i(k) = \begin{cases} x_i(0) & \text{if } k = 0 \\ T^k x_i(0) + (1 + T + \cdots + T^{k-1})\, h_i (T - 1) - \displaystyle\sum_{j=0}^{k-1} w_i^T y((j+n) \bmod p)\, T^{k-1-j} & \text{if } k \ne 0 \end{cases} \qquad (10)$$

where $w_i = (w_{i,1}\; w_{i,2}\; \cdots\; w_{i,N})^T$ and $y(k) = (y_1(k)\; y_2(k)\; \cdots\; y_N(k))^T$. Consequently, we get the following condition for starting to produce the limit cycle Y from an initial state x(0):

$$\exists n\,(0 \le n \le p-1)\ \forall i\,(1 \le i \le N)\ \forall k\,(0 \le k \le p-1):$$
$$x_i(k)\left(2 y_i((k+n) \bmod p) - 1\right) > 0 \quad \text{or} \quad \left(x_i(k) = 0 \ \text{and}\ y_i((k+n) \bmod p) > 0\right) \qquad (11)$$

The solution of the simultaneous inequalities Eq. (11) is the set of initial states that produce the limit cycle Y.

4.2 Condition for converging to the limit cycle Y from an initial state x(0)

We consider convergence to the limit cycle Y after a sequence Z of length g. Supposing that the network produces the limit cycle Y after producing the sequence Z, the state of the network x(k) is:

$$x_i(k) = \begin{cases} x_i(0) & \text{if } k = 0 \\ T^k x_i(0) + (1 + T + \cdots + T^{k-1})\, h_i (T-1) - \displaystyle\sum_{j=0}^{k-1} w_i^T z(j)\, T^{k-1-j} & \text{if } 0 < k \le g \\ T^k x_i(0) + (1 + T + \cdots + T^{k-1})\, h_i (T-1) - \displaystyle\sum_{j=0}^{g-1} w_i^T z(j)\, T^{k-1-j} - \displaystyle\sum_{j=g}^{k-1} w_i^T y((j+n) \bmod p)\, T^{k-1-j} & \text{if } g < k \end{cases} \qquad (12)$$

where $Z = (z(0)\; z(1)\; \cdots\; z(g-1))$ and $z(k) = (z_1(k)\; z_2(k)\; \cdots\; z_N(k))^T$. The condition for converging to the limit cycle Y through the sequence Z is as follows:

$$\exists n\,(0 \le n \le p-1)\ \exists Z\,(\text{a sequence of length } g,\ 0 \le g < g_{\max}):\ \forall i\,(1 \le i \le N):$$
$$\forall k\,(0 \le k < g):\quad x_i(k)\left(2 z_i(k) - 1\right) > 0 \ \text{or}\ \left(x_i(k) = 0 \ \text{and}\ z_i(k) > 0\right)$$
$$\forall k\,(g \le k < g + p):\quad x_i(k)\left(2 y_i((k+n) \bmod p) - 1\right) > 0 \ \text{or}\ \left(x_i(k) = 0 \ \text{and}\ y_i((k+n) \bmod p) > 0\right) \qquad (13)$$

The solution of the simultaneous inequalities Eq. (13) is the set of initial states x(0) that converge to the limit cycle Y through the sequence Z.


5. Model performance

In the previous sections we showed how to determine network parameters that memorize a set of limit cycles and how to find the initial states that converge to a given limit cycle. In this section we give some examples and investigate the performance of the network model.

5.1 Memorization of sets of limit cycles in the network

Example 1. Let us memorize the following three limit cycles in a neural network composed of two neurons (Fig. 2):

(14)

We obtain the following network parameters from the simultaneous inequalities Eq. (3):

$$w_{11} = -0.3, \quad w_{12} = 0.6, \quad h_1 = -0.3$$
$$w_{21} = 0.0, \quad w_{22} = 0.3, \quad h_2 = -0.3 \qquad (15)$$

The relationship between the initial state and the generated limit cycle is shown in Fig. 3, which is derived from the simultaneous inequalities Eqs. (11) and (13). The network memorizes the limit cycles of Eq. (14) and no others.
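A map like Fig. 3 can be reproduced numerically from the parameters of Eq. (15): sweep a grid of initial states, iterate the recursion with u = 0, and group each point by the limit cycle its output settles into. In the sketch below the grid range and step counts are our choices, and points are grouped by the detected cycle itself since the cycles of Eq. (14) are not reproduced above.

```python
import numpy as np

W = np.array([[-0.3, 0.6],
              [ 0.0, 0.3]])               # weights of Eq. (15)
h = np.array([-0.3, -0.3])
T = 0.5

def reached_cycle(x0, steps=200, pmax=8):
    """Iterate from x(0) with u = 0 and return the limit cycle reached,
    as a canonical rotation of its output patterns."""
    x = np.asarray(x0, dtype=float)
    ys = []
    for _ in range(steps):
        y = (x >= 0).astype(float)
        ys.append(tuple(y.astype(int)))
        x = T * x - W @ y + (T - 1) * h    # recursion form of Eq. (1)
    tail = ys[-2 * pmax:]
    for p in range(1, pmax + 1):           # smallest period of the tail
        if all(tail[k] == tail[k + p] for k in range(len(tail) - p)):
            cyc = tail[:p]
            return min(tuple(cyc[i:] + cyc[:i]) for i in range(p))
    return None

basins = {}
for a in np.linspace(-1.0, 1.0, 41):       # grid over (x1(0), x2(0))
    for b in np.linspace(-1.0, 1.0, 41):
        basins.setdefault(reached_cycle((a, b)), []).append((a, b))
print({cycle: len(points) for cycle, points in basins.items()})
```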

Example 2. Let us memorize the following four limit cycles in a neural network composed of two neurons:

(16)

We obtain the following network parameters from the simultaneous inequalities Eq. (3):

$$w_{11} = -7.50, \quad w_{12} = -9.00, \quad h_1 = 15.3$$
$$w_{21} = 3.00, \quad w_{22} = 4.50, \quad h_2 = -6.30 \qquad (17)$$

Fig. 2. Structure of the neural network.


Fig. 3. The map of the initial states. Shaded regions either generate the limit cycles Ya, Yb and Yc directly or enter them after a finite number of steps; boundaries between limit cycles and between conditions are drawn.

The relationship between the initial state and the generated limit cycle is shown in Fig. 4, which is derived from the simultaneous inequalities Eqs. (11) and (13). The network memorizes the limit cycles of Eq. (16) and no others.

5.2 Shifting between limit cycles by the external input

While the external input remains constant, the network keeps the limit cycle decided by the initial state. Conversely, if the external input changes, the network may shift from one limit cycle to another. The condition for shifting is derived from Eqs. (1), (3), (8) and (13). Here we design V so as to cause a shift of the limit cycle in the network of Example 1. We suppose that the shift is caused by activation of u_1, from Ya to Yb and from Yb to Yc. V is readily designed as:

$$V = (0.051 \;\; 0)^T \qquad (M = 1) \qquad (18)$$

The result of the simulation is shown in Fig. 5. The shift occurs when the activation of u_1 is sustained for a certain period; it does not occur when u_1 is activated only briefly. The limit cycle reached after the shift is retained after the stimulus is removed.
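The experiment of Fig. 5 can be sketched with the parameters of Eqs. (15) and (18): hold u_1 active for a stretch of steps and compare the output cycles before, during and after the pulse. The pulse length and the initial state below are illustrative choices; whether that state lies in the basin of Ya must be read off a map such as Fig. 3.

```python
import numpy as np

W = np.array([[-0.3, 0.6], [0.0, 0.3]])       # Eq. (15)
h = np.array([-0.3, -0.3])
V = np.array([[0.051], [0.0]])                 # Eq. (18), M = 1
T = 0.5

x = np.array([0.1, -0.1])                      # illustrative initial state
u_seq = [0.0] * 40 + [1.0] * 20 + [0.0] * 40   # sustained activation of u1
outputs = []
for u in u_seq:
    y = (x >= 0).astype(float)
    outputs.append(y.astype(int))
    x = T * x + (V @ np.array([u])) - W @ y + (T - 1) * h
# compare the periodic patterns before, during and after the pulse
print(np.array(outputs).T)
```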


Fig. 4. The map of the initial states. Shaded regions either generate the limit cycles Yd, Ye, Yf and Yg directly or enter them after a finite number of steps; boundaries between limit cycles and between conditions are drawn.

Fig. 5. Transition between limit cycles evoked by the external input.

5.3 Performance of the network

The proposed network model can memorize plural limit cycles in a few neurons, and a transition between limit cycles can be induced by the external input. The model can therefore implement various kinds of sequential machines. It can work synchronously, using a periodic external input from other equipment. It can act as a discriminator for time-sequence data. It can also form part of a complex system, using modular networks and communication between modules; it is known that decentralized neural network systems have advantages in some cases [5]. However, the proposed network must have hidden neurons to produce arbitrary limit cycles, especially when the period is long or the number of limit cycles is large. When the network has hidden neurons, many parameter solutions satisfy the requirement, and the network may also memorize limit cycles that were not requested.

6. Conclusion

We showed the condition for converging to a given limit cycle in a fully interconnected neural network. Moreover, we demonstrated the performance of the network: it can memorize complex limit cycles in a few neurons, and shifts between limit cycles can be triggered by an external stimulus. In the future we would like to study the design of more complex networks.

References

[1] S. Gardella, T. Kumagai, R. Hashimoto and M. Wada, On the dynamics and potentialities of a discrete-time binary neural network with time delay, Proc. IFSA, FLU, KIT, SOFA and JNNS 2nd Int. Conf. on Fuzzy Logic & Neural Networks, Iizuka, Japan (July 1992) 493-499.
[2] J. Nagumo and S. Sato, On a response characteristic of a mathematical neuron model, Kybernetik 10 (1972) 155-164.
[3] S. Gardella, T. Kumagai, R. Hashimoto and M. Wada, On the dynamics and applications of a discrete-time binary neural network with time delay, J. Intelligent and Fuzzy Systems (1994) 221-228.
[4] J.J. Hopfield, Neural networks and physical systems with emergent collective computational abilities, Proc. Nat. Acad. Sci. 79 (1982) 2554-2558.
[5] T. Kumagai, R. Hashimoto, M. Wada, M. Tanaka and Y. Yoshida, Control of an active mass damper using layered neural networks, Proc. 1993 JSME Int. Conf. on Advanced Mechatronics, Tokyo (1993) 943-948.

Toru Kumagai received the B.E. degree in mechanical engineering and the M.E. degree in mechanical control engineering from the University of Electro-Communications, Japan, in 1989 and 1991. In 1991 he joined the Industrial Products Research Institute (IPRI), MITI, Japan. He is currently working at the National Institute of Bioscience and Human Technology (NIBH), MITI. His research interests include learning systems using neural networks. He is a member of the Society of Instrument and Control Engineers (SICE), the Robotics Society of Japan (RSJ) and the Japan Society of Mechanical Engineers (JSME).


Ryoichi Hashimoto received the B.E. and M.E. degrees in mathematical engineering from the University of Tokyo, Japan, in 1980 and 1982. In 1982 he joined IPRI, MITI, Japan. He is currently working at NIBH, MITI. His research interests include the control of robot systems; he is now engaged in research on learning control of a robot system using neural networks. He is a member of the SICE, RSJ, the Institute of Electronics, Information and Communication Engineers (IEICE) and the IEEE.

Mitsuo Wada received the B.E. degree in applied physics from the Tokyo Institute of Technology, Japan, in 1971 and joined the Industrial Products Research Institute (the present NIBH), MITI, Japan. He received the Dr. Eng. degree in mathematical engineering from the University of Tokyo in 1987. His research interests include neural network models and their applications to robotics. Since 1995 he has been a professor at Hokkaido University, Japan. He is a member of the IEICE, SICE and RSJ.