Global behavior of homogeneous random neural systems

Erol Gelenbe
Ecole des Hautes Etudes en Informatique, Université René Descartes, Paris, France

Andreas Stafylopatis
Computer Science Division, Department of Electrical Engineering, National Technical University of Athens, Athens, Greece

We define a simple form of homogeneous neural network model whose characteristics are expressed in terms of probabilistic assumptions. The networks considered operate in an asynchronous manner and receive the influence of the environment in the form of external stimulations. The operation of the network is described by means of a Markovian process whose steady-state solution yields several global measures of the network's activity. Three different types of external stimulations are investigated, which represent possible input mechanisms. The analytical results obtained concern the macroscopic viewpoint and provide a quick insight into the structure of the network's behavior.

Keywords: neural networks, behavior, homogeneous, random networks

Address reprint requests to Dr. Stafylopatis at the National Technical University of Athens, Department of Electrical Engineering, Computer Science Division, 157 73 Zographou, Athens, Greece.

Received 10 May 1990; accepted 30 May 1991

Introduction

Several models have been developed for the representation and analysis of systems involving a large number of interacting components, whether these systems concern living organisms or machine implementations. Although neural networks were originally considered as models of brain function, modern technology has enabled the development of distributed structures consisting of a large number of computing elements suitable for highly parallel computation. Thus it now seems reasonable to synthesize systems that have some of the properties of real neural systems [1]. On the other hand, the development of neural net models has been closely related to theoretical advances in computer science in the attempt to achieve brainlike performance in computing systems. The study of neural systems involves the development of abstract models, allowing the analysis of the system's behavior at various levels. Early theoretical studies on computers motivated the effort of comparing computer organization to the brain and natural systems. Pioneering work done in the 1940s by von Neumann set the foundations of the theory of cellular automata, whereas at about the same


time, McCulloch and Pitts [2,3] developed a model of the nervous system by considering a network of many interconnected components, operating as threshold automata. Other automata network models have been used for biological systems, such as genetic nets [4], in which genes are represented as binary elements that change state according to Boolean functions of their inputs. The dynamical behavior of Boolean networks, which are a general model of major theoretical interest in computer science, has been investigated in terms of their cycle structure and spatial organizational properties [5,6]. Since the exact structure of such large systems is generally unknown, connections and other characteristics are usually assumed to be distributed at random. The general approach, based on automata theory, focused on the logical structure of natural and artificial networks and, until recently, constituted the main line of research in this area [5,7-9]. The renewal of interest in the field is largely due to ideas originating from physics, which establish the analogy between neural networks and disordered physical systems, such as spin glasses [10-14]. The main suggestion was to use networks of threshold automata as associative memories by storing a set of predefined states, or, in physical terms, by constructing appropriate attractors in the energy landscape of the system [15-17]. This modelling framework stemming from statistical mechanics [13,18,19] enlarges the concepts introduced in previous approaches regarding the dynamics of automata networks.



Another approach concerns models originating from the theory of stochastic processes [20,21], which seem to be very useful in addressing organizational properties of neural networks. We develop here an asynchronous probabilistic model in order to study some aspects of the dynamical behavior of such systems at a global activity level. The networks considered are homogeneous and consist of elementary cells, which exhibit features analogous to those of the "formal neurons" of McCulloch and Pitts. The changes of state depend upon interactions among cells and upon the effect of external stimulations; we have considered three types of external stimulation mechanisms, resulting in slightly different approaches. In the next section we present definitions determining the general modelling framework. The following section concerns the model and its solution in the stationary state for the three cases examined, and some numerical examples are then discussed.

General description

Our neural networks are assembled by interconnecting basic elements called "cells." These cells are threshold automata, which operate in an asynchronous manner. Their main properties are analogous to those of the "formal neurons" in the McCulloch and Pitts model [2,3] but will be expressed in a purely probabilistic way. From each cell emanates an output line that may branch out after leaving the cell, and each branch must terminate as an input connection to another or perhaps the same cell; in the latter case we say that feedback is allowed. There are two types of termination, called excitatory input and inhibitory input. Any number of input connections may terminate at a cell. At any time a cell can be in one of two possible states: quiet or firing (0 or 1). Accordingly, the associated output signal can be considered as a pulse for the firing state and as no pulse for the quiet state. Changes of state occur as a function of the pulses received at the inputs of each cell. There is a number Z associated with each cell, called the threshold of that cell, which determines the behavior of the cell in what concerns state transitions. We can distinguish two different transition rules depending on the principle adopted for the role of inhibitory inputs. Let us consider the input connections to a particular cell.

• In order for the cell to fire, no inhibitory input and at least Z excitatory inputs must be firing; otherwise, the cell becomes quiet. This is the case of absolute inhibition, in the sense that a single inhibitory input can block firing of the cell regardless of the amount of excitation.

• In order for the cell to fire, the difference between the number of firing excitatory inputs and the number of firing inhibitory inputs must be greater than or equal to the threshold Z; otherwise, the cell becomes quiet. This is the case of subtractive inhibition.

Obviously, if we consider no inhibitory inputs at all, both rules reduce to the requirement that the number of firing inputs be greater than or equal to the threshold. In the McCulloch and Pitts model, changes of state occur synchronously at discrete moments, that is, a cell's state at time t + 1 depends upon the state of its inputs at time t. In our model, cells operate asynchronously, and their behavior is described by a continuous-time random process. Let us consider the following general assumptions concerning the operation of a neural network composed of cells with the properties described so far.

• The network is made of a finite number of interconnected cells. Connections between cells are established at random: From each cell there exists an input connection to any one of the other cells (or to itself if feedback is allowed) with a fixed probability u.

• Each input connection to a cell is excitatory with probability v and inhibitory with probability 1 − v. This assumption holds for the two types of inhibition described above.

• The value Z of the threshold is the same for all cells in the network.

• Any change of state, due to the application of the rule corresponding to the type of inhibition adopted, implies a response delay of the cell concerned. Response delays are independent identically distributed random variables following a negative exponential distribution with rate r.

• Apart from the interaction with other cells, the operation of each cell is also affected by the existence of external stimulations. In that sense our neural networks are nonautonomous, since their behavior depends upon both their self-organization properties and the influence of the environment. We have considered three different representations of stimulation mechanisms corresponding to the three modelling cases discussed in the next section. In the first case, stimulation is produced by "special" cells, which change state independently of the other cells. In the second case there is an external sequence of pulses, which can force cells to change state. In the last case we consider that there is an external stimulation exciting a number of cells each time the network becomes quiet.

Before concluding this section we define the following quantities, which will be useful in the analysis of the model:

g_k = Pr (a cell moves from the quiet state to the firing state, given that k cells are in the firing state)

h_k = Pr (a cell moves from the firing state to the quiet state, given that k cells are in the firing state)

In what concerns h_k, if feedback is allowed, then the cell considered is one of the k firing cells; otherwise, there are k firing cells plus the one considered. This distinction will be made apparent in the next section.


We will compute these probabilities under the assumptions described above for the two types of inhibition. We first notice that, according to the rules, in both cases

$$g_k = 0, \qquad h_k = 1, \qquad 0 \le k < Z$$

Absolute inhibition

g_k is the probability of the event that the cell is connected to at least Z among the k firing cells and all these connections are excitatory:

$$g_k = \sum_{i=Z}^{k} \binom{k}{i} u^i (1-u)^{k-i} v^i, \qquad k \ge Z \tag{1}$$

h_k is the probability of the event that either the cell is connected to fewer than Z among the k firing cells or it is connected to at least Z firing cells but there is at least one inhibitory connection:

$$h_k = \sum_{i=0}^{Z-1} \binom{k}{i} u^i (1-u)^{k-i} + \sum_{i=Z}^{k} \binom{k}{i} u^i (1-u)^{k-i} \left(1 - v^i\right), \qquad k \ge Z \tag{2}$$

Subtractive inhibition

g_k is the probability of the event that the cell is connected to at least Z among the k firing cells and the difference between the numbers of excitatory and inhibitory connections is greater than or equal to Z. Suppose that the cell is connected to i among the k firing cells, with i ≥ Z, and that j among these i connections are excitatory. Then the firing condition can be written

$$j - (i - j) \ge Z, \qquad \text{that is,} \qquad j \ge \lceil (i + Z)/2 \rceil$$

Therefore

$$g_k = \sum_{i=Z}^{k} \binom{k}{i} u^i (1-u)^{k-i} \sum_{j=\lceil (i+Z)/2 \rceil}^{i} \binom{i}{j} v^j (1-v)^{i-j}, \qquad k \ge Z \tag{3}$$

h_k is the probability of the event that either the cell is connected to fewer than Z among the k firing cells or it is connected to at least Z firing cells but the difference between the numbers of excitatory and inhibitory connections is less than Z. We have

$$h_k = \sum_{i=0}^{Z-1} \binom{k}{i} u^i (1-u)^{k-i} + \sum_{i=Z}^{k} \binom{k}{i} u^i (1-u)^{k-i} \sum_{j=0}^{\lceil (i+Z)/2 \rceil - 1} \binom{i}{j} v^j (1-v)^{i-j}, \qquad k \ge Z \tag{4}$$

In all cases we have h_k = 1 − g_k, k ≥ 0.
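As a concrete illustration of equations (1)-(4), the following sketch (our own Python, not part of the original paper; all function names are ours) computes g_k and h_k for both inhibition rules directly from the binomial sums above.

```python
# Minimal sketch of equations (1)-(4): probabilities g_k (quiet -> firing)
# and h_k (firing -> quiet), for absolute and subtractive inhibition.
from math import comb, ceil

def g_absolute(k, Z, u, v):
    # Eq. (1): connected to at least Z of the k firing cells,
    # with all such connections excitatory.
    if k < Z:
        return 0.0
    return sum(comb(k, i) * u**i * (1 - u)**(k - i) * v**i
               for i in range(Z, k + 1))

def g_subtractive(k, Z, u, v):
    # Eq. (3): connected to i >= Z firing cells, with j excitatory among
    # them and j - (i - j) >= Z, i.e., j >= ceil((i + Z) / 2).
    if k < Z:
        return 0.0
    total = 0.0
    for i in range(Z, k + 1):
        p_conn = comb(k, i) * u**i * (1 - u)**(k - i)
        jmin = ceil((i + Z) / 2)
        p_fire = sum(comb(i, j) * v**j * (1 - v)**(i - j)
                     for j in range(jmin, i + 1))
        total += p_conn * p_fire
    return total

def h(k, Z, u, v, subtractive=False):
    # Eqs. (2) and (4) are the complements of (1) and (3): h_k = 1 - g_k.
    g = g_subtractive(k, Z, u, v) if subtractive else g_absolute(k, Z, u, v)
    return 1.0 - g

if __name__ == "__main__":
    # Example with assumed values Z = 3, u = 0.2, v = 0.8. Absolute
    # inhibition is always at most as permissive as subtractive inhibition.
    for k in range(11):
        ga = g_absolute(k, 3, 0.2, 0.8)
        gs = g_subtractive(k, 3, 0.2, 0.8)
        print(f"k={k:2d}  g_abs={ga:.4f}  g_sub={gs:.4f}")
```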

The model

We now present the model developed under the assumptions of the previous section and its stationary solution for the three cases of external stimulation.


The networks considered are of homogeneous structure in the sense that all cells have the same stochastic characteristics. This formulation has previously been adopted in different contexts [7,8] and represents the macroscopic approach to understanding the behavior of neural networks. The operation of the network is studied in terms of the total activity level (number of active elements), which can be considered as a measure representing the macroscopic state of the network. This approach can be generalized to nonhomogeneous networks by considering several classes of elements or by studying them as systems composed of smaller homogeneous components.

Input cells

Consider a neural network composed of N cells. The operation of the network is affected by the presence of I special cells, which act as external inputs in the sense that their behavior expresses the influence of the environment and evolves according to some externally imposed pattern. Like ordinary cells, at any time an input can be quiet or firing, but its operation is independent of other cells, either ordinary or input. The sojourn times of each input in the quiet (firing) state are independent identically distributed random variables following a negative exponential distribution with rate m (m′). From an ordinary cell's point of view, inputs behave exactly like ordinary cells, namely:

• from each input cell there exists an input connection to any one of the ordinary cells with probability u, and
• each input connection is excitatory with probability v and inhibitory with probability 1 − v.

Thus state transition rules for cells and the probabilities g_k, h_k defined in the previous section concern the influence of both ordinary cells and inputs. In the sequel, ordinary cells will be referred to simply as cells. The behavior of the system can be described by the Markov process {(R(t), S(t)), t ≥ 0}, where R(t) is the number of firing cells and S(t) is the number of firing inputs at time t. The state-space is {(i, j), 0 ≤ i ≤ N, 0 ≤ j ≤ I} and can be represented by a rectangular grid. The state transition rates can be expressed in terms of the probabilities g_k, h_k, which include the effect of the type of inhibition adopted. In what concerns the existence of feedback we introduce the parameter f, which takes the value 1 or 0 if feedback is allowed or not allowed, respectively. Figure 1 shows the part of the state transition graph concerning a general state (i, j). Considering the values of the probabilities g_k, one can easily verify that if I < Z, the set of states {(0, j), 0 ≤ j ≤ I} is closed, since no state outside it can be reached from any state in it. In what follows, we shall assume that I ≥ Z; in that case we have a finite-state irreducible Markov process, so there exists a steady-state distribution that we will denote by π = [π_{i,j}], where π_{i,j} = lim_{t→∞} Pr[R(t) = i, S(t) = j].


Figure 1. State transitions for the input cell model

probabilities ‘iri,jmust satisfy a system of linear equations with the general form [(I -j)m =

+ jm’ + (I

-j

+

irhf+j_l+f.

+

(N

-

i)Ugi+jlT;,j

l)WZ7T;,j_ I + (j + l)WZ’?Ti.j+1

+ (N - i + l)rgi+j_ I7Tp 1.j + (i+

fWG+j+fT+l,j

O
(8)

05jll

(5)

as well as the normalizing condition Σ_{i,j} π_{i,j} = 1. Of course, when equation (5) is applied to states on the border of the state-space grid, certain terms will be missing, depending on the case. The steady-state distribution π can be readily obtained numerically, and from it the marginal distributions of the random variables R and S, the numbers of firing cells and firing inputs, respectively, in the steady state. The random variable S behaves independently of R; in fact, S can be interpreted as the number of customers in an M/M/∞ queue with a finite customer population I, which is a classical queueing case [22]. In what concerns the random variable R, we can easily obtain its marginal distribution and moments from π. Let us now think of the set of firing cells as a separate "system"; then the mean arrival rate of cells to this system will be

$$\gamma = \sum_{i=0}^{N-1} \sum_{j=0}^{I} (N - i)\,r g_{i+j}\,\pi_{i,j} \tag{6}$$

Similarly, the mean arrival rate of cells to the set of quiet cells will be

$$\gamma' = \sum_{i=1}^{N} \sum_{j=0}^{I} i\,r h_{i+j-1+f}\,\pi_{i,j} \tag{7}$$

The mean arrival rate to each one of the two sets is in fact the mean departure rate from the other one. Hence, since the network is in equilibrium, we must have γ = γ′. We can consider that cells arrive to each one of the above systems with a mean rate γ, spend some time in it, and then leave the system. The conditions sufficient for application of Little's formula are met, since the two systems have regeneration points whenever the corresponding sets are empty. Thus the mean times spent by a cell in the firing and quiet states are

$$T = \bar{R}/\gamma \tag{8}$$

$$T' = (N - \bar{R})/\gamma \tag{9}$$

respectively, where R̄ is the mean value of the random variable R.
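Since the paper notes that the steady-state distribution can be readily obtained numerically, the following self-contained sketch (ours, not the authors' code; all parameter values are illustrative assumptions) builds the generator of the process {(R(t), S(t))} from the rates appearing in equation (5) and evaluates the measures (6)-(9). Absolute inhibition is assumed for g_k.

```python
# Numerical sketch of the input-cell model: stationary distribution of the
# two-dimensional Markov process and the measures gamma, T, T'.
from math import comb
import numpy as np

N, I, Z = 10, 3, 2          # cells, input cells, threshold (requires I >= Z)
u, v = 0.3, 0.9             # connection / excitatory probabilities
r, m, mp = 1.0, 1.0, 1.0    # cell response rate, input on/off rates (mp = m')
f = 0                       # feedback: 1 if allowed, 0 otherwise

def g(k):
    # Eq. (1), absolute inhibition; g_k = 0 for k < Z.
    return sum(comb(k, i) * u**i * (1 - u)**(k - i) * v**i
               for i in range(Z, k + 1))

def h(k):
    return 1.0 - g(k)       # complement: h_k = 1 - g_k

def idx(i, j):              # flatten state (i, j) to a single index
    return i * (I + 1) + j

n = (N + 1) * (I + 1)
Q = np.zeros((n, n))
for i in range(N + 1):
    for j in range(I + 1):
        s = idx(i, j)
        if j < I: Q[s, idx(i, j + 1)] = (I - j) * m             # input fires
        if j > 0: Q[s, idx(i, j - 1)] = j * mp                  # input quiets
        if i < N: Q[s, idx(i + 1, j)] = (N - i) * r * g(i + j)  # cell fires
        if i > 0: Q[s, idx(i - 1, j)] = i * r * h(i + j - 1 + f)  # cell quiets
        Q[s, s] = -Q[s].sum()

# Stationary distribution: solve pi Q = 0 together with sum(pi) = 1.
A = np.vstack([Q.T, np.ones(n)])
b = np.zeros(n + 1); b[-1] = 1.0
pi = np.linalg.lstsq(A, b, rcond=None)[0].reshape(N + 1, I + 1)

R_mean = sum(i * pi[i, j] for i in range(N + 1) for j in range(I + 1))
gamma = sum((N - i) * r * g(i + j) * pi[i, j]
            for i in range(N) for j in range(I + 1))            # eq. (6)
print("mean firing cells:", R_mean)
print("T  =", R_mean / gamma)            # eq. (8)
print("T' =", (N - R_mean) / gamma)      # eq. (9)
```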

External stimulations

Consider a neural network composed of N cells as before. In this case the operation of the network is affected by the arrival of stimulating pulses produced by some external source. In what follows, arriving pulses will be simply called "stimulations." We make the following assumptions concerning the effect of stimulations:

• Stimulations arrive to each cell according to a Poisson process with rate b.
• Each arriving stimulation is connected to the cell concerned (can change its state) with probability u.
• Each stimulation connected to a cell is excitatory with probability v and inhibitory with probability 1 − v.

A quiet cell fires if it is connected to an excitatory arriving stimulation. Accordingly, a firing cell becomes quiet if it is connected to an inhibitory arriving stimulation. So a stimulation can force a cell to change state, regardless of its connections to other cells. Changes of state due to arriving stimulations are considered instantaneous. Otherwise, cells interact with each other according to the state transition rules described in the previous section. The behavior of the network can be described by a Markov process {R(t), t ≥ 0} with state-space {i, 0 ≤ i ≤ N}, where R(t) is the number of firing cells at time t. Transitions between states depend upon stimulation arrivals and the interactions among cells. The part of the state transition graph concerning a general state i is shown in Figure 2. Again, the effect of inhibition is included in the probabilities g_k and h_k, and the existence of feedback is taken into account through the parameter f.

Figure 2. State transitions for the external stimulations model

We have a finite-state irreducible Markov process, so there exists a steady-state probability distribution π = [π_0, π_1, . . . , π_N], where π_i = lim_{t→∞} Pr[R(t) = i]. The steady-state probabilities must satisfy the following system of linear equations:

$$N(buv + rg_0)\,\pi_0 = (bu(1 - v) + rh_f)\,\pi_1 \tag{10}$$

$$\big[i(bu(1 - v) + rh_{i-1+f}) + (N - i)(buv + rg_i)\big]\,\pi_i = (N - i + 1)(buv + rg_{i-1})\,\pi_{i-1} + (i + 1)(bu(1 - v) + rh_{i+f})\,\pi_{i+1}, \qquad 1 \le i \le N - 1 \tag{11}$$

$$N(bu(1 - v) + rh_{N-1+f})\,\pi_N = (buv + rg_{N-1})\,\pi_{N-1} \tag{12}$$

as well as the normalizing condition Σ_{i=0}^{N} π_i = 1. The solution of the above system is straightforward and yields

$$\pi_i = \pi_0 \prod_{j=1}^{i} \frac{(N - j + 1)(buv + rg_{j-1})}{j(bu(1 - v) + rh_{j-1+f})}, \qquad 1 \le i \le N \tag{13}$$

where π_0 is obtained by applying the normalizing condition:

$$\pi_0 = \left[\sum_{i=0}^{N} \prod_{j=1}^{i} \frac{(N - j + 1)(buv + rg_{j-1})}{j(bu(1 - v) + rh_{j-1+f})}\right]^{-1} \tag{14}$$

(By the usual convention an empty product is unity by definition.) From the distribution of the random variable R in the steady state and using the same notation as in the previous case we obtain, proceeding exactly as before,

$$\gamma = \sum_{i=0}^{N-1} (N - i)(buv + rg_i)\,\pi_i = \sum_{i=1}^{N} i(bu(1 - v) + rh_{i-1+f})\,\pi_i \tag{15}$$

$$T = \bar{R}/\gamma \tag{16}$$

$$T' = (N - \bar{R})/\gamma \tag{17}$$
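The product form (13)-(14) makes the external stimulations model computable in a few lines. The sketch below (ours, with assumed parameter values; absolute inhibition is used for g_k) evaluates π, γ, T and T′ from equations (13)-(17).

```python
# Closed-form solution of the external stimulations model as a birth-death
# chain: pi from eqs. (13)-(14), then gamma, T, T' from eqs. (15)-(17).
from math import comb

N, Z = 10, 2
u, v = 0.3, 0.9     # connection / excitatory probabilities
r, b = 1.0, 0.5     # cell response rate, stimulation arrival rate
f = 0               # feedback indicator

def g(k):           # eq. (1), absolute inhibition; empty sum gives 0 for k < Z
    return sum(comb(k, i) * u**i * (1 - u)**(k - i) * v**i
               for i in range(Z, k + 1))

def h(k):
    return 1.0 - g(k)

def birth(i):       # rate i -> i + 1
    return (N - i) * (b * u * v + r * g(i))

def death(i):       # rate i -> i - 1
    return i * (b * u * (1 - v) + r * h(i - 1 + f))

# Eq. (13): pi_i / pi_0 as a running product; eq. (14) normalizes.
ratios = [1.0]
for i in range(1, N + 1):
    ratios.append(ratios[-1] * birth(i - 1) / death(i))
pi0 = 1.0 / sum(ratios)
pi = [pi0 * x for x in ratios]

gamma = sum(birth(i) * pi[i] for i in range(N))    # eq. (15)
R_mean = sum(i * p for i, p in enumerate(pi))
print("T  =", R_mean / gamma)            # eq. (16)
print("T' =", (N - R_mean) / gamma)      # eq. (17)
```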

Initial excitation

Consider a neural network composed of N cells, as in the previous case, whose behavior can be described by a random process {R(t), t ≥ 0}, where R(t) is the number of firing cells at time t. Let us define the relaxation interval of the network for a given initial state as the time interval until the system reaches state 0, considering that the operation of the network is autonomous during that interval. According to our probabilistic assumptions, this will eventually take place with nonzero probability. We will further assume that upon leaving state 1 (with rate r) the process moves to state i (1 ≤ i ≤ N) with probability a_i, where Σ_{i=1}^{N} a_i = 1. We can interpret this assumption by considering that, as soon as the network becomes quiet, there is an external stimulation that instantaneously excites i cells, thus marking the beginning of another relaxation interval with the same initial state probability distribution a_i. If we consider that an external stimulation corresponds to the storage of information in the network, then the relaxation interval can be viewed as the time duration for which the network can hold the information without renewal of the external stimulation. Alternatively, the inverse of the relaxation interval may be considered to correspond to the mean rate at which information must be refreshed in order for the network to remain active.

The state transition graph for the above defined process is shown in Figure 3. The process is an irreducible continuous-time Markov chain with finite state-space, so there exists a steady-state probability distribution, which we will denote by π = [π_1, π_2, . . . , π_N], where π_i = lim_{t→∞} Pr[R(t) = i]. To obtain the steady-state probabilities π_i, we write down the equilibrium equations by considering the conservation of flow across a sequence of closed boundaries, the first of which surrounds state 1, the second of which surrounds states 1 and 2, and so on, each time adding the next higher-numbered state to get a new boundary. We thus obtain a set of linear equations of the general form

$$(N - i + 1)\,r g_{i-1}\,\pi_{i-1} + r \sum_{j=i}^{N} a_j\,\pi_1 = i\,r h_i\,\pi_i, \qquad i = 2, \ldots, N \tag{18}$$

which must be satisfied by the probabilities π_i in addition to the normalizing condition Σ_{i=1}^{N} π_i = 1.

Figure 3. State transition graph for the initial excitation model

Since g_i = 0 for i < Z, we can write from (18)

$$r \sum_{j=i}^{N} a_j\,\pi_1 = i\,r\,\pi_i, \qquad i = 2, \ldots, Z - 1$$

or

$$\pi_i = \frac{1}{i} \sum_{j=i}^{N} a_j\,\pi_1, \qquad i = 2, \ldots, Z - 1 \tag{19}$$

We then have, for i = Z,

$$Z\,r h_Z\,\pi_Z = r \sum_{j=Z}^{N} a_j\,\pi_1$$

or

$$\pi_Z = \frac{1}{Z h_Z} \sum_{j=Z}^{N} a_j\,\pi_1 \tag{20}$$

By applying the general equation (18) for i = Z + 1, …, N and using the expression for π_Z we finally get

$$\pi_i = \frac{1}{i h_i}\left[(N - i + 1)\,g_{i-1}\,\pi_{i-1} + \sum_{j=i}^{N} a_j\,\pi_1\right], \qquad i = Z, \ldots, N \tag{21}$$

The value of π_1 is readily obtained by substitution in the normalizing equation:

$$\pi_1 = \left[1 + \sum_{i=2}^{N} \frac{\pi_i}{\pi_1}\right]^{-1} \tag{22}$$

where the ratios π_i/π_1 are known from (19)-(21). In this case we will be interested in the mean relaxation interval W, which can be found as the mean recurrence time of state 1 (or the inverse of the mean departure rate from this state):

$$W = (r\pi_1)^{-1} \tag{23}$$

where r is the effective exit rate from state 1 at the end of a relaxation interval. We thus obtain

$$W = \frac{1}{r}\left[1 + \sum_{i=2}^{N} \frac{\pi_i}{\pi_1}\right] \tag{24}$$

As an interesting special case of the above result, we can obtain the mean relaxation interval given a specific initial state I (1 ≤ I ≤ N), that is,

$$a_i = \begin{cases} 1 & i = I \\ 0 & i \ne I \end{cases} \tag{25}$$

for which the sums Σ_{j=i}^{N} a_j appearing in (19)-(21) reduce to 1 for i ≤ I and to 0 for i > I, and (24) yields

$$W = \frac{1}{r}\left[1 + \sum_{i=2}^{N} \frac{\pi_i}{\pi_1}\right] \tag{26}$$

with the ratios π_i/π_1 evaluated under (25).
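The following sketch (ours; the choice of a_i and all parameter values are assumptions for illustration, with the initial-state index written I0) evaluates the mean relaxation interval by expressing each π_i as a multiple β_i of π_1 through (19)-(21), normalizing as in (22), and applying (23)-(24).

```python
# Mean relaxation interval W for the initial excitation model.
from math import comb

N, Z = 10, 3
u, v = 0.3, 0.9
r = 1.0
I0 = 5                                                 # specific initial state, as in (25)
a = [1.0 if i == I0 else 0.0 for i in range(N + 1)]    # a[0] unused

def g(k):           # eq. (1), absolute inhibition
    return sum(comb(k, i) * u**i * (1 - u)**(k - i) * v**i
               for i in range(Z, k + 1))

def h(k):
    return 1.0 - g(k)

def tail(i):        # sum_{j=i}^{N} a_j
    return sum(a[i:])

beta = [0.0] * (N + 1)                   # beta_i = pi_i / pi_1
beta[1] = 1.0
for i in range(2, Z):                    # eq. (19): h = 1 below threshold
    beta[i] = tail(i) / i
for i in range(max(Z, 2), N + 1):        # eqs. (20)-(21), recursively
    beta[i] = ((N - i + 1) * g(i - 1) * beta[i - 1] + tail(i)) / (i * h(i))

pi1 = 1.0 / sum(beta[1:])                # eq. (22)
W = 1.0 / (r * pi1)                      # eqs. (23)-(24)
print("mean relaxation interval W =", W)
```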

Numerical examples

We show in this section some numerical examples illustrating the results developed in the preceding section. Given that the number of parameters and measures involved is rather large, we have chosen to display some simple representative cases. The number of cells considered is small with respect to real physiological



contexts, but we may yet obtain some insight into the organizational properties of such networks. Figure 4 shows results obtained for the case of input cells. Chart (a) displays the variation of R̄/N, the mean percentage of firing cells, versus the threshold Z, with all the other network parameters fixed; chart (b) represents T, the mean time spent by a cell in the firing state, as a function of r/m, the ratio of the rates with which cells and inputs change state. Figure 5 shows

Figure 4. Input cells

results obtained for the case of external stimulations. Chart (a) shows the variation of R̄/N versus the connection probability u, for a small-sized network, under the effects of feedback and the two types of inhibition considered in the model. Chart (b) shows the variation of T as a function of r/b, the rate with which cells change state over the arrival rate of external stimulations. The case of initial excitation is displayed in Figure 6, in which the charts represent the variation of the mean relaxation interval W as a function of the amount of initial excitation I (considering one initial state) and the interconnection probability u.
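As a usage illustration in the spirit of chart (a) of Figure 5, the short driver below (ours; the parameter values N = 10, Z = 3, b = 1, r = 5, v = 1, f = 0 are taken from the figure annotations, and the rest is assumed) sweeps the connection probability u and reports the mean fraction of firing cells R̄/N from the product-form solution (13)-(14).

```python
# Sweep of the connection probability u for the external stimulations model.
from math import comb

def mean_activity(N, Z, u, v, r, b, f):
    # Product-form stationary distribution, eqs. (13)-(14), then R_mean / N.
    def g(k):
        return sum(comb(k, i) * u**i * (1 - u)**(k - i) * v**i
                   for i in range(Z, k + 1))
    def h(k):
        return 1.0 - g(k)
    def birth(i):
        return (N - i) * (b * u * v + r * g(i))
    def death(i):
        return i * (b * u * (1 - v) + r * h(i - 1 + f))
    ratios = [1.0]
    for i in range(1, N + 1):
        ratios.append(ratios[-1] * birth(i - 1) / death(i))
    pi0 = 1.0 / sum(ratios)
    return sum(i * pi0 * x for i, x in enumerate(ratios)) / N

for u in [0.1, 0.2, 0.4, 0.6, 0.8]:
    print(f"u = {u:.1f}  R/N = {mean_activity(10, 3, u, 1.0, 5.0, 1.0, 0):.3f}")
```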

Figure 5. External stimulations

Figure 6. Initial excitation

Conclusion

We have developed a continuous-time probabilistic model in an attempt to clarify some aspects of the macroscopic behavior of neural networks. We have considered networks made of elementary cells with properties analogous to those exhibited by the McCulloch and Pitts "formal neurons." Network features such as interconnection, feedback, excitation, and inhibition are expressed through simple model parameters. The behavior of the model in the steady state was studied under the influence of three different types of external stimulation mechanisms. Our approach makes use of direct analytical computations requiring a very low computational complexity and can be extended to the study of larger nonhomogeneous random networks composed of subnetworks, each of which is internally homogeneous.

Our experience has shown that models based upon simple probabilistic assumptions can be useful in the study of complicated systems composed of many interacting parts. It should be interesting to develop more general models, as well as to perform a time-dependent analysis of such models.

References

1  Lippmann, R. P. An introduction to computing with neural nets. IEEE ASSP Magazine 1987, 4(2), 4-22
2  McCulloch, W. S. and Pitts, W. A logical calculus of the ideas immanent in nervous activity. Bull. Math. Biophys. 1943, 5, 115-133
3  Minsky, M. L. Computation: Finite and Infinite Machines. Prentice-Hall, Englewood Cliffs, N.J., 1967
4  Kauffman, S. Behaviour of randomly constructed genetic nets. Towards a Theoretical Biology, ed. C. H. Waddington. Edinburgh University Press, 1970
5  Atlan, H., Fogelman-Soulie, F., Salomon, J. and Weisbuch, G. Random Boolean networks. Cybernet. Syst. 1981, 12, 103-121
6  Fogelman-Soulie, F. Parallel and sequential computation on Boolean networks. Theoretical Computer Science 1985, 40, 275-300
7  Amari, S.-I. Characteristics of random nets of analog neuron-like elements. IEEE Trans. Systems Man Cybernet. 1972, 2(5), 643-657
8  Anninos, P. A., Beek, B., Csermely, T. J., Harth, E. and Pertile, G. Dynamics of neural structures. J. Theoret. Biol. 1970, 26, 121-148
9  May, R. M. Simple mathematical models with very complicated dynamics. Nature 1976, 261, 459-467
10 Amit, D. J., Gutfreund, H. and Sompolinsky, H. Spin-glass models of neural networks. Disordered Systems and Biological Organization, ed. E. Bienenstock, F. Fogelman and G. Weisbuch. Springer-Verlag, New York, 1986
11 Amit, D. J., Gutfreund, H. and Sompolinsky, H. Statistical mechanics of neural networks near saturation. Ann. Phys. 1987, 173, 30-67
12 Choi, M. Y. and Huberman, B. A. Dynamic behaviour of nonlinear networks. Phys. Rev. 1983, A-28, 1204-1206
13 Clark, J. W. Statistical mechanics of neural networks. Phys. Rep. 1988, 158(2), 91-157
14 Hopfield, J. J. Neural networks and physical systems with emergent collective computational abilities. Proc. Nat. Acad. Sci. U.S.A. 1982, 79, 2554-2558
15 Palm, G. On the storage capacity of an associative memory with randomly distributed storage elements. Biol. Cybernet. 1981, 39, 125-127
16 Peretto, P. and Niez, J.-J. Long term memory storage capacity of multiconnected neural networks. Biol. Cybernet. 1986, 54, 53-63
17 Peretto, P. On the dynamics of memorization processes. Neural Networks 1988, 1, 309-322
18 Peretto, P. Collective properties of neural networks: A statistical physics approach. Biol. Cybernet. 1984, 50, 51-62
19 Peretto, P. and Niez, J.-J. Stochastic dynamics of neural networks. IEEE Trans. Systems Man Cybernet. 1986, SMC-16(1), 73-83
20 Gelenbe, E. and Stafylopatis, A. Temporal behaviour of neural networks. IEEE First International Conference on Neural Networks, San Diego, Calif., June 1987
21 Stafylopatis, A., Dikaiakos, M. and Kontoravdis, D. Spatial organization of neural networks: A probabilistic modeling approach. IEEE Conference on Neural Information Processing Systems-Natural and Synthetic, Denver, Colo., Nov. 1987
22 Kleinrock, L. Queueing Systems. Vol. I: Theory. John Wiley, New York, 1975