Unit circuit neural networks of the cortex

Unit circuit neural networks of the cortex

0895-7177/89 $3.00 + 0.00 Copyright 0 1989 Pergamon Press plc Math/ Comput. Modelling, Vol. 12, No. 6, pp. 673-694, 1989 Printed in Great Britain. Al...

2MB Sizes 1 Downloads 56 Views

0895-7177/89 $3.00 + 0.00 Copyright 0 1989 Pergamon Press plc

Math/ Comput. Modelling, Vol. 12, No. 6, pp. 673-694, 1989 Printed in Great Britain. All rights reserved

UNIT

CIRCUIT

NEURAL

NETWORKS T.

OF THE CORTEX

TRIFFET

College of Engineering and Theoretical Physics Division, Arizona Research University of Arizona, Tucson,AZ 85721,U.S.A.

Laboratories,

H. S. GREEN Department

of Mathematical

Physics,

University

of Adelaide,

Adelaide.

South Australia

5001

(Received July 1988; accepted for publication August 1988) Abstract-A discrete finite-dimensional model of cortical functions based on unit neuronal circuits is developed. The unit circuits of each primary cortex system consist of all common types of cells with known synaptic connections, and the model is capable of representing refractory and potentiated states as well as the firing and lowest resting states of the neurons they contain. A new neural network equation which takes account of interactions with the extracellular field is derived to simulate electrical activity in these circuits. Also a coherent theory of cortical functions is presented that accounts for many of the observed phenomena, including those associated with the development of long-term potentiation and sequential memory.

1. INTRODUCTION

Over a long period of time most theoretical studies of the nervous system have been directed towards the understanding of the action of individual neurons. One of the best known studies of this type was due to Hodgkin and Huxley [ 11,but it was lacking in some respects, most importantly in not taking account of the role of the calcium ion, but also in underestimating the role of graded potentials of < 10 mV within many neurons and of potenials of the same order of magnitude in the extracellular environment. In recent years, the authors [2,3] have developed a physically-based electrochemical theory to account for these and other phenomena, partly motivated by an early attempt [4] to model the activity in an extended neural network. In the last few years there has been a growing interest in the computer simulation of neural networks [5], with a dual aim of contributing to the understanding of the nervous system and the brain, and of applying the knowledge gained to the development of artificial intelligence. Much of the recent work on parallel distributed processing [6] has been directed towards one or the other of these ends. The greatest difficulty in this area has been the fact that, in spite of the accumulation of a wealth of experimentally derived knowledge concerning the physical structures and electrical phenomena associated with the brain, there is even today only a fragmentary understanding of the functions of the structures revealed and the significance of the electrical activity. Probably for this reason, most of the work on the modelling of neural networks has made little attempt to imitate the known biological structures or to simulate the internal or external electrical fields with any degree of fidelity. The early work of Lorente de No [7] and Rall [8] was exceptional in this respect. In our most recent work [3], we have studied the transmission of electrical potentials through the neural membrane, and found that inward transmission with undiminished amplitude is possible, but only if rather stringent conditions affecting the frequency of the potential and the geometrical configuration of the membrane are satisfied. There is a great deal of evidence [9] to the effect that much of the activity in biological neural networks is synchronized with extracellular potentials with rather well-defined frequencies. Our work suggests that the explanation for this synchronization lies in the inward transmission of potentials with a frequency determined by the geometrical properties of the membrane. Partly in order assess the importance of inward as well as outward transmission of potentials, we have also developed a rather faithful model of one of the elementary neural circuits of the cerebellum, and examined the influence of the extracellular input on the functioning of this circuit [3]. The present paper could be regarded as an extension of this work, in which some of the other elementary circuits of the cortex have been assimilated and provided with synaptic connections. We 673

614

T. TRIFFETand H. S. GREEN

have also tried to improve our model for the neuron to the point where it is capable of faithful simulation of nearly all gross functions of its biological counterpart, including of course the interaction of the neuron with an extracellular field. The resulting model may be compared with those developed recently by several authors [lo], as well as our own. A noticeable feature of most of these models is a similarity to some models of statistical mechanics. This has suggested to us the possibility of providing a deeper theoretical foundation for the modelling of neural networks, by making use of what is known in equilibrium statistical mechanics as the transfer matrix. The techniques used may, however, also be regarded as finite-dimensional analogs of those well-known in quantum mechanics, where the state of any system is represented by a vector in a linear space. The two sections following this Introduction are devoted to developing the theory of the transfer matrix in the present context, and illustrating some of its uses. Briefly, the transfer matrix can be used to investigate various features of the development in time of a neural network without specifying the actual states, and in this respect has the advantage that only linear algebra is required for the purpose, even where the usual formulation in terms of activation levels yields non-linear equations. However, the equations for the activation levels will be derived in terms of the transfer matrix, in a form comparable with those previously used [lo, 111 in the modelling of neural networks. These equations are without doubt most convenient, if not indispensible, for computational purposes. The formulation of a neural network model in terms of the transfer matrix is most useful in discussing general properties of the model. One of these is the ability of the network, with a given input, to execute a “program”, consisting of a cycle of states beginning and ending with the same state. For this purpose the state of the network is determined by the activation levels of the neurons forming the network, and, in general, the extracellular field which is their immediate environment. If the network is initially in a state which is not part of the program, it will evolve through an initial learning sequence of states not belonging to the program, before entering the program. Thereafter, if the input is the same, the program is repeated indefinitely and may therefore be regarded as a memorized sequence of states. A different input will in general result in a different program, and it is important to determine whether two programs are compatible with one another, in the sense that they can be executed one after the other. As we shall demonstrate, this question, among others, is most simply discussed in terms of the transfer matrix, and we shall be led to formulate a criterion for the efficient performance of different programs by a network which we conjecture is a common feature of biological systems, developed partly by evolution but also by the process of sensitization in learning. We shall show that, to satisfy this criterion, there must be important constraints on the connectivity of the network, which may not, therefore, be as arbitrary as it seems in nature. The same criterion, of course, is applicable to the efficient design of artificial networks. In the final sections of the paper, we illustrate these ideas with computer simulations based on the equations for the activation levels derived from the transfer matrix, and the networks which we have constructed following the biological models. Since our models are very specific, we shall begin by outlining the biological information and hypotheses on which they are based. The cortex consists of three main subdivisions: the cerebellar cortex, the cerebral cortex and the allocortex, which communicate with one another directly and via various relay nuclei. We wish to consider each of these subdivisions in turn, from the point of view of structure and function. The cerebellar cortex is regarded as the repository of memory relevant to motor activity; as described by Eccles [12], this memory is accessed whenever a motor action is initiated from the cerebrum. It is at least roughly organized in zones or columns, each of which contains hundreds of cells, but all of a few distinctive types, of which only one or two representatives are shown in Fig. 1. Although highly simplified, this figure displays the synaptic connections which are actually present in the biological system, and provides the basis for the “connectivity matrix” used later in this paper to model the cerebellum. For a variety of reasons, some of them given elsewhere [3], we regard the granule cells as the principal respository of operational memory. It has been found [13] that granule cells with very similar properties in the allocortex display a kind of long-term potentiation which is characteristic of the process of memory formation. It is very noteworthy that the granular layer receives input from two primary sources: firstly, via the mossy fibers which

Unit neuronal circuits of the cortex

Fig. 1. Unit cerebellum circuit.

provide the final relay of input from the cerebrum; and secondly from the extracellular field, whose contribution has often been disregarded in the past. The allocortex is regarded as the most primitive, from the point of view of evolution, of the three subdivisions, and is usually considered as consisting of interconnecting columns of cells of the type shown in Fig. 2, though it should be pointed out that these columns are curved, so that the topology as well as the content of the figure is much simplified. It has been known for some time that the allocortex has a role in the formation of some types of long-term memory, which can be best described as sequential and relational memory. The phenomenon of long-term potentiation displayed by the granule cells has already been referred to, and it has been found experimentally [14] that this potentiation is accompanied by physical changes in the spines of the granule cells which could also serve the purpose of retention of memory. In addition, it has been found [I 51 that two regions, adjacent to the granule cells and the smaller pyramids respectively, are the source of the O-rhythm, which appears to function as a readiness potential for the allocortex and is recognized as an important low-frequency component of the extracellular field. The synaptic

+Lr P2

__

t

PP

Fig. 2. Unit allocortex circuit.

616

T. TRIFFETand H. S. GREEN

connections shown in Fig. 2 include those which have been found by a variety of experimental techniques. The perforant pathway provides the relay from sensory areas of the cerebral cortex, and the commissural pathway from the contralateral allocortex as well as from the septum. The cerebral cortex may be regarded as the repository of factual memory derived from the senses as well as the principal location of associative processing and the source of motor action. It is also roughly organized in columns, analogous to the zones of the cerebellar cortex. Typical cells belonging to one of these columns are shown in Fig. 3, together with the synaptic connections as described by Eccles 1161. Again we emphasize the highly simplified nature of this representation of a column, which in nature contains very many more cells of every kind; however, in its essential features it is sufficiently realistic to provide a reasonably faithful model. It will be seen that the spiny stellate cells occupy the same position in this circuit as the granule cells in the cerebellar and allocortical circuits shown in Figs 1 and 2, and perform the same function with respect to both the input and excitation of the cells responsible for output. There is experimental evidence [17] that they may display the same characteristic long-term potentiation, and, taking account of considerations which we have discussed elsewhere [3], we conclude that the spiny stellates perform the same function in respect to memory in the cerebrum as the granule cells in the cerebellum and the allocortex. Our representation of the activation levels of the neurons is also more elaborate than used in earlier neural network models, and we summarize here the biological information, mostly well-known, on which it is based. We regard the activation level as a non-linear function of the intracellular potential, which is an adequate measure of the internal state of a neuron. The most significant levels are: (1) The threshold level, at which the neuron remains inactive but at which any further excitation, from other cells or from the extracellular fluid, will cause the neuron to fire. This firing is accompanied by an action potential in the axon of the cell. It is important to note that the action potential is normally preceded by the development of smaller, graded potentials of a few mV in the dendrites and cell body, excited in many instances at synapses with other neurons but in other instances by transmission from the extracellular fluid. --

__

rc t Fig. 3. Unit cerebral circuit.

---

Unit neuronal circuits of the cortex

67-l

(2) The level of action potentials, usually > 50 mV. After a firing level is reached, the activation level of any cell with which the axon of the neuron makes synaptic connection may be raised (if the synapse is excitatory) or lowered (if the synapse is inhibitory), depending on the type of neuron making the synapse. In addition we recognize a sequence of levels below threshold, in which a neuron will not fire. (3) The upper levels of this sequence are resting levels, like the threshold and just below it. Below the lowest resting level are the levels of the refractory phase. The levels of the refractory phase, which a neuron enters after firing and in which (4) it is insensitive to external stimuli, are below the lowest resting level. The lowest level of all is reached immediately after firing, and the neuron activation then ascends from one level to another until a resting level is reached. From this description it will be seen that a reasonably faithful discrete model of the activity of a cell is obtained by assigning about 10 distinct activation levels to each neuron. We have chosen a scale so that the potential 4 within the cell is given by 4 = & + 6 exp(Ka),

(1)

in the ath level, where CpO,4, and K are suitably chosen constants. This is the intracellular potential, and is normally 50 mV or more below the extracellular potential, except in the course of an action potential. The variations in the extracellular potential are much smaller (of the order of a few mV), but comparable ‘with those which take place internally in the course of a graded potential. These potentials are of course continuous, but, as suggested by equation (l), it is sufficient to consider a discrete subset of values for the purpose of this paper. Details of the representation adopted are given in the next section.

2. EXPLICIT

CONSTRUCTION

OF

THE

TRANSFER

MATRIX

We consider a system of N neurons, with arbitrary synaptic connections, in an extracellular field consisting of potential waves with M distinct frequencies I++(‘), Ic/(*),. . . , ticM). The state of this entire system at time t will be represented by a vector V(t) with components V(i, t) (i = 1,2, . . .). Only the discrete values 0, t, 2t, . . . of the time will be considered, so that the state of the system evolves according to the law V(t + 7) = TV(r)

(2)

or I’(& I + T) =

c T(i,

i’)V(i’,

t),

I”

where T is the transfer matrix. Clearly, V(t + VT)= TV(t),

(4)

so that if the transfer matrix and V(0) are known, the state vector V(t) can be determined for all values of t. To construct the transfer matrix explicitly, we must first describe the form of the vectors V(t) to which it is to be applied. We shall do this in four steps: Step 1, in which a vector II representing a single neuron is introduced; Step 2, in which a vector U representing the system of N neurons is constructed; Step 3, in which a vector v representing an extracellular wave of frequency o is introduced; and, finally, Step 4, in which the vector V representing the whole system is constructed from these elements. Step I. The state of a single neuron in the 0th activation level (where a takes the values 1,2, . . . , m), is represented by an m-dimensional vector u(a) with components ui(a) = ai,.(i = 1,2,, . . , m) which are all zero except that with i = a, which is equal to 1. The state

678

T. TRIFFETand H. S. GREEN

vector of lowest potential is u(l), with components (1, 0, . . . , 0). We define the cyclic matrix c such that cu(a) = u(a + l),

(a = 1,2, . . . , m - 1) cu(m) = u( 1)

This satisfies cm = 1 (the unit matrix) and its transpose is E = cm-‘. We also define the projection matrix p,, satisfying p, = pi, and with only one non-vanishing element, by p,u(a) = &,u(l).

(6)

We can then define other projection matrices, such that PC+(k) = &Vu(~),

(7)

pa=ca-’

(8)

by p, CY+l.

The usefulness of these matrices can be seen in the following way. In the neuron model adopted, we suppose that a = m - 1 corresponds to the threshold level, so that there is just one higher level (with a = m), corresponding to an action potential. Following the firing associated with an action potential, we suppose the neuron sinks to level 1, m - 2 steps below threshold, and then enters a refractory phase during which the level rises by one step in time t, until the rth level is reached, m - r - 1 steps below threshold. Applied to the vector u(a), the cyclic matrix c will effect the required transition from the firing state to the lowest level, or from one level to the next in the refractory phase. The projection matrices pO, on the other hand, can be used to isolate the components of u(a) to which a matrix like c is to be applied. Thus, P,=p,+pz+‘..+p,_1

projects onto the refractory

(9)

levels, while P,=pr+.**+p,_,

(10)

projects onto the stationary (resting) levels, and pr= P,

and

P,=P,+P,

(11)

project onto the firing state and the dynamical levels, respectively; the latter, by definition, include both refractory and firing levels. Step 2. To construct the vector U(n(‘), a(‘), . . . , uCM))representing a set of M neurons, in the levels #‘, #) ,,**, a@‘) respectively, we may now simply take a direct product of vectors ~(a”‘), U(U’2’) 1*.*, u(a’w) of the type described in Step 1: U(u”‘, u’2’,. . . , u@y = U(U”‘)U(d2’) , . . u(u’“).

(12)

This is an m”-dimensional vector, with components indexed by the indices i(l), i@),. . . , 8" of the m-dimensional vectors forming the direct product. We can now introduce cyclic matrices c”‘(j=l,..., M), which have the same effect on the u(u(j)) as c on u(a) in equation (5) above, and are defined formally by c(j)=ixi~...~

c

x...x~,

(13)

a direct product of Mm-dimensional matrices which are all unit matrices except the jth, which is c. Similarly we can generalize p,, as defined in equation (8), by writing P~~=lxlx~~~x

P,

x~.~x~.

(14)

Step 3. A discrete representation of the extracellular field can be developed in a similar way. To begin with, we consider a single frequency w. The state of an extracellular field with only this frequency can be represented by an n-dimensional vector v(b) with components v,(b) = 6,b(b, q = 1, . . . n), where b - 1 is a measure of the amplitude of the wave, e.g. in mV.

Unit neuronalcircuitsof the

679

cortex

To provide for the absorption and generation of a wave with this frequency, we introduce the step matrix s, and its transpose 5, by means of the definitions sv(b) = v(b - 1)

(b > l),

sv(1) = 0,

= v(b + 1)

(b < n),

G(n) = 0.

(15) h(b)

With this normalization,

a matrix 6 with the property Bv(b)

=

bv(b)

(16)

is defined by B=l+Ss+S2s?+...+Sn-Isn-I.

(17)

When n + co, these matrices are related to the boson matrices of quantized field theory, but here we have preferred a finite-dimensional representation. The projection operator onto the state with amplitude b - 1 is given by gb=Sb-ISb-I

-3bSb

(19

4. When there are N distinct frequencies o(‘), wc2),. . . , co”“), with amplitudes b(l) - I, . . . 36’” - 1, in the extracellular field, the state of the field is represented by an n”‘-dimknsional vector v(b”‘)v(bc2’) . . . v(b(“?) which is a direct product of N vectors, of the type Step b”‘-I

described in Step 3 for a single frequency. The various matrices defined in equations (15)-(18) are easily generalized for application to the n”‘-dimensional vectors; e.g. the matrix B”’ with the eigenvalue b(‘) is defined as the direct product B(‘)=lxlx.~.x

B x...xl,

(19)

in which all the factors are n-dimensional unit matrices except the rth, which is B, as defined in equation (17) above. The electric potential at a point x in the extracellular fluid is then 4 = XI”=, (B”’

-

1) cos(d)t

-

4”.

x + 6”‘),

(20)

where 6) is the local velocity of propagation and 6(‘) is the local phase of the component with frequency o(‘); since the wave is much influenced by the geometry of the system, both v(‘)and #‘) must vary with the position x. Also, as already noted in the Introduction, o(‘) and 6) must satisfy an eigenvalue condition if the potential is to be transmitted by the membrane of a particular neuron. We may now define the state vector V for the system of neurons in the extracellular field given by equation (20) as a direct product v = U(a”’ 3a(2),.*.,

a”+‘))v(b”‘)v(b’2’). . . v(bcw)

(21)

of the m M-dimensional vector defined in equation (11) and the n ‘-dimensional vector defined above to represent the extracellular field. Clearly this is an (m”nN)-dimensional vector to which all of the matrices defined in Steps 2 and 4 above are applicable. Since the activation levels a(“, . . . , d”) and also the field amplitudes b(l) - 1, . . . , b(w - 1 in general depend on the time, V is a function of the time t, as indicated in equations (2)-(4). The representation of the state vector V(t) is now completely defined, and we may proceed to construct the transfer matrix T in this representation. This is a product of factors:

T=

,r”I, ,fi, Tkj 1 3

z

[

(22)

s

where, since the matrices Tij do not commute in general, it is to be understood that the factors shall appear (reading from right to Zejl) in the order T,, , Tzl, . . . , T,, , T,z, . . . , TM2, . . . , T IA-f>‘**> T MM> which would naturally be followed in a computer program. With permissible exceptions where factors commute, this should also be the order in which synaptic interactions take place in the neural network, and, as we shall see in Section 4, it is necessary to take account of this requirement in numbering the neurons of the network. If there is no synaptic connection

680

T. TRIFFETand H. S. GREEN

between the kth and the jth neurons, the factor Tkj is 1, i.e. the unit matrix. If there is more than one synapse between the kth and thejth neurons, the factor Tkj may itself be regarded as a product of factors, one for each synapse: T,j= nTkaj3 Cl

(23)

where c takes as many values as required, for given k and j. We now give explicit forms for the factors Tk, of the T-matrix, corresponding to the specific model which is adopted in this paper. If the jth neuron receives no input from the extracellular field, we have (setting k =j): Tjj = TX’ = p’,” + c(j) py’

(24)

where the projection matrices are as defined in equations (9)-(11); applied to any vector, this matrix has the effect of raising the activation level of the jth neuron by one step if it is in the refractory phase, or depressing it to the lowest level if it has an action potential. However, if there is one unit of extracellular input from the field component with frequency o(‘) (which implies of course that this component is not zero), Tjj = Tj;’ = c(j) (Py) + s(‘)P’,“).

(25)

The last two formulas can be combined into one which is always valid if thejth neuron is sensitive to the frequency: T,, = TJ;’+ (T$’ - TJy)ql”,

(26)

which reduces to equation (24) if the component with frequency o(‘) is zero, and otherwise to equation (25), since 91’) projects onto states with b(‘) = 0. If there are x(j) units, instead of just one unit of input with this frequency, the r.h.s. of equation (26) must be replaced with a product of x(j) identical factors. We have not yet provided for the generation of an extracellular potential of one of the frequencies w(‘) to which the system of neurons can respond. This is done by modifying Tjj, as defined in equation (24) in a somewhat different way. If, on exceeding the threshold level of activation, the jth neuron raises the amplitude of this frequency by one unit, we replace equation (24) by Tjj = p;j) + c(j) (pjj) + $) Py)),

(27)

where S(” has the effect of changing b”’ to b”’ + 1. When k #j, the factor Tkj represents synaptic input from the kth to the jth neuron. If there is just one synapse, which raises the activation level one step when the kth neuron has a graded or action potential, unless the jth neuron is in the refractory phase, T,j has the basic structure Tkj = Ti:’ = Plk)(c(j) - 1) Pij’ + 1,

(28)

where 1 represents the (m”nN)-dimensional unit matrix. More generally, if there is more than one synapse, and the 0th synapse changes the activation level by w,(k,j) steps (possibly a negative amount), Tkj = [T$$w(kJ),

(29)

where, as can be seen from equation (23), w(G)

= c w,(U) 0

(30)

is a sum of contributions from the different synapses. Because of the property pkpj = 8k.j pj Of the projection matrices, equation (29) can be evaluated, even for negative values of w(k, j), but care is needed [as shown in equations (42) and (43) below], because c(j) and PY’ do not commute. This completes the explicit construction of the transfer matrix in terms of the cyclic and projection matrices c(j) and P\j) for the neurons and the step matrices s(‘)and 3”’ of the extracellular field. It will be noticed that, unlike the state vector V(r), the transfer matrix T does not appear

Unit neuronal circuits of the cortex

681

to depend on time. However, the possibility is not excluded that, as suggested in the Introduction, some cells become sensitized, so that their transition from the lowest resting state to a firing state is accelerated. We have, in fact, assumed that this may happen in some of our computer simulations, and it is easily provided for, either by allowing the value of r in equations (9) and (10) to change for the affected cells, or by changing the values of the weights w(ij) in equation (30). We shall use the latter method in the simulations to be described later (in Section 4), but first discuss the principles to be used in making such changes. 3. THE CONCEPT

OF NEURAL

PROGRAMMING

Because of the very large dimensions of the transfer matrix, for neural networks of any size, its use is not recommended for purposes of numerical computation. In this respect it resembles the transfer matrix of statistical mechanics. Later in this section we shall develop an equation, equivalent to equation (2) and much better adapted to numerical work. However, the transfer matrix lends itself to a variety of theoretical uses, some of which will appear in this section. The vector V(l) has as many components (m”nN) as states of the system considered, including the extracellular field; in a deterministic theory all components but one are zero. Although the non-vanishing component varies with the time, since the number of states is finite, in a sufficiently long time sequence it must happen that two states are the same, i.e. V&t) = V(pz + VT) for some integral values of p and v. This is the analog of the “ergodic theorem” of statistical mechanics. Provided that the T-matrix has not changed in the interval (in the manner described at the end of the last section), and the external input Z to the system also remains unchanged, it will follow with the help of equation (4) that, for n = 1,2, . . . , V(p)

= V&T + nvz).

(31)

Then the system of neurons will execute a repetitive pattern of activity in time vt, which we shall call a program. Because of the requirement that the input should be the same, this concept of a neural program has a limited application to very large systems of neurons. But since the theory of this and the previous section is applicable to systems of any size with well-defined input, it is, in particular, applicable to the zones and columns into which the cortex may be divided, each containing relatively few neurons. In this context, the concept of a program is readily intelligible, and is mirrored by the action of the computer program used to simulate the system. What happens normally is that the state vector representing the strip or column evolves through a set of ~1states (say) unrelated to the program for that subsystem, and then repeats the program indefinitely or as long as the input remains the same. If in the initial sequence of states the system is fully sensitized and the T-matrix acquires a stable form, this program will become part of the standard repertoire or memory of any larger system to which the system belongs. A very special program results if there is no external input to a system of neurons; this may well consist of the “transfer” of the system from a particular resting state to that same state. But if excitations, including the inhibitions of inhibitions, prevail over inhibitions within the system, it is also possible to find repetitive firing sequences, which are similar to those sometimes observed in actual systems of neurons. Suppose P, is the matrix which projects from an arbitrary vector V onto the set of components which correspond to states of a program p. The transfer matrix can then be written in the form T=C,+T,,

(32)

where C, = TP, = P,T and T, = T(l - P,), so that (T,)‘:+’ = 0, if I is the maximum value of p, the number of applications of the transfer matrix needed to reach one of the states of the program sequence. Clearly, C, is a matrix which effects a cyclic interchange of the states of the program. From these properties of the projection matrix P,, we have T2=C;+PpTp+T;, .. .... .... .....I..................................................... T”=C~+C~-‘T,+...+C,~-‘+~~, T”fi=C,“F

&=1,2,...).

(33)

682

T. TRIFFETand H. S. GREEN

In particular, since C; = 1, we have T‘+” = 7, which is the minimum polynomial identity satisfied by the transfer matrix of the system with the prescribed input. Now suppose that the same system receives a different input I’, possibly due to the activation of different afferent fibers or due to a different extracellular field. Then the program p’ executed by the system will also be different; some examples of this will appear in the numerical simulations which we shall describe later in this paper. It is clear that the same system may execute a variety of programs, corresponding to different input patterns. But the matrices C; and C;, corresponding to different programs will not necessarily commute on any proper subspace of vectors; and, if they do not, the programs p and p’ will interfere with one another, in the sense that, when the first program has been executed, the system will be left in a state not belonging to the sequence which constitutes the second. Though this possibility cannot be ruled out, it would certainly detract from the readiness of the system to execute either program. We therefore conjecture that, as a result of evolutionary and learning processes affecting synaptic and extracellular transmission, matrices corresponding to different programs routinely executed by a system of neurons should commute at least weakly; this means that there should be at least one state vector to which the order of application of the program matrices is immaterial. But this condition, imposed in the interests of efficiency, implies stringent constraints on the synaptic connections, multiplicities and strengths (weights) of the system, and the corresponding extracellular input. As a first illustration of this “Principle of Efficiency”, which may, presumably, be attributed at least partly to evolution, consider the system of neurons constituting a column of the cerebellar cortex, already shown in much simplified form in Fig. 1. For the reasons given in the Introduction, we suppose that the granule cells are those which are principally influenced by extracellular fields in the granular layer. Then it is clear, and can be seen from the explicit form of the T-matrix given in the previous section, that the projection matrices corresponding to programs initiated by similar extracellular and external synaptic inputs will commute, since both types of input are directed to the same (i.e. granule) cells which are affected by the extracellular field. A second illustration is provided by the mechanism of learning, as presently understood in the light of experiments [14]. This involves the sensitization of the cells responsible, for example, for motor activity, primarily as a result of calcium currents associated with graded and action potentials affecting them. Cells sensitized in this way will participate in a program in which they are readily excited from their lowest resting states to firing states, and thence return by way of the refractory states to lowest resting states. A system of such neurons may execute a wide variety of programs which begin and end in the lowest resting state of the system, and these programs may be executed sequentially in any order. The condition for weak commutation of the matrices representing these programs is thus established in this instance as a result of the learning process. We shall now consider the reformulation of the fundamental equations (2) and (22) satisfied by the transfer matrix as a set of non-linear equations in the activation levels a(j) of the system of neurons and the amplitudes b(‘) of the extracellular field components. In this way we shall simplify the equations from the point of view of computer simulation, and also link with earlier work, including our own, in this area, which we have already referred to in the Introduction. The excitation level a(j) of the jth neuron of the system at time t is the eigenvalue, in the state represented by the vector V(r), of the time-independent matrix

(34) with the projection matrix py) as defined in equation (14). So we may express u(j) (t) as the scalar product a(j)(t) = V(r) = V(t)A(j)V(r), where V(r) is the dual of the vector V(r) (with the same components),

V(r)V(r) = 1.

(35) and satisfies (36)

(35) and

(2), we

can determine ucn(t + T) in terms of the aCk’(t). This requires

the evaluation of G(t

+ r) = O(t)TA(j)TV(t),

(37)

where T is the transpose of the transfer matrix T. It follows from equations (36) and (2) that TT = 1 (the unit matrix), and it is also true that the factors of T satisfy the same relation: TkjTkj = TkajTkoj = 1,

(38)

as indeed one can verify directly from the formulas (24)-(29) using C(j)c(j)= (c(‘))~- ’ c(J) = 1 etc. In evaluating the r.h.s. of equation (37), the transfer matrix is expressed in terms of its factors, which for convenience we denote temporarily by T, , T,, . . . (so that T = T, T, . . . T,), and use is made of the matrix identity A”‘(T,

. . . T,) = (T, . . . T,)A”’

+ (A”‘T,

- T, A”‘)T 2 . . . T, + . . . . ..+T...

. T,

_ ,

(A”‘T,

- T,Acj’).

(39)

When this is substituted into equation (37), the latter is reduced to a sum of terms, the first of which is simply a(J)(t), and the rest are in correspondence with the factors of T: U(j’(l + 5) = U(j)(f) + f

Akj,

(40)

k=l

where Akj = ~kj(t)Tkj(Ack)Tkj - TkjAck’) Vkj(t),

1

k-l n Tij v(t).

Vkj(t) = [

i=I

In the further evaluation of the Akj, we consider first terms withj # k. Then, if w(k,j) is positive, it follows from equations (28)-(30) and (34) that uW)

Tk,(A(j’Tkj - T,,A”‘)

= P1”) C &j) [ i= I

1

pi”,

@j) = 1 - mpg< i+,

.

(42)

The subscript m - i + 1 in this formula is to be evaluated as mod m, if negative. When the synaptic output from the kth to the jth cell is inhibitory, w(k,j) is negative; however, formula (29) is still applicable, and the above result is still valid if i is replaced by -i. Again we interpret the subscript m - i + 1 as mod m. The eigenvalue of the sum in formula (42) is then always w(k,j) (mod m). We denote the eigenvalue of P)“) for k >j, which in the formulation in terms of activation levels is the output variable of the kth cell, by ock)(t); this has the value 1 if the cell is in a firing state at time t, and 0 otherwise. Since this state may be changed by the matrix Tkk, the eigenvalue of Py’ will then be ock)(t + T) for k
=

w(k,j)

(k >A,

(k -=.i), (mod m),

(43)

T. TRIFFETand H. S. GREEN

684

It is sufficient to evaluate Akj mod m, because the activation levels of equation (40) take only integral values between 1 and m. The notation wjk has been adopted (rather than wkj) to conform with that of other authors [6]. For k =j, we have, from equations (24)-(27) and (34), T,j(A’j’Tjj - T,,A”‘) = A\” Pf’

(44)

if there is no extracellular input, and

Tjj(A”‘Tjj - T,,A’j’) =

A\j) + A$j) py’

(45)

for unit input of frequency w from the extracellular field. We introduce the eigenvalue d(j)(t) of Py), equal to 1 if the jth neuron is in a dynamical (i.e. a firing or refractory) state at time t, and otherwise 0. In a similar way, we define e(j)(t), equal to 1 if the jth neuron is sensitive to a component of the extracellular field of some frequency w, and otherwise 0. Then Ajj = d(j)(t) + x.e(j)(t) I

(mod m 1,

(46)

where xj = 1 for unit input from the extracellular field, but can have other integral (including negative) values in general. It is noteworthy that o(‘)(t), i(j)(t) and d(j)(t) are non-linear functions of a(j)(t), of the form

where the summation is restricted to values of the activation number a which correspond to firing, excitable and dynamical levels, respectively. Similarly, e(j)(t) can be expressed as a non-linear function of the appropriate field amplitude b(‘)(t). However, in some applications it is convenient to use the term xje(j)(t) to represent other forms of external (e.g. synaptic) input. The fact that a descent from the top to the bottom level normally occurs after an action potential in any neuron is another circumstance contributing to the non-linearity of the equations (40). It is obvious, however, that these equations are very well-suited to computer simulations, and they will be used for this purpose in the following.

4. NEURAL

NETWORK

MODELS

OF THE

CORTEX

The fundamental equations (40) for the activation levels of an arbitrary neural network may now be restated as &)(f + z) = u(j)(t) + d(j)(t) +x,,(j)(t)

+

C P)(f)~~~o(~)(f + zk) k#j

(mod ml,

(47)

where Tk = z if k j. The u(j)(t) take values from 1 to m, but, as indicated, may be treated mod m. The variable d(j)(t) depends on u(j)(t) and has the value 1 if u(j)(t) has one of the values m, 1, . . . Y, and 0 otherwise. Similarly, o(‘)(t) has the value 1 if u”)(t) has the value m, and zero otherwise; and i(j)(t) has the value 1 if u(j)(r) > r, but is zero otherwise. Also, e(‘)(t) = 1 if the jth neuron is sensitive to a particular non-zero component (with frequency o) of the potential of the extracellular field, or some other type of external input, but 0 otherwise. In the resulting computer simulation, if k
Unit neuronal circuits of the cortex

to be discussed in the next section. The connectivity cerebellar network shown in Fig. 1 is CF

..I -

0 0 2 i 0

685

matrix formed by the wik for the simple

MF

Gl

G2

B

St

0 0 2 2

0 0 0 0

0 0 0 0

0 0 0 0

0 0 o-1 -1

Go

P

0 0

0 0 0 0

-1

CF MF Gl G2, B St

Go P (48b) It will be noticed that the rows and columns of the matrix are numbered in the same order as the cells respond to external input. The components of the vector xj listed below are maximum values, and may be replaced by 0 as conditions require; where (as for Gl, G2, St and P) the input is extracellular, they may be reversed in sign. The non-vanishing values of the wjk are in correspondence with known synapses as shown in Fig. 1. The signs of these entries are positive or negative according as the cells making the synapses are known to be excitatory or inhibitory. The magnitudes of the entries were at first determined approximately from experimental estimates of the synaptic multiplicities, but, as described below, some have been modified to satisfy the requirements of efficient performance, which could be thought of as realized by means of the selective sensitization of cells in the process of learning. We have first chosen the values m = 11 and Y = 7 for the total number of activation levels and the number of refractory levels, respectively. The resting levels then correspond to a = 8, 9 and 10, so that 1I is the firing level, i.e. the level of an action potential. To specify the state V, of the system of neurons at time t = 0, arbitrary resting levels were at first chosen for each of the neurons. Also, a particular choice of external input to the system was made, including extracellular input to cells with dendrites in the outer molecular layer and/or the granular layer of the cerebellum, and/or, in general, synaptic input via either the climbing or the mossy fibers (but not both). Equations (47) were then solved, to obtain new activation levels for the times Z, 22, . . . , with the help of the computer program presented in the Appendix. It was found that the system soon settled into a new resting state V,. The same input was then applied to the system, and the system was allowed to settle into still another resting state VZ. In this way, a sequence of resting states V,, V, , Vz, . . . was found, corresponding to the extracellular and external synaptic input (if any). As the number of different resting states is finite, this sequence begins to repeat itself after an initial learning sequence; in the terminology of the previous section, the repetitive subsequence is a program, and can be regarded as the basic function of the cerebellar unit with this particular external input. For this purpose we have used various types of input, by themselves and in different combinations:

(1) extracellular input to St and P (molecular layer); (2) extracellular input to Gl and G2 (granular layer); (3) synaptic input to CF (the climbing fiber); (4) synaptic input to MF (the mossy fiber), The extracellular input may have either sign, so there are altogether 24 variations. By following the computational procedure outlined above, we have determined programs which we denote by PI, Pi, P29 PI29 Pi29 P3, P131 PI239 *. * 9 the subscripts indicating the forms of input which are coactive, and the primes those with negative extracellular input. For a general choice of the non-vanishing matrix elements in equation (48a, b), we found that most of these programs do not have any single resting state in common. Since this would greatly detract from the efficiency of the system in

686

T. TRIFFETand H. S.

GREEN

responding to external input of various types, we concluded that it should not happen after an initial learning process in nature, or in a well-devised simulation. However, from experience and consideration of the way sensitization (involving an adjustment of the weights) eliminates the initial learning sequence, we obtained the values (48a, b) of the wjk and xj, not usually very different from our original estimates, for which all 24 programs have a common resting state. This state is in fact, the lowest resting state of the whole system of neurons. Thus, if the system is in this ground state, it will return to the same state after the execution of any program, and will be ready to execute the same or any other program. The connectivity matrix w, in equation (48a) is not unique in securing this result, since it is the outcome of the initial choice of the w,,, and the sequence of different combinations of input used to determine mutually compatible programs. In the process, it was necessary to increase the values of certain entries in equation (48a) in a particular order, sometimes including the xj, which determine external input. The final form of w, shown in equation (48a) can thus be regarded as the result of a particular training experience. We display below a small subset of the programs by listing the neurons which fire in each sequence, in the correct temporal order. Colons and semicolons indicate places in the sequence where input was applied-colons denote non-negative values and semicolons negative values of the extracellular input: p, = (:P:St), pZ= (:Gl,G2,Go,P:Gl,G2,B,Go,P), p,> = (:P:Gl,G2,St,Go,P), P,~ = (:P:CF,St,P:P:CF,Gl,B,St,P), pi3 = (;CF,P;CF,Gl ,B,P), plZ3= (:P:CF,Gl,G2,B,ST,Go,P), p,24= (:P:MF,Gl,G2,St,Go,P), p& = (;MF;MF;MF,Go).

(49)

As above, the subscripts attached to the p, correspond to different types of input: e.g. p13 is the program resulting from extracellular input to the stellate cell (St) in the outer molecular layer, and synaptic input to the climbing fiber (CF). As shown by the program P,~~, if there is only excitatory input, the Purkinje cell (P), which is responsible for the extracerebellar output of the column, would fire, but most of the cells (like Go) would have no discernible function. But alternating positive and negative extracellular input to St and P result in alternating inhibition and facilitation of the Purkinje cell in this column, so that, as we shall see in more detail in the next section, a moving pattern of facilitation will occur in a model network containing many zones and columns of the type we are now considering. The action of the basket cell (B) is to inhibit the firing of Purkinje cells in neighboring columns, and the stellate cell (St) is similar but in the outer molecular layer where it is in a good position to receive input from the extracellular field. The Golgi cell (Go) can also produce a pattern of facilitation, but has a direct (inhibitory) effect on Gl and G2, and hence on memory. From a detailed examination of the various programs listed, it will be seen that each cell in the column has an essential and interesting role in relation to the functions of the column. The fact that only an extracellular field of a particular frequency can be transmitted by the neural membrane provides a natural explanation for the generation of potentials with a well-defined frequency in the extracellular fluid. As shown by the programs p, and p2 above, extracellular input produces an immediate firing in the cells affected, when the external potential reaches its maximum; such cells will therefore help to generate the field, which may be self-sustaining even in the absence of synaptic input to the cerebellar column.

Unit neuronal

circuits

of the cortex

687

We shall next consider the allocortical circuit shown in Fig. 2 in a similar way. By following a procedure essentially the same as that already described for the cerebellar circuit. we have obtained the following connectivity matrix: CP

PP

0 0 2 0 0 3 0 3

0 0 3 2 0 0 2 0 2

Wb=

Xb’[2

Bl

B2

B3

PI

P2

0 0 o-2 0 0 0 3 0

0 0

0 0 0 0 0 0

0 0 0 0 0 0 0

0 0 0 2 0 0 0 2

0 0 0 0 3 0 0 0

2

2

1

0

2 I.

Gr

0 0 0 o-2 0

o-2 0

CP PP Gr Bl , B2 B3 Pl P2

(504

(5Ob)

The entries listed again represent the final results obtained from a learning routine in which the initial non-vanishing values of the w,~ and xj were all + 1. With various forms of input, corresponding to different choices of non-vanishing xj, sequences of states, starting from the lowest resting state, were determined and if there was no return to the initial state the values of the wjk most used were increased in magnitude, together with the corresponding xi in some instances. This procedure may be regarded as the application of Hebb’s rule [IO] to our model. The learning process was considerably longer for the allocortical unit than for corresponding units of the cerebellum and cerebrum. An interesting feature of this process, when the assumed non-vanishing input included extracellular input to the granular layer, was the repeated appearance of states in which the granule cell was left at its threshold level, instead of the lowest resting level. Of course, at this level, the granule cell fired in response to the least additional input of any type, and its sensitivity to both synaptic and extracellular input was automatically increased (as shown by the final entries above). In this way the model displays the basic features of long-term potentiation, which has been observed in nature, and is regarded as an essential component in the process of formation of memory in the allocortex. But our model strongly suggests the importance of extracellular input for the process of long-term potentiation. The process is known to occur elsewhere in the cortex, and in fact it a common feature of the operation of our model wherever extracellular input occurs. The external connections of the allocortex, which indicate that it receives and returns input from sensory areas of the cerebrum, will be discussed in the next section. Finally we show the connectivity matrix w, of the simple cerebral unit show in Fig. 3, with the external input vector x, below:

w, =

CC

TC

Sl

S2

M

N

Al

A2

Bl

B2

C

0 0 2 2 1 2 1 0

0 0 2

0 0 0

0 0 0

0 0 0

1

0

0

0

0

0

1

0

0 0

0 0 0

1

0

1

0

0

0 1 3

1 0 2

0 3 9

1 I 0 1

0 0 0 2

0 0 0 0 0 0 0 0 0 0 0

0 0 0 0 0 0 0 0 0 0 0

0 0 0 0 0 0 0 0 0 0 0

0 0 0 0 0 0 0 0 0 0 0

0 0 0 0 0 0

0 0

0 2 1

0 0 0 0 2 0 0 0 0 0 0 0 0

-3

-2 0

-3 2

-3 0

0

PY 0

cc

0 0

TC Sl s2 M

0 0 0 0 0

-1 -1 -1 -1

0

0 0

0 0 0

0

0 I*

N

Al ’ A2 Bl B2 C PY

(514

WV

T. TRIFFETand H. S. GREEN

688

In this instance, the final entries are little changed from our original estimates, reflecting the fact that not much learning experience was required to achieve programmed behavior with all forms of input considered. Only the stellate cells and those in neighboring layers were much sensitized in the process. Since the cerebrum is the destination of all types of sensory data, relayed in most instances by the thalamus, this could be related to the need for rapid adaptation in processing sensory data. Taking account of the relatively large proportion of stellate cells, and experimental indications that they are capable of long-term potentiation [17], the suggestion, which is provided by our model, that they are the principal repository of “factual” memory derived from the senses, seems quite realistic. 5. STRUCTURE

AND PROPERTIES

OF A MODEL

OF THE CORTEX

In the last section, we discussed the modelling of the columns or zones of each of the three main subdivisions of the cortex. On this basis, it is possible to formulate a model of the entire cortex, or one of its hemispheres, consisting of n, cerebellar units, nb allocortical units and n, cerebral units with connectivity submatrices given by equations (48a), (5Oa) and (5la), respectively. The dimension of the connectivity matrix w for the entire system is then M = (n, + nb) + 12 n,, and this matrix will contain some non-vanishing elements, not yet specified, representing synaptic connections between different units. If n,, nb and n, are large, it is not feasible to display the entire matrix, but since it is sparse, it is possible to define the non-vanishing elements, and for computational purposes to list them, by giving their row and column numbers and values. The non-specific synaptic connections are of two types: local connections, between cells belonging to different units of the same type in the same subdivision of the cortex, and usually not far removed from one another; and distant connections, between cells in units of different types, in different subdivisions. In most instances the connections correspond to those well known from microscopic, including electron microscopic studies, in nature, but in modelling some distant connections we have simplified matters by omitting cells belonging to various nuclei, whose main function is to relay action potentials from one cortical unit to another. From neuroanatomical studies [18, 191 it may be inferred that many local connections are designed to secure inhibition, by one column or zone, of neighboring columns and zones in directions perpendicular to the direction of propagation of the extracellular field. In the cerebellum, the direction of propagation is that of the parallel fibers, which are axons of granular cells after they reach the molecular layer (see Fig. 1). The effect of these inhibitions is to ensure that activity is confined to columns in a strip containing the parallel fibers. The inhibitory connections are between neighboring Purkinje cells in a plane perpendicular to these fibers, and between basket cells in one column and Purkinje cells in neighboring columns in the same plane. On the other hand, the parallel fibers have the function of exciting Purkinje cells for some distance in the direction of propagation of the extracellular field. In our model, as can be seen from the programs listed in equations (49), only those Purkinje cells which receive excitatory input from the field will fire, even when the column has input from the mossy fibers; and it is therefore to be expected that there should be moving patterns of facilitation and inhibition produced by the local connections, in conjunction with the extracellular field. We have been able to observe such effects in the operation of our model, in both cerebellar and cerebral units. The distant connections are of interest in determining the overall functions of the model cortex. In a system of cerebellar, allocortical and cerebral ulnits such as we are now considering, the thalmocortical cells of the cerebral units are those which receive the external sensory input, and the pyramidal cells form distant connections with cells of the perforant pathway to the allocortex and the climbing and mossy fibers of the cerebrum. The pyramids of the hippocampal region of the allocortex provide return output to the cerebrum. Finally, the Purkinje cells of the cerebellum are important in determining the motor output of the entire system. The model which we have developed enables us to examine the relation between the sensory input to the cerebrum and the motor output from the cerebellum. Without the mediation of the allocortex, this relation is simple and determined directly by the distant connections, but the influence of the extracellular field is seen in determining those cells of the cerebellum which respond to input from the cerebrum to the mossy fibers, as might be expected from the programs shown

Unit neuronal circuits of the cortex

689

in equations (49). With the mediation of the allocortical units, the relation between the sensory input and the motor output is more complex, since the fact that the individual units are programmed does not guarantee that the system as a whole is programmed, and some units do not return to their lowest resting state. It thus appears that an additional learning process should ensue, in which some units are further sensitized, and some of the weights in the connectivity matrix are therefore changed, depending on the type of sensory input assumed. We conclude with some general remarks concerning the application of the model to rather large systems of neurons. We have found that it is a feature of the model, even with systems of several hundred neurons, that a resting state is reached after a time < 157, provided that no new external input occurs. Moreover, the actual firings of the neurons are usually confined to a time ~42, after which the approach of those neurons in refgractory states to their lowest resting states is predictable. With a periodic external input at realistic intervals of about 14~, it is therefore possible in the solution to equation (47) to restrict actual computation to intervals of <4r, with a corresponding reduction in computer time. Although the model is somewhat more complex than others of its kind, and necessarily so to achieve the enhanced similarity to the biological prototype, it is modular in construction and very economical in its requirements for computational memory. REFERENCES I. A. L. Hodgkin and A. F. Huxley, J. Physiol. 117, X%544 (1952). 2. T. Triffet and H. S. Green, J. rheor. Biol. 86,344 (1980); Murhl Modelling 5,383-399 Math1 Modelhng 1,41-61 115, 43-64 (1985).

3. 4. 5. 6. 7. 8. 9. IO. 11. 12. 13. 14. IS. 16. 17. 18. 19.

(1984). H. S. Green and T. Triffet, (1980); Math1 Modelling 3, 161-178 (1982); J. theor. Biol. 100,649-674 (1983); J. theor. Biol.

H. S. Green and T. Triffet, J. theor. Biol. 131, 199-221(1988); J. theor. Biol. 136, 87-116 (1989). T. Triffet and H. S. Green, J. biol. Phys. 3, 53-76, 77-93 (1975). J. J. Hopfield, Proc. n&n. Acad. Sci. U.S.A. 79,2554-2558 (1982); Proc. num. Acad. Sci. U.S.A. 81, 3088-3092 (1984). D. E. Rumelhart et al., Parallel Distributed Processing, Vols 1 & 2. MIT Press, Cambridge, Mass. (1987). R. Lorente de No, Stud. Rockefeller Insr. med. Res. 131; 132 (1947). W. Rail, Biophys. J. 2, 145-167 (1962). W. Rail and G. M. Shepard, J. Neurophysiol. 31, 844-915 (1978). T. A. Pedley, R. Traub and E. S. Goldensohn (pp. 255-269) and R. K. S. Wong and P. A. Schwartzkroin (pp. 238-254), In Cellular Pacemakers (Edited by D. 0. Carpenter). Wiley, New York (1982). D. 0. Hebb The Orgunizarion ofBehaviour. Wiley, New York (1949). R. M. J. Cotterill, In Physics ofthe Brain; Lecture Notes in Physics, No. 284, pp. 138-151. Springer, Berlin (1987). G. E. Hinton and J. A. Anderson (Eds), PuraNel Modes of Associuriue Memory. Erlbaum, Hillsdale, N. J. (1981). See also Refs [5, 6). J. C. Eccles, In Cerebro-CerebeNur Inreracrz’ons(Edited by J. Massionand K. Sasaki), pp. I-18. Elsevier,Amsterdam (1979). T. Bliss and T. Lomo, J. Physiol. (Land.) 232, 331-356 (1973). T. Bliss and A. Gardner-Medwin, J. Physiol. (Land.) 232, 357-374 (1973). T. J. Teyler and P. D. Scenna, A. Rev. Neurosci. 10, 131-161 (1987). E. Fifkova and A. van Harreveld, J. Neurocyrol. 6, 211-230 (1975). G. Lynch, S. Halpain and M. Baudry, In Neurobiology of rhe Hippocampus (Edited by W. Seifert), pp. 253-264. Academic Press, New York (1983). S. E. Fox, S. Wolfson and J. B. Ranck Jr In Neurobiology of the Hippocampus (Edited by W. Seifert), pp. 303-317. Academic Press, New York (1983). J. C. Eccles, In Cerebra/ Correx, Vol. 2 (Edited by E. G. Jones and A. Peters), pp. 1-36. Plenum Press, New York (1984). K. S. Lee, In Neurobiology of the Hippocampus (Edited by W. Seifert), pp. 265-272. Academic Press, New York (1983). S. L. Palay and V. Chan-Palay, Cerebellar Cortex. Springer, New York (1974). E. G. Jones and A. Peters (Eds), Cerebral Cortex, Vol. 1. Plenum Press, New York (1984). M. Ito, The Cerebellum and Neural Conrrol. Raven Press, New York (1984).

APPENDIX The program, shown overleaf in generalized language and modular format, calculates the activity levels a(i) of all neurons in a cerebral-allocortex-cerebellum network. Specific input patterns to the cerebral system result in specific output patterns from the cerebellum. A default pattern featuring thalamocortical fiber input to the first and third unit circuits and extracellular input to certain neurons in the first four circuits (L41) is included for purposes of illustration. As presented, each system contains five interconnected unit circuits, and three cycles are executed after each of six identical input phases, though these values (c, dn, f) and all other basic parameters (an, un, tn, sn, vn, xn) can easily be changed (L36-L37). Intercircuit and intersystem connections are also specified but can readily be modified (L38-L40). To reduce the effects of time dependence, firing flags are accumulated (o(i), as(i), osn(i)) and activation level changes postponed to the end of the appropriate cycle. The main program is concentrated in the first nine lines, the third to sixth of which may be reordered and duplicated to produce a wide variety of networks. Variable substitution occurs from L3 to L12, and primary processing from L13 to L34. Most of the remaining lines are devoted to simplifying setup operations and data entry in connectivity matrices, a sparse matrix technique being used for the intercircuit and intersystem cases. The table of the latter which appears near the end merely displays the non-zero elements of these matrices. A version of the program optimized to reduce processing time has been published elsewhere (31.

T. TRIFFET and H. S. GREEN

690

Ll

L2 L3

Lb

gosub L34 gosub L3 gosub L6:gosub L9 gosub L7:gosub LlO gosub L6:gosub Lll gosub LB e-a+l:if e-f then L2 gosub L3:goto Ll end for i-1 to gL:if exL(i)OO then aL(i)-aL(i)+exL(i) next:for i-1 to c:aL((i-l)*aL+l)-iLl(i):aL((i-l)*aL+P)-iLP(i):next print:print"Input Pattern Number";e+l:print if fO-1 then L-6 print "Corticocortical fiber by circuit number:" :for i-l to c:print iLl(i);" ";:next:fO-1:print print"Thalamocortica1 fiber by circuit number:" :for i-1 to c:print iLZ(i);" ";:next:print print"Extracellular by neuron number:" j-1:for i-l to gl:print exL(i);:if i-j*aL then j-j+l:print next:print return

L5 j-1:for i-1 to g:print a(i);:if i-j*a then j-j+l:print next:print:retum L6 a-aL:d-dL:u-uL:t-tL:s-sL:v-vL:x-xL:g-gL for i-1 to g:a(i)-aL(i):if b(i)00 then a(i)-b(i) b(i)-O:o(i)-O:t(i)-tL(i):x(i)-xL(i):next for i-l to a:for j-l to a:wn(i,j)-wL(i,j):next:next for i-l to c:for j-l to c:ic$(i,j)-L$(i,j):next:next:gosub for i-1 to g:aL(i)-a(i):tL(i)-t(i):xL(i)-x(i) :osL(i)-osL(i)+os(i):os(i)-0:next print"L":gosub L5:b-0:return

L12

L7 a-ax:d-dx:u-ux:t-tx:s-sx:v-vx:x-xx:g-gx for i-l to g:a(i)-ax(i):if b(i)00 then a(i)-b(i) b(i)-O:o(i)-C:t(i)-tx(i):x(i)-xx(i):next for i-1 to a:for j-l to a:wn(i,j)-wx(i,j):next:next for i-l to c:for j-l to c:ic$(i,j)-x$(i,j):next:next:gosub for i-1 to g:ax(i)-a(i):tx(i)-t(i):xx(i)-x(i) :osx(i)-osx(i)+os(i):os(i)-0:next print"x":gosub LS:b-0:retum

L12

a-am:d-dm:u-um:t-tm:s-sm:v-vm:x-xm:g-gm for i-l to g:a(i)-am(i):if b(i)00 then a(i)-b(i) b(i)-O:o(i)-O:t(i)-tm(i):x(i)-xm(i):next for i-l to a:for j-1 to a:wn(i,j)-wa(i,j):next:next for i-1 to c:for j-1 to c:ic$(i,j)-m$(i,j):next:next:gosub L12 for i-l to g:am(i)-a(i):tm(i)-t(i):xm(i)-x(i) :osm(i)-osm(i)+os(i):next print"m":gosub L5:b-0: 'cerebellum circ:";:for i-1 to c:print i;:next:print "purkinja nauron.*";:for i-1 to c:print osm(i*a);:next :for i-1 to gL:os(i)-O:next:return L9

LlO

Lll

L12 L13 L14

I

for i-l to c:for j-1 to c:ic$(i,j)-Lx$(i,j):next:next for n-l to gL:if osL(n)OO then p(n)-osL(n):o(n)-1 next:ab-ax:ac-aL:gosub L27:return for i-l to c:for j-1 to c:ic$(i,j)-xL$(i,j):next:next for n-l to gx:if osx(n)OO then p(n)-osx(n):o(n)-1 next:ab-aL:ac-ax:gosub L27:return for i-1 to c:for j-l to c:ic$(i,j)-Lm$(i,j):next:next for n-l to gL:if osL(n)OO then p(n)-osL(n):o(n)-1 next:ab-am:ac-aL:gosub L27:return k-l if WC then L25 al-(k-l)*a+l:a2-al+a-1 for i-al to a2 a(i)-a(i)+x(i) if a(i)->t(i)+u then o(i)-l:os(i)-os(i)+l if a(i)<-s then a(i)-a(i)+1 next Gl:m-1 for i-al to a2:for j-al to a2 if a(j)-* and a(j)<-s or o(j)-0 then L15 a(i)-a(i)+WL,m)*o(j)

Unit neuronal circuits of the cortex L15

L16 L17 L18

L19

L20 L21 L22 L23 L24 L25 L26 L27 L28

L29 L30 L31 L32

L33 L34

in-m+l:next:GL+l:rl next for i-l to g if a(i)& then a(i)-v if o(i)-1 then L16 else L18 a(i)-a(i)+v if a(i)->t(i)+u then L16 next for x-l to c:for v-1 to c I.-l:if ic$(x,y)-“; then L24 L$-mid$(ic$(x,y),L,l):r$-r$+L$ if L$-“” then L24 if L$-” “then L21 if L$-“, “then L23 L-L+l:goto L19 if f2-1 then L22 n-vaL(r$):r$-““:f2-l:goto L20 m-vaL(r$):r$-““:f2-O:goto L20 D-WiL(r$)

:r$-“”

&m+(y-l)*a!n-n+(x-l)*a a(m)-a(m)+p*o(n):m-O:n-O:p-0:goto next:next for i-al to a2:o(i)-O:next:k-k+l:goto b-b+l:print”*“: if b-d then L26 else L13 return for x-l to c:for y-l to c L-1:if ic$(x,y)-“” then L33 L$-mid$(ic$(x,y) ,L,l) :r$-r$+L$ then L33 if L$-“” if L$-” ” then L30 if L$-“,” then L32 L-L+l:goto L28 if f2-1 then L31 n-vaL(r$):r$-““:f2-l:goto L29 m-vaL(r$):r$-“‘:f2-O:goto L29 q-vaL(rS):r$-“” &m+(y-l)*ab:n-n+(x-l)*ac b(m)-b(m)+p(n)*o(n):m-O:n-O:p-0:goto next:next:return

L20 L14

L29

Cerebra(l)-Allocorte(x)-Cerebra(l)-Cerebellu(m) Circuit”:print “To run the program, accepting default values for all constants, connectivities and initial values of variables, press (ent); To modify these values enter c(ent):” input c$:if c$-“c” then L35 fl-1:goto L36

print”

L35

,

print”Integer c f

values must be assigned to the following constants: (number of unit circuits for all brain systems) (number of cerebral pattern input repetitions) (dimension of square connectivity matrices) (number of system cycles for each pattern input) un (neuron activation at the firing level) level) tn (neuron threshold potential at the highest refractory level) sn (neuron activation at the lowest refractory level) vn (neuron activation potential level) xn (neuron extracellular where n-L,x and m, the cerebral, allocortex and cerebellum unit circuits. Enter (ent) to accept or d(ent) to change default values:” d$:if d$-“d” then list L36-L37

z

input L36

c-5:aL-12:dL-3:uG2:tL-O:st-2:vG-7:xL-O:gL-c*aL f-6:ax-8: dx-3:ux-2:tx-O:sx--2:vx--7:xx-O:gx-c*ax am-g: dm-3:um-2:tm-0:Sm-*2:vm--7:xm-O:gm-c*am dim a(gL),t(gL),x(gL),o(gL),os(gL),p(gL),b(gL) dim wn(aL,aL),ic$(c,c),nl(c),n2(c) dim wL(aL,aL),wiL(aL,aL),LS(c,c),iLl(gL),iL2(gL),exL(gL) dim aL(gL),tL(gL),xL(gL),osL(gL) dim W((ax,ax),wfx(ax,ax),x$(c,c),ixl(gx),ix2(gx),exx(gx) dim ax(gx),tx(gx),xx(gx),osx(gx) dim wm(am,am),wim(am,am),m$(c,c),iml(p),im2(gm),e~(gm) dim dim

T. TRIFFETand H. S. GREEN

692

L37

L38

L39

L40 LA1

L42

for i-1 to gL:aL(i)-sL:tL(i)-tL:xL(i)-xL:next for i-1 to gx:ax(i)-sx:tx(i)-tx:xx(i)-xx:next for i-1 to gm:am(i)-sm:tm(i)-tm:xm(i)-xm:next for i-1 to c:iLl(i)-sL:iL2(1)-L:next for i-1 to al:for j-1 to al:read wl(i,j):next:next for i-l to ax:for j-l to ax:read wx(i,j):next:next for i-1 to am:for j-l to em:read wm(i,j):next:next L$(1,2)-"10 12 -1,12 2 l,":L$(2,3)-"10 12 -1.12 2 1,” L$(3,4)-"10 12 -1,12 2 l,":L$(4,5)-"10 12 -1,12 2 1." L$(2,1)-"8 12 -1,":L$(3,2)-"8 12 -1," L$(4,3)-"8 12 -1,":L$(5,4)-"8 12 -1," x$(1,2)-"5 2 -1,7 2 1, ":x$(2,3)-"5 2 -1,7 2 1," x$(3,4)-"5 2 -1,7 2 1. ":x$(4,5)-"5 2 -1,7 2 1," x$(2,1)-"4 8 -1,":x$(3,2)-"4 8 -1," x$(4.3)-"4 8 -1,":x$(5.4)-"4 8 -1," m$(1,2)-"4 2 1.8 8 -1, ":m$(2,3)-"4 2 1,8 8 -1," m$(3,4)-"4 2 1,a a -1, ":m$(4,5)-'4 2 1,s a -1," m$(2,1)-"5 a -1,3 1 l,a a -1, ":m$(3,2)-"5 8 -1.3 11,8 8 -1," m$(4,3)-"5 8 -1,3 1 1.8 8 -1, ":m$(5,4)-"5 8 -1,3 1 1,8 8 -1," Lx$(l.l)-"12 2 5, ":Lx$(2,2)-"12 2 5,":Lx$(3,3)-"12 2 5," Lx$(4,4)-"12 2 5,":Lx$(5,5)-"12 2 5." XL$(l,l)_"8 1 5, ":xLS(2,2)-"8 1 5,":xL$(3,3)-"8 1 5," XL$(4,4)_"8 1 5, ":xL$(5,5)-"8 1 5," Lm$(1,1)-"12 1 S,":Lm$(2,2)-"12 1 S,":Lm$(3,3)-"12 1 5," Lm$(4,4)-"12 1 S,":Im$(S,S)-"12 1 5," iL2(1)-2:X2(3)-2: exL(12)-2:exL(15)-l:exL(36)-2:exL(39)-1 print"Integer values must also be assigned in advance to: - elements of the unit circuit connectivity matrices m(i.j) win(1.j) - elements of the intercircuit connectivity matrices - elements of the intersystem connectivity matrices nn(i,j) - cerebral fiber input activation levels LP(i) where p-l for cortical and 2 for sensory fibers. Enter wn(ent) to modify the default wn(i,j), c(ent) followed by the neuron number and its activation, threshold and extracellular levels to change any initial value (noting that the neurons are to be numbered from 1 to gn in each system), or press (ent) to continue. Type d(ent) to display tabulated default values of the win(i,j), nn(i,j) and Lp(i):" input dS if d$-"wL" then list L51-L52 if d$-"wx' then list L52-L53 if d$-"wm" then list L53-L54 if d$-"d" then list L54-L55 if d$-"c" then I.42 else L43 print"Input n, then new values for i, an(i),tn(i) and xn(i), continuing with '(ent) and ending with (ent)." input"Enter L,x or m:";n$ if n$-"L" then input i,aL(i),tL(i),xL(i) if n$-"x" then input i,ax(i),tx(i),xx(i) if n$-"m" then input i,am(i).tm(i),xm(i) if n$--"I" then L.42 else L.43

IA3 print"List the non-zero elements of the win(i,j) by first entering n, then typing the numbers of the interconnected circuits ordered from output to input (ent). When elements are requested enter the subscripts (1 to an, each followed by one space) and weights of their connectivity matrix, ending each element entry with (ent), each circuit entry with '(ent), and data entry with ,(ent)." 7.44 input"Enter L,x,m(ent), or d(ent) to change default values:";n$ if n$-"d" then list L38-L39 gosub L47:if n$-"" then L45 else 7.44 L45 print"List the non-zero elements of the nn(i,j) by first entering n(out)n(in), then typing the numbers of the intraconnected circuits (noting that unit circuits are to be numbered from 1 to c in each system). When elements are requested enter the element subscripts (1 to an,each followed by one space) and weights of the connectivity matrix, ending each element entry with (ent), each circuit entry with '(ent) and data entry with ,(ent).":print L&6 input"Enter Lx,xL,Lm,mL(ent), or d(ent) to change default values:";n$ if n$-"d" then list L39-I.40 gosub L.47:If n$-"" then L50 else L46 L47 L input"i,j-"; i,j:k-1:if i-0 and j-0 then print:goto LA9

Unit neuronal circuits of the cortex L48 U9

L50

693

print”Element”;k:input e$(k):if e$(k)-"'" then L.47 c$(i,j)-c$(i,j)+e$(k)+",":k-k+l:goto U8 if n$-"L" then for i-1 to c:for j-1 to c:L$(i,j)-c$(i,j):next:nexr then for f-l to c:for j-1 to c:x$(i,j)-c$(i,j):next:next if n$-"x" if n$-"m" then for i-l to c:for j-1 to c:m$(i,j)-c$(i,j):next:next if n$-"Lx" then for i-l to c:for j-l to c:I.x$(i,j)-c$(i,j):next:next if n$-"XL" then for i-1 to c:for j-1 to c:xL$(i,j)-c$(i,j):next:next if n$-'I.m" then for i-l to c:for j-l to c:Lm$(i,j)-c$(i,j'):next:next if n$-"mL" then for i-1 to c:for j-l to c:mL$(i,j)-c$(i.j):next:next for i-l to c:for j-1 to c:c$(i,j)-"":next:next:return

’ print"Press (ent) to set the levels of the input cerebral fibers and run the program, or d(ent) to change default values. Input fibers will fire first and firing will continue until (brk) is pressed, followed by any key to resume or ctl-c to end; activation levels of all neurons will increase by 1 at the start of each cycle unless they are in a resting state (-l,O,l default), and all fiber inputs will be repeated after every dn cycles." input n$:print:if n$-"d" then list Ul print"Enter the input pattern to the cerebral circuits as fiber activation levels:" for i-1 to c:print"iLl(";i;")":input iLl(i):next for i-1 to c:print"iL2(";i;')":input iL2(i):next return

L51

’ data

L52

12 CC 0, 0, 2, 2, 1, 2, 1, 0. 1, 0, 1, 3,

TC 0, 0, 2, 1, 0, 1, 0, 0, 0, 1, 0, 2,

3 Sl 0, 0, 0. 0, 0, 0, 0, 0, 0, 0, 3, 9,

4 S2 0, 0, 0, 0, 0, 2, 1, 1, 1, 1, 0, 1,

5 H 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 2,

10 6 7 8 9 N Al A2 Bl B2 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0. 2, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0. 0. 0, 0, -3, -2, -3, -3,

11 C 0, 0, 0, 0, 0, 0, -1, -1, -1, -1, 0, 0,

12 Py 0: 0: 0: 0: 0: 0: 0: 0: 0: 0: 0: 0:

Out /In 'CC 1 'CC 2 'Sl 3 'S2 4 ‘M

5

'N 'Al 'A2 'Bl 'B2 'C 'Py

6 wL(i,j) 7 8 9 10 11 12

’ 5 6 7 8 out 12 3 4 CP PP Cr Bl 82 B3 Pl P2 / In data 0, 0, 0, 0, 0, 0, 0, 0: 'CP 1 0, 0, 0, 0, 0, 0, 0, 0: 'PP 2 2, 3, 0, -2, 0, 0, 0, 0: 'Cr 3 0, 2, 0, 0, 0, 0, 2, 0: 'Bl 4 wx(i,j) 0, 0, 0, 0, 0, 0, 0, 3: 'B2 5 3, 0, 0, 0, 0, 0, 0, 0: '83 6 0, 2, 3, 0, -2, 0, 0, 0: 'Pl 7 3, 0, 0, 0, 0, -2, 2, 0: 'P2 8

L53



data

L54

12 CF 0,

MF 0,

0,

03

2, 0, 2, 2, 0, 7,

2, 2, 0, 0, 1, 0,

4 C2 0,

5 B 0,

6 St 0,

7 Go 0,

8 P 0:

0,

0,

0,

0,

0,

0:

0, 0, 2, 0, 2, 2,

0, 0, 0, -1, 0: 0, 0, -1, -1, 0: 0, 0, -2, 0, 0: 1, -2, 0, 0, 0: 2, 0, 0, 0, -2: 2, -1, -3, 0, 0:

3 Gl 0,

’ ’ Interconnecting

Out /In

‘CF 1 ‘MF 2

'Gl 'G2 'B 'St 'Go 'P

3 4 5 6 7 8

wm(i,j)

Output Input Output Input Connection i m its n f P ' Circuits Systems Circuit Circuit Neuron Neuron Ueight 12 -1 1 I 2 10 I$ , 2 1 12 12 -1 10 2 I3 I 2 1 12 12 -1 3 ,4 10 2 1 12 , 12 -1 4 3 5 10 , 2 1 12 12 -1 2.1 8 3 , 2 , 12 -1 8 12 -1 4 I3 8 ,

694

T. TRIFFET and H. S. GREEN

,

XS

mS

hxS

,

XLS

h$

L55

5 1

I 4 , 2

2

, 3

3

I 4

4

I 5

2 3 4 5 1

I . , , I

2

I 3

3

34

4

, 5

2

81

3

, 2

1 2 3 4

2

4

83

5

14

91 2 I 2 3 83 4 , 4 5 , 5 1 , 1 2 I 2 3 , 3 4 I 4 5 I 5 1 I 1 2 I 2 3 3 3 4 I 4 5 , 5 1

0 5 7 5 7 5 7 5 7 4 4 4 4 4 8 4 a 4 8 4 a 5 3 8 5 3 8 5 3 a 5 3 a 12 12 12 12 12 a 8 8 a 0 12 12 12 12 12

12 2 2 2 2 2 2 2 2 8 8 8 8 2 8 2 a 2 8 2 0 0 1 a 8 1 8 8 1 a 8 1 8 2 2 2 2 2 1 1 1 1 1 1 1 1 1 1

-1 -1 1 -1 1 -1 1 -1 1 -1 -1 -1 -1 1 -1 1 -1 1 -1 1 -1 -1 1 -1 -1 1 -1 -1 1 -1 -1 1 -1 5 5 5 5 5 5 5 5 5 5 5 5 5 5 5