Neurocomputing 38–40 (2001) 965–971
Rule-dynamical approach to hippocampal network†

Masami Tatsuno*, Yoshinori Nagai, Yoji Aizawa

Laboratory for Mathematical Neuroscience, RIKEN Brain Science Institute, 2-1 Wako-shi, Saitama 351-0198, Japan
Center for Information Science, Kokushikan University, 4-28-1 Setagaya, Setagaya-ku, Tokyo 154-8515, Japan
Department of Applied Physics, Waseda University, 3-4-1 Okubo, Shinjuku-ku, Tokyo 169-8555, Japan
Abstract

To approach a complex system such as the brain, we propose a new constructive strategy based on rule dynamics. First, the similarity between the brain and rule-dynamical cellular automata (CA) is pointed out, and we show that rule-dynamical CA can be represented by 2-layered neural networks. Based on these findings, a hippocampal network is constructed from a rule-dynamical point of view, and it is shown that the temporal pattern in each region depends on the input pathway; that is, the multi-synaptic input produces a spatiotemporal pattern, while the direct input produces a periodic pattern. © 2001 Elsevier Science B.V. All rights reserved.

Keywords: Rule dynamics; Cellular automata; Hippocampus; Neural networks
1. Introduction

The brain is the highest example of a complex architecture: it is composed of a huge number of neurons in mutual interaction. To understand brain function, a number of computational-neuroscience studies have been carried out, such as computer simulations based on detailed models of neurons [4] and information-theoretic analyses of spike statistics [3].
† This work was supported by Research Grant 626-52045 from RIKEN.
* Corresponding author. Tel.: +81-48-467-9664; fax: +81-48-467-9693.
E-mail address: [email protected] (M. Tatsuno).
0925-2312/01/$ - see front matter © 2001 Elsevier Science B.V. All rights reserved.
PII: S0925-2312(01)00420-9
Here we present another approach to brain function using rule dynamics, which was originally developed by Aizawa et al. in cellular automaton (CA) studies [1]. The key concept of rule dynamics is the coexistence of local and global interaction; that is, each unit interacts not only with neighboring units but also with the global activity of the system. Rule dynamics describes the time evolution of a CA by a combinatorial change of fundamental rules, and the 2-state CA with nearest-neighbor interaction has been successfully described in this way. The coexistence of local and global interaction is also a significant feature of nervous systems. For example, neurons often receive not only local synaptic inputs but also global neuromodulatory effects, and, depending on the distribution of axonal and dendritic terminals, synaptic connections can be both local and global. The brain and rule-dynamical CA therefore share a common feature, and it is important to investigate how the brain can be described from a rule-dynamical point of view. In this paper, we first explain rule-dynamical CA briefly, emphasizing that this simple system shows rich temporal behavior and self-generated complexity. We then show that rule-dynamical CA can be represented by 2-layered neural networks. Finally, we construct a rule-dynamical neural network of the hippocampus and investigate the temporal pattern which develops in each region.
2. Rule-dynamical cellular automata

Let us consider a 2-state cellular automaton with nearest-neighbor interaction. According to rule dynamics, all 256 rules of this CA system, which represent the local interaction, can be rewritten as combinations of the following five fundamental rules:

$$
\begin{aligned}
f_1 &= S_{i-1} + S_i + S_{i+1},\\
f_2 &= S_{i-1}S_i + S_iS_{i+1} + S_{i+1}S_{i-1},\\
f_3 &= S_{i-1}S_iS_{i+1},\\
f_4 &= (S_i - S_{i-1})(S_i - S_{i+1}),\\
f_5 &= f_1 f_2 \qquad (\text{mod } 2 \text{ for all}),
\end{aligned}
\tag{1}
$$

where $S_i$ represents the state of the $i$th cell, taking the value 0 or 1. The time evolution of each unit is written as a combination of the five fundamental rules,

$$
S_i(t+1) = \sum_k \alpha_k(t)\, f_k\bigl(S_{i-1}(t), S_i(t), S_{i+1}(t)\bigr),
\tag{2}
$$

where $\alpha_k(t)$ denotes the coefficient of the $k$th fundamental rule at time $t$, which takes the value 0 or 1. Here, $\alpha_k(t)$ is controlled by the average activity, and one of the simplest forms is written as

$$
\alpha_k(t) = \Theta_k\!\left(\pm\left(\frac{1}{N}\sum_{j} S_j(t) - C_k\right)\right),
\tag{3}
$$
Fig. 1. An example of the rule dynamics on 2-state cellular automaton with nearest-neighbor interaction. (a) Temporal pattern of the cell states. The x-axis indicates the site number and the y-axis the time step. (b) Time course of the rules. The 32 rules composed of 5 fundamental rules are labeled accordingly. (c) Time course of normalized average activity.
where $\Theta(\cdot)$ represents a step function, $\pm$ represents the plus-or-minus sign, and $C_k$ is the threshold for the $k$th fundamental rule. Through $\alpha_k(t)$, the global interaction influences the time evolution of the local units. An example of the time course of this system is shown in Fig. 1. Here, a self-sustained complex pattern develops (Fig. 1a), and both the changes of the rule (Fig. 1b) and the average activity (Fig. 1c) show clear intermittency. In other words, this simple system generates a variety of temporal patterns and a complex overall pattern through its local and global interaction.
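The update scheme of Eqs. (1)–(3) can be sketched in a few lines of Python. This is a minimal illustration, not the authors' code: the thresholds `C`, the ± signs, the lattice size, the reconstruction of $f_5$ as $f_1 f_2$, and the mod-2 closure of the rule combination in Eq. (2) are all assumptions made for the sketch.

```python
import numpy as np

def fundamental_rules(left, mid, right):
    """The five fundamental rules of Eq. (1), taken mod 2 so that
    every value stays in {0, 1}. f5 is reconstructed as f1 * f2."""
    f1 = (left + mid + right) % 2
    f2 = (left * mid + mid * right + right * left) % 2
    f3 = (left * mid * right) % 2
    f4 = ((mid - left) * (mid - right)) % 2
    f5 = (f1 * f2) % 2
    return np.stack([f1, f2, f3, f4, f5])          # shape (5, N)

def step(S, C, signs):
    """One rule-dynamical update, Eqs. (2)-(3)."""
    mean_act = S.mean()                             # (1/N) sum_j S_j(t)
    alpha = (signs * (mean_act - C) >= 0).astype(int)   # alpha_k(t), Eq. (3)
    f = fundamental_rules(np.roll(S, 1), S, np.roll(S, -1))
    return (alpha @ f) % 2                          # rule combination, Eq. (2)

rng = np.random.default_rng(0)
N, T = 100, 50
S = rng.integers(0, 2, N)                 # random initial cell states
C = np.array([0.2, 0.3, 0.5, 0.6, 0.8])  # thresholds C_k (illustrative)
signs = np.array([1, -1, 1, -1, 1])       # the +/- signs (illustrative)
history = [S]
for _ in range(T):
    S = step(S, C, signs)
    history.append(S)
```

Plotting `history` as an image yields a space–time diagram analogous to Fig. 1a, and tracking `alpha` over time corresponds to the rule dynamics of Fig. 1b.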
3. Rule-dynamical neural networks

We now consider a neural-network description of cellular automata. The simplest case can be written as

$$
S_i(t+1) = \Theta\bigl(S_{i-1} + S_i + S_{i+1} - T_i\bigr),
\tag{4}
$$

where $S_i(t)$ represents the state of the $i$th McCulloch–Pitts unit, which takes the value 0 or 1, and $T_i$ is the threshold of the $i$th unit. Although this is
Fig. 2. Time evolution of excitatory units in 2-layered neural networks. The left panel shows a periodic pattern (class 2) and the right panel an aperiodic pattern (class 3).
the simplest McCulloch–Pitts neural network, several fundamental rules such as $f_1$, $f_2$ and $f_3$ can be obtained by adjusting the value of the threshold. This threshold adjustment is also achieved if we introduce an inhibitory effect from surrounding inhibitory neurons. We therefore construct 2-layered neural networks consisting of an excitatory and an inhibitory neuron layer. The time evolution is typically written as
$$
S_i^{E}(t+1) = \Theta\!\left(\sum_{j}^{N_E} w_{ij}^{E} S_j^{E}(t) + \sum_{k}^{N_I} w_{ik}^{I} S_k^{I}(t) - T_i^{E}\right),
\tag{5}
$$

$$
S_k^{I}(t+1) = \Theta\!\left(\sum_{i}^{N_E} w_{ki}^{E} S_i^{E}(t) + \sum_{h}^{N_I} w_{kh}^{I} S_h^{I}(t) - T_k^{I}\right),
$$

where $S_i^{E}$ and $S_k^{I}$ represent the states of the $i$th excitatory unit and the $k$th inhibitory unit, $w_{ij}^{E}$ and $w_{ik}^{I}$ represent the excitatory connection from the $j$th to the $i$th unit and the inhibitory connection from the $k$th to the $i$th unit, and $T_i^{E}$ and $T_k^{I}$ represent the thresholds of the $i$th excitatory and the $k$th inhibitory unit, respectively. If the connection range between excitatory and inhibitory neurons is sufficiently large, the inhibitory effect reflects the global activity of the excitatory neurons. That is, each excitatory unit is driven not only by local connections from neighboring excitatory units, but also by global connections from inhibitory units that reflect the average activity of the excitatory ones. An example of the time evolution of this 2-layered neural network is shown in Fig. 2. Depending on the connection range between excitatory neurons, the network shows fixed-point behavior (not shown), periodic behavior, and aperiodic behavior, corresponding to classes 1, 2, and 3 of Wolfram's classification, respectively [5].
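A minimal Python sketch of the 2-layered dynamics of Eq. (5) follows. The connection range, all weight values, and the thresholds are illustrative assumptions chosen so the sketch runs, not the parameters used in the paper's simulations.

```python
import numpy as np

def theta(x):
    """Heaviside step function."""
    return (x >= 0).astype(int)

rng = np.random.default_rng(1)
NE, NI = 100, 20       # numbers of excitatory / inhibitory units
R = 3                  # excitatory-excitatory connection range

# Local E->E connections: each excitatory unit sees neighbors within range R
WEE = np.zeros((NE, NE))
for i in range(NE):
    for d in range(-R, R + 1):
        WEE[i, (i + d) % NE] = 1.0

# Global coupling through the inhibitory layer: dense E->I weights make
# each inhibitory unit read out the average excitatory activity
WEI = -0.5 * np.ones((NE, NI))   # I -> E (inhibitory, hence negative)
WIE = np.ones((NI, NE)) / NE     # E -> I
WII = np.zeros((NI, NI))         # no I -> I coupling in this sketch

TE = 2.0 * np.ones(NE)           # thresholds T^E_i
TI = 0.3 * np.ones(NI)           # thresholds T^I_k

SE = rng.integers(0, 2, NE)      # initial excitatory states
SI = np.zeros(NI, dtype=int)     # initial inhibitory states
for t in range(50):              # Eq. (5), both layers updated in parallel
    SE, SI = (theta(WEE @ SE + WEI @ SI - TE),
              theta(WIE @ SE + WII @ SI - TI))
```

Varying `R` moves the network between the qualitative regimes of Fig. 2, from periodic to aperiodic behavior.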
Fig. 3. Temporal pattern of pyramidal units in CA1 region. The left panel is the case for a multi-synaptic input, while the right panel represents the case for a direct input.
4. Rule-dynamical network of hippocampus

In the preceding sections, we have shown that (1) the brain and rule-dynamical CA have similar characteristics, and (2) rule-dynamical CA can be represented by 2-layered neural networks, which we call rule-dynamical NNs. We now apply the rule-dynamical NN representation to the hippocampus and investigate the temporal pattern in each hippocampal region. We modeled each region of the hippocampus (EC layer II, EC layer III, EC layer V, DG, CA3, CA1 and Subiculum) by a rule-dynamical NN, and connected the regions anatomically. Long-range excitatory–excitatory connections were allowed only in DG and CA3. In the numerical simulation, two input pathways were considered: a multi-synaptic pathway which started from EC layer II and proceeded to DG, CA3, CA1, Subiculum and EC layer V, and a direct pathway which started from EC layer III and proceeded directly to CA1, Subiculum and EC layer V. We generated a random pattern as an input to EC layer II or EC layer III, and investigated the pattern which developed in each region.

The temporal pattern observed in CA1 is shown in Fig. 3. We chose CA1 because it is the area where the two inputs meet and integration may take place. The left panel shows the time evolution when the multi-synaptic pathway is stimulated: CA1 sustains an aperiodic pattern. In contrast, a periodic pattern develops when the direct pathway is stimulated. Furthermore, if we stimulate both input pathways, a mixed pattern appears in the CA1 region (data not shown). It is therefore interesting to consider where these inputs come from. Anatomically, it has been suggested that the multi-synaptic pathway is related to episodic memory and the direct pathway to semantic memory [2]. Our result that the multi-synaptic input produces an aperiodic spatiotemporal pattern while the direct input produces a periodic pattern is consistent with this suggestion, because episodic memory changes over time whereas semantic memory does not.
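The two pathways described above can be sketched schematically in Python. This is a drastic simplification: each region's rule-dynamical NN is reduced here to a single random thresholded feed-forward projection, and all region names, weights, and thresholds are illustrative placeholders for the structure of the simulation, not its actual parameters.

```python
import numpy as np

rng = np.random.default_rng(2)
N = 64   # units per region (illustrative)

def region_step(S_in, W, T):
    """Thresholded response of one region to its afferent pattern."""
    return (W @ S_in - T >= 0).astype(int)

# The two input pathways described in the text
multi_synaptic = ["EC-II", "DG", "CA3", "CA1", "Subiculum", "EC-V"]
direct = ["EC-III", "CA1", "Subiculum", "EC-V"]

def propagate(pathway):
    """Feed a random pattern into the first region and pass it along."""
    weights = {r: (rng.normal(0, 1, (N, N)), 0.5 * np.ones(N))
               for r in pathway[1:]}
    pattern = {pathway[0]: rng.integers(0, 2, N)}   # random input pattern
    for src, dst in zip(pathway, pathway[1:]):
        W, T = weights[dst]
        pattern[dst] = region_step(pattern[src], W, T)
    return pattern

multi_out = propagate(multi_synaptic)
direct_out = propagate(direct)
```

In the paper's model, each region is a full rule-dynamical NN iterated in time, so CA1 develops a temporal pattern rather than the single static response computed here; the sketch only shows how the two routes converge on CA1.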
5. Summary

To approach a complex system such as the brain, we proposed a constructive strategy. First, we presented a rule-dynamical CA, which exhibits a set of complex behaviors, and pointed out that there exists a similarity between the brain and rule-dynamical CA. Second, we showed that rule-dynamical CA can be represented by 2-layered neural networks. Third, we constructed a whole-hippocampus model from rule-dynamical NNs, and showed that the temporal patterns depend on the input pathway. The concept of rule dynamics and the model presented here are rather simple, but this approach is able to capture an essential feature of nervous systems.
References

[1] Y. Aizawa, I. Nishikawa, Toward the classification of the patterns generated by one-dimensional cell automata, in: G. Ikegami (Ed.), Dynamical Systems and Nonlinear Oscillations, World Scientific, Singapore, 1986, pp. 210–212.
[2] H. Duvernoy, The Human Hippocampus, Springer, Berlin, 1998.
[3] F. Rieke, D. Warland, R. de Ruyter van Steveninck, W. Bialek, Spikes: Exploring the Neural Code, MIT Press, Cambridge, MA, 1997.
[4] R.D. Traub, R.K.S. Wong, R. Miles, H. Michelson, A model of a CA3 hippocampal pyramidal neuron incorporating voltage-clamp data on intrinsic conductances, J. Neurophysiol. 66 (1991) 635–650.
[5] S. Wolfram, Theory and Application of Cellular Automata, World Scientific, Singapore, 1986.
Masami Tatsuno received his Ph.D. in Physics from Waseda University, Japan, in 1997. From 1996 to 1999, he was a Research Associate in the Department of Applied Physics, Waseda University. Since 1999, he has been a postdoctoral researcher at the Laboratory for Mathematical Neuroscience, RIKEN Brain Science Institute, Japan. He is currently working on the formulation of a new description of the brain under non-equilibrium conditions, and on the elucidation of memory information representation in the hippocampus.
Yoshinori Nagai is currently a Professor in the Faculty of Political and Economic Sciences, and at the Center for Information Science (additional post), Kokushikan University. He received his Ph.D. in Science from Waseda University in 1983. From 1983 to 1995, he was a full-time staff member of Azabu University, and since 1995 he has been a full-time staff member of Kokushikan University. He was a Visiting Fellow of the Applied Mathematics Department in the Research School of Physical Sciences and Engineering of the Australian National University. His research fields are biophysics, nonlinear dynamical systems, and statistical physics.
Yoji Aizawa received his Ph.D. in Physics from Waseda University, Japan, in 1973. From 1973 to 1977, he was a Research Associate in the Department of Pharmacology, Hokkaido University, Japan, and from 1977 to 1979, he was a Research Fellow at Université Libre de Bruxelles, Belgium. In 1979, he became an Assistant Professor in the Department of Physics, Kyoto University, Japan, and since 1986 he has been a Professor in the Department of Applied Physics, Waseda University. His research fields are statistical mechanics including chaos theory and nonequilibrium nonlinear physics, and theoretical biophysics including neural networks and morphogenetic evolution.