Inhibitory network dependency of Cantor coding in hippocampal CA1

Abstracts / Neuroscience Research 68S (2010) e335–e446

P3-q19 Incremental learning and model selection under virtual concept drifting environments

Koichiro Yamauchi

Chubu University, Department of Information Science

Let the learning samples be (x_b, y_b) (b = 1, 2, ...), whose joint probability distribution is P(x, y) = P(y|x)P(x). In actual environments, the prior distribution P(x) is not stable. For example, the sensory inputs of a robot depend strongly on the robot's location, so the sensory input distribution shifts gradually as the robot moves. To achieve successful online learning of the relation between x and y, namely P(y|x), with a model-based learning machine, we need independent and identically distributed (i.i.d.) x's. However, in cases such as the above, P(x) changes over time, so online learning usually fails. To overcome this problem, many researchers have developed incremental learning algorithms, which allow P(y|x) to be learned even while P(x) is changing. The author has discussed these incremental learning methods from a theoretical point of view using learning strategies under covariate shift, and has redeveloped a suitable learning algorithm. Under covariate shift, the input density P(x) of the learning samples is not equal to that of the test samples. In such environments, learning machines need to adjust their parameters to minimize the following weighted error function in order to acquire greater generalization capability:

E = Σ_i (F(x_i) − f_θ(x_i))² W(x_i),   W(x) = (q(x) / P(x))^λ,   0 < λ < 1,

where W(x) is the weight for each sample, q(x) denotes the density of x for the test samples, f_θ(x) denotes the output of the learning machine, and F(x) denotes the target output. In incremental learning, q(x) corresponds to the input density of the new learning samples presented subsequently. Although the previous study proposed a method for predicting q(x) and a model selection criterion, it did not address how to reduce the computational complexity or the storage space. This paper extends the previous method to reduce the storage space.
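
The weighted error above can be minimized in closed form for a linear model once the two densities are available. The following Python sketch is illustrative only (not the author's algorithm): the densities p_train and q_test are taken as known, whereas in practice both would have to be estimated, and all names are invented for the example.

    import numpy as np

    def covariate_shift_weights(p_train, q_test, lam=0.5):
        # W(x) = (q(x) / P(x)) ** lam, with 0 < lam < 1 as in the abstract.
        return (q_test / p_train) ** lam

    def weighted_least_squares(X, y, w):
        # Closed-form minimizer of E = sum_i w_i * (y_i - X[i] @ theta) ** 2.
        Xw = X * w[:, None]
        return np.linalg.solve(Xw.T @ X, Xw.T @ y)

    # Toy data: training inputs from N(0, 1); the test density is centered
    # at 1, i.e., the input distribution has drifted.
    rng = np.random.default_rng(0)
    x = rng.normal(0.0, 1.0, 200)
    y = np.sin(x) + 0.1 * rng.normal(size=x.size)

    p_train = np.exp(-0.5 * x ** 2) / np.sqrt(2.0 * np.pi)
    q_test = np.exp(-0.5 * (x - 1.0) ** 2) / np.sqrt(2.0 * np.pi)

    w = covariate_shift_weights(p_train, q_test, lam=0.5)
    X = np.column_stack([np.ones_like(x), x])   # linear model f_theta(x) = a + b * x
    theta = weighted_least_squares(X, y, w)

The weighting pulls the fit toward the region where test inputs are expected, at the cost of higher variance; the exponent lam trades off between the two.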

doi:10.1016/j.neures.2010.07.1931

P3-q21 Asymptotic states of a recurrent network under ongoing synaptic plasticity

Takaaki Aoki 1, Yuri Kamitani 1, Toshio Aoyagi 1,2

1 Graduate School of Informatics, Kyoto University, Kyoto 2 Sanbancho, Chiyoda-ku, Tokyo, 102-0075, Japan

How does synaptic plasticity organize the structure of a neural network? In general, learning induces a change of network structure associated with the activity of neurons. However, despite intensive studies of plasticity at the single-synapse level, the organizing process of network structures at the network level remains unclear. In this study, we analyze the dynamics of a recurrent network under spike-timing-dependent plasticity (STDP). Through the plasticity, neuronal activity changes the network structure, and in turn the network structure affects the neuronal activity; in other words, the network and the activity evolve simultaneously. This co-evolution of network and activity makes the system difficult to analyze, so we take a "dynamical systems" approach to the problem. Considering the spontaneous activity of regular-spiking neurons, we introduce a simple model of the co-evolving dynamics and analyze the asymptotic states of this dynamical system. We found three distinct asymptotic states, depending on the form of the STDP learning function. When the learning function is similar to the Hebbian rule, the neurons form two synchronized groups, which can be interpreted as the emergence of neural assemblies. If the learning function is temporally asymmetric, a type of feed-forward connectivity is organized and the neurons generate sequential spike propagation. When the learning function is similar to the anti-Hebbian rule, the dynamical system becomes chaotic; owing to the inherent instability of the chaos, the memory embedded in the network is quickly destroyed. Moreover, we numerically confirmed that these states can be observed in networks of several conductance-based neuron models. In conclusion, the findings from our analytical model provide a clue to understanding the organization of network structures and should help further modeling studies of synthetic nervous systems in computer simulations.
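
As a rough, self-contained illustration of such co-evolving dynamics, the Python sketch below couples phase-oscillator activity to a slow, phase-difference-dependent weight update. The specific forms (sine coupling, cosine learning window shifted by beta) are simplifying assumptions for the example, not the authors' model.

    import numpy as np

    # Co-evolution of activity (phases) and weights under an STDP-like rule.
    N, dt, eps = 50, 0.01, 0.05                 # network size, time step, learning rate
    beta = 0.0                                  # offset that shapes the learning window
    rng = np.random.default_rng(1)
    theta = rng.uniform(0.0, 2.0 * np.pi, N)    # oscillator phases (the "activity")
    w = rng.uniform(-0.1, 0.1, (N, N))          # recurrent coupling weights
    np.fill_diagonal(w, 0.0)

    for step in range(20000):
        dphi = theta[None, :] - theta[:, None]  # dphi[i, j] = theta_j - theta_i
        # Fast activity dynamics: Kuramoto-type coupling through the weights.
        theta = theta + dt * (1.0 + (w * np.sin(dphi)).sum(axis=1) / N)
        # Slow plasticity: each weight relaxes toward a target set by the phase
        # difference of the two neurons; beta shifts the window shape.
        w = w + dt * eps * (-w + np.cos(dphi + beta))
        np.fill_diagonal(w, 0.0)

Varying the window shape (here via beta) between Hebbian-like, temporally asymmetric, and anti-Hebbian-like forms is the axis along which the abstract reports synchronized clusters, feed-forward propagation with sequential spikes, and chaotic erasure of memory.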

doi:10.1016/j.neures.2010.07.1933

P3-q20 Consequences of the imperfectness of the dopamine signal on learning

Wiebke Potjans 1,2, Abigail Morrison 1,3,4, Markus Diesmann 1,5

1 RIKEN Brain Science Institute, Wako, Japan 2 Institute of Neuroscience and Medicine, Research Center Juelich, Germany 3 Functional Neural Circuits, Faculty of Biology, Albert-Ludwigs-University of Freiburg, Germany 4 Bernstein Center Freiburg, Albert-Ludwigs-University of Freiburg, Germany 5 Brain and Neural Systems Team, RIKEN CSRP, Wako, Japan

Learning to make predictions about future rewards and punishments, and to adapt behavior accordingly, is crucial for the survival of any higher organism. Experimental findings in the dopaminergic system suggest that mammals solve this kind of learning problem by implementing a particular learning algorithm known as temporal-difference (TD) learning. This hypothesis is based mainly on the resemblance of the phasic dopaminergic activity to the theoretical TD error (Schultz et al., 1997) and on the finding that cortico-striatal plasticity is modulated by dopamine (Reynolds et al., 2001). However, the phasic dopaminergic signal implements only an imperfect TD error, as it does not have as large a range to represent negative errors as it has to represent positive ones. It is therefore unclear to what extent dopamine-dependent plasticity can implement TD learning. Here, we present a spiking neuronal network model that simultaneously accounts for the generation of a dopaminergic signal with realistic firing rates and for cortico-striatal plasticity in agreement with experimental findings. We demonstrate that the imperfect TD error results in a slightly modified TD learning algorithm with self-adapting learning parameters and offset. We show that the model learns a task with sparse reward with speed and equilibrium performance comparable to a discrete-time TD(0) implementation. Furthermore, we analyze the learning behavior in more complex tasks, where learning is guided mainly by punishment instead of reward.
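
To make the consequence of an asymmetric error signal concrete, here is a small tabular TD(0) sketch in which negative TD errors are compressed before the value update. The chain task, the 10% reward-omission probability, and the neg_scale factor are illustrative assumptions, not the authors' spiking network.

    import numpy as np

    def td0_chain(n_states=10, episodes=2000, alpha=0.1, gamma=0.9, neg_scale=0.2):
        # Tabular TD(0) on a left-to-right chain; the goal reward is delivered
        # with probability 0.9, so omissions generate negative TD errors.
        V = np.zeros(n_states)
        rng = np.random.default_rng(2)
        for _ in range(episodes):
            for s in range(n_states - 1):
                s_next = s + 1
                goal = s_next == n_states - 1
                r = 1.0 if (goal and rng.random() < 0.9) else 0.0
                delta = r + gamma * V[s_next] - V[s]   # TD error
                if delta < 0.0:
                    delta *= neg_scale                 # compressed negative range
                V[s] += alpha * delta
        return V

    V = td0_chain()

In this toy, shrinking the negative errors biases the value estimates above the true expected return; the abstract reports that in the spiking model the same imperfection amounts to a slightly modified TD rule with self-adapting learning parameters and offset.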

References
Reynolds, Hyland and Wickens, Nature 2001, 413: 67-70.
Schultz, Dayan and Montague, Science 1997, 275: 1593-1599.

doi:10.1016/j.neures.2010.07.1932

P3-q22 Inhibitory network dependency of Cantor coding in hippocampal CA1

Yasuhiro Fukushima 1, Minoru Tsukada 1, Ichiro Tsuda 2,3, Yutaka Yamaguti 2, Shigeru Kuroda 2

1 Brain Science Institute, Tamagawa University 2 RIES, Hokkaido Univ, Hokkaido 3 Department of Sci, Hokkaido Univ, Hokkaido

Tsuda (2001) and Tsuda and Kuroda (2001, 2004) theoretically predicted the possibility of Cantor coding in the CA3-CA1 network. Cantor coding provides an information coding scheme for temporal sequences of events; it forms a hierarchical structure in the state space of the neural dynamics. In the model, it is assumed that the CA3 state wanders among quasi-attractors, each of which represents a single episodic event, and that CA3 outputs a temporal sequence of events, which should be encoded in CA1, especially in the temporal dimensions. The input-dependent distribution of CA1 states is hierarchically clustered in the vector space. Our previous study showed a Cantor coding-like property in hippocampal CA1 neurons, where the clustering property depended on the magnitude of the EPSP and on NMDA-type glutamate receptors (Fukushima et al., 2007). Furthermore, the relation between input patterns and recorded responses was shown to be described by iterated function systems, which provides direct evidence of the presence of Cantor coding; the coding quality was also drastically improved by a newly developed virtual reconstruction method using data from multiple neurons (Kuroda et al., 2009). Recently, in order to clarify the detailed properties of Cantor coding, we showed that its sensitivity depends on the interval of the input sequence, both theoretically in two-compartment model neurons (Yamaguti et al., submitted) and physiologically in CA1 pyramidal neurons. However, the molecular mechanism of this input-interval dependency is unclear. In this study, we examined the dependence of Cantor coding on GABAergic control in hippocampal CA1 and found that the interval dependency of Cantor coding was modulated by a GABAA receptor antagonist.
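
For readers unfamiliar with the scheme, the following toy iterated-function-system (IFS) sketch shows the mechanism behind Cantor coding: each input symbol applies a contractive affine map, so after a sequence of inputs the state lies in a nested sub-cluster from which the recent input history can be read off. The two-symbol maps and the contraction rate of 1/3 are illustrative choices, not the CA3-CA1 model discussed above.

    import numpy as np

    # Toy Cantor coding by an IFS: input symbol s applies the contractive map
    # f_s(x) = x / 3 + b_s on [0, 1], producing middle-thirds (Cantor-like) clusters.
    b = {0: 0.0, 1: 2.0 / 3.0}

    def encode(sequence, x0=0.5):
        # Apply the map of each symbol in turn; the final state encodes the
        # recent input history in hierarchically nested thirds of [0, 1].
        x = x0
        for s in sequence:
            x = x / 3.0 + b[s]
        return x

    rng = np.random.default_rng(3)
    seqs = [rng.integers(0, 2, 6).tolist() for _ in range(1000)]
    states = np.array([encode(s) for s in seqs])

    # The most recent symbol sets the coarsest cluster (x < 1/3 vs. x > 2/3);
    # earlier symbols set successively finer sub-clusters.
    last = np.array([s[-1] for s in seqs])
    assert ((states > 0.5) == (last == 1)).all()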

doi:10.1016/j.neures.2010.07.1934