Abstracts / Neuroscience Research 68S (2010) e223–e334
P2-r07 Novel method to describe neural dynamics across brain regions
Toru Yanagawa, Yasuo Nagasaka, Naotaka Fujii
Adaptive Intelligence, RIKEN BSI
Multi-electrode neural recording is increasingly common in neuroscience, but a standard method for analyzing the recorded data, especially in terms of the relationships between brain regions, has not been well developed. If whole spatio-temporal patterns of neural dynamics could be classified into a finite set of patterns, the interaction of brain regions could be described as repetitive combinations of those patterns. In this study we developed a novel method that describes whole spatio-temporal patterns of neural dynamics by combining co-activation motifs, and applied it to neural data in primates. We recorded neural activity in two Japanese macaques using electrocorticogram (ECoG) electrodes while the monkeys performed a social conflict task that required adaptive social behavior. The array consisted of 64 channels and covered prefrontal, premotor, primary motor and parietal cortex in the left hemisphere. A wavelet transform was applied to the data, and nonnegative matrix factorization (NNMF) was applied to the wavelet power spectrum, decomposing it into 10 co-activation factors and the time series of their weights for each frequency band. We found that each factor had a unique motif. The activated brain areas of each motif were clustered and matched anatomically defined brain regions well. Through the NNMF analysis, we could transform the whole neural dynamics into a temporal sequence of factor weights for each frequency band. Each factor's weight sequence was then converted to a binary state sequence, either active or non-active, by applying a threshold, and the probability of co-activation of factors was calculated. We found that the probabilities of observing instantaneous combinations of active factors were stable across experimental days in each monkey.
Furthermore, specific temporal patterns of co-activated factors were correlated with motor action, expectation of food, and the feeling of social pressure. This finding suggests that our method is useful for revealing network mechanisms underlying various cognitive functions. doi:10.1016/j.neures.2010.07.1456
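The pipeline of P2-r07 (wavelet power → NNMF factors → thresholded binary states → co-activation probabilities) can be sketched as follows. Random data stands in for real ECoG wavelet power; the factor count of 10 comes from the abstract, while the toy dimensions, the median threshold, and the plain multiplicative-update NMF are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def nmf(V, k, iters=200, seed=0):
    """Plain multiplicative-update NMF: V (channels x time) ~ W @ H."""
    rng = np.random.default_rng(seed)
    n, t = V.shape
    W = rng.random((n, k)) + 1e-3
    H = rng.random((k, t)) + 1e-3
    for _ in range(iters):
        H *= (W.T @ V) / (W.T @ W @ H + 1e-9)
        W *= (V @ H.T) / (W @ H @ H.T + 1e-9)
    return W, H

# Toy "wavelet power" matrix: 64 channels x 1000 time points.
rng = np.random.default_rng(1)
V = rng.random((64, 1000))
W, H = nmf(V, k=10)  # 10 co-activation factors, as in the abstract

# Binarize each factor's weight sequence at its own median (assumed threshold).
states = (H > np.median(H, axis=1, keepdims=True)).astype(int)

# Probability of each instantaneous combination of active factors:
# encode each time point's 10-bit activity pattern as an integer and count.
codes = np.array([int("".join(map(str, col)), 2) for col in states.T])
vals, counts = np.unique(codes, return_counts=True)
probs = counts / counts.sum()
```

Stability across days, as reported in the abstract, would then amount to comparing `probs` distributions estimated from separate recording sessions.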
performance using three different tactile maps over which the features are defined, each with a different topological organization: (a) random sensor placement, (b) a hand-crafted map preserving relative distances between sensors in space, and (c) a self-organized map preserving sensor distances based on temporal cross-correlations. Inspired by the topological organization observed in somatosensory cortex, we hypothesized that map (b) would yield the best performance, followed by (c), then (a). Surprisingly, we found the opposite result: preserving positional representation in the map actually reduced accuracy compared to random organization. We hypothesize that this may be due to positional bias in the training data, which interferes with learning the “touch/no-touch” classification task. This leads us to speculate whether other touch-related tasks may also benefit from features that do not preserve spatial organization, and if so, whether evidence for such features might be found in the brain. doi:10.1016/j.neures.2010.07.1458
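As a rough illustration of how map (c) above might be built, the following sketch embeds toy sensors in 2D using their temporal cross-correlations as distances, via classical MDS. The sensor count, random data, and the choice of MDS over a true self-organizing map are assumptions for illustration, not the authors' method:

```python
import numpy as np

# Toy data: 16 sensors over 500 time steps (a stand-in for the
# 197-sensor recordings described in the abstract).
rng = np.random.default_rng(0)
X = rng.standard_normal((16, 500))

# Map (c): distances between sensors defined by temporal
# cross-correlation rather than by physical position.
C = np.corrcoef(X)          # sensor x sensor correlation
D = 1.0 - np.abs(C)         # correlation distance

# Classical MDS: recover 2D coordinates from the distance matrix.
n = D.shape[0]
J = np.eye(n) - np.ones((n, n)) / n      # centering matrix
B = -0.5 * J @ (D ** 2) @ J
w, V = np.linalg.eigh(B)
idx = np.argsort(w)[::-1][:2]            # top-2 eigenvalues
coords = V[:, idx] * np.sqrt(np.maximum(w[idx], 0))  # 2D sensor map

# Map (a), random placement, is just a shuffled assignment of sensors.
random_map = rng.permutation(n)
```

Features defined over `coords` would then respect correlation structure, whereas features over `random_map` ignore any organization, matching the comparison in the abstract.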
P2-r10 Human machine interface: Hypotheses involving body schema and internal model representations
Erhan Oztop 1, Akira Murata 2, Hiroshi Imamizu 3, Mitsuo Kawato 4
1 ATR/NICT, 2 Kinki University, Japan, 3 NICT/ATR, 4 ATR
A machine or a robot that is controlled by a human operator can be considered a tool, despite its high complexity. Robotic experiments indicate that humans learn to control robots very efficiently, provided that the interface is designed intuitively. The critical question is the neural basis of this skilled behavior; internal model and body schema mechanisms must play an important role. We are currently designing fMRI experiments to answer these questions. We expect that, depending on the anthropomorphic nature of the robot, the way the brain handles the control will differ, and that this difference will be picked up by fMRI imaging. doi:10.1016/j.neures.2010.07.1459
P2-r08 Methods for investigating how the central nervous system utilizes and controls kinematic redundancy in reaching and grasping movements
Brian Moore 1, Erhan Oztop 1,2
1 ATR-CMC, 2 NICT
Our research interest is in synthesizing human-like reaching and grasping on robotic systems through understanding the principles underlying human control of these actions. When the control and task requirements are defined in Cartesian space, the inverse kinematics problem needs to be solved. For non-redundant manipulators, a desired end-effector position and orientation can usually be achieved by a single solution. For redundant manipulators, however, there are in general infinitely many solutions, among which an appropriate one must be selected. From a neuroscientific point of view, it is still an unsettled issue how our brains handle motor control for limbs with highly redundant degrees of freedom, in particular the arms and hands. To address some of these issues, we are currently developing methods that will allow us to quantitatively assess how humans utilize the extra degrees of freedom in human arm kinematics.
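One standard way to make the redundancy problem above concrete, as a sketch rather than the authors' method, is damped-pseudoinverse inverse kinematics with a null-space term that selects among the infinitely many joint solutions. Here a planar 3-link arm (one redundant degree of freedom for a 2D target) is pulled toward a rest posture inside the task null space; link lengths, gains and the rest posture are illustrative assumptions:

```python
import numpy as np

def fk(q, lengths):
    """Forward kinematics of a planar serial arm: joint angles -> (x, y)."""
    angles = np.cumsum(q)
    return np.array([np.sum(lengths * np.cos(angles)),
                     np.sum(lengths * np.sin(angles))])

def jacobian(q, lengths):
    """Analytic 2x3 Jacobian of the planar chain."""
    angles = np.cumsum(q)
    J = np.zeros((2, 3))
    for i in range(3):
        J[0, i] = -np.sum(lengths[i:] * np.sin(angles[i:]))
        J[1, i] = np.sum(lengths[i:] * np.cos(angles[i:]))
    return J

def ik_step(q, target, lengths, q_rest, alpha=0.1):
    """One damped-pseudoinverse step; the null-space term resolves the
    redundancy by pulling the posture toward q_rest without moving the hand."""
    J = jacobian(q, lengths)
    e = target - fk(q, lengths)
    J_pinv = J.T @ np.linalg.inv(J @ J.T + 1e-6 * np.eye(2))
    null = np.eye(3) - J_pinv @ J        # projector onto the task null space
    return q + J_pinv @ e + alpha * null @ (q_rest - q)

lengths = np.array([1.0, 1.0, 0.5])
q = np.array([0.3, 0.3, 0.3])
target = np.array([1.2, 1.0])
for _ in range(200):
    q = ik_step(q, target, lengths, np.zeros(3))
residual = np.linalg.norm(fk(q, lengths) - target)  # should be near zero
```

Changing `q_rest` (or replacing the null-space term with, e.g., an effort- or comfort-based cost) selects a different solution from the same infinite family, which is exactly the selection problem the abstract attributes to the central nervous system.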
P2-r11 Discovering action-oriented object meanings in an anthropomorphic robot platform
Emre Ugur 1,2,3, Erol Sahin 3, Erhan Oztop 1,2
1 Biological ICT Group, National Institute of Information and Communication Technology, Kyoto, Japan, 2 Cognitive Mechanisms Laboratory, Advanced Telecommunications Research Institute International, Kyoto, Japan, 3 Department of Computer Engineering, Middle East Technical University, Ankara, Turkey
It is known that the primate brain processes visual information along at least two pathways: the dorsal and the ventral pathway. The ventral pathway appears to be responsible for object identification, whereas the dorsal pathway is more involved in perception for action. According to the ecological psychologist J.J. Gibson, organisms do not need to recognize the action-free meanings of objects provided by the ventral pathway in order to act on them. In this study, following the Gibsonian view of the action-perception cycle, we employ an anthropomorphic robotic hand to learn action-oriented meanings of objects through active, unsupervised exploration of the environment.
doi:10.1016/j.neures.2010.07.1457
doi:10.1016/j.neures.2010.07.1460
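The active-exploration idea of P2-r11 above can be caricatured in a few lines: a hypothetical robot pushes objects, records only the observed effect, and clusters effects without supervision, so that a category like "rollable" emerges as an action-oriented meaning rather than an action-free label. All data and the simple 2-means step are illustrative assumptions, not the authors' platform:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical exploration log: for each of 200 push trials the robot
# records a visual feature (roundness) and the observed effect
# (displacement). Round objects roll far; boxy ones barely move.
roundness = rng.random(200)
displacement = np.where(
    roundness > 0.5,
    2.0 + rng.normal(0, 0.2, 200),            # rolls away
    0.1 + np.abs(rng.normal(0, 0.05, 200)))   # stays put

# Unsupervised step: 2-means clustering on the effect alone.
centers = np.array([displacement.min(), displacement.max()])
for _ in range(20):
    assign = np.abs(displacement[:, None] - centers[None, :]).argmin(axis=1)
    centers = np.array([displacement[assign == k].mean() for k in (0, 1)])

# The discovered effect clusters act as action-oriented "meanings"
# ("rollable" vs. "not rollable") that a predictor could later map
# from the visual feature, before acting.
rollable = assign == centers.argmax()
```

Note that no object identity or human label enters the loop; the meaning is defined entirely by what the push action does, in the Gibsonian spirit of the abstract.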
P2-r09 Classifying tactile sensation of whole-body robot skin based on localization hypothesis
P2-r12 eMOSAIC model for humanoid robot control
Tomoyuki Noda 1, Ian R. Fasel 2, Hiroshi Ishiguro 3,4
Norikazu Sugimoto 1,2, Jun Morimoto 2, Sang-Ho Hyon 2,3, Mitsuo Kawato 2
A robot with flexible tactile sensors covering its whole body experiences considerable amounts of self-sensation, which can be difficult to distinguish from sensations caused by an external event (such as a human touching the robot). We present a machine-learning approach in which we first define a set of spatial features parameterized by size and position with respect to a 2D tactile sensor map, and then use an ensemble classifier to combine a small set of these features to distinguish “touch” versus “no touch” sensor data. For training the system, a database was collected in which a humanoid robot with 197 whole-body tactile sensors continuously made random, “ambient” movements while humans interacted with it. Each time instant of the interactions was then labeled for ground truth, i.e., as containing “touch” versus “no touch” by the human. We compared classification
In this study, we propose a novel extension of the MOSAIC architecture to control real humanoid robots. The MOSAIC architecture was originally proposed by neuroscientists to clarify the human ability of adaptive control. The modular architecture of the MOSAIC model can be useful for solving nonlinear and nonstationary control problems. Both humans and humanoid robots have nonlinear body dynamics and many degrees of freedom. In addition, they can carry objects, and this makes the dynamics nonstationary. Therefore, the MOSAIC architecture can be considered a promising candidate as a motor control model of humans and a control framework for humanoid robots. However, the application of the MOSAIC model has been limited to simple simulated dynamics. Since each module of the MOSAIC has a forward model, we can adopt this model to construct a state estimator. By using the state estimators, the extended MOSAIC model can deal with large observation noise.
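The module competition at the core of the MOSAIC architecture described above can be sketched as follows: each module's forward model predicts the next state, and a softmax over prediction errors assigns responsibility, so the module whose model matches the current (possibly nonstationary) dynamics takes over. The two linear models, noise scale, and inputs below are toy assumptions, not the eMOSAIC implementation:

```python
import numpy as np

# Two modules, each holding a linear forward model x' = A_i x + B_i u,
# e.g. the same arm with and without a carried object.
A = [np.array([[1.0, 0.1], [0.0, 1.0]]),     # module 0: light load
     np.array([[1.0, 0.1], [0.0, 0.8]])]     # module 1: heavy load
B = [np.array([[0.0], [0.1]]),
     np.array([[0.0], [0.05]])]

def responsibilities(x, u, x_next, sigma=0.05):
    """Softmax of negative squared prediction errors over modules."""
    errs = [np.sum((x_next - (Ai @ x + Bi @ u)) ** 2)
            for Ai, Bi in zip(A, B)]
    lik = np.exp(-np.array(errs) / (2 * sigma ** 2))
    return lik / lik.sum()

# Observe a transition generated by the "heavy load" dynamics;
# the matching module should receive nearly all the responsibility.
x = np.array([0.0, 1.0])
u = np.array([0.5])
x_next = A[1] @ x + B[1] @ u
r = responsibilities(x, u, x_next)
```

The extension mentioned in the abstract would replace the direct comparison against `x_next` with a per-module state estimator (each forward model driving a filter), so that responsibilities remain reliable under large observation noise.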
1 ATR-CNS, Kyoto, Japan, 2 Department of Computer Science, University of Arizona, 3 JST ERATO Asada Project, 4 Graduate School of Engineering Science, Osaka University
1 NICT Bio-ICT Group, Kyoto, Japan, 2 ATR-CNS, Kyoto, Japan, 3 Department of Robotics, Ritsumeikan University, Japan