NEURONAL MAPS FOR SENSORY-MOTOR CONTROL IN THE BARN OWL. J.C. Pearson 1, C.D. Spence 1, J.J. Gelfand 1, W.E. Sullivan 2 and R.M. Peterson 1. (1) David Sarnoff Research Center, Subsidiary of SRI International, CN5300, Princeton, NJ 08543; (2) Department of Biology, Princeton University, Princeton, NJ 08544. We present models and computer simulations of the neural structures that create fused visual/auditory/motor representations of space in the midbrain of the Barn Owl. These representations are used to orient the head so that visual or auditory stimuli are centered in the visual field of view. This is an example of the problem of sensory-motor control in robotics, a problem that admits many solutions. The Barn Owl's solution is based on map-like neural networks, or "neuronal maps". Neuronal maps are two-dimensional arrays of locally interconnected neurons whose response properties vary systematically with position in the array, thus forming a map-like representation of information. The feed-forward, locally connected architecture of neuronal maps is well suited to modular, hierarchical processing. Computation is performed in stages by transforming the representation from one map to the next via the geometry of the projections between the maps and the local interactions within the maps. The fidelity of these transformations is maintained through dynamic processes of self-organization, endowing them with self-optimizing and fault-tolerant properties. High resolution is achieved at the behavioral level even though individual neurons are broadly tuned to stimulus parameters. Neuronal maps are prevalent in the visual and somatosensory systems, as one might expect, since the transducer arrays (the eyes and the skin) are themselves maps. This is not the case for hearing, in which the cochlea contains only a one-dimensional map of frequency.
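The coarse-coding property noted above, high behavioral resolution despite broadly tuned neurons, can be illustrated with a minimal sketch (our illustration, not a model from the abstract): a one-dimensional slice of a neuronal map whose population activity is decoded by its centroid.

```python
import numpy as np

# Illustrative sketch: each map neuron is broadly (Gaussian) tuned to
# stimulus azimuth, yet decoding by the activity-weighted centroid
# recovers the stimulus far more precisely than any single neuron's
# tuning width.  All parameter values here are arbitrary.

positions = np.linspace(-90.0, 90.0, 37)   # preferred azimuths, 5-degree grid
sigma = 20.0                               # broad tuning width (degrees)

def map_response(stimulus_deg):
    """Activity of every map neuron for a stimulus at stimulus_deg."""
    return np.exp(-((positions - stimulus_deg) ** 2) / (2.0 * sigma ** 2))

def decode_centroid(activity):
    """Read out stimulus position as the activity-weighted centroid."""
    return float(np.sum(positions * activity) / np.sum(activity))

estimate = decode_centroid(map_response(17.3))
# The estimate lands within a fraction of a degree of 17.3, even though
# each neuron is tuned roughly 20 degrees wide.
```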
Thus it is particularly interesting that the auditory system derives, or computes, a two-dimensional map of space instead of using a simpler method for representing the position of acoustic stimuli. Several difficult signal-processing problems must be overcome in the generation of an acoustic map of space, such as phase ambiguity and ghosts. Possible advantages of the map design are greater ease of calibration with the inherently two-dimensional visual representation of space, and the ability to handle multiple objects. The elevation and azimuth of an acoustic stimulus are encoded by the position of the region of activity it produces in the neuronal map of acoustic space. This acoustic space map is derived from differences in the phase and intensity of the acoustic signals arriving at the two ears. Azimuth and elevation are calculated in separate, parallel channels. Azimuth is computed in two stages, starting from an initial neuronal map of frequency vs. time delay. Elevation is also computed in two stages, using maps of frequency vs. interaural intensity difference. We present quantitative models of these operations that account for the qualitative features of the neural structures. The acoustic space map projects onto the optic tectum, as does the retina, forming a fused visual/acoustic mapping of space; i.e., visual and acoustic stimuli at the same elevation and azimuth will activate cells in regions centered on the same point of the optic tectum. This fusion is adaptive while the owl is growing, as has been shown by experiments in which prisms and/or ear plugs were used to produce misregistration. Registration is restored over a period of days and weeks as the acoustic mapping shifts so as to coincide with the visual mapping. We present a mathematical model of this adaptive fusion. The model assumes that the acoustic projection onto the tectum is topographic but broad, while the visual projection is topographic and sharp.
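The first stage of the azimuth computation, a map of frequency vs. time delay, can be sketched with a standard coincidence-detection (Jeffress-style) array; this is our hedged illustration of the general principle, not the abstract's quantitative model. Each cell applies an internal delay to one ear's signal, and the cell whose delay cancels the interaural time difference responds maximally, so the peak's position encodes azimuth.

```python
import numpy as np

# Hedged sketch of coincidence detection for interaural time difference
# (ITD).  Parameter values are arbitrary assumptions for illustration.

rate = 48000                        # sample rate, Hz
t = np.arange(0, 0.02, 1.0 / rate)  # 20 ms of signal
itd_samples = 5                     # true ITD imposed on the right ear

left = np.sin(2 * np.pi * 3000.0 * t)   # a single-frequency channel
right = np.roll(left, itd_samples)      # right-ear signal lags by the ITD

lags = np.arange(-10, 11)           # internal delays of the cell array

def coincidence(lag):
    """Response of the delay-line cell with internal delay `lag`."""
    return float(np.sum(np.roll(left, lag) * right))

responses = np.array([coincidence(lag) for lag in lags])
best_lag = int(lags[np.argmax(responses)])
# best_lag recovers itd_samples.  For a pure tone the response repeats
# every signal period (16 samples at 3 kHz), so a "ghost" peak would
# appear at lag -11 if the array extended that far: the phase-ambiguity
# problem mentioned above, which the second processing stage must resolve.
```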
The acoustic connection strengths are altered through a process of self-organization so as to form a focused beam of strong acoustic connections whose inputs are coincident with the visual inputs. The activation of a region in the tectum can elicit head movements that center the corresponding point of space in the visual field of view. Recent studies have shown that simultaneous stimulation of two regions will orient the head toward the point in space that is midway between the points corresponding to the two regions of stimulation. We present two models of this averaging behavior: one is purely neural, the other combines neural and motor mechanisms. These studies explore how the system handles multiple objects in the environment. The purpose of this work is to quantitatively characterize the computations carried out at the map level, and to determine the underlying mechanisms at the neuronal level. The immediate application of this work is to neurobiology, for the models make a number of specific, experimentally testable predictions about this system. In addition, we hope that rigorous study of this particular system will aid in understanding the general principles of computing with neuronal maps, and will suggest new neural network architectures and algorithms.
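The refocusing of broad acoustic connections onto visually defined tectal locations can be sketched with a visually instructed Hebbian rule; this is our assumed learning rule for illustration, not the mathematical model presented in the abstract. With a prism-like offset between the modalities, the acoustic weights drift until the two maps register.

```python
import numpy as np

# Hedged sketch: W[i, j] is the strength from acoustic channel j onto
# tectal cell i, initially broad and roughly topographic.  A Hebbian
# coincidence term gated by the sharp visual input, plus normalization,
# refocuses each row onto the visually driven location.  All parameters
# are illustrative assumptions.

rng = np.random.default_rng(0)
n = 61                                  # tectal positions / acoustic channels
x = np.arange(n, dtype=float)

def gaussian(center, sigma):
    return np.exp(-((x - center) ** 2) / (2.0 * sigma ** 2))

W = np.array([gaussian(i, 15.0) for i in range(n)])  # broad acoustic weights
prism_shift = 8                         # visual map displaced by 8 positions
lr = 0.05

for _ in range(300):
    src = int(rng.integers(10, n - 10))        # random sound-source position
    acoustic = gaussian(src, 3.0)              # acoustic input vector
    visual = gaussian(src + prism_shift, 3.0)  # sharp, shifted visual teacher
    W += lr * np.outer(visual, acoustic)       # Hebbian coincidence term
    W /= W.sum(axis=1, keepdims=True)          # normalization bounds the rows

# After training, the acoustic drive for a source at position 30 peaks
# near 30 + prism_shift: registration with the visual map is restored.
tectal_drive = W @ gaussian(30.0, 3.0)
learned_peak = int(np.argmax(tectal_drive))
```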
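The averaging behavior under two-point stimulation can be illustrated with a toy readout; a centroid readout is our assumption here, not necessarily either of the two models the abstract presents. If the motor command is the activity-weighted centroid of the tectal map, two equal bumps of activity command a movement to their midpoint.

```python
import numpy as np

# Illustrative sketch: a centroid readout of tectal activity averages
# two simultaneously stimulated regions.  Parameters are arbitrary.

positions = np.linspace(-90.0, 90.0, 181)   # tectal map axis, degrees

def bump(center, sigma=10.0):
    """Localized region of activity centered at `center`."""
    return np.exp(-((positions - center) ** 2) / (2.0 * sigma ** 2))

def motor_command(activity):
    """Head-turn amplitude as the centroid of map activity."""
    return float(np.sum(positions * activity) / np.sum(activity))

two_targets = bump(-30.0) + bump(50.0)
command = motor_command(two_targets)
# command falls midway between -30 and +50, i.e. near +10 degrees.
```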