
Brain maps and parallel computers

Mark E. Nelson and James M. Bower

It is well known that neural responses in many brain regions are organized in characteristic spatial patterns referred to as brain maps. It is likely that these patterns in some way reflect aspects of the neural computations being performed, but to date there are no general guiding principles for relating the structure of a brain map to the properties of the associated computation. In the field of parallel computing, maps similar to brain maps arise when computations are distributed across the multiple processors of a parallel computer. In this case, the relationship between maps and computations is well understood, and general principles for optimally mapping computations onto parallel computers have been developed. In this paper we discuss how these principles may help illuminate the relationship between maps and computations in the nervous system.

Historically, descriptions of brain function tend to be cast in terms of the most sophisticated technology of the day. For example, early Greeks, influenced by the technology of aqueducts, described mental processes in terms of the flow of bodily fluids; Descartes framed nervous function in terms of machines and mechanical forces; and Sherrington used the analogy of a telephone switchboard. Today, brain function is most often described in terms of circuits and computations, reflecting the modern influences of electronics and computers. Although technology-based metaphors eventually become obsolete, they can serve a useful purpose by providing new conceptual frameworks for generating ideas and posing questions concerning brain function. In that spirit, we will draw on the emerging technology of parallel computing in an attempt to gain new insights into parallel processing in the brain.

Recent progress in the field of parallel computing has demonstrated the practicality of harnessing large numbers of modest processors together to achieve remarkable levels of computing performance 1,2.

Parallel computers have been shown to be capable of solving difficult problems in a wide variety of scientific and engineering fields, including computational neuroscience. In fact, our own initial involvement in parallel computing arose primarily from our interest in carrying out large-scale simulations of biological neural networks 3,4. However, while learning to use parallel computers for this practical purpose, we became aware that some of the parallel processing issues we were facing seemed to have closely related counterparts in neuroscience. In particular, the question of how to map a computation optimally onto multiple processors seemed to be a fundamental issue, whether the individual processors were silicon chips or neurons. In this paper, we address this question by describing how optimal maps are constructed on parallel computers and discussing how the principles that have emerged from this effort might apply to maps in the brain.

Mark E. Nelson and James M. Bower are at the Division of Biology, Computation and Neural Systems Program, California Institute of Technology, Pasadena, CA 91125, USA.

Parallel computer maps

In principle, a parallel computer has the potential to deliver computing power equivalent to the total power of the processors from which it is constructed: a machine with 100 processors can potentially deliver 100 times the computing power of a single processor. In practice, however, the performance or computational efficiency that can be achieved is always less than this ideal value. For a given computational task, one of the factors that most influences this efficiency is how the computation is mapped onto the available processors 5. In parallel programming, the efficiency of a particular parallel mapping is analysed in terms of two potential sources of inefficiency referred to as 'load imbalance' and 'communication overhead' (see Box 1 for a mathematical description). Load balance is a measure of how uniformly the computational workload is distributed among the available processors. Since the speed of a parallel computation is limited by the speed of the slowest processor, unduly burdening even a single processor can dramatically degrade the overall performance of a parallel machine. In the extreme case of all processors waiting for one processor to finish some essential step in a computation, a parallel computer is effectively reduced to a sequential machine. Communication overhead is related to the cost of communicating information between processors. On parallel computers, this overhead is primarily associated with the time-cost for exchanging information between processors, although there is also a cost in terms of the physical space taken up by connections between processors.

When attempting to achieve maximum performance from a parallel computer, a programmer creates a custom-tailored mapping for each computational task that minimizes the combined contributions of load imbalance and communication overhead for that task. In some cases this is accomplished by applying simple heuristics 5, while in others it requires the explicit use of optimization techniques like simulated annealing 6 or artificial 'neural network' approaches 7. Independent of the technique that is actually used to find an optimal map, the structure of the map will depend on certain properties of the underlying computation itself. Thus different types of computations give rise to different kinds of optimal maps on parallel computers. These maps can be broadly categorized into three classes based on their spatial structure, as illustrated in Fig. 1.

Continuous maps. The most common class of maps is that in which some computationally relevant parameter is represented in a smooth and continuous manner. These maps are optimal for computations that are characterized by predominantly local interactions in the 'problem space'. An example of such a computation would be the smoothing of intensity values in an image using an algorithm that depends on the intensity values of nearby pixels. Because the algorithm only requires information about neighboring pixels, the communication overhead would be minimized when neighboring parts of the image are mapped to neighboring processors. A smooth and continuous mapping of the problem space onto the 'processor space' maintains these spatial relationships, as illustrated in Fig. 1A. Given that a continuous mapping minimizes communication overhead, the only remaining concern is that the mapping should also minimize load imbalance. If particular regions of the problem space represent larger or smaller computational loads than average, load balance is achieved by expanding or contracting appropriate regions of the map while still maintaining continuous spatial relationships.

Scattered maps. At the opposite end of the mapping spectrum from continuous maps are maps that show no apparent spatial structure or organization. Interestingly, such non-topographic maps arise on parallel computers as near-optimal solutions for a variety of computations, all of which are characterized by the lack of systematic structure in the pattern of interactions in the problem space. These computations can be divided into three categories: cases in which no interactions take place, cases in which widespread interactions take place, and cases in which the interaction patterns are unpredictable. In each of these situations, there is no useful structure in the pattern of interactions that would allow the communication overhead to be reduced by a judicious mapping. Thus the major concern shifts to ensuring that the computational load is balanced. A scattered mapping automatically achieves this goal by homogenizing the problem space so as to distribute the computational load uniformly.

Box 1. Quantifying computational efficiency on parallel computers

A perfectly efficient implementation of a computational task on a parallel computer with N processors would give a factor N speed-up in computation time; in practice, the actual speed-up is always less than this ideal. Thus the ratio of the actual speed-up, σ, to the ideal speed-up, N, can serve as a measure of the efficiency, ε, of a parallel implementation:

$\epsilon = \sigma / N$

On parallel computers, the load imbalance, λ, is defined in terms of the average calculation time per processor, T_avg, and the maximum calculation time required by the busiest processor, T_max:

$\lambda = (T_{max} - T_{avg}) / T_{avg}$

The communication overhead, η, is defined in terms of the maximum calculation time T_max and the maximum communication time T_comm:

$\eta = T_{comm} / (T_{max} + T_{comm})$

Assuming that the calculation and communication phases of a computation do not overlap in time, as is the case for many parallel computers, the relationship between efficiency, ε, load imbalance, λ, and communication overhead, η, is given by 5:

$\epsilon = (1 - \eta) / (1 + \lambda)$

When both the load imbalance, λ, and the communication overhead, η, are small, the inefficiency is approximately the sum of the contributions from load imbalance and communication overhead:

$\epsilon \approx 1 - (\eta + \lambda)$

As an example, assume that an iteration of some calculation averages 100 ms over all processors, but takes 110 ms on the busiest, and that between every iteration there is a 5 ms period during which information is communicated between processors. In this example, the load imbalance would be 0.10, the communication overhead would be about 0.04, and the resulting efficiency about 86%. In other words, a parallel computer consisting of 100 processors could complete the computation about 86 times faster than a single processor performing the task sequentially. In many parallel applications today, efficiencies of 80-90% or better are routinely obtained.
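To make the Box 1 quantities concrete, the following minimal sketch (ours, not from the original paper) restates the definitions in Python and reproduces the worked example from the box; the function names and millisecond timings are simply the box's illustration written as code.

```python
# Box 1 definitions, with the worked example from the box:
# 100 ms average calculation time, 110 ms on the busiest processor,
# and 5 ms of communication between iterations.

def load_imbalance(t_avg, t_max):
    """lambda = (T_max - T_avg) / T_avg"""
    return (t_max - t_avg) / t_avg

def comm_overhead(t_max, t_comm):
    """eta = T_comm / (T_max + T_comm)"""
    return t_comm / (t_max + t_comm)

def efficiency(lam, eta):
    """epsilon = (1 - eta) / (1 + lambda), assuming the calculation
    and communication phases do not overlap in time."""
    return (1 - eta) / (1 + lam)

lam = load_imbalance(t_avg=100.0, t_max=110.0)  # 0.10
eta = comm_overhead(t_max=110.0, t_comm=5.0)    # ~0.043
eps = efficiency(lam, eta)                      # ~0.87

print(f"load imbalance lambda      = {lam:.2f}")
print(f"communication overhead eta = {eta:.3f}")
print(f"efficiency epsilon         = {eps:.2f}")
print(f"small-term approximation   = {1 - (eta + lam):.2f}")  # ~0.86
```

On 100 processors this corresponds to completing the task roughly 86 times faster than a single processor, as stated in the box.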

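The optimization step mentioned above can also be sketched. The toy simulated-annealing mapper below is our own illustration in the spirit of ref. 6, not code from the paper; the grid size, random per-item loads, cost weighting and cooling schedule are all arbitrary choices made for the example.

```python
import math
import random

# Assign 64 work items on an 8 x 8 grid to 4 processors, minimizing a
# cost that mixes load imbalance with a communication proxy (the
# fraction of neighboring items split across processors).
GRID, NPROC = 8, 4
items = [(i, j) for i in range(GRID) for j in range(GRID)]
load = {it: random.uniform(0.5, 1.5) for it in items}   # per-item work
edges = [((i, j), (i + di, j + dj))
         for (i, j) in items for di, dj in ((0, 1), (1, 0))
         if i + di < GRID and j + dj < GRID]            # 4-neighbor graph

def cost(assign):
    work = [0.0] * NPROC
    for it, p in assign.items():
        work[p] += load[it]
    avg = sum(work) / NPROC
    imbalance = (max(work) - avg) / avg                 # Box 1 lambda analog
    cut = sum(assign[a] != assign[b] for a, b in edges) / len(edges)
    return imbalance + cut      # the relative weighting is a free choice

assign = {it: random.randrange(NPROC) for it in items}  # random start
temperature = 1.0
while temperature > 1e-3:
    it = random.choice(items)
    old = assign[it]
    before = cost(assign)
    assign[it] = random.randrange(NPROC)
    delta = cost(assign) - before
    # Metropolis rule: always keep improvements, occasionally keep
    # uphill moves so the search can escape local minima.
    if delta > 0 and random.random() >= math.exp(-delta / temperature):
        assign[it] = old
    temperature *= 0.999

print(f"final cost: {cost(assign):.3f}")  # low cost = balanced, compact map
```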
Fig. 1. Parallel computer maps and brain maps. (A-C) Three general classes of parallel computer maps: (A) continuous mapping, (B) patchy mapping and (C) scattered mapping. In each case, the pixels from a 256 x 256 MRI brain-scan image have been mapped onto a 4 x 4 array of computer processors, with each processor being assigned responsibility for one-sixteenth of the total pixels. Depending on the nature of the computation being performed, different mappings give rise to different computational efficiencies. The types of computations for which each map class is optimal are described in the text. (D-F) Examples of brain maps that appear to fall into these same categories: (D) continuous mapping of tactile inputs in somatosensory cortex of the rat, (E) patchy mapping of tactile inputs to cerebellar cortex in the rat, and (F) scattered mapping of olfactory input to piriform (olfactory) cortex of the rat, here represented by the 2-DG uptake pattern in a single section of this cortex.

Figure 1C illustrates a scattered mapping of the image data from Fig. 1A. As can be seen from this figure, a scattered mapping assigns an almost identical mixture of the problem space to each processor.

Patchy maps. Perhaps the most interesting map class is that in which the spatial organization is intermediate between continuous and scattered maps. Maps in this class are characterized by a number of patches; within each patch the representation is smooth and continuous, but neighboring patches may represent very different parts of the problem space. These maps are characteristically associated with computations in which the interactions have both a local and a non-local component. An example of such a problem might be the calculations required to manipulate an object using a robotic hand. In this case, localizing the contact region of each robotic fingertip requires local interactions among nearby pressure sensors, while coordinating the use of several fingers to manipulate the object requires non-local interactions among all fingers. The optimal mapping for this problem would be one that combines a continuous mapping of the locally interacting sensors on each finger with a scattered mapping of the interacting sensor groups on different fingers. Figure 1B illustrates this type of patchy mapping, again using the same image data as Fig. 1A.
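The three map classes are easy to state as index arithmetic. The sketch below is a hypothetical illustration (not taken from the paper): it assigns the pixels of a 256 x 256 image to 16 processors in the continuous, scattered and patchy styles of Fig. 1A-C, and counts the fraction of 4-neighbor pixel pairs that land on different processors, a rough proxy for the communication overhead of a purely local computation such as image smoothing.

```python
import numpy as np

N, P_SIDE = 256, 4           # image side, processor-array side (4 x 4 = 16)
BLOCK = N // P_SIDE          # 64 x 64 pixels per processor in the continuous map

def continuous(i, j):
    """Continuous map: neighboring pixels go to the same or adjacent processors."""
    return (i // BLOCK) * P_SIDE + (j // BLOCK)

def scattered(i, j):
    """Scattered map: pixels are dealt out with no spatial structure."""
    return (3 * i + 5 * j) % 16            # a simple scramble; any hash will do

def patchy(i, j):
    """Patchy map: continuous 32 x 32 patches, scattered between patches."""
    return (3 * (i // 32) + 5 * (j // 32)) % 16

def comm_fraction(mapping):
    """Fraction of 4-neighbor pixel pairs that straddle two processors."""
    ii, jj = np.meshgrid(np.arange(N), np.arange(N), indexing="ij")
    owner = np.vectorize(mapping)(ii, jj)
    cross = ((owner[:-1, :] != owner[1:, :]).sum()
             + (owner[:, :-1] != owner[:, 1:]).sum())
    return cross / (2 * N * (N - 1))

for name, m in [("continuous", continuous), ("patchy", patchy),
                ("scattered", scattered)]:
    print(f"{name:10s} cross-processor neighbor pairs: {comm_fraction(m):.1%}")
# continuous ~1%, patchy a few percent, scattered ~100%: this is why
# continuous maps win for local computations, while scattered maps pay no
# penalty (and balance load automatically) when there is no locality to exploit.
```

All three mappings here give each processor exactly one-sixteenth of the pixels, so load is balanced by construction; the classes differ only in how much neighbor traffic they generate.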



Brain maps

Interestingly, maps representing each of the categories described above (continuous, patchy, scattered) can also be found in the mammalian CNS. Since the structure of parallel computer maps arises from considerations of computational efficiency, it is natural to ask whether or not such considerations might also be reflected in the organization of brain maps. To explore this question further, we will compare a representative brain map from each category with its corresponding parallel computer map (Fig. 1). In particular, we will analyse the neural architecture of each brain region (Fig. 2) on the assumption that the map structure should be correlated with these interaction patterns. If the analogy with parallel computer maps holds, continuous maps should have predominantly local interactions, patchy maps should have a mixture of local and non-local interactions, and scattered maps should have no systematic structure in their interactions.

Continuous maps. Figure 1D shows an example of a smooth and continuous brain map. This map represents the pattern of afferent tactile projections to the primary somatosensory (SI) cortex of a rat 8,9. Continuous maps are also observed in most other primary sensory regions of cerebral cortex, including tonotopic maps in auditory cortex 10 and retinotopic maps in visual cortex 11.


Since continuous maps on parallel computers are optimal for computations involving predominantly local interactions in the problem space, we would expect the computations associated with continuous brain maps also to be dominated by local interactions. The intrinsic neural circuitry in primary sensory neocortical regions is consistent with this expectation. In fact, neocortex is traditionally described as being composed of tightly packed 'columns' of locally interconnected circuitry 12. Although this description is oversimplified 13, the predominant feature of cerebral cortical networks is still a high degree of local connectivity. This point is illustrated schematically in Fig. 2A, which represents the pattern that would be obtained by labeling the postsynaptic targets of a small cluster of neurons in the input layer (layer IV) of somatosensory cortex. The anatomical substratum for this local distribution of incoming information is the local axonal arborization of stellate cells, as illustrated in Fig. 2D. The predominant locality of processing in primary sensory neocortex is also consistent with the types of computations commonly proposed to take place in these regions, such as spatial filtering 14 and local feature extraction 15.

Another striking feature of the map shown in Fig. 1D is that the regions representing the lips and whiskers are disproportionately large in comparison to those representing the rest of the body. As mentioned earlier, such map distortions arise on parallel computers as a result of load-balancing. In brain maps, such distortions can often be explained by variations in the density of peripheral receptors.


However, it has recently been shown in the monkey that prolonged increased use of a particular finger leads to an expansion of the corresponding region of the map in the somatosensory cortex 16. Presumably this is not a consequence of a change in peripheral receptor density, but instead reflects a remapping of some tactile-related computation onto the available cortical circuitry.

Scattered maps. As described above, most primary sensory structures in the brain are characterized by highly ordered input maps. However, one region of primary sensory cortex, that responsible for olfaction, appears to be dramatically different. This cortical region seems to lack spatial organization in the distribution of its afferent inputs 17. Figure 1F represents the pattern of activity in one section of the olfactory (piriform) cortex, as assayed by 2-deoxyglucose (2-DG) uptake, in response to the presentation of a particular odor 18. As suggested by the uniform label in the cortex, no discernible odor-specific patterns are found. By analogy with parallel computer maps, the non-topographic nature of the olfactory cortex map suggests that the computation being carried out is characterized by unstructured connectivity in the problem space. This hypothesis is supported by the intrinsic circuitry of piriform cortex, which consists of an extensive network of association fibers interconnecting all cortical regions 19. Figure 2C schematically illustrates the broadly distributed pattern that would be expected if the postsynaptic targets of a small cluster of local pyramidal neurons were labeled.


Fig. 2. Redistribution of afferent information in three different brain regions. (A-C) Schematic representations of the pattern that would be expected to result from labelling the postsynaptic targets of a small cluster of cortical neurons that receive afferent input. These patterns can be related to the structure of the corresponding maps (see text for details). (A) Inputs to somatosensory cortex (continuous map) have predominantly local interactions. (B) Inputs to cerebellar cortex (patchy map) have both local and non-local interactions, with the non-local interactions limited to a narrow band along the parallel fiber direction. (C) Inputs to piriform cortex (scattered map) have widespread non-local interactions. (D-F) Illustrations of the morphology of the neurons that give rise to these patterns: (D) layer IV stellate cell in somatosensory cortex showing local axonal arborization, (E) granule cell in cerebellar cortex showing local ascending branch and non-local parallel fiber branch, and (F) pyramidal cell in piriform cortex showing broadly distributed association fiber axons.




Figure 2F illustrates a typical olfactory pyramidal neuron, which receives afferent input on the apical-most portion of the dendrite and distributes information via an extensive axonal arborization. The notion that this cortical region is carrying out a computation that involves extensive communication in the problem space is also supported by the presumed role of the olfactory cortex in associatively classifying complex olfactory stimuli 17,20.

Patchy maps. An example of a patchy brain map is shown in Fig. 1E. This map represents the spatial pattern of tactile projections to the granule cell layer of the rat cerebellar hemispheres 21. In contrast to the continuous tactile map in SI cortex, the patchiness of this tactile map suggests that, in addition to local interactions, the cerebellar computation should involve non-local interactions among the tactile inputs. Again, the organization of the intrinsic circuitry in cerebellar cortex is consistent with this interpretation. Specifically, one of the most prominent features of cerebellar circuitry is the extensive set of long-distance 'parallel fiber' connections that arise from the granule cells 22. In addition, however, it has recently been shown that there is also a strong local influence of the same granule cell axons on nearby Purkinje cells 23,24. This influence is thought to be mediated by the ascending branch of the granule cell axon, as shown in Fig. 2E, which makes numerous synapses with overlying Purkinje cells before bifurcating into the parallel fiber portion of the axon 25,26. Thus the intrinsic circuitry of the cerebellum gives rise both to local and non-local distributions of afferent information, as shown schematically in Fig. 2B.

The computational role of cerebellar cortex and the significance of its functional organization are still subjects of considerable debate 27-30. However, we have proposed, based on several lines of evidence, that the cerebellum may be involved in optimally controlling sensory receptor surfaces (e.g. retina, fingers, whiskers) during sensory exploration 31,32. In this context, we interpret the patchy maps and the pattern of granule cell axonal distributions as providing a means for analysing local sensory information within a more global sensory context. In the case of the tactile maps in the rat cerebellar hemispheres (Fig. 1E), these computations would be primarily associated with tactile exploration of the environment using perioral sensory structures. In a single patch of the cerebellar map, local processing of the sensory input to that patch (e.g. upper lip) occurs within a global context provided by the distribution of information from other patches (e.g. lower lip, lower incisor, etc.) via the parallel fiber system 32.

Load imbalance and communication overhead in the brain


The predictions generated by the parallel computer analogy for each of the brain maps we have considered are consistent with the actual properties of the corresponding brain regions. This provides some support for the hypothesis that brain maps, like optimal parallel computer maps, are arranged in a computationally efficient manner. In the case of parallel computer maps, we saw that the optimal map structure reflects the specific influences of load imbalance and communication overhead. This raises the general question of whether or not there are correlates of load imbalance and communication overhead for the nervous system.

In general, load imbalance and communication overhead are much more difficult to identify and quantify in the brain than on parallel computers. Parallel computer systems are, after all, human-engineered, while the nervous system has evolved under a set of selection criteria and constraints about which we know very little. Furthermore, fundamental differences in the organization of digital computers and brains make it difficult to translate ideas from parallel computing directly into neural equivalents 3. For example, it is far from clear what should be taken as the neural equivalent of a single processor. Depending on the level of analysis, it might be a localized region of a dendrite, an entire neuron, or an assembly of many neurons. Thus one must take into account the multiple levels of processing in the nervous system when trying to draw analogies with parallel computers.

In considering the specific issue of load balancing, for example, it makes little sense to consider the possibility of one overburdened neuron holding up processing in other neurons. However, it might be reasonable to consider certain load-balancing effects acting at the level of groups of neurons. In particular, it is known that localized increases in neural activity give rise to local increases in tissue metabolic activity 33. Since metabolic activity necessitates the delivery of an adequate supply of oxygen and glucose via a network of small capillaries, the efficient use of the capillary system should favor mappings that generally tend to avoid metabolic 'hot spots' that would overload the delivery capabilities of the system.

As in the case of load imbalance, a comparison between communication overhead on parallel computers and in the brain also necessitates a somewhat broader perspective. As discussed earlier, communication overhead on parallel computers is usually associated with the time-cost of exchanging information between processors. In the nervous system, the importance of such time-costs is probably quite dependent on the behavioral context of the computation. For example, computations such as those involved in capturing prey or escaping from predators may indeed be mapped to minimize communication time, whereas other computations may actually make use of transmission delays to process information 34. Thus the importance of the temporal component of communication overhead is likely to be highly context-dependent in the nervous system. However, there is another aspect of communication overhead that may be more generally applicable, which concerns the space taken up by physical connections between processors.


In the design of modern parallel computers, and in the design of individual computer processor chips, space-costs associated with interconnections pose a very serious constraint for the design engineer. In the nervous system, similar constraints are also likely to be imposed by the large numbers of connections between neurons and the rather strict limitations on cranial capacity. A simple 'back of the envelope' calculation serves to demonstrate the potential severity of this constraint: if the brain's estimated 10^11 neurons were placed on the surface of a sphere and fully interconnected by individual axons 0.1 µm in radius, the sphere would have to have a diameter of more than 20 km to accommodate the connections! Even under more realistic assumptions, it is clear that mappings that minimize the volume devoted to connections between neurons are likely to be selectively favored.
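The paper does not spell out the assumptions behind this estimate, but one set that reproduces the ~20 km figure is: one axon per ordered pair of neurons (about 10^22 wires), each of radius 0.1 µm, average wire length equal to the mean chord of a sphere (4R/3), and the sphere's interior devoted entirely to wiring. Setting total wire volume equal to sphere volume then gives R = sqrt(number of axons) x r, as in this hedged sketch:

```python
import math

N = 1e11        # estimated number of neurons
r = 0.1e-6      # axon radius in meters (0.1 um)
axons = N * N   # one wire per ordered pair of neurons (an assumption)

# Wire volume:   axons * (4R/3) * pi * r^2   (mean chord of a sphere = 4R/3)
# Sphere volume: (4/3) * pi * R^3
# Setting them equal cancels everything but R^2 = axons * r^2.
R = math.sqrt(axons) * r
print(f"required sphere diameter: {2 * R / 1000:.0f} km")  # -> ~20 km
```

Different accounting (unordered pairs, shorter average paths) changes the prefactor but not the conclusion: full point-to-point wiring is physically impossible at brain scale.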

Concluding remarks

The view that computational efficiency is an important constraint on the organization of brain maps provides a potentially useful perspective for interpreting the structure of maps in the brain. For example, it could be suggested that smooth, continuous maps are common in the brain because they are simple to specify developmentally. In our view, however, such maps are common because continuous maps are computationally efficient for local computations, and local computations are a common feature of many sensory processing tasks. This view also suggests a new interpretation for the lack of spatial organization in structures like the olfactory cortex. Since other primary sensory areas show a clear topography, one could suggest that topography has simply not yet been observed in the olfactory system because the proper stimulus parameters have not been identified. In contrast, the somewhat surprising result from studying parallel computer maps is that scattered maps are actually preferred for certain classes of computations. Thus, there may be good reasons why the inputs to a brain region like olfactory cortex appear to be organized in a non-topographic manner. Finally, the idea of computationally efficient maps has provided new ideas to pursue regarding the structure of patchy maps found in different brain regions, including the cerebellum. In this particular case, the map structure suggests a fundamentally new way of thinking about the functional organization of this system 32.

Although the available evidence is largely circumstantial, it seems reasonable that the organization of brain maps would be influenced by pressures that favor the efficient use of computing resources. To pursue this idea, we must improve our understanding of the detailed relationships between brain maps, neural architectures and associated computations. While we believe that this effort will benefit from ongoing research in parallel computing, the most interesting insights will probably come from studying the details of maps, architectures and computational efficiency in the nervous system.

Acknowledgements

We would like to thank Wojtek Furmanski for providing parallel computing support, Geoffrey Fox and Matt Wilson for their comments on this manuscript, George Carman and John Allman for providing the MRI image data used in Fig. 1, and Erika Oller for her excellent artistic work. This effort was supported by the NSF (ECS-8700064) and the Lockheed Corporation.

Selected references
1 Dongarra, J. J. (1987) Experimental Parallel Computing Architectures (Dongarra, J. J., ed.), North-Holland
2 Fox, G. C. and Messina, P. (1987) Sci. Am. 257, 66-74
3 Nelson, M. E., Furmanski, W. and Bower, J. M. (1989) in Methods in Neuronal Modeling: From Synapses to Networks (Koch, C. and Segev, I., eds), pp. 397-438, MIT Press
4 Wilson, M. A. and Bower, J. M. (1989) in Methods in Neuronal Modeling: From Synapses to Networks (Koch, C. and Segev, I., eds), pp. 291-334, MIT Press
5 Fox, G. C. et al. (1988) Solving Problems on Concurrent Processors, Prentice Hall
6 Kirkpatrick, S., Gelatt, C. D. and Vecchi, M. P. (1983) Science 220, 671-680
7 Fox, G. C. and Furmanski, W. (1988) in Proceedings of the Third Conference on Hypercube Concurrent Computers and Applications (Fox, G. C., ed.), pp. 241-278, ACM
8 Welker, C. (1971) Brain Res. 26, 259-275
9 Bower, J. M., Beermann, D. H., Gibson, J. M., Shambes, G. M. and Welker, W. (1981) Brain Behav. Evol. 18, 1-18
10 Merzenich, M. M., Knight, P. L. and Roth, G. L. (1975) J. Neurophysiol. 38, 231-249
11 Allman, J. M. and Kaas, J. H. (1971) Brain Res. 35, 89-106
12 Szentágothai, J. (1975) Brain Res. 95, 475-496
13 Gilbert, C. D. (1985) Trends Neurosci. 8, 160-165
14 DeValois, R. L. and DeValois, K. K. (1988) Spatial Vision, Oxford University Press
15 Marr, D. (1982) Vision: A Computational Investigation into the Human Representation and Processing of Visual Information, Freeman
16 Merzenich, M. M. (1987) in The Neural and Molecular Bases of Learning (Changeux, J-P. and Konishi, M., eds), pp. 337-358, John Wiley & Sons
17 Haberly, L. B. and Bower, J. M. (1989) Trends Neurosci. 12, 258-264
18 Sharp, F. R., Kauer, J. S. and Shepherd, G. M. (1977) J. Neurophysiol. 40, 800-813
19 Haberly, L. B. (1985) Chem. Senses 10, 219-238
20 Bower, J. M. (1990) in An Introduction to Neural and Electronic Networks (Zornetzer, S., Davis, J. and Lau, C., eds), pp. 3-24, Academic Press
21 Shambes, G. M., Gibson, J. M. and Welker, W. (1978) Brain Behav. Evol. 15, 94-140
22 Ramón y Cajal, S. (1911) Histologie du Système Nerveux de l'Homme et des Vertébrés (Vol. II) (transl. L. Azoulay; reprinted 1955), Consejo Superior de Investigaciones Científicas
23 Bower, J. M. and Woolston, D. C. (1983) J. Neurophysiol. 49, 745-766
24 Rao, M., Rasnow, B., Nelson, M. E. and Bower, J. M. (1987) Soc. Neurosci. Abstr. 13, 602
25 Llinás, R. R. (1982) in The Cerebellum, New Vistas (Palay, S. L. and Chan-Palay, V., eds), pp. 189-192, Springer-Verlag
26 Gundappa-Sulur, G. and Bower, J. M. Soc. Neurosci. Abstr. (in press)
27 Ito, M. (1984) The Cerebellum and Neural Control, Raven Press
28 Lisberger, S. G. (1988) Trends Neurosci. 11, 147-152
29 Llinás, R. R. (1981) Nature 291, 279-280
30 Thompson, R. F. (1988) Trends Neurosci. 11, 152-155
31 Paulin, M. G., Nelson, M. E. and Bower, J. M. (1989) in Advances in Neural Information Processing Systems 1 (Touretzky, D. S., ed.), pp. 410-418, Morgan Kaufmann
32 Bower, J. M. and Kassel, J. J. Comp. Neurol. (in press)
33 Yarowsky, P. J. and Ingvar, D. H. (1981) Fed. Proc. 40, 2353-2362
34 Carr, C. E. and Konishi, M. (1988) Proc. Natl Acad. Sci. USA 85, 8311-8315
