Neural Computation of Pattern Motion: Modeling Stages of Motion Analysis in the Primate Visual Cortex

of cell assemblies. The nice aspect of this contribution is that it integrates biological facts about neuronal behaviour, theoretical considerations and the results of numerical computer simulations. Dinse and colleagues discuss the concept of time-dependent receptive fields. They provide experimental data obtained with various techniques (among others, single-unit recordings and voltage-sensitive dyes) to demonstrate that receptive fields of cortical neurons are not fixed but rather time-dependent, and they propose several mechanisms which may explain the results. This chapter ends with papers on the study of neuronal interactions in cortex based on cross-interval statistics, and on the effect of a probabilistic threshold function on the intrinsic and long-lasting dynamics of neurons and neuronal assemblies. The last part of the book contains a few theoretical papers on the organization of perception and action. These papers have the character of a summary of the discussions which took place during and after the presentations at the Meeting on Brain Theory. For everyone interested in the spatio-temporal aspects of neuronal information processing, this is a very valuable book. It gives a good review of experimental and theoretical ideas and data, and it really aims at integrating the experimental and theoretical approaches. The many references also make the book a valuable starting point for further exploration of the literature in this field.

C. Gielen

Dept. of Medical Physics and Biophysics, University of Nijmegen, The Netherlands

Neural Computation of Pattern Motion: Modeling Stages of Motion Analysis in the Primate Visual Cortex, M.E. Sereno. MIT Press, 1993. ISBN 0-262-19329-9. US$ 33.75.

Although studies on neural networks have developed quantitatively and qualitatively to a high level, the application of the insights gained there to the understanding of biological processes has been somewhat disappointing so far. The present book may help to bridge this gap. It deals with neural networks (although not in the strict sense usually found in studies on artificial neural networks) and integrates neural-network concepts with psychophysics, neurophysiology and neuroanatomy. Although the main aim of the book is to provide models for understanding the neurophysiological processes and the anatomical structure of visual cortex, it does more. It gives a good review of computational approaches for describing motion detection in artificial and biological systems, and it provides a unified account of neurophysiological and perceptual data on the perception of moving patterns, one of the best-studied areas of visual perception from psychophysical, neurophysiological and computational perspectives. The book is clearly written, and the mathematics is restricted to the bare essentials and is not hard to follow. In my view the model presented is not novel in all its aspects; several of its ideas can be found scattered in the literature. However, it captures the main ideas and integrates them in a unified way, with a clear link to experimental data, so that the reader can see intuitively and exactly how the model works and why.

After a general introduction in Chapters 1 and 2, dealing with computational aspects (e.g. the aperture problem, Reichardt motion detectors, how to reconstruct motion from a sequence of images), psychophysical aspects (e.g. short-range versus long-range processes, effects of dot density, parallel channels for processing different aspects of the visual input, the figure-ground problem) and neurobiological aspects (e.g. direction sensitivity of neurons in various cortical areas, organization of visual cortex, receptive field properties, interaction between cortical and subcortical pathways), the model is presented in Chapter 3. In this chapter the author explains the model and provides a wealth of recent experimental data to support it. At the same time, the author describes the hypotheses that follow from the model and that provide a good test of it. In Chapter 4 the results of the simulations are presented, and in Chapter 5 several psychophysical findings are presented and explained in the context of the model. This book is very useful for students. It reads very well and gives a splendid introduction to the ideas, theories and experimental data on motion perception.
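To make the correlation-type (Reichardt) motion detector mentioned above concrete, here is a minimal sketch; it is not taken from Sereno's book, and the sampling, delay value and signal names are illustrative assumptions only. The idea is that a pattern drifting to the right reaches the right-hand sensor slightly after the left-hand one, so correlating a delayed copy of the left signal with the undelayed right signal yields a positive opponent output.

```python
import numpy as np

def reichardt_detector(signal_left, signal_right, delay=3):
    """Minimal correlation-type (Reichardt) motion detector.

    signal_left, signal_right : 1-D arrays of luminance sampled over time
        at two neighbouring spatial positions.
    delay : temporal delay in samples, standing in for the low-pass filter
        of the classic detector (an illustrative simplification).

    Returns the opponent output: positive values indicate motion from left
    to right, negative values the opposite direction.
    """
    # Delay each input by shifting it in time (zero-padded at the start).
    delayed_left = np.concatenate([np.zeros(delay), signal_left[:-delay]])
    delayed_right = np.concatenate([np.zeros(delay), signal_right[:-delay]])

    # Each half-detector correlates the delayed signal from one position
    # with the undelayed signal from the other position.
    half_lr = delayed_left * signal_right   # tuned to rightward motion
    half_rl = delayed_right * signal_left   # tuned to leftward motion

    # Opponent stage: subtract the mirror-symmetric half-detectors.
    return half_lr - half_rl

# Example: a drifting sinusoid that reaches the right sensor 3 samples
# after the left sensor gives a positive mean output (rightward motion).
t = np.arange(200)
left = np.sin(2 * np.pi * t / 20)
right = np.sin(2 * np.pi * (t - 3) / 20)
print(reichardt_detector(left, right, delay=3).mean())  # > 0
```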

C. Gielen

Dept. of Medical Physics and Biophysics, University of Nijmegen, The Netherlands

Neural Networks for Optimization and Signal Processing, A. Cichocki and R. Unbehauen. J. Wiley & Sons, 1993. ISBN 0-471-93010-5. 526 pp.

Intended as an advanced text on applications of neural networks to optimization and signal processing, the book is an excellent reference for the applications engineer and the neural network researcher. Organized into nine chapters and two appendices, it presents a broad spectrum of analog neural networks applied to linear algebra, matrix computations, optimization and control. Each chapter contains numerous examples, figures and simulation results, greatly enhancing understanding of the concepts presented. Chapter 1 reviews basic concepts of matrix analysis and nonlinear optimization; the notation and concepts are used throughout the book. Chapter 2 introduces various models of the neuron (Grossberg, Fukushima, Adaline, etc.) and basic neural network architectures, including feedforward nets and the Hopfield model. Chapter 3 reviews unconstrained optimization techniques such as gradient search and simulated annealing. Backpropagation learning algorithms are derived in detail for Euclidean and non-Euclidean loss functions. An interesting review of techniques for speeding up backpropagation is also provided: the authors discuss, among others, the adaptive slope method, Darken and Moody's search-then-converge strategy, and global and local adaptation of the learning rules, including the delta-rule, SuperSAB, RPROP and Quickprop algorithms. An exhaustive list of learning algorithms for a single neuron is also compiled. Chapter 4 examines neural network architectures for linear and quadratic programming problems. Chapter 5 is concerned with neural network architectures for solving systems of linear equations; special emphasis is given to ill-conditioned problems and regularization techniques, as well as to the least squares and total least squares problems. Chapter 6 describes how to solve standard linear algebra problems such as matrix inverse