lateral position from which it came, while preserving the two-way travel time from source to reflecting point and back to the receiver. Thus migration is designed to reposition reflectors correctly and to collapse diffracted energy on the sections. Provided that the seismic line is running downdip over a two-dimensional structure, a straightforward conventional algorithm such as the diffraction stack would migrate the data successfully and improve the resolution on the seismic section. However, with three-dimensional structures and cross-dip lines it is incorrect to assume that all the recorded energy has come from raypaths lying in the vertical plane through the seismic line. Two-dimensional migration will not reposition reflectors correctly, will not collapse diffractions properly to maximise lateral resolution of faults, and will not eliminate "sideswipe" energy which has come from structures out of the vertical plane.

The continuing expansion in the capacity of data acquisition systems has made three-dimensional seismic surveys possible, with seismic sources and receivers distributed over an area, or at least a band, at the Earth's surface, rather than simply along lines. Much research effort has been devoted in recent years to deriving algorithms for migrating seismic data in three dimensions which may be handled efficiently on digital computers. These methods, known collectively as "wave-equation migration", succeed in repositioning reflectors correctly, improving the lateral resolution and correcting the amplitudes of reflections.

Professor Berkhout's book is a thorough theoretical treatment of the different approaches to wave-equation migration which have been derived in recent years. The first four chapters set out the basic mathematics of vector analysis, discrete spectral analysis, two-dimensional Fourier transforms and wave theory which are required in the rest of the book.
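The principle of the diffraction stack referred to above can be illustrated in a few lines of code. The following is a toy sketch, not anything from the book under review: it assumes a constant velocity and zero-offset (post-stack) data, and the function name and parameters are illustrative. For each output point it sums the recorded amplitudes along the diffraction hyperbola that a point scatterer at that position would have produced.

```python
import numpy as np

def diffraction_stack(section, dx, dt, v):
    """Toy 2-D post-stack diffraction-stack time migration.

    section : 2-D array of shape (nx, nt), zero-offset traces by time samples
    dx, dt  : trace spacing (m) and sample interval (s)
    v       : constant velocity (m/s); real data would need a velocity model
    """
    nx, nt = section.shape
    migrated = np.zeros_like(section)
    x = np.arange(nx) * dx
    traces = np.arange(nx)
    for ix in range(nx):          # output lateral position
        for it in range(nt):      # output two-way time t0
            t0 = it * dt
            # diffraction hyperbola: t(x_r) = sqrt(t0^2 + (2*(x_r - x)/v)^2)
            t = np.sqrt(t0**2 + (2.0 * (x - x[ix]) / v) ** 2)
            samples = np.rint(t / dt).astype(int)
            valid = samples < nt
            # sum recorded amplitudes along the hyperbola
            migrated[ix, it] = section[traces[valid], samples[valid]].sum()
    return migrated
```

Applied to a synthetic section containing a single diffraction hyperbola, the sum is coherent only at the apex, so the scattered energy collapses back to the point scatterer; this is the sense in which the diffraction stack "collapses diffracted energy" and repositions reflectors.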
Then the forward problem of extrapolating the wave field is introduced and discussed within the context of modelling seismograms from generalised structures. Migration is shown to be the inverse of modelling, in that the wavefield has to be extrapolated downward and imaged to produce the corrected seismic section. The meat of the book is the set of chapters on the different approaches to migration by wavefield extrapolation:
migrating in the wavenumber-frequency domain, migrating by summation in the space-frequency and space-time domains, and migrating with the finite-difference method. The penultimate chapter compares these different approaches and the final chapter is a discussion of the factors limiting lateral resolution. The treatment of migration is rigorous throughout. It is undoubtedly excellent for geophysicists who require a thorough understanding of migration methods. This book has to be studied rather than read. It does not contain any verbiage which would be superfluous to a sound mathematical exposition of the subject, and shows no examples of real seismic sections to illustrate the beneficial effects of migration. Accordingly it cannot be recommended for casual reading by non-specialist geophysicists. The book has been produced by photographically reducing the typed manuscript, which contains only the occasional grammatical idiosyncrasy and some unimportant typing/spelling errors. Although the material has been clearly set out on each page, the reduction in size of the typescript makes the text physically tiresome to study.

N.R. GOULTY (Durham)
Microelectronics. Scientific American. W.H. Freeman, Reading, 145 pp., U.K. £3.10 (softcover), ISBN 0-7167-0066-2.

We live in the computer age. The U.K. Government has a £55 million fund to help industry embrace the new technology and to create new educational courses in microelectronics and microprocessors. Some areas of scientific activity, for example geophysical field studies, are being transformed by the data logging and processing abilities of the minicomputer. We are entering the age of the "smart" scientific instrument, in which it becomes difficult to distinguish between the physical instrument and its built-in capacity to decide when to take readings and to store, process and interpret the data. In order to make the best use of any specialist the potential user must know something about the
speciality involved. This has been true for the experimental scientist, who over the past fifteen years has had increasingly specialist electronic instrumentation commercially available. It seems that no experimental scientist can afford to ignore the microprocessor, so ubiquitous will it become. Being able to write computer programs in a high-level language to process data will not be enough. One barrier between the scientist and the computer specialist lies in the jargon of the subject. Many words which are readily comprehended in the outside world become mystifying in the computer world. The fact that we have an idea what the words "protocol", "hierarchy" or even "graceful degradation" mean may not be helpful, and may even be a hindrance in some cases. The lover of the English language who has many problems in
the world at large with "ongoing situations" and other horrors has no choice but to grit one's teeth and come to terms with "hardware-software trade-offs" and other unattractive (but useful) jargon. In any event, this collection of articles, originally published in Scientific American, Nov. 1977, forms an excellent point of entry into the microprocessor milieu, from the principles of the transistor as a switch, functioning as a logic or memory circuit, through the fabrication of "chips" to microprocessors, and on to the applications of microprocessors in data processing, instrumentation and control, communications, and finally as a tool in education and a means of self-expression.

W. O'REILLY (Newcastle upon Tyne)