Obtaining a solid model from optical serial sections




0031-3203/89 $3.00 + .00 Pergamon Press plc © 1989 Pattern Recognition Society

Pattern Recognition, Vol. 22, No. 5, pp. 577-586, 1989. Printed in Great Britain

OBTAINING A SOLID MODEL FROM OPTICAL SERIAL SECTIONS*

FERNANDO MACIAS-GARZA, KENNETH R. DILLER, ALAN C. BOVIK, SHANTI J. AGGARWAL and J. K. AGGARWAL

Computer and Vision Research Center, The University of Texas at Austin, Austin, Texas 78712-1084, U.S.A.

(Received 3 August 1988; in revised form 19 October 1988; received for publication 10 November 1988)

Abstract: An approach for obtaining a three-dimensional solid display of microscopic-scale objects obtained via optical serial sectioning is presented. Serial sections are obtained by incrementing the focusing plane of a microscope through a specimen of interest, resulting in a sequence of two-dimensional images comprising a three-dimensional image of optical density. Unfortunately, the limited aperture of any practical microscope results in the loss of a biconic region of frequencies from the three-dimensional spectrum of optical density, and a low-pass distortion of the remaining frequencies. These degradations make the extraction of accurate three-dimensional structural or morphological information difficult. While the low-pass distortion can be improved by inverse filtering, the loss of frequencies is not reversible. However, within the resolution imposed by the finite aperture, a solid model can be obtained. By applying an edge detector to the sequence of inverse-filtered images, contours of high activity can be delineated. Generally these will be noisy and spread throughout the interior of the specimen; however, by applying appropriate mathematical morphological techniques and other region correction procedures the edges can be aggregated into a solid (volumetric) model suitable for display in a cuberille environment.

Mathematical morphology    Optical serial sections    Cuberille environment    Three-dimensional solid modeling    Inverse filter

1. INTRODUCTION

When analyzing microscopic morphological data in two dimensions, an enormous number of assumptions about the objects under observation must be made. Volumes are reduced to areas, and events that take place at different depths are imaged essentially on the same plane. The analysis of biological specimens in three dimensions can provide much more information about the object. By using an appropriate optical system, three-dimensional (depth) information can be obtained. An experienced observer can estimate the shape of a three-dimensional specimen by viewing a succession of images with different portions of the object in focus. The technique of optical sectioning has the disadvantage, however, that each slice contains information from neighboring slices that cannot be removed by any known means; the overall series of sections is also blurred. This degradation is a consequence of the finite aperture of the optical system, and serves to impede the development of means for automating the analysis of shape.

* This work was supported by grants from the National Science Foundation under grant number ECS-8513123, the Juvenile Diabetes Foundation, the Whitaker Foundation, and in part by the Texas Advanced Research Program under Grant No. 3546.


In the following sections, the experimental set-up used to obtain optical serial sections is described first. The nature of the optical degradation is then briefly described, and means for minimizing the distortion are demonstrated. Finally, a technique for obtaining a three-dimensional solid representation of the partially restored specimen is described, which is applicable to specimens which have a roughly convex shape. The steps involved include application of an edge detector to identify regions of high activity characterizing the presence of the specimen, followed by morphological and other region correction operations to aggregate the edges thus obtained. The final volumetric representation is suitable for graphical display in a cuberille environment. The method described is limited by the obtainable resolution of the optical system. For systems with a small aperture, much information is lost and the representation will be crude; however, for systems with larger apertures, or which generate little "out-of-focus" noise, such as confocal microscopes, the method will provide highly accurate volumetric displays.

2. FUNCTIONAL STRUCTURE OF THE SERIAL SECTIONING SYSTEM

The serial sections for the present study were obtained through a conventional Zeiss Universal light



[Figure 1 components: microscope and video camera; video tape recorder (video signal); 300 baud modem (audio signal); stepper motor (mechanical link); IBM PC/AT]

Fig. 1. Block diagram of the microscope and tape recorder system.

microscope. In addition to the basic microscopic apparatus, other peripheral equipment is necessary in order to precisely control the movement of the microscope stage and to be able to store and retrieve the information from a video recorder for later processing in digital form. A block diagram of the microscope, computer, and tape recorder control system is shown in Fig. 1. To control the movement of the microscope stage, a ZEISS SMC-3 stepper motor and controller was interfaced to the fine focusing gear of the microscope. The motor is able to step the stage along the vertical axis in increments of 0.2 microns. The computer elicits movement of the stage by pulsing a bit through the parallel port connected to the controller. Forward and backward stepping are possible by changing a 'direction bit'. The stage can be stepped at a maximum rate of 200 increments per second. The video information obtained at every plane of observation is stored on a video tape recorder. Archiving the information for later retrieval is necessary since data acquisition during a dynamic experiment must be performed as fast as possible, given that there is always the possibility that the specimen may change its size, shape, or position. If the digital acquisition of the data were carried out in real time, the data transfer and processing rates of our present, available equipment would be unacceptably slow. Storing the information for later retrieval has the added advantage that information for several specimens present in the image at the same time can be acquired for individual processing, and data not germane to the central focus of an experimental protocol may be selectively excluded from the analysis procedure. Computer processing requires that each of the individual focal planes through the specimen be accessible individually. Using tape as the storage medium presents an important difficulty with respect to this constraint. 
If only video information is stored on the tape, it is not possible to accurately address

each one of the individual frames on demand. To solve this problem, data is stored on the audio channel of the tape concurrent with the video images via a 300 baud modem. The information stored consists of markers delimiting the beginning of the experiment, its name, the total number of planes recorded, the distance between adjacent planes, the scene number (in case the scan is part of a multi-scene experiment, such as a freezing sequence(1)), and markers identifying the beginning of each plane. Enough time is provided between planes so that, during the retrieval phase, the computer can detect their presence, and so that the video frame grabber has enough time to acquire the image. The set-up used to retrieve the information from tape is depicted in Fig. 2. The program driving the system can provide a directory of the contents of the tape, and search for every image by experiment name, scene and plane number. Once a plane is detected, it is acquired automatically into the computer by a video frame grabber. The user may define a window on a video monitor so that only a portion of the image is stored, and it is also possible to set a compression factor to store the data with a lower resolution than that at which the image is displayed on the screen. Computational considerations impose a limit on the resolution and number of planes that can be processed on site in our laboratory. The byte data corresponding to every one of the planes is stored back to back in a file, which is transferred to the mainframe for later processing. The tape must be rewound to the beginning of the experiment after the acquisition of every plane, in order to regain synchrony with the marker mechanism. The computer is interfaced to the video tape recorder through its remote control port, so that the process is carried out automatically.
In a more desirable arrangement, a different video storage medium such as an optical disk recorder, which allows the direct addressing of video information by frame number, would provide a better solution to the problem of interfacing the computer with the serial microscopic images.
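The audio-channel marker scheme above can be illustrated with a small sketch. The record layout, field names, and the `ScanHeader` type here are hypothetical stand-ins for whatever format the authors actually used; only the kinds of fields (experiment name, scene number, plane count, plane spacing) come from the text.

```python
# Hypothetical sketch of the metadata record stored on the tape's audio
# channel via a 300 baud modem. Field names and layout are illustrative,
# not the authors' actual format.
from dataclasses import dataclass

@dataclass
class ScanHeader:
    experiment: str    # experiment name
    scene: int         # scene number within a multi-scene experiment
    n_planes: int      # total number of focal planes recorded
    spacing_um: float  # distance between adjacent planes, in microns

def encode_header(h: ScanHeader) -> str:
    # One short ASCII line, cheap to push through a 300 baud modem.
    return f"BEGIN|{h.experiment}|{h.scene}|{h.n_planes}|{h.spacing_um}"

def decode_header(line: str) -> ScanHeader:
    tag, name, scene, n, d = line.split("|")
    assert tag == "BEGIN"
    return ScanHeader(name, int(scene), int(n), float(d))

hdr = ScanHeader("pollen-01", 1, 128, 1.0)
assert decode_header(encode_header(hdr)) == hdr
```

Plane markers would then be single `PLANE|k` lines between frames, detected during the retrieval phase.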



[Figure 2 components: video tape recorder (video signal); time base corrector; 300 baud modem (audio signal); frame grabber in the PC/AT; data link to VAXstation/VAX]

Fig. 2. Block diagram of the system used to retrieve video information from tape.

3. DISTORTION IN OPTICAL SERIAL SECTIONING SYSTEMS

The distortion produced by light microscope systems has been studied by several authors; more recently the analysis has been extended to optical serial sectioning systems. Stoksetch(2) gave an expression for the two-dimensional microscope transfer function that has been used in iterative algorithms for reconstructing serial section images.(3,4) Erhardt et al.(5) refined the technique by constructing a three-dimensional point-spread function from the two-dimensional transfer function given by Stoksetch, by allowing a continuous range of defocusing along the optical axis. They were able to deduce the existence of a region of frequencies in the three-dimensional spectrum of the projected image that is not passed by the microscope, and that the remainder of the spectrum is distorted by a strong low-pass distortion. The expressions obtained by these authors are similar, but not identical, to those described here; however, in our work, we also take into account the passage of light through the specimen being imaged. Streibl(6) established restrictions on three-dimensional light distributions using principles of physical optics. In particular, it was shown that the three-dimensional light distributions obtainable from practical optical systems are constrained to have incomplete spectra, due to the limited apertures of such systems. He later extended the analysis to three-dimensional light microscopy,(7,8) obtaining a three-dimensional optical transfer function using the wave equation and the laws of Bragg diffraction, by assuming the specimen to be self-luminescent and completely transparent. The general shape of the region of Fourier support matches that developed by Erhardt et al.; however, the low-pass distortion outside the region of missing frequencies using this analysis was found to be neither expressible in closed form nor suitable for computer implementation. Fay et al.
(9,10) have applied scanning optical microscopy techniques to the determination of molecular distributions within muscle cells. They propose an iterative deconvolution algorithm similar to that

in Ref. (2), which has been modified to drive the serial sections to high contrast. The algorithm works quite well, given that they deal mostly with fluorescent images. Our approach to the problem involves application of the principles of geometric optics, and thus includes effects of the passage of light through the object. By assuming the absorption properties of the specimen being imaged to be linear, it can be shown that the limited aperture of the microscope results in the loss of a biconic region of frequencies in the three-dimensional Fourier spectrum of the image, which is manifested by a severe loss of spatial resolution along the optical axis; for frequencies outside this region, a closed-form expression can be developed for the low-pass distortion introduced by the microscope, which is manifested as a blurring both between focusing planes and within individual planes. Both the size of the cone of missing frequencies and the extent of the low-pass distortion increase as the aperture of the microscope is decreased; for very small numerical apertures, little information is uniquely confined to any focusing plane. A complete derivation of the transfer function of the microscope, based on the assumptions just mentioned, is given in Refs (11, 12). The distortion is described by the expression

H(u, v, w) = [1 / √(u² + v² + w²)] tan⁻¹[ √(tan²φ'_max − tan²γ₀) / cos φ'_max ],   0 ≤ |γ₀| < φ'_max,

and zero elsewhere, where

γ₀ = tan⁻¹[ w / √(u² + v²) ]

and u, v, and w are the coordinates in the Fourier space. The above equation represents the three-dimensional transfer function of the microscope H(u, v, w). φ'_max is the angle of the ray of maximum inclination allowed



by the numerical aperture of the optical system. H(u, v, w) is a low-pass function confined outside a biconical region in the three-dimensional frequency domain. For a certain direction, the function varies inversely with the distance from the origin. Given arcs of constant distance from the origin, the function is maximum on the u-v plane, and tends to zero as the angle with respect to this plane (γ₀) approaches φ'_max. Figure 3 shows a plot of H(u, v, w) for u = 0. Any cut of H(u, v, w) along a plane whose normal lies on the u-v plane, and passing through the origin, will have the same shape. While the information lost over the region of missing frequencies is not recoverable, the low-pass distortion can be effectively removed by application of an appropriate inverse filter, viz. by multiplying the three-dimensional Fourier transform of the recorded image by the reciprocal of the distortion function outside the missing cone of frequencies. The improvement in resolution gained by the inverse filtering is considerable, particularly within the two-dimensional images representing the projection of each focusing plane. However, the resolution of the overall three-dimensional image along the direction of the optical axis will gain very little improvement.

3.1. Reduction of out-of-focus noise

Figure 4 shows a sequence of 128 planes collected from a pollen grain, where only every eighth plane is shown to conserve space in the figure (there is very little difference between adjacent frames). The planes are numbered from top (spatially uppermost) to bottom; each pair of adjacent planes is separated by approximately 1 micron (so that those shown are separated by 8 microns). The numerical aperture of the optical system used was 0.63, corresponding to φ'_max ≈ 25°.
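The inverse filtering step just described can be sketched numerically. This is a minimal sketch, not the authors' implementation: it uses a simplified stand-in for H (zero inside the biconic region about the w axis, 1/ρ decay outside, consistent with the qualitative description above) rather than the exact closed form of Refs (11, 12), and the `eps` regularization threshold is our own choice.

```python
import numpy as np

def transfer_function(shape, phi_max):
    # Frequency grid; gamma0 is the angle of (u, v, w) with the u-v plane.
    u, v, w = np.meshgrid(*(np.fft.fftfreq(n) for n in shape), indexing="ij")
    q = np.hypot(u, v)
    rho = np.sqrt(q**2 + w**2)
    gamma0 = np.arctan2(np.abs(w), q)
    H = np.zeros(shape)
    keep = (gamma0 < phi_max) & (rho > 0)   # outside the biconic missing region
    H[keep] = 1.0 / rho[keep]               # decays inversely with distance from origin
    H /= H.max()
    H[(0,) * len(shape)] = 1.0              # leave the DC term untouched
    return H

def inverse_filter(volume, phi_max, eps=1e-2):
    # Multiply the 3-D spectrum by the reciprocal of H outside the missing
    # cone; frequencies where H is (near) zero are unrecoverable and dropped.
    H = transfer_function(volume.shape, phi_max)
    Hinv = np.where(H > eps, 1.0 / np.maximum(H, eps), 0.0)
    return np.real(np.fft.ifftn(np.fft.fftn(volume) * Hinv))
```

As in the text, frequencies inside the missing cone stay at zero: no inverse filter can restore them, which is why the axial resolution gains so little.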

Fig. 4. A sequence of 128 planes taken from a pollen grain. Every eighth plane is shown.

Although the original images were obtained with a planar resolution of 256 × 256 pixels, the images were reduced in resolution by a factor of two, resulting in a data cube of 128 pixels on each side. The images were coded with 8 bits of gray-level resolution, so that the composite three-dimensional image contained approximately 2 MB of information. The large amount of data is a very important consideration in the subsequent processing. Every image in the spatial sequence can be considered to contain some portion of the object, although there are planes where none of the object is in focus, namely the uppermost and lowermost planes. Figure 5 shows a logarithmic image of a cut of the discrete Fourier transform of the data cube along the

Fig. 3. Plot of H(u, v, w) for u = 0, with the singularity at the origin removed.


Fig. 5. Cut of the Fourier transform of the pollen grain image in Fig. 4 in the plane u = 0.

plane u = 0. The nonzero region of the frequency spectrum is clearly visible. The "missing cone" is also not empty, contrary to what the analysis predicts, because of random noise, nonlinearities in the imaging system, and aliasing effects. It is interesting to compare Fig. 5 with Fig. 3. The same general shape is present in both figures; Fig. 5 can be interpreted as representing how Fig. 3 would appear when viewed from above. The distortion function has a "bowtie" shape, taking larger values near the central region than for higher frequencies. The inverse filtering algorithm was applied to the frequency spectrum of the data. The resulting sequence of images is shown in Fig. 6. Given the numerous sources of error in the process, as well as the missing cone of frequencies in the observed image, the distinction between object and background is not as clear

Fig. 6. Improved sequence of images obtained by application of inverse filter to image sequence in Fig. 4.


as it should be if accurate three-dimensional information is to be extracted from the serial sections. The images do show a definite improvement, as some of the out-of-focus information present in the original sequence is substantially reduced after the inverse filtering. There is a high-frequency noise component evident in the restored image, which is not visible in the original. As we might expect, the quality of the reconstruction is better for the upper planes, as there is a certain amount of data lost in the lower planes via occlusion from the upper portions of the object. Examination of just the horizontal cuts of the observed 3-D image can be very misleading, and it is only by examination of vertical sections that the low resolution along the optical axis becomes apparent. Consider, for example, Figs 7 and 8, which show vertical cuts of the three-dimensional image before and after inverse filtering. From these figures, the low resolution along the optical axis becomes evident, as

Fig. 7. Vertical serial sections of the data cube shown in Fig. 4.

Fig. 8. Vertical serial sections of the data cube in Fig. 6.



well as the little improvement obtained by inverse filtering. Figures 9 and 10 show a sequence of 128 planes taken from a rat islet of Langerhans before and after inverse filtering. There is limited improvement in the planar resolution of the serial sections, as expected.
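Extracting vertical cuts like those in Figs 7 and 8 from a stack is a simple transpose. The sketch below assumes the data cube is stored as `planes[z, y, x]`, an assumption about the layout that the text does not state.

```python
import numpy as np

# Toy stand-in for the 128 x 128 x 128 data cube of serial sections,
# stacked as planes[z, y, x].
planes = np.zeros((128, 128, 128), dtype=np.uint8)

# vertical[y] is then an x-z section: depth along one axis, an image row
# along the other -- the view in which axial blurring becomes apparent.
vertical = planes.transpose(1, 0, 2)
section = vertical[64]          # the x-z cut through the middle row
```

Because the transpose is a view, no data is copied; the full set of vertical sections costs nothing beyond the original cube.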

Fig. 9. A sequence of 128 planes taken from an Islet of Langerhans. Every eighth plane is shown.

Fig. 10. Improved sequence of images obtained by application of inverse filter to image sequence in Fig. 9.

Fig. 11. Result of applying three-dimensional LoG edge detector to the filtered image sequence in Fig. 10.

4. OBTAINING A VOLUMETRIC MODEL

In an effort to clarify the amount of meaningful three-dimensional information contained in the filtered image, we have applied a series of algorithms for generating a solid display of the specimen. The procedure is as follows. (i) Apply an edge detector to the three-dimensional data set to delineate the significant, sustained changes in image intensity. (ii) Process the raw edges from the edge detector using mathematical morphology and other region correction procedures so that only the region corresponding to the cross section of the object at every plane remains in the images. (iii) Generate a solid model for graphic display of the three-dimensional edge image from any viewing direction.

It must be stressed that the operators mentioned above can be expected to provide meaningful results only if the serial sections on which they operate are free from out-of-focus information. Also, the above operators assume that the object has a smooth surface and that it is convex, i.e. the cross section of the object at every plane consists of only one region. The assumptions may be overly restrictive, and the above operators may not provide a generalized technique for the extraction of three-dimensional information. The sequence used for the example is an inverse-filtered sequence of optical serial sections. There is considerable out-of-focus information present in the sections, and the sections are only used to demonstrate the different steps involved in the process.

The edge detector used in the example is the Laplacian-of-a-Gaussian (LoG) edge detector. Edge detection using the LoG method is accomplished via a two-step operation. First, the three-dimensional intensity image is convolved with the bandpass spatial filter ∇²G_σ(x, y, z), where G_σ(x, y, z) = (1/(2πσ²))^{3/2} exp{−(x² + y² + z²)/(2σ²)} is a three-dimensional rotationally symmetric Gaussian function, and ∇² = (∂²/∂x² + ∂²/∂y² + ∂²/∂z²) is the ordinary Laplacian operator. Marr and Hildreth(13) have demonstrated that meaningful image intensity edges are accurately marked by the zero-crossings (ZCs) of the convolution of the image with the ∇²G_σ filter. Thus, the second step in the edge detection operation is the localization of transitions through the zero level. Figure 11 shows the edges (ZCs) found using the three-dimensional LoG edge detector as applied to the filtered image depicted in Fig. 10. The edge maps can be made less cluttered by stripping the walls of the data cube (the edges there are clearly not part of the object) and by removing all edges consisting of 3 pixels or less. The result of these operations is shown in Fig. 12. It is to this data that morphology can be applied to consolidate the edge information and to fill the gaps in the boundaries at every plane.

Mathematical morphology(14-16) has been applied to image processing problems by several authors. In particular, Ref. (17) reports an application of two-dimensional mathematical morphology for the automatic determination of area in biomedical images. A morphological operation is an interaction between the image object and a structuring element. The structuring element, which is usually of simple shape and local support, interacts with the image object and transforms it into another form. In order to better understand the concepts involved, the following notation will be introduced.

{x: P}: set of points satisfying property P.
Z: set of integers.
Zⁿ: Euclidean space of dimension n.
X, Y: subsets of Zⁿ (image objects).
B: subset of Zⁿ (structuring element).
X ∪ Y: set of points belonging to X or Y (set union).
X ∩ Y: set of points belonging to both X and Y (set intersection).
X ⊇ Y; Z ⊆ B: X contains Y; Z is included in B.
X_h: translation of X by vector h ∈ Zⁿ.
∅: empty set.

Using this notation, the basic set of morphological operators can be defined as follows.

EROSION: X − B = {x: B_x ⊆ X}.
DILATION: X + B = {x: B_x ∩ X ≠ ∅}.
OPENING: X_B = (X − B) + B.
CLOSING: X^B = (X + B) − B.
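The two-step LoG operation described in the text can be sketched as follows. Here `scipy.ndimage.gaussian_laplace` stands in for the ∇²G_σ convolution, and the zero-crossing step is a simple neighbor sign-change check, which is our choice and not necessarily the authors' localization method.

```python
import numpy as np
from scipy.ndimage import gaussian_laplace

def log_zero_crossings(volume, sigma):
    # Step 1: convolve the 3-D intensity image with the LoG filter.
    r = gaussian_laplace(volume.astype(float), sigma)
    # Step 2: mark voxels where the response changes sign against a
    # neighbor along any of the three axes.
    zc = np.zeros(volume.shape, dtype=bool)
    s = np.sign(r)
    for axis in range(volume.ndim):
        d = np.diff(s, axis=axis)
        idx = [slice(None)] * volume.ndim
        idx[axis] = slice(0, -1)
        zc[tuple(idx)] |= d != 0
    return zc
```

As the text notes, the raw map will include spurious crossings far from the object, which is why the wall-stripping and short-contour removal steps follow.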

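The erosion, dilation, opening, and closing operators defined in the text can be realized on binary 3-D arrays. This sketch uses `scipy.ndimage` and a discrete ball as the isotropic structuring element; the helper names and the toy image are ours.

```python
import numpy as np
from scipy import ndimage

def ball(radius):
    # Discrete sphere of the given radius, the isotropic structuring element.
    z, y, x = np.ogrid[-radius:radius + 1, -radius:radius + 1, -radius:radius + 1]
    return x**2 + y**2 + z**2 <= radius**2

B = ball(3)
X = np.zeros((32, 32, 32), dtype=bool)
X[10:22, 10:22, 10:22] = True                  # a toy image object

eroded  = ndimage.binary_erosion(X, B)         # X - B = {x : B_x ⊆ X}
dilated = ndimage.binary_dilation(X, B)        # X + B = {x : B_x ∩ X ≠ ∅}
opened  = ndimage.binary_dilation(eroded, B)   # (X - B) + B
closed  = ndimage.binary_erosion(dilated, B)   # (X + B) - B

assert eroded.sum() <= X.sum() <= dilated.sum()
```

Opening removes detail smaller than the ball while closing fills gaps smaller than it, which is the gap-bridging behavior exploited in the next step.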

Given the three-dimensional nature of the data involved in serial sectioning experiments, it seems natural to use three-dimensional morphology. A good choice for a structuring element is a sphere, since it is isotropic, not being biased toward any particular direction. Figure 13 shows the result of applying a three-dimensional dilation operation to the data in Fig. 12. The structuring element used was a sphere with a 3 pixel radius. The boundaries at every plane were closed by the operation, although there are still holes inside the boundaries. Figure 14 shows the same sequence after the holes have been filled in at every plane. A standard two-dimensional flood filling algorithm was used. Let us assume that the points in the matrix belonging to the object have value OBJECT and that all others have value NOTHING. The area flooding algorithm was used to set all points in the background to value BACKGROUND. All those points having values of

Fig. 13. Result of applying three-dimensional dilation to the edge data in Fig. 12.

Fig. 12. Edges remaining from Fig. 11 after stripping the walls of the data cube and discarding all edge contours of length 3 pixels or less.

Fig. 14. Result of hole-filling applied to sequence in Fig. 13.



NOTHING were set to value OBJECT. This is not a morphological operation, and it is only valid for convex objects, that is, those objects whose cross-sectional area at every plane consists of a single region. Figure 15 shows the sequence in Fig. 14 after a three-dimensional erosion operation, where the same structuring element was used as in dilation. In some planes the erosion operation created several disjoint regions. The planes were processed so as to keep only the largest of those regions in every plane. The regions in Fig. 15 match the edges in Figs 11 and 12 quite well. From this data, three-dimensional measurements can be computed, and the solid corresponding to the serial sections displayed. Figure 16 shows a view of such a solid, which was generated using an algorithm for surface shading in a cuberille environment, viz. for displaying the solid from the binary edge data without the need for generation of surface panels. The


algorithm used is a modified version of those given in Refs (18, 19). After the solid representing the serial sections has been determined, various measurements pertinent to the morphology and structure of the specimen can be obtained. Curvature, in particular, is an important descriptor of the surface of the object, but is somewhat difficult to compute. In order to obtain meaningful curvature information from the solid, a smooth surface must be fitted locally to the points on its surface. The surface fitting algorithm will provide the partial derivative information required to compute the curvature on the surface. Vemuri et al.(20) have developed a procedure to fit a surface to an array of points using splines under tension, and to compute curvature from the data obtained during the surface fitting phase. This technique can be applied to the solid generated from the serial sections, given that only the points on the surface are considered. The 3-D surface following algorithm given in Ref. (18) will generate a list of such points, which can be used as input to the surface fitting program.

5. CONCLUSION

Fig. 15. Result of applying three-dimensional erosion to the data in Fig. 14.

Fig. 16. View of the solid cuberille representation of the Islet of Langerhans.

The extraction of three-dimensional information from optical serial sections was described, and the limitations imposed by the conventional light microscope presented. In particular, the limited aperture of the microscope results in the presence of out-of-focus information in the serial sections, which can be removed to only a certain extent. In the frequency domain the microscope has the effect of removing a biconic region of frequencies from the Fourier spectrum of the object under observation. The size of this region varies inversely with the aperture of the optical system. Outside the cone of missing frequencies, the spectrum is degraded by a strong low-pass distortion. This distortion can be compensated for by multiplying the spectrum of the observed three-dimensional image by the reciprocal of the distortion at every point outside the cone of missing frequencies. This procedure has the effect of improving the planar resolution of the serial sections, but has little impact on the resolution along the optical axis. The cone of missing frequencies renders the determination of three-dimensional information from the serial sections inaccurate and very difficult to perform. A method was proposed for the extraction of three-dimensional information from clean serial sections, and was demonstrated by applying it to the inverse-filtered sequence obtained from an experimental subject. Although, as mentioned before, the extraction of 3-D information from these serial sections is inaccurate, they are useful to demonstrate the technique. The first step in the process consists of the determination of edges from the serial sections. These edges are generated with a Laplacian-of-a-Gaussian edge detector. Mathematical morphology and other region correction algorithms are then applied to the


raw edge information obtained from the edge detector in order to remove edges not belonging to the object, to complete the boundaries of the object, and to generate a solid representing the object under observation. From this solid, three-dimensional measurements can be computed, and a shaded representation can be obtained.

REFERENCES

1. K. R. Diller, Quantitative low temperature optical microscopy of biological systems, J. Microsc. 126, 9-28 (1982).
2. P. A. Stoksetch, Properties of a defocused optical system, J. opt. Soc. Am. 59, 1314-1321 (1969).
3. D. A. Agard, Optical sectioning microscopy: cellular architecture in three dimensions, Ann. Rev. Biophys. Bioengng 13, 191-219 (1984).
4. K. R. Castleman, Digital Image Processing. Prentice-Hall, Englewood Cliffs, NJ (1979).
5. A. Erhardt, D. Zinser and J. Bille, Reconstructing 3-D light microscopic images by digital image processing, Appl. Optics 24, 194-200 (1985).
6. N. Streibl, Fundamental restrictions for three dimensional light distributions, Optik 66, 341-354 (1984).
7. N. Streibl, Three dimensional imaging by a microscope, J. opt. Soc. Am. 2, 121-127 (1985).
8. N. Streibl, Depth transfer by an imaging system, Optica Acta 31, 1233-1241 (1984).
9. F. S. Fay and K. E. Fogarty, Analysis of molecular distribution in single cells using a digital imaging microscope, Optical Methods in Cell Physiology, P. E. Weer and B. Salszberg, Eds. John Wiley, New York (1986).


10. K. E. Fogarty and W. Carrington, 3D molecular distribution in living cells by deconvolution of optical serial sections using light microscopy, Proc. 13th Annual Northeast Bioengng Conf., K. R. Foster, Ed., IEEE (1987).
11. F. Macias-Garza, A. C. Bovik, K. R. Diller, S. J. Aggarwal and J. K. Aggarwal, Digital reconstruction of three-dimensional serially sectioned optical images, IEEE Trans. Acoust. Speech Signal Process. ASSP-36, 1067-1075 (1988).
12. F. Macias-Garza, A. C. Bovik and K. R. Diller, Missing cone of frequencies and low-pass distortion in 3-D microscopic images, Opt. Engng 27, 461-465 (1988).
13. D. Marr and E. Hildreth, Theory of edge detection, Proc. R. Soc. Lond. B 207, 187-217 (1980).
14. R. M. Haralick, S. R. Sternberg and Z. Xinhua, Image analysis using mathematical morphology, IEEE Trans. Pattern Analysis Mach. Intell. PAMI-9, 532-550 (1987).
15. J. Serra, Image Analysis and Mathematical Morphology. Academic Press, New York (1982).
16. C. Lantejoul and J. Serra, M-Filters, Proc. IEEE Int. Conf. Acoust., Speech Signal Process., Paris, pp. 2063-2066 (1982).
17. N. H. Kim, A. B. Wysocki, A. C. Bovik and K. R. Diller, A microcomputer-based vision system for area measurement, Comput. Biol. Med. 17, 173-183 (1987).
18. E. Artzy, G. Frieder and G. T. Herman, The theory, design, implementation and evaluation of a three-dimensional surface detection algorithm, Comput. Graphics Image Process. 15, 1-24 (1981).
19. L. Chen, G. T. Herman, R. A. Reynolds and J. K. Udupa, Surface shading in the cuberille environment, IEEE Comput. Graphics Applic. 33-43 (1985).
20. B. C. Vemuri, A. Mitiche and J. K. Aggarwal, Curvature-based representation of objects from range data, Image Vision Comput. 4, 107-114 (1986).

About the Author--FERNANDO MACIAS-GARZA was born in Monterrey, Nuevo León, México on 16 January 1961. He received the B.S. degree in Electrical Engineering and Computer Science in 1983 from Universidad de las Américas, Cholula, Puebla, México. He received the M.S. and Ph.D. degrees in Electrical and Computer Engineering in 1985 and 1987, respectively, from the University of Texas at Austin. He is currently with Compaq Computers, Houston, Texas. His research interests include computers, image processing, computer vision, and computer graphics. Mr Macias-Garza is a member of the IEEE Computer Society.

About the Author--KENNETH R. DILLER is a Professor of Biomedical Engineering and Mechanical

Engineering at the University of Texas at Austin. He is an international authority on the application of the principles of heat and mass transfer to the solution of many different types of biomedical problems. The primary areas of focus in his research include the frozen banking of human tissues for transplantation, analysis of the microvascular basis of burn injury and how it may be exploited for the optimization of therapy, development of thermodynamic models of dynamic processes at the microscopic and macroscopic scales in biological systems, and computer vision techniques for quantitative measurement and interpretation of microscopic images. He has published more than 100 refereed articles and book chapters on these topics. Professor Diller earned a Bachelor of Mechanical Engineering degree cum laude from Ohio State University in 1966, followed by a Master of Science in the same field in 1967. Subsequently, he was awarded the Doctor of Science degree, also in mechanical engineering, from the Massachusetts Institute of Technology in 1972. After spending an additional year at MIT as an NIH postdoctoral fellow, he joined the faculty of the College of Engineering at the University of Texas as an Assistant Professor and has progressively been promoted to his present position. He was awarded an Alexander yon Humboldt fellowship from the Federal Republic of Germany in 1983-1984 to conduct research on the frozen preservation of pancreas islets at the Fraunhofer Institute at the University of Stuttgart and serves on the boards of editors of the journals Cryobiology and Cryo-Letters. About the Author--ALAN CONRADBOVlKwas born in Kirkwood, MO on 25 June 1958. He received the

B.S. degree in Computer Engineering in 1980, and the M.S. and Ph.D. degrees in Electrical and Computer Engineering in 1982 and 1984, respectively, all from the University of Illinois, Urbana-Champaign. He is currently Associate Professor in the Departments of Electrical and Computer Engineering and Biomedical Engineering at the University of Texas at Austin, where he is the Director of the Laboratory for Vision Systems in the Computer and Vision Research Center. His current research interests include computer vision, computational aspects of biological visual perception, biomedical image analysis, and the application of nonlinear statistical methods to problems in digital signal and image processing.

Dr Bovik currently holds the Robert & Francis Stark Centennial Endowed Fellowship in Engineering. He is a member of the IEEE, the Pattern Recognition Society, the International Neural Network Society, and the Honor Society of Phi Kappa Phi. He is an Honorable Mention winner of the Annual Pattern Recognition Society Award, and is an Associate Editor of the international journal Pattern Recognition.

About the Author--SHANTI J. AGGARWAL received the M.Sc. degree in organic chemistry in 1954 from Banaras Hindu University, India, and the M.S. and Ph.D. degrees in microbiology from the University of Michigan, Ann Arbor, in 1958 and 1962, respectively. She was a Fellow of the National Research Council of Canada, and subsequently did postdoctoral work at the University of Illinois, Urbana, where she studied structure-function relationships and the comparative evolution of vertebrate immunoglobulins. She joined the University of Texas at Austin in 1965 and taught immunology and biology while actively doing research in immunology. She joined the Department of Mechanical Engineering and the Biomedical Engineering Program in 1982, and is currently active in the application of computers and computer vision in biology. She has published numerous papers in immunology, cryobiology, and microcirculatory physiology. Dr Aggarwal is a member of the AAAS and the Society for Cryobiology.

About the Author--J. K. AGGARWAL received the B.S. in mathematics and physics from the University of Bombay (1956), the B.Eng. from the University of Liverpool, England (1960), and the M.S. and Ph.D. from the University of Illinois, Urbana, in 1961 and 1964, respectively. He joined The University of Texas in 1964 as an assistant professor and has since held positions as associate professor (1968) and professor (1972). Currently, Dr Aggarwal is the John J. McKetta Energy Professor of Electrical and Computer Engineering and Computer Sciences at The University of Texas at Austin. He was also a visiting assistant professor at Brown University, Providence, R.I. (1968), and a visiting associate professor at the University of California, Berkeley, during 1969-1970.

Dr Aggarwal has published numerous technical papers and several books: Notes on Nonlinear Systems (1972), Nonlinear Systems: Stability Analysis (1977), Computer Methods in Image Analysis (1977), Digital Signal Processing (1979), and Deconvolution of Seismic Data (1982). His current research interests are image processing, computer vision, and parallel processing of images. He is an active member of the IEEE, the IEEE Computer Society, the ACM, the AAAI, The International Society for Optical Engineering, the Pattern Recognition Society, and Eta Kappa Nu. He was co-editor of the special issues on Digital Filtering and Image Processing (IEEE-CAS Transactions, March 1975) and on Motion and Time-Varying Imagery (IEEE-PAMI Transactions, November 1980), and editor of the two-volume special issue of Computer Vision, Graphics and Image Processing on Motion (CVGIP, January and February 1983). Currently he is an associate editor of the journals Pattern Recognition, Image and Vision Computing, Computer Vision, Graphics and Image Processing, and IEEE EXPERT. Further, he is a member of the Editorial Board of IEEE Press and the Editor of the IEEE Selected Reprint Series. Dr Aggarwal was the General Chairman of the IEEE Computer Society Conference on Pattern Recognition and Image Processing, Dallas, 1981, and the Program Chairman of the First Conference on Artificial Intelligence Applications, sponsored by the IEEE Computer Society and the AAAI, held in Denver, 1984. In June 1987, he began his term as Chairman of the Pattern Analysis and Machine Intelligence Technical Committee of the IEEE Computer Society.