Model control of image processing: Pupillometry

Computerized Medical Imaging and Graphics, Vol. 17, No. 1, pp. 21-33, 1993. Printed in the U.S.A. 0895-6111/93 $6.00 + .00. Copyright © 1993 Pergamon Press Ltd.
MODEL CONTROL OF IMAGE PROCESSING: PUPILLOMETRY

An H. Nguyen* and Lawrence W. Stark
Telerobotics Unit, University of California at Berkeley, Berkeley, CA 94720

(Received 28 February 1992; revised 27 October 1992)

Abstract-Smart instrumentation often requires extensive computation, special purpose hardware, or both. A pupillometer is described that relies on a general purpose microcomputer with a frame grabber to process infrared video pictures of the eye. A unique feature of the scheme is that a top-down model controls the image processing. The model places regions of interest, ROIs, positioned from knowledge of the anatomy and optics of the eye and from previously analyzed frames. Within these adaptively controlled ROIs, moments yield area and centroid position for fast run-time acquisition; local thresholds use intensity information; and semipyramiding shortens computation for large pupils. Outputs are precise measurements of pupil size and eye position in real time, with adequate accuracy for experimental purposes.

Keywords: Image processing, Top-down vision, Pupillometry, Smart instrumentation

INTRODUCTION

Top-down model control of image processing suggests itself as a technique for developing "smart" bioengineering instrumentation that measures both pupil and eye movements, the latter by tracking the center of the pupil. Quantitative measurement of the pupil is of great interest both in clinical neuro-ophthalmology and in bioengineering studies of neurological control systems (1-5). There is a considerable history of these devices (2, 3, 6-15). Marchant (16) used comparison of the pupil and the Purkinje image for greater reliability against inadvertent head movements. More recently, a digital device with special purpose image processing boards residing in a microcomputer chassis was developed (17). A similar pupil and eye movement microcomputer device was also developed by Professor Hutchinson (18) for use by a locked-in patient as a communication channel (personal communication). At present, a half-dozen biomedical instrumentation companies can supply various configurations of instrumentation for measuring the pupil; several also track eye movements.

The aim of the present paper is to present a top-down, model-control-of-image-processing scheme that permits a stand-alone microcomputer without expensive special purpose hardware to carry out the real-time image processing, albeit at some cost in careful organization of the top-down processing. A description of a similar device for measuring movements of the lens in the anterior chamber of the eye and for autonomous visual control of robots has been previously reported by our group (19-22).

METHODS

Hardware available included an AT-386, a video camera, and a frame grabber. We found a video tape recorder, a laser printer, and a Hewlett-Packard plotter useful in studying our various algorithms. Software tools included a Turbo-C package (Borland) that enabled development of the algorithms and the device driver for the frame grabber. Functions of the algorithms are described in the Results section. These included autothresholding, dynamic histogram display for initialization, moments, rectangular and Bresenham circular windows, semipyramiding with optimization of the number of interleave lines, and search and tracking for blink handling (an example of bug detection).

IMAGE PROCESSING RESULTS

The task and the tool

The task facing our scheme is examination of the actual video image of the pupil taken through the video camera. This image has a number of features to be acquired and many to be overlooked, if our measuring instrument is to be accurate, fast, and robust. The tool used to accomplish this task is top-down model control of image processing, MCIP (Fig. 1, upper). A pixel intensity plot, PIP (Fig. 1, lower), illustrates the problem very well (it is seen in perspective, which accounts for the nonaligned borders of the grid floor).

* Correspondence should be addressed to An Nguyen at his current address: Computer Engineering Department, San Jose State University, San Jose, CA 95192-4085.


Fig. 1. Pupil model controlling image processing. Pupil model consists of inner and outer iris borders and a small circle for PI, the first Purkinje image or corneal reflex (upper); actual video images of the pupil showing these model features and other aspects of the image, such as eye lashes (middle); PIPs, pixel intensity plots, showing flattened areas of the pupil, peaks for the PI and also for the artifactual lower lid tear meniscus, and rough surfaces of irregular iris intensities and other aspects of the image (lower). Large pupil (right); small pupil (left).


The PIP documents and quantifies the image processing difficulties that have to be solved by the image processing scheme. It should be compared to the video images of a large and a small pupil (Fig. 1, middle). Note the confusing variety of intensity levels; the noise or fluctuation of intensity represents a good deal of unwanted signal, as would be seen in any real picture. However, enough information is present as correlated shifts in intensity levels to enable human viewers and our image processing algorithms to distinguish important features. Indeed, an analysis of human vision as a top-down process acted as a stimulus for our utilization of MCIP in robotics and bioengineering instrumentation (23, 24). The inner border of the iris, the pupil-iris border, is the main feature to capture in order to measure pupil area. Horizontal and vertical positions, which may be derived from the center coordinates of this pupil-iris border, are also necessary for tracking eye movements.

Fig. 2. Automatic thresholding with the pixel intensity histogram, PIH. Intensity values with brightest to the right (abscissa); number of pixels having that value of intensity (ordinate). A smoothing routine operating as an 8- to 32-point dynamic running averager prepares the ground for automatic thresholding; the running average can be sharper (right vertical pair) or smoother (left vertical pair) depending upon the width of the smoothing window. Larger pupil (upper); smaller pupil (lower). Automatic thresholding (vertical line with top circular cursor) sets the contrast level above which the pupil is black and below which the iris is white.

Early image processing: thresholding

A video image may be partially characterized by its PIH, pixel intensity histogram (Fig. 2). The three peaks are: to the extreme right, the bright reflection of the Purkinje image, PI; to the left, fairly clustered peaks of dark, low levels of reflection from the pupil; and in the center, a broad, rough collection of reflections from the iris. This latter unwanted signal is the noise that the image processing is designed to discriminate against. It is clear that there was sufficient separation of the different levels of grey scale to place a binary threshold in a robust and convenient manner. This was done automatically by means of an algorithm that first fitted a smoothing curve to the histogram with an 8- to 32-point averaging window. Variation of the running average window length was not often used but was available. It is also possible to study the effects of actual pupillary changes and of artifactual changes with alterations of illumination intensity on these image processing procedures (Fig. 3). Different modes of operation have been explored. Automatic thresholding could be used and accepted without worry for excellent images with high


Fig. 3. Regions-of-interest, ROIs. Rectangular or square regions of interest are drawn encompassing model features (upper); these ROIs are superimposed on the video image so as to narrow the portion of the image to be processed (middle); finally, thresholds are shown superimposed on the PIPs (lower). Large pupil (left); small pupil (right).

contrast. Alternatively, automatic thresholding was used under supervisory control, with the experimenter's approval required. Finally, it was sometimes necessary for the human operator to choose the threshold because of factors too difficult for the automatic thresholder to handle. Thresholding can be carried out as an initializing procedure, or it may be rerun at intervals throughout the experiment.
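The automatic thresholding step just described (running-average smoothing of the histogram, then a threshold in the valley between the pupil and iris peaks) can be sketched as follows. This is an illustrative reconstruction in Python/NumPy, not the original Turbo-C code; the peak-search ranges are assumptions:

```python
import numpy as np

def auto_threshold(pixels, window=16):
    """Smooth the 256-bin pixel intensity histogram with a running
    average (the 8- to 32-point averager), then place the binary
    threshold at the valley between the dark pupil peak and the
    mid-grey iris peak."""
    hist, _ = np.histogram(pixels, bins=256, range=(0, 256))
    kernel = np.ones(window) / window          # running averager
    smooth = np.convolve(hist, kernel, mode="same")
    dark_peak = int(np.argmax(smooth[:128]))           # pupil cluster
    iris_peak = 128 + int(np.argmax(smooth[128:224]))  # iris cluster
    # deepest point of the valley separating the two peaks
    return dark_peak + int(np.argmin(smooth[dark_peak:iris_peak]))
```

Pixels darker than the returned level are then classed as pupil; the 128 and 224 split points are illustrative only, standing in for the supervisory adjustments described above.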

Regions-of-interest, ROIs

A basic part of our top-down approach is to construct regions-of-interest, ROIs, that narrow the number of pixels to be dealt with in any computation; in most cases this also severely reduces the range of pixel intensities and the number of image features that they realize. By carrying out thresholding only within an ROI, it becomes simple to set the threshold level. It is of interest to examine carefully the thresholds superimposed upon the PIPs (Fig. 3, lower). The threshold for the pupil is set at a level below which all pixels are considered to be part of the pupil portion of the image. The threshold itself can be seen to be below the variety of intensities characterizing iris reflectivity. Thus, the entire iris is excluded from the pupil measurement. Note that the high intensity tear meniscus, just above the lower eyelid, is excluded from both of the two windows shown, because it is in a clearly different location: a critical and important window function.

Operations on the Purkinje image, PI

A different threshold is set for the PI. It can be seen to be much higher than that for the pupil, and also much higher than the iris and most other areas of the illuminated front of the eye. A set of algorithms is first used to operate on the PI, Purkinje image (Fig. 4). The region of interest is small, but may actually overlap onto borders of other ROIs, as indicated in the small pupil example (Fig. 4, upper right). The thresholded image inside the ROI is unambiguously present. It should now be apparent how the use of different thresholds for different ROIs is a powerful approach in top-down image processing.

Computation of moments

The centroid operator has been chosen by us as the fastest and most robust of the usual 2D image processing algorithms (Irwin Sobel, personal communication; also (21, 25)). Indeed, VLSI chips are now available to compute this family of functions (26, 27).
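As a sketch of these moment computations (a Python/NumPy reconstruction under our own naming, not the authors' code): the zeroth-order moment of a thresholded ROI is simply the pixel count, i.e., the area, and the first moments divided by the area give the centroid.

```python
import numpy as np

def roi_moments(binary_roi):
    """Zeroth- and first-order moments of a binary (thresholded) ROI.
    Returns (area_in_pixels, (cx, cy)), or (0, None) for an empty ROI."""
    ys, xs = np.nonzero(binary_roi)   # coordinates of foreground pixels
    m00 = xs.size                     # zeroth moment: area
    if m00 == 0:
        return 0, None
    cx = xs.sum() / m00               # first moments / area = centroid
    cy = ys.sum() / m00
    return m00, (cx, cy)
```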
The zeroth-order moments compute areas; these are useful quantities for checking the adequacy of adjustment of the focus of the infrared illuminating system. Magnification is fixed by both the distance of the subject and the lens of the video camera. A precalibration check sets the fixed conversion from pixels to size in millimeters. Thus, the PI, when well focussed on the minifying mirror of the cornea, will have a small fixed area; any significant divergence of area likely indicates a large enough shift of the subject's head to cause defocussing.

First moments generate x, y positions of the eyeball. Centroids (Fig. 5, middle) provide the exact location of the PI for the next operation, which is to set the pixel intensity values of the PI to black (Fig. 5, lower). They will then be counted in the pupil by the pixel-counting zeroth-order-moment operation for the pupil. The PI is, thus, effectively removed. The PI centroid alternatively enables assignment of the PI area to the iris if it lies on that portion of the image; or, finally, assignment of a portion to the pupil and a portion to the iris if the PI overlays the pupil-iris border. For example, it can be seen that both the large pupil (Fig. 5, left) and the small pupil (Fig. 5, right) enclose the high contrast, thresholded PI. Thus, in this case, the PI area was included in the calculation of pupil area.

Clean thresholding of the pupil is necessary for further processing. Separation of the pupil from the surrounding iris within the ROI is obtained in a robust fashion, clearly demonstrated in the PIPs (Fig. 5, middle) and also in the direct (lower) and inverted (upper) video images. Next, the area and x, y position of the pupil are determined (lower). Finally, the model is superimposed onto the video image to demonstrate the success of the image processing algorithms. The basic plan of our top-down approach is to carry out thresholding only within an ROI, and to use the level of threshold appropriate for the particular ROI. The clean thresholding that this use of the ROIs makes possible removes many opportunities for noise to enter into the calculation and, thus, makes the procedure robust.

Initialization

A basic consideration is the separation of our program into initialization or set-up processes and run-time procedures.
Clearly, in initialization we can afford a large number of reprocessing steps of the images in order to set calibration, correct illumination for automatic thresholding, and other such processes. For example, 100 repetitions for image thresholding adjustment might take only a few seconds, easily afforded time in initialization.

What factors need to be initially adjusted? The infrared illumination needs first to be oriented so that uniform lighting of the iris-pupil border is obtained; next, the PI is shifted so that it lies within the pupil border or, alternatively, at a locus well onto the iris; both of these maneuvers minimize the calculation for dividing the PI area into pupillary and iris portions (see Fig. 5 and its discussion above). Adaptive algorithmic adjustment of the ROI size can be an important aspect of the initialization procedure (see Fig. 4 above for an example of this). Adjustment of threshold and of lighting may also be guided by the information in the PIH (see Fig. 2 above).

Fig. 4. Operations on the Purkinje image, PI. ROIs surrounding PIs superimposed on the video image (upper); centroid calculation indicated by a black plus (middle); finally, the PI is removed (lower), so as not to interfere with the subsequent pupil area calculation.

Pyramiding to attain rapid, real-time computation

When attempting to use our top-down image processing scheme for real-time applications, a number of problems come to the fore. For run time, a different,


Fig. 5. Thresholding and measuring pupillary moments. Direct (lower) and inverted (upper) video images of the eye with the PI removed; x, y components of first moments (white cross) and model also superimposed (lower); note adaptation of the ROI to pupil size (left, large pupil and large ROI; right, small pupil and small ROI); PIPs document the elegant threshold achieved (middle).


highly constraining set of boundary conditions applies. In particular, there is a severe requirement for speed of calculation. Here, we limited our considerations to a software approach, although a parallel effort is ongoing toward hardware implementation (22). For example, it was found initially that the computation time for a small pupil was much less than that for a large pupil. We thus resorted to a pyramiding scheme (Fig. 6); by pyramiding is meant using, say, only 1/2, 1/4, 1/8, etc. of the pixels of the image (28). Semipyramiding, used by us, exploits only one dimension of the video image, e.g., the vertical one, and reduces the number of horizontal lines used in a scan to 1/(N + 1), where N is the number of lines skipped. The semipyramiding image compression algorithm enabled us to count about the same number of area pixels for large pupils as for small pupils (Fig. 7). Benchmark experiments and similar estimations were used continually as we explored our software program and as a guide to a parallel effort in hardware design. We determined by experiment (Fig. 7) that signal-to-noise ratios were adequate to permit this truncation. Coefficients of variation (standard deviations/means), of course, increased linearly with decrease in the number of pixels counted. We generally set N, the number of sweeps omitted, equal to four, allowing the coefficient of variation to be 0.07%. Computation time should vary as the reciprocal of the square root of N; however, hardware limitations within the frame grabber only permitted computation time to be reduced from about 180 ms to about 70 ms. The tiny coefficient of variation achieved could be demonstrated in another fashion (Fig. 8).
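The semipyramiding line-skipping, together with the renormalization used later in Fig. 8, can be sketched as follows (an illustrative Python/NumPy reconstruction, not the original implementation):

```python
import numpy as np

def semipyramid_area(binary_roi, n_skip=4):
    """Estimate pupil area counting only every (N+1)-th horizontal
    scan line, then renormalize by the line sampling factor."""
    sampled = binary_roi[:: n_skip + 1, :]    # keep 1/(N+1) of the lines
    return int(sampled.sum()) * (n_skip + 1)  # renormalized pixel count
```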
The time function of the change of pupil area in response to a 10 s on-off sequencing of light stimuli was reanalyzed and replotted for each of a series of sweep sampling rates, without any degradation of the DC range of response or of the AC fidelity to the dynamics of the pupil system (the pupil is an admittedly slow system, with an approximate time constant of 0.15 s (4)).

Other considerations in attaining programming efficiencies

It is easier to set up rectangular windows for ROIs than circular windows, even though the latter might fit the image features, in our particular instance, more naturally. It is interesting to note that the window calculation for circular windows, based on the Bresenham circular algorithm (29), is somewhat faster (in the ratio of 7 to 8) than that for rectangular windows. This gain in speed is, of course, due to the circular windows being smaller than rectangular ones and, therefore, having fewer pixels


to process. However, the illustrations in the paper were restricted to rectangular ROIs for simplicity. Integer counting provided an additional benefit (30). Our application here requires only the low order moments; in other applications, such as robotics, the orientation angles derived from the moments require mixed operations (22). Integer counting was easy to apply because we were already operating on a binary image. With grey levels, of course, floating point would be advisable; still, the zeroth-order moment would not directly give us the value of area, which we need (22). Finally, the address values of the rectangular window are abbreviated to the lowest values by offsetting the ROI window to the upper left corner; this provides a further decrease in computation time. The benchmark for readdressing indicated that the time saved was minimal; yet this is an excellent design feature for our future hardware implementation. Similarly, logical operations such as shifting right and left could have been used instead of divisions and multiplications by powers of 2, but this level of optimization was not usually employed.

RESULTS: USE IN EXPERIMENTS

Feedback to experimenter

The experimenter supervises the automatic measurements during experiments. The utility of a semiautomatic initialization scheme and of a robust automatic real-time run-time procedure is enhanced by providing the supervisory controller, the human experimenter, with continual feedback that all is well (13). If problems arise, the supervisor can be updated with enough information that he or she may take corrective action. Of great value is the PIP with the automatic threshold marker indicating that there is a sufficient valley between the dark pupil and the lighter iris peaks, so that it is clear that a robust threshold is in place (see Fig. 2 above). During run time, a display of the thresholded pupil, seen as a bright disc (Fig. 5), is clear evidence that the model has acquired the pupil-iris boundary. Similarly, continual viewing of the superimposed x, y centroid cross (Figs. 4 and 5) was very helpful (12). Another helpful feature is a search algorithm that looks for a black hole at the location of the last successful measurement; this enables the system either to restart after a blink or to reinitialize with the eye in primary position.

Experimental responses of pupil and reading eye movements

The final test of an instrumentation system is its use in measuring the desired trajectory. A sequence of


Fig. 6. Semipyramiding scheme for image compression. Original video image (upper, left); display of every line sweep (upper, right). Progressively reduced number of sweeps, ranging from 1:2 and 1:4 (middle) to 1:5 and 1:10 (lower). Note the Purkinje image being counted as within the pupil.

light on-off stimuli were presented to a subject. The resulting responses demonstrate the elegance of this top-down, MCIP, model-controlled image processing system (Fig. 9, upper). Recall that all the image processing is being carried out on a small microcomputer containing a simple frame grabber. The pupil exhibits dynamical behavior typical for a normal subject, with a rather smooth-appearing response curve and little instrumental noise.

The image processing system was not at first thought to be satisfactory for eye movement recording because of the higher bandwidth of the saccadic trajectory. However, because the x, y components of the centroid are calculated in our scheme, we were led to explore such possibilities. Also, because the thresholding within an ROI was so robust to noise, we could reduce the line sampling rate considerably, by a factor of four, and thus reduce the computational time by a factor of three (Fig. 7). To document reading eye movement patterns, for example, when following the training of dyslexic patients, it is also not necessary to faithfully follow the fast dynamics of saccades (31, 32). Rather, one wishes to study the regularity of the reading process, alternating between the rapid, time-optimal 40 ms jump of a forward saccade and the linguistic fixation pause of about 250 ms. Less frequently, return sweep saccades occur to begin a new line, or regressive backward saccades return to already-read words for verification (Fig. 9, lower).

Fig. 7. Benchmarks for effectiveness of the semipyramiding scheme. Sampling of horizontal line sweeps ranged from 1:1 to 1:20, or 1/(N + 1), with N indicated on the abscissa; note, N is the number of lines skipped. Left ordinate and solid lines display computation time per frame. Solid line with small dots is the theoretical computation time cost and represents the function 1/sqrt(N); large open circles are experimental points showing the additional limitation due to frame-grabber hardware timing cost. Right ordinate and dashed line represent the coefficient of variation increasing with decreasing binary pixel counts per area estimation; standard error bars are necessary because determinations were made at a number of actual pupil areas. The image processing scheme was set at N = 4, where the coefficient of variation equals 0.07%.

Fig. 8. Static and dynamic fidelity with pyramiding. Sequence of 10-s on-off light stimuli causing the pupil to change area by about 10 mm2, or 3000 pixels. Note rapid constriction, decrease in area, with a time constant of about 0.15 s. With increasingly sparse sampling, N ranging successively from 1 to 2, 3, 4, 5, and 10, the time functions show successively fewer pixel changes (upper). With renormalization, multiplying by the inverse of the line sampling rate, the curves again superimpose (lower), indicating negligible loss of precision. The time function has only 200 bins into which to put area data. At N = 4, about 10 frames/s are counted, for a total duration of about 20 s; hardware frame-grabber limitations slow us here. For N = 1, the complete line-pixel count reduces the speed so that only about 2.5 frames/s are counted and a longer time function is recorded into the 200 bins; this eventuates as a longer plot on the graph.

DISCUSSION

Top-down model control of image processing

The reader will now understand how the use of top-down MCIP has resulted in fast and robust image processing.

Fig. 9. Experimental responses of the pupil and reading eye movements. Pupil responses to light pulses demonstrating clear recordings of pupillary constriction and redilatation (upper). These single, unaveraged traces are quite free of instrumental noise, yet show the full bandwidth of the pupil system. Reading eye movements showing vector approximations to the very fast saccadic trajectories between fixational pauses (lower). These fixations, lasting about 250 ms, are marked by noise causing the record to graph as darker 1/2 degree regions. Note that a regression saccade and a corrective vertical saccade may be seen in the bottom line.

First, the ROIs, regions of interest, directly reflect knowledge of the image. In this way thresholding can be a local problem, and a simplified one, because the small pixel intensity range allows for effective separation of the two intensity humps by automatic thresholding algorithms (Fig. 1). Without the local region removing pixels of many different intensities originating from all parts of the video image, it could be difficult to threshold the difference between the signal intensity and the background noise intensity. This is especially important with less than optimal lighting. Also note in the 2D pixel intensity histogram (Fig. 2) that local models for smoothing and Gaussian fits are useful. Second, because the PI is known to be very bright, it can be localized first and then removed before processing the iris-pupil boundary. This removal of the PI is also used when the PI is not centered on the pupil and must be accounted for as an edge obstruction. Third, the centroids of the PI and of the pupil are later used to calculate horizontal and vertical rotations of the eye, independent of head movement. In this manner the tool is matched to the task.

Initialization vs. run time operation

Two modes, with different strategies for each, are employed in our programs. Initialization procedures are not limited by speed or cost of computation. Several hundred frames may be averaged without taking more than a few seconds. A calibration procedure, carried out at initialization, and at intervals throughout the experimental runs when necessary, handles any consistent nonlinearities. Similarly, any movements of the head can be monitored and corrected for. Localization of the pupil and the PI, and especially shifting of the PI with respect to the pupil, may be carefully handled. Initializing adaptive processes are used for threshold settings guided by the dynamic PIH. The PIH is, as well, useful in adjustment of the lighting. Adjustment of ROI size and location was also found to be important. Again, there is plenty of computational time for these supervisory or adaptive processes, so that robust and accurate settings can be obtained. Run-time procedures, on the other hand, require optimization to achieve the speed necessary for fast and accurate tracking of the dynamically changing images. Pyramiding was most helpful to attain rapid, real-time computation (Fig. 7). The ROI windows were also adjusted to as small a size as compatible with robust performance. Circular and rectangular windows were compared. Several other considerations in attaining programming efficiencies involved integer counting (30) and readdressing. In this manner our real-time programs were able to achieve image processing at somewhat less than 30 frames per second on a small host AT microcomputer.
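The circular-window calculation compared above rests on the midpoint (Bresenham) circle algorithm (29). A sketch of how it yields, for each scan line of a circular ROI, the half-width of pixels to visit (our reconstruction, not the authors' code):

```python
def circle_row_extents(radius):
    """Midpoint (Bresenham) circle algorithm: for scan lines y = 0..radius,
    return the half-width x of the circular window on that line, so that
    only pixels inside the circle need be processed."""
    extent = [0] * (radius + 1)
    x, y, d = radius, 0, 1 - radius
    while x >= y:
        # each computed octant point fixes the extent of two scan lines
        extent[y] = max(extent[y], x)
        extent[x] = max(extent[x], y)
        y += 1
        if d < 0:
            d += 2 * y + 1
        else:
            x -= 1
            d += 2 * (y - x) + 1
    return extent
```

For radius 3 this gives extents [3, 3, 2, 1]: scan line y visits only pixels x in [-extent[|y|], extent[|y|]], which is why the circular window touches fewer pixels than the enclosing rectangle.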

Bottom-up image processing

The vast amount of development work that has gone into bottom-up image processing has revolutionized image processing in medicine and other fields. These algorithms are available for use in our software and we fully exploit them. Thus, the top-down approach is not antagonistic to bottom-up methods, but rather places them in an advantageous position to provide for rapid tracking of our image components. Rapid computation of moments of the centroids has not only been developed with efficient software algorithms, but VLSI chips exist to carry out the calculations (26, 33, 34). A more complex set of calculations can determine the ellipticity of the shape being studied and also the angle of rotation of the major axis. This aspect of higher moment algorithms is not utilized in the pupillary instrumentation, but has proven to be of great value in our use of top-down image processing for tracking and controlling robots in a telerobotic control scheme (22).

Run-time feedback to experimenter

The experimenter acts as a supervisory controller during an experiment (35). Several sources of information are continually available. The video image indicates that the front of the eye is in view and in focus. A contrast-amplified picture is also shown that documents the effectiveness of the image capture and automatic thresholding procedures (10). Finally, the model of the iris-pupil border and of the Purkinje image is superimposed on the video image. This proves that the program is tracking eye movements and pupil size and, further, that the model fits the image. Experimental responses of the pupil and of reading eye movements (Fig. 9) demonstrate the efficacy of the top-down image processing scheme.

SUMMARY

Perhaps, as a summary, a brief discussion of bioengineering instrumentation is in order. Ordinary instruments carry out energy conversion (e.g., from light to electricity) and then proceed to signal processing; bioengineering devices have more complex functions as well. Here, energy conversion is performed by a video camera, and the picture produced is the signal. Signal processing begins with complex preprocessing of the image, as detailed in our paper.
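The per-frame run-time preprocessing described in this paper can be restated compactly as a loop; the following is a hypothetical sketch, in which the thresholds, the ROI convention, and the adaptation rule are all illustrative assumptions rather than the paper's actual values:

```python
import numpy as np

def process_frame(frame, roi, pupil_thresh=60, pi_thresh=220):
    """One run-time step: threshold inside the ROI, fold the bright PI
    back into the pupil, measure pupil area and centroid, and re-center
    and re-size the ROI for the next frame. ROI is (y0, y1, x0, x1)."""
    y0, y1, x0, x1 = roi
    window = frame[y0:y1, x0:x1]
    pi = window >= pi_thresh           # bright Purkinje image pixels
    pupil = window <= pupil_thresh     # dark pupil pixels
    pupil |= pi                        # a PI lying on the pupil counts as pupil
    ys, xs = np.nonzero(pupil)
    if xs.size == 0:                   # blink: no dark hole found in the ROI
        return None, roi
    cx, cy = xs.mean() + x0, ys.mean() + y0    # centroid in frame coords
    r = int(1.5 * np.sqrt(xs.size / np.pi))    # adapt ROI to pupil size
    new_roi = (int(cy) - r, int(cy) + r, int(cx) - r, int(cx) + r)
    return (int(xs.size), (cx, cy)), new_roi
```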
The exploitation of top-down modeling and of initialization and initial adaptive settings is essential in order that the run-time algorithms function efficiently to accomplish their various tasks. Calibration and linearization require considerable ingenuity. Two humans are involved: one is the subject, whose comfort and cooperation enable nonstationarities to be overcome; the other is the experimenter. We have emphasized feedback to the experimenter as necessary to the supervisory role, including continual observation of the state of the measurement algorithms, of the state of the subject, and of the state of the experiment. Finally, we have automatic (and well-designed) file management and data analysis and presentation.

A final point that is always considered is the possibility of substituting hardware for software (33, 34). Can we use our system as the basis for specification of an image-processing computer architecture? Or is it better to exploit a general computer architecture, so that future modifications and choices of image processing algorithms can be made based only upon elegance and not upon hardware constraints?

Acknowledgments-The authors acknowledge partial equipment support from Dean Enoch of the School of Optometry; also, the authors acknowledge their colleagues Professor Fuchuan Sun, Mr. Gregory Tharp, and Ms. Amy P. Nguyen for discussions and for acting as subjects.

REFERENCES

1. Talbot, S.A. Pupillography and pupillary transient. Ph.D. Dissertation, Dept. of Physics, Harvard University, Cambridge, MA; January 1938.
2. Lowenstein, O.L. Pupillary reflex shapes and topical clinical diagnosis. Neurology 5:631-644; 1955.
3. Stark, L.W.; Sherman, P.M. A servoanalytic study of consensual pupil reflex to light. J. Neurophysiol. 20:17-26; 1957.
4. Stark, L. Stability, oscillations and noise in the human pupil servomechanism. Proc. Instit. Radio Eng. IRE-47:1925-1939; 1959.
5. Stark, L. The pupil as a paradigm for neurological control systems. IEEE Trans. Biomed. Eng. BME-31:919-924; 1985.
6. Asano, S.; Finnila, C.; Sever, G.; Staten, I.; Stark, L.; Willis, P. Pupillometry. Q. Prog. Rep. Res. Lab. Elec. MIT 66:404-412; 1962.
7. King, G.W. Recording pupil changes for clinical diagnosis. Electronics 32:67-69; 1959.
8. King, G.W. An improved electronic pupillograph. Proc. Nat. Elect. Conf. 16:672-676; 1960.
9. Piltz, J.E. A new apparatus using photography for pupil measurement. Neurol. Zentr. 23:801-822; 1904.
10. O'Neill, W.D.; Stark, L.; Troelstra, A. Optical status testing means and method for accommodation. Patent no. 3,525,565; Aug. 25, 1970.
11. Stark, L.W. Pupillometer. Patent no. 3,036,568; May 29, 1962.
12. Stark, L.W.; Troelstra, A. Dynamic pupillometers using television camera system. Patent no. 3,533,683; Oct. 13, 1970.
13. Stark, L.W.; Troelstra, A. Display of measurement adequacy marker system for pupillometers. Patent no. 3,533,684; Oct. 13, 1970.
14. VanderTweel, L.H. Reaction of the pupil of man to change of light. Ph.D. Thesis, University of Amsterdam; 1956.
15. Young, L.R.; Sheena, D. Survey of eye movement recording techniques. Behav. Res. Methods Instrum. 7(5):397-429; 1975.
16. Marchant, J. Personal communication. Boston, MA: Honeywell Corporation; 1968.
17. Meyers, G.; Sherman, K.; Stark, L. Eye monitor: Microcomputer-based instrument uses an internal model to track the eye. Computer 24:14-21; 1991.
18. Frey, L.A.; White, K.P.; Hutchinson, T.E. Human computer interaction using eye-gaze input. IEEE Trans. Sys. Man Cyber. 19:1527-1534; Nov.-Dec. 1989.
19. Stark, L.; Mills, B.; Nguyen, A.; Ngo, H. Instrumentation and robotic image processing using top-down model control. Robot. Manufact. New York: ASME; 1988:675-682.
20. Nguyen, A.H.; Ngo, H.X.; Stark, L.W. Robotic model control of image processing. Proc. IEEE Int. Conf. Syst., Man Cybernetics 12-15; 1988.
21. Nguyen, A.H.; Stark, L.W. 3D model control of image processing. Proc. NASA Conf. Space Telerobot. JPL Publ. 89-7, 3:213-222; 1989.
22. Nguyen, A.H. Top-down model control of image processing for telerobotics and biomedical instrumentation. Ph.D. Thesis, Univ. of California at Berkeley; 1993.
23. Noton, D.; Stark, L.W. Eye movements and visual perception. Sci. Am. 224:34-43; 1971.
24. Stark, L.; Ellis, S. Scanpaths revisited: Cognitive models direct active looking. In: Fisher, Monty, and Senders, eds. Eye Movem. Cogn. Visual Percept. Hillsdale, NJ: Erlbaum Press; 1981:193-226.
25. Hu, M.K. Visual pattern recognition by moment invariants. IRE Trans. Inform. Theory IT-8:179-187; Feb. 1962.
26. Hatamian, M. A real-time two-dimensional moment generating algorithm and its single chip implementation. IEEE Trans. Acous. Speech Signal Proc. ASSP-34(3):546-553; June 1986.
27. Anderson, R.L. Real-time gray video processing using a moment-generating chip. IEEE J. Robot. Automat. RA-1(2):79-85; June 1985.
28. Pavlides, T.; Horowitz, N. Pyramiding for image processing. Personal communication; 1979.
29. Bresenham, J.E. A linear algorithm for incremental digital display of circular arcs. Comm. ACM 20(2):100-106; Feb. 1977.
30. Machado, J.A.T.; deCarvalho, J.M.L.; Costa, A.M.C.; Matos, J.S. A new computational scheme for robot manipulators. In: Bejczy, A.K.; Rovetta, A., eds. Robots with redundancy: Design, sensing and control. Berlin: Springer-Verlag; 1991.
31. Bahill, A.T.; Stark, L. Trajectories of saccadic eye movements. Sci. Am. 240:84-93; 1979.
32. Sun, F.; Morita, M.; Stark, L.W. Comparative patterns of reading eye movements in Chinese and English. Percept. Psychophys. 37:502-506; May 1985.
33. Ruetz, P.; Brodersen, R. An image recognition system using algorithmically dedicated integrated circuits. Machine Vision Appli. 1:3-22; 1988.
34. Offen, R.J. VLSI image processing. New York: McGraw-Hill; 1985.
35. Kim, W.S.; Stark, L. Cooperative control of visual displays for telemanipulation. Proc. Inter. Conf. Robot. Automat. 1327-1332; 1989.

About the Author: AN H. NGUYEN is currently an associate professor in the Computer Engineering Department at San Jose State University, where he teaches robotics, graphics, image processing, and computer design. His interests are in top-down image processing for robotic vision and biomedical instrumentation, hardware architectures for real-time image processing, and virtual instruments. He is also interested in biological systems and their applications in robotic control. He received his Ph.D. in Electrical Engineering and Computer Science at the University of California at Berkeley in 1993.

About the Author: LAWRENCE W. STARK has been a professor at the University of California at Berkeley since 1968, where he divides his teaching efforts among the EECS and ME departments in engineering and the Physiological Optics and Neurology units in biology and medicine. His research interests are centered in bioengineering, with emphasis on human brain control of movement and vision, and on symbiotic interactions of this knowledge with the rapidly developing fields of robotic vision and control. He pioneered in the application of control and information theory for real-time acquisition of data, for self-organizing pattern recognition, and for physiological modeling. Stark has published several books and numerous research papers during a career that also included faculty positions at Yale (1954-1960), MIT (1960-1965), and Illinois (1965-1968). His educational degrees are A.B. (Columbia, 1945), M.D. (Union-Albany, 1948), and Sc.D. h.c. (SUNY, 1988). His many former graduate students and postdoctoral fellows form a world-wide school of bioengineering and biocybernetics.