Brain Research Bulletin, Vol. 11, pp. 755-760, 1983. © Ankho International Inc. Printed in the U.S.A.

LABORATORY INSTRUMENTATION AND COMPUTING
Discrete Sampling of Continuous Biological Signals: Analog-to-Digital Conversion¹

L. CAULLER, W. MAYHEW AND T. J. TEYLER²

NeuroScientific Laboratories, Neurobiology Program, Northeastern Ohio Universities College of Medicine, Rootstown, OH 44272

Received 24 October 1983

CAULLER, L., W. MAYHEW AND T. J. TEYLER. Discrete sampling of continuous biological signals: Analog-to-digital conversion. BRAIN RES BULL 11(6) 755-760, 1983. Computer processing of many electrophysiological events requires analog-to-digital conversion (ADC) of the time-varying signal. Inadequate ADC sampling rates affect the accuracy of amplitude and latency measurements and introduce rate-limited errors. A second class of errors, termed limited-precision errors, results from the limitation imposed by the word size (in bits) of the ADC. Total error is the sum of rate-limited error and limited-precision error, but can be controlled and specified as described here. Given the critical nature of ADC parameters, a standard is proposed for describing ADC performance.

Analog-to-digital conversion    Computer    Biological signals

To be of use, any computer system must employ a reliable means of obtaining raw or pre-processed data and placing it in an appropriate digital representation. For many problems in electrophysiology, this means converting an analog signal into a time-series of discrete values (for general considerations of microcomputer applications in the laboratory see [3]). Neurophysiological events are typical of biological events that can be defined as time-varying voltages, differing in the bandwidth, dynamic range and duration of the signal to be analyzed. Table 1 summarizes the major classes of neurophysiological signals, describes their properties, lists common non-digital (analog) recording/analyzing machines and, looking forward to a topic considered later in this paper, suggests a minimally acceptable sampling rate.

ADC SAMPLING RATES

Let us consider a representative neurophysiological signal from those included in Table 1. We shall use field potentials, as they do not lie at the extremes of the range of neurophysiological requirements. Field potential recordings detect the activity of elements in the immediate vicinity of the recording electrode and are generally used to record the activity of a population of simultaneously active neurons. A hippocampal population response provides a good example of a field potential recording. Figure 1A is an oscilloscope tracing of a hippocampal field potential showing a negative-going CA1 population spike of approximately 2-4 msec duration superimposed on a positive-going population EPSP. The population EPSP is considerably slower, having a duration of about 8-10 msec. Considered in the frequency domain, the two components of the above waveform could be approximated by sine waves of 250-500 Hz and 100-125 Hz, respectively.

For most neurophysiological purposes it is necessary to extract several items of information from the biological signal. In the case of a hippocampal population spike, peak amplitude and latency to peak are often considered. To accurately measure the amplitude of a sinusoid-like event, it is necessary to be able to determine the peak voltage generated. This is difficult when dealing with data input devices, for all such analog-to-digital conversion (ADC) devices operate on the principle of taking repetitive samples of the input signal. Each periodic sample is then converted to a digital value by one of several varieties of ADC, which vary in accuracy, speed, and price. (We shall compare these devices later in this paper.) The rate at which samples are taken determines the degree to which the peak can be accurately determined. To illustrate the effect of sampling rates on real neurophysiological signals, refer to Fig. 1B-E. In Fig. 1B, one sees the effects of sampling at an excessively high sampling rate (10 kHz), at the recommended rate of 5000 Hz (1C), and several examples of lower than recommended

¹Supported in part by research grants from NIH, NSF and the Knight Foundation and by NeuroScientific Laboratories.
²Requests for reprints should be addressed to T. J. Teyler.


TABLE 1
NEUROPHYSIOLOGICAL SIGNALS

Biological Source     Bandwidth in Hz   Dynamic Range   Signal Length    Analog Devices            Digitizing Rate in kHz
EEG                   1-75 Hz           4:1             sec to min       polygraph                 [illegible]
Evoked Potential      1-100 Hz          10:1            0.1-2 sec        oscilloscope              [illegible]
Field Potential       10-500            10:1            0.02-0.1 sec     oscilloscope              5
Intracellular         DC-2000           20:1            0.01-0.05 sec    oscilloscope              [illegible]
Muscle Force          DC-100            5:1             0.5-10 sec       oscilloscope, polygraph   [illegible]
Extracellular Unit    300-3000          10:1            0.02 sec         oscilloscope, histogram   [illegible]

FIG. 1. Electrophysiological waveforms (from hippocampus). (A) Analog signal as seen on oscilloscope. (B-E) Analog signal in (A) digitized at (B) 10 kHz, (C) 5 kHz, (D) 2.5 kHz and (E) 1 kHz.

sampling rates (1D, 1E). Note that there are essentially no differences between a 10 kHz and a 5 kHz sampling rate: both accurately depict the waveform and detect the peak of the population spike. Sampling at 2.5 kHz and at 1 kHz fails to accurately reflect the input signal. Up to a point, reducing the sampling rate is equivalent to engaging a lowpass filter. Note, too, that these manipulations have minimally affected the population EPSP, a lower frequency waveform component. The errors are the result of the sampling rate chosen and will be referred to as rate-limited errors.
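Sampling too slowly does more than blunt peaks: once the rate falls below twice the signal's upper frequency, undersampled components reappear at spurious lower frequencies. The short sketch below is ours, not the authors'; the function name is illustrative, and it simply folds a signal frequency into the band a given sampling rate can represent:

```python
def alias_frequency(f_signal, f_sample):
    """Apparent frequency of a sampled sinusoid, folded into [0, f_sample/2]."""
    f = f_signal % f_sample
    return min(f, f_sample - f)

# A 500 Hz field-potential component sampled at 800 Hz (below the 1000 Hz
# minimum for a 500 Hz band) masquerades as a spurious 300 Hz component.
print(alias_frequency(500, 800))    # 300
# Sampled well above that minimum, the 500 Hz component is preserved.
print(alias_frequency(500, 5000))   # 500
```

This folding behavior is exactly the aliasing that the sampling-theory discussion below requires a low pass filter to prevent.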

The maximal error resulting from rate-limited sampling of a signal may be derived from a model based upon sampling of a sine wave, the frequency of which corresponds to the upper end of the signal's frequency band. For instance, the frequency band of a field potential signal typically extends upwards to 500 Hz (Table 1). The maximum amount of error resulting from sampling a field potential at a given rate is best determined by studying the effect of sampling a 500 Hz sine wave. Sampling theory (see [1]) specifies the minimum sampling rate that is required to uniquely determine a time function. That lower limit (sometimes referred to as the Nyquist criterion) is twice the upper frequency of the signal band (i.e., 1000 samples per second for a field potential). Sampling rates below this limit result in the phenomenon of aliasing, whereby erroneous frequencies appear in the record that arise from an interaction between the signal and the sampling frequencies. The immediate directive of this theorem is that the signal to be sampled should be bandlimited by a low pass filter that attenuates frequencies above the signal band to prevent alias frequencies from distorting the sampled representation of the signal. Note, however, that whereas sampling at the Nyquist criterion does not alias the field potential (one can see where the population spike occurs), it certainly does misrepresent the amplitude measure and alters the fidelity of the signal (Fig. 1E). Clearly for this neurophysiological measure a more stringent criterion is necessary.

It is desirable to be able to specify the maximum possible error that may result if a signal is sampled at a given rate. The maximal sampling error may be as high as 100% of the actual sine wave amplitude if sampling begins at one of the zero-crossing phases of the sampled sine wave. A model that is based upon sampling a sine wave is illustrated in Fig. 2. The maximum error that may result from samples that are taken at given intervals may be derived from this model by considering the worst case phase relation between sampling and the sine wave. The worst case results when the peak occurs at the middle of the sampling interval, as shown in the figure. Any other phase relation would place one of the samples closer to the peak value. From this model the relationship between sampling interval (= 1/sampling rate) and maximum error may be derived as follows:

(1)  E_max(I) = 1 - cos(π·F_max·I) = 1 - cos(π·I/T_max)

     Error% = E_max × 100%

where I = sampling interval; F_max = upper limit of the signal band; T_max = 1/F_max.

This relation applies to sampling intervals up to T_max/2 (i.e., the minimal sampling criterion), at which point the error



FIG. 2. Plot of a sine wave showing the effect of discrete sampling on an estimation of peak-to-peak amplitude. Shown is the worst case relationship between sampling and the peak of the signal.
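Equation (1) and the worst-case model of Fig. 2 are easy to check numerically. The sketch below is ours, not the authors'; it assumes the equation exactly as derived above:

```python
import math

def max_rate_error(interval_s, f_max_hz):
    """Worst-case fractional amplitude error from equation (1):
    E_max = 1 - cos(pi * F_max * I), valid for intervals I up to T_max/2."""
    return 1.0 - math.cos(math.pi * f_max_hz * interval_s)

# Field potential (F_max = 500 Hz) sampled at 5000 Hz, i.e. 10 samples per T_max:
print(round(100 * max_rate_error(1 / 5000, 500), 1))   # 4.9 (percent)
# At the Nyquist limit (1000 samples/sec) the worst-case error reaches 100%:
print(round(100 * max_rate_error(1 / 1000, 500), 1))   # 100.0
```

The first figure reproduces the roughly 5% error bound the text associates with the recommended 5 kHz field-potential rate.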

reaches 100%, and above which aliasing occurs. This derivation assumes that equivalent errors will occur when sampling both peak and trough. Also, the possibility that the sampling frequency is harmonically related to the sine frequency is not considered. If the sampling rate were a multiple of the sine frequency, then the error would be less than that estimated by the above function. Such a consideration is, however, inappropriate because F_max is only a rough approximation of the upper limit of the signal band. Figure 3 is a plot of function (1) to show the maximal sampling error that may result from intervals as long as T_max/4 (i.e., sampling rates as slow as 4 times the upper limit of the signal band). Notice that to keep the error below 5%, at least 10 samples must be collected during T_max (i.e., 5,000 samples per second for a field potential). Since the most fidelity is preserved by the highest sampling rate, why not simply sample at high rates? There are two reasons why this is not usually practical. First, while high sampling rates assure the best signal fidelity, they also quickly fill up memory space, and in memory-limited microprocessors this space must be conserved. Second, the faster the converter, the more costly it is. The solution is to select a sampling rate that provides the accuracy needed...and no more. Memory overload can be prevented by studious attention to other aspects of the digitized signal. Paramount among these is the length of the signal to be digitized. It has been our observation that many new computer users digitize far more signal than they need to. Probably as a remnant of dealing with oscilloscopes having only several available calibrated time base settings, these users often digitize a long "tail"


FIG. 3. Rate error. Maximum percent error as a function of sampling rate, derived from the model shown in Fig. 2.

(often consisting of a straight line) following the signal of interest. Such practices are extraordinarily wasteful of computer and disk memory and serve no useful purpose. We have been discussing single channel information input. Obviously two channels will at least double the sampling rate requirements (multiplexing the ADC between two channels). One can quickly arrive at a situation requiring more sampling speed than is available. In the case of one familiar example from NeuroScientific Laboratories, the WaveMan Waveform Analysis System (Teyler et al. [4]) utilizes the Mountain Computer ADC/DAC (DAC = digital-to-analog converter). This 8-bit ADC has an 8 μsec conversion time and is designed to plug into an expansion slot of the Apple II microcomputer. In the WaveMan configuration, the time between ADC conversions is, however, nearly 60 μsec. The reason for this is that the ADC requires a signal from the computer for each conversion it makes. Obviously the "trigger" pulses the ADC receives must be accurately timed. In the WaveMan program, timing of such signals is done by software loops. The overhead of the program requires time to deposit the sample in the appropriate memory location, update registers, etc. Thus, the maximum sample rate in this single channel system is limited by software, yet results in an acceptable rate of approximately 17,000 Hz. However, expansion of the WaveMan software to multiple channel operation (four channels for instance) would necessitate additional program overhead to access the separate channels and file the data, and would result in unacceptably low sampling rates. There are several solutions to insufficient sampling rate. One is to obtain a faster ADC; however, the limiting factor in the above is not the ADC, but the software that runs it. A


better solution would be to replace the software timing loops with a hardware clock that would reside in another expansion slot and, upon computer command, initiate the CPU interrupts required to synchronize conversion. With the hardware clock implemented, the rate-limiting factor would be the processor time, such that the Mountain Computer ADC/DAC should be able to adequately service four channels, each having a DC-2000 Hz biological signal bandwidth as described previously. Another possibility is to employ an ADC that can access the memory directly (DMA), thus obviating the need for CPU control during the sweep capture. DMA systems, however, require sizeable amounts of hardware, which limits their use to demanding applications.

We have been dealing with the problem of amplitude accuracy in the preceding discussion. The same considerations hold for temporal accuracy. Obviously, temporal accuracy is fixed by the sampling rate employed. A 5000 Hz sample rate means that any point will differ from its immediate neighbor by 0.2 msec, limiting temporal accuracy to that value. Again, to achieve higher accuracy, one must use higher sampling rates. One must simply determine the minimal temporal accuracy acceptable and specify the system performance based on that value.

LIMITED-PRECISION ERROR

The other major source of sampling error arises from the limited precision with which the ADC can approximate the analog input. Most ADCs are capable of approximating the input to within one least significant unit. For instance, an 8-bit ADC has a precision of ±1/(2^8) = 1/256, or about 0.4% of its input range, while a 12-bit converter has a precision of ±1/(2^12) = 1/4096, or about 0.02%. Various ADCs are designed to span different input voltage ranges. The input range is spanned by the number of discrete voltage levels that the converter can discriminate. The particular range of a given ADC is of little concern as long as the input signal can be amplified sufficiently to take advantage of as much of the ADC range as possible. If in the example above the input range was 10 V (i.e., -5 V to +5 V), then an 8-bit conversion would be accurate to within ±10 V/(2^8) = ±39 mV, while the 12-bit ADC would yield a precision of ±2 mV.

The effect of the ADC precision upon the sampling error depends upon how much of the input range is filled by the input signal. If the input signal were 10 volts peak-to-peak, then the entire complement of steps would be used. Thus, if a 10 volt signal were processed by an 8-bit ADC, the ratio of largest to smallest value is 256:1, the maximum possible. The signal to noise ratio is thus 20 log10(256), or approximately 48 dB. If, however, the input signal were only 2 volts peak-to-peak, then only 51/256 of the steps would be used. The largest to smallest ratio is now 51:1 and the signal to noise ratio becomes 34 dB.

Consider the following experiment. An evoked potential having a measured baseline to peak amplitude of 4 mV is being recorded. Obviously a 4 mV signal must be amplified prior to being digitized. But by how much? Using the Mountain Computer 8-bit ADC, which has an input range of 10 V, we see that with a preamplifier gain of 1000, the signal is now 4 volts, or 40% of the ADC range (or 102 steps).
In this case, each ADC step has an absolute value of 3.9×10^-5 V (or 39 μV) with respect to the preamplifier's input. If we increase the system gain such that 80% of the ADC range is utilized (we need to leave some room, 20% in this case, for variance in the signal to prevent overload and clipping), we require a


gain of 2000 to achieve an 8 volt signal. Here we use 204 steps, wherein each step has a value of 1.95×10^-5 volts (19.5 μV). Since the accuracy of the ADC is limited to the absolute step size, we have doubled our accuracy by increasing the system gain to take advantage of most of the input range of the ADC. From this example it can be seen that two aspects of sampling precision affect accuracy: the number of bits (N) to which the input is approximated, and the portion of the input range (R) filled by the amplitude of the input signal (A). Precision depends upon N and R as follows:

(2)  P = R/2^N

where P is the absolute step size.

To be consistent with the way in which the rate-limited error was described, the following derivation defines error in terms of the fraction of the signal amplitude that the error represents. The precision-limited error is determined thus:

(3)  E_max(P) = 2P/A = (R/A)/2^(N-1)

The factor of 2 in equation (3) arises because a measure of peak-to-peak amplitude requires two samples, one of the peak, another of the trough, each of which is subject to the ADC's precision. Figure 4 is a plot of equation (3). Although error drops precipitously as the amplitude of the input signal approaches the input range, an 8-bit conversion is still subject to a 1% error with signals as great as 78% of the range. Notice also that the error falls exponentially as the number of bits of the conversion increases (i.e., 12-bit error = 8-bit error/2^4). The obvious solution to the problem of precision-limited error is to employ a more precise converter. However, two factors favor the employment of an 8-bit ADC. First of all, the cost of ADCs increases rapidly with increased precision, such that 12-bit converters are typically twice as expensive as 8-bit converters. Secondly, more precise approximations require longer conversion times, which yield slower sampling rates. However, as already pointed out, sampling rate is most affected by processing efficiency. The additional programming overhead that is necessary to handle 12-bit data with an 8-bit processor usually cuts the maximum possible single channel sampling rate in half, thereby exacerbating the problem of rate-limited error. Furthermore, more memory is required to store 12-bit data. Fortunately, sampling strategies are available that can almost eliminate the problem of precision-limited error and make the 8-bit ADC a viable alternative. One approach is to adjust the gain of the signal amplifier until the input signal fills enough of the input range to reduce error to an acceptable level.
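Equations (2) and (3) can be verified in a few lines. This sketch is ours, written directly from the definitions above (N bits, input range R, signal amplitude A, step size P):

```python
def step_size(n_bits, input_range):
    """Absolute step size P = R / 2**N, equation (2)."""
    return input_range / 2 ** n_bits

def max_precision_error(n_bits, input_range, amplitude):
    """Worst-case fractional error of a peak-to-peak measurement, equation (3):
    E_max = 2P/A = (R/A) / 2**(N-1)."""
    return 2 * step_size(n_bits, input_range) / amplitude

# 8-bit ADC over 10 V, signal filling 40% of the range (4 V peak-to-peak):
print(round(100 * max_precision_error(8, 10.0, 4.0), 2))    # 1.95 (percent)
# The same signal amplified to 78% of the range still incurs about 1% error:
print(round(100 * max_precision_error(8, 10.0, 7.8), 2))    # 1.0
```

Doubling the gain from 40% to 80% of range halves the error, mirroring the worked evoked-potential example in the text.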
This technique requires careful attention to changes in the signal and compulsive record keeping of the gain associated with each sample. Then, following data collection, the gain is entered into the computation of signal amplitude during analysis for subsequent comparison with data collected at other gains. It has been our observation that many new users do not take full advantage of the ADC input range, thus losing accuracy and significance of the input data. In part, this is because people are used to leaving the system gain fixed and changing the settings of the oscilloscope (or other output device) when the signal changes. A good way around this difficulty is to educate users as to the requirements of the computer/ADC. A practical help is to employ an oscilloscope to monitor the input to the computer/ADC. If the monitor oscilloscope is adjusted such that the ADC input range covers the vertical oscilloscope face, and the settings


the sampling interval. The average relationship between sample and peak would be halfway between these two extremes, which places the peak half as far from a sample. The resulting error can be derived from our model by finding the rate-limited error associated with sampling intervals that are half as long as that used during collection (i.e., twice the sampling rate). If, for instance, 1000 samples were collected per second, then averaging would yield a maximum error level equal to that associated with 2000 samples per second. For sampling intervals less than T_max/4 (i.e., sampling rates greater than 4·F_max) the maximum possible error after averaging would be less than half that before averaging.
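The benefit of averaging time-locked sweeps is easy to demonstrate. In the sketch below (ours, purely illustrative), independent zero-mean noise added to a toy time-locked waveform is beaten down roughly as the square root of the number of sweeps averaged:

```python
import random

def average_sweeps(signal, n_sweeps, noise_amp, rng):
    """Average n_sweeps copies of `signal`, each corrupted by independent,
    uniformly distributed zero-mean noise of half-width noise_amp."""
    acc = [0.0] * len(signal)
    for _ in range(n_sweeps):
        for i, s in enumerate(signal):
            acc[i] += s + rng.uniform(-noise_amp, noise_amp)
    return [a / n_sweeps for a in acc]

rng = random.Random(0)                     # fixed seed for reproducibility
clean = [0.0, 1.0, 0.0, -1.0]              # toy time-locked waveform
avg = average_sweeps(clean, 1000, 0.5, rng)
residual = max(abs(a - c) for a, c in zip(avg, clean))
print(residual < 0.05)                     # True: noise of +/-0.5 averaged away
```

The same cancellation is why precision-limited error, which is not time-locked to the trigger, shrinks as sweeps accumulate.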


TYPES OF ANALOG-TO-DIGITAL CONVERTERS


FIG. 4. Precision error. Maximum percent error as a function of signal-to-range ratio. Two curves are shown: 8-bit and 12-bit precision.

remain unchanged, then the user will have to adjust the system gain to obtain a satisfactory waveform on the monitor oscilloscope. In so doing, the ADC will be supplied with a signal sufficiently large to keep the signal to noise ratio high. A more elegant solution is to employ a programmable amplifier and have the software adjust its gain to match the ADC input requirements.

A powerful approach to dealing with signal noise in general and sampling error in particular is the technique of signal averaging. This technique has revolutionized signal detection by taking advantage of the fact that most sources of noise are independent of the signal and tend to be randomly distributed about a mean of zero. Averaging can be applied whenever an event is available that is time-locked to the signal. In the case of evoked potentials, the time-locked event is the evoking stimulus, which is used to trigger the sampling episode. Each sample represents the state of the input at a fixed latency following the trigger. The average of samples that were collected at a given post-trigger latency during separate sampling episodes can be computed by executing simple algorithms. The average signal is constructed by stringing together the computed averages. The maximum sampling error that can result from precision-limited conversion approaches zero as the number of sampling episodes included in the average increases, because such error is not time-locked to the trigger and varies randomly about zero.

Signal averaging also reduces the error that is caused by rate-limited sampling. If it is assumed that the temporal variation of the signal exceeds the sampling interval, then averaging would be expected to cut the rate-limited error in half by effectively doubling the sampling rate. The assumption of temporal variation would be especially valid with respect to the higher frequency components of the signal.
Under this assumption, the peak of the sine wave in the model that was used to derive rate-limited error (Fig. 2) would be as likely to coincide with a sample as to occur at the middle of

In current technology there are three popular types of ADCs, namely: successive approximation, tracking, and flash converters. All of these ADC types are characterized by the fact that they feed information about the amplitude of the input voltage to the computer as numbers whose value is directly proportional to the voltage. Some ADCs have logarithmically weighted transfer functions whose step size is inversely proportional to the input voltage; thus input voltages near zero amplitude are more finely characterized than large voltages. Logarithmically weighted converters are well suited for voice and music transmission, where data is transferred without processing, but prove less useful for filtering applications because their nonlinear scaling makes mathematical processing difficult.

The most popular type of ADC is the successive approximation converter (SAADC). The SAADC operates by generating an arbitrary analog voltage with a digital-to-analog converter (DAC) and comparing the DAC voltage to the unknown input voltage. After comparison, the DAC voltage is adjusted in continually decreasing steps until it matches the input. Using binary division of the step size, it is possible to achieve one bit of resolution for each comparison. Thus, a reading of 8-bit accuracy can be made in a maximum of eight comparisons.

The tracking ADC (TADC) offers the advantage that the dynamic range of its allowable input voltage exceeds the dynamic range of its digital output. Expansion of the input range is facilitated by the fact that the TADC measures the voltage difference between two successive samples rather than the absolute voltage of each sample. Therefore, the dynamic range of the output need only equal the largest difference that one expects to occur between any two samples, and thus, with the same voltage step size per bit, the TADC requires fewer bits to cover the same input range as the SAADC.
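The successive-approximation loop is compact enough to sketch in software. This is our illustration, not a circuit description from the paper; the function name and the ±5 V, 8-bit figures (matching the Mountain Computer example earlier) are assumptions:

```python
def sa_convert(v_in, n_bits=8, v_min=-5.0, v_max=5.0):
    """Successive approximation: settle one bit per comparison, MSB first."""
    code = 0
    for bit in range(n_bits - 1, -1, -1):
        trial = code | (1 << bit)
        # Internal DAC voltage corresponding to the trial code
        v_dac = v_min + (v_max - v_min) * trial / (1 << n_bits)
        if v_in >= v_dac:      # comparator: keep the bit if input is at/above DAC
            code = trial
    return code

print(sa_convert(0.0))      # 128: mid-range code for a mid-range input
print(sa_convert(-5.0))     # 0: bottom of the input range
print(sa_convert(4.99))     # 255: top of the 8-bit range
```

Note that the loop body runs exactly n_bits times, which is why an 8-bit reading needs at most eight comparisons.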
While the TADC does achieve an economy of design, the value of the first sample of data must be known beforehand, or else an arbitrary guess at the starting value must be made. Alternatively, the TADC may be allowed several samples to lock on to the input. Another disadvantage is that TADCs cannot accurately track rapidly changing input signals.

The flash converter ADC (FADC) offers very high speed measurement of the input signal by virtue of the fact that voltage comparators are provided for each possible resolvable voltage level. The outputs of the individual comparators are then encoded into digital form by high-speed combinatorial logic. As the desired number of bits of precision is increased, the required number of comparators increases exponentially. For an n-bit converter, 2^n comparators are required. To supply reasonable amplitude resolution, high


precision comparator circuitry must be duplicated many times, the result being a very complex and costly converter circuit. Additionally, FADCs consume sizable amounts of electrical power, which precludes their use in battery operated equipment. Beyond the disadvantages of cost and complexity, an FADC can produce data at speeds that are much higher than microcomputers can accept, necessitating the installation of a special high-speed cache memory between the computer and FADC. The cache memory is quickly filled; therefore, the FADC is limited to short bursts of high-speed data collection with pauses for the computer to process the data in the cache memory.

A variety of ADCs exist to cover many different specific laboratory requirements. The cheapest and easiest to use is the SAADC, which provides digital data in convenient form and can sample at rates up to approximately 1 MHz. The TADC provides an economy of storage because it is capable of characterizing a wide input range with a few bits. Additionally, some TADCs allow sampling rates up to 5 MHz because their fewer bits may be digitized more quickly. The most delicate and expensive type is the FADC. The resolution of FADCs is usually limited to 6 bits because of the complex circuitry involved. The chief advantage of the FADC is its ability to digitize signals at sampling rates exceeding 100 MHz. Other hybrid combinations of the above types of ADCs can be found for unusual signal or data storage requirements.

STANDARDS

Given the expected increasing use of microcomputers in the neurophysiology laboratory, the question of standards must be addressed. By standards we refer to the characteristics of the computer that will influence its handling of biological signals. Clearly, an inappropriately chosen sample rate or low signal-to-range ratio can degrade the true characteristics of electrophysiological signals. Given the proliferation of computer systems, peripherals and accessories from a diversity of manufacturers, as well as custom-built hardware and software, the question of assuring minimal performance standards to ensure reliable data collection and manipulation becomes quite important. The question resolves to: What characteristics are important in devising standards and how should those standards be reported in the scientific literature?

We have argued that the most critical characteristics of a computerized analog data acquisition system for neurophysiology relate to two sources of error. Precision errors are related to the bit-size of the ADC and the signal-to-range ratio employed in data collection. Rate errors are related to the sampling frequency employed in data collection. Both sources of error affect the accuracy of measuring the amplitude (and latency) of electrophysiological signals. We have emphasized "worst case" conditions, such that the error values we deal with are maximal errors (represented as a percent of the signal). Precision errors and rate errors are independent. Therefore, the maximal total error will be the sum of precision error and rate error. It is important to recall that errors will tend to cancel out when repeated samples are taken of the same response (as when collecting samples for an average).

We have argued that the user needs to determine the acceptable maximal error rate for each application, taking into account the variance inherent in the signal. Once an acceptable maximal error rate has been selected, the user needs to select hardware appropriate to the experiment. This generally means choosing between ADCs of various types and numbers of bits of precision (e.g., 8 bits, 10 bits, 12 bits), a choice often determined by the host computer. This done, two degrees of freedom remain: determining the sample rate and adopting an operating procedure that ensures the use of a maximal signal-to-range ratio. These latter variables will be the primary ones determining rate error and precision error, respectively. Therefore, the user will adjust these variables to meet the previously defined maximal error rate.
Since the total maximal error (precision error plus rate error) is an important determinant of the accuracy of any electrophysiological measure, it is important that it be communicated in research reports utilizing computerized analog data acquisition. We propose that users of computerized electrophysiological data collection and analysis systems, whether commercial or custom devices, specify their precision error and their rate error in research reports. To unify reporting, we suggest that the formulas we have developed above be adopted to report these sources of error.
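The proposed reporting standard amounts to quoting two numbers computed from equations (1) and (3). A sketch of the combined figure (ours, using the formulas derived above; the example values are taken from the field-potential discussion):

```python
import math

def total_max_error(rate_hz, f_max_hz, n_bits, signal_to_range):
    """Maximal total error as a fraction of signal amplitude: the sum of the
    rate-limited error, equation (1), and the precision-limited error,
    equation (3), with A/R given as signal_to_range."""
    rate_err = 1.0 - math.cos(math.pi * f_max_hz / rate_hz)
    precision_err = (1.0 / signal_to_range) / 2 ** (n_bits - 1)
    return rate_err + precision_err

# Field potential (F_max = 500 Hz) digitized at 5 kHz with an 8-bit ADC and a
# signal filling 80% of the input range:
print(round(100 * total_max_error(5000, 500, 8, 0.8), 1))   # 5.9 (percent)
```

A report following the proposed standard would state the two components separately (here, roughly 4.9% rate error and 1.0% precision error) along with the parameters that produced them.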

REFERENCES

1. Hancock, J. C. An Introduction to the Principles of Communication Theory. New York: McGraw-Hill, 1961.
2. NeuroScientific Laboratories: WaveMan, DataMan and CellSeeker programs available from the Stoelting Co., 1350 Kostner Avenue, Chicago, IL 60623.
3. Teyler, T. J., L. Cauller and W. Mayhew. The use of the 6502 microcomputer in neurophysiology. In: Microcomputers in Neurobiology, edited by G. Kerkut. Cambridge: Oxford University Press, 1983.
4. Teyler, T. J., W. Mayhew, C. Chrin and J. Kane. Neurophysiological field potential analysis by microcomputer. J Neurosci Methods 5: 291-303, 1982.