COMPUTERS AND BIOMEDICAL RESEARCH 10, 605-616 (1977)

Filtering and Sampling for Electrocardiographic Data Processing*

ALAN S. BERSON, THEODORE A. FERGUSON, CHARLES D. BATCHLOR, ROSALIE A. DUNN, AND HUBERT V. PIPBERGER

Veterans Administration Research Center for Cardiovascular Data Processing, Veterans Administration Hospital, Washington, D.C. 20422, and The Departments of Clinical Engineering and Medicine, The George Washington University, Washington, D.C. 20006
Received January 25, 1977

Frank-lead electrocardiograms from 433 normal and abnormal adult males were recorded. Amplitude and time measurements were obtained after subjecting each record to eight different combinations of filters and equal sampling intervals. For each record, measurements were compared to those obtained with 200-Hz bandwidth and analog-to-digital conversion at 2-msec intervals. Amplitude errors were least for records filtered at 100 Hz and sampled at 4 msec, with R-wave errors ranging up to 120 and 55 µV for single-beat measurements and measurements averaged over a 10-sec period, respectively. For 50-Hz digital filtering and 6-msec sampling intervals, these errors ranged up to 146 and 108 µV, and for 40-Hz digital filtering, they were 195 and 141 µV. Effects of analog and digital filtering were similar. When the sampling interval was increased to 10 msec with 40- or 50-Hz bandwidth, large errors in amplitude and time measurements were found, probably resulting from aliasing errors. It is concluded that: (1) 100-Hz bandwidth data cause significantly fewer amplitude errors than 50-Hz data, (2) sampling at 4-msec intervals is adequate for 100-Hz bandwidth data, but sampling at 10 msec is inadequate for 50- or 40-Hz data, and (3) averaging measurements over several seconds is a clear advantage over measurements obtained from a single beat.

INTRODUCTION
Electrocardiographic data processing has existed for more than 15 years, but initial questions regarding the effects of various sampling rates for the analog-to-digital conversion process still remain unanswered due to lack of objective data. The problem is further compounded when analog and digital filters are used for preprocessing. Theoretical discussions and a limited amount of experimental results have been produced to support various contentions, with the net result that some investigators resort to an "overkill" policy, using sampling frequencies and frequency response requirements excessive for their needs. At the other extreme, some use low sampling rates and/or very limited bandwidth recording, thus reaping the benefits of lower data acquisition hardware and processing costs. The former practice is, at best, wasteful and counterproductive to the goal of computer electrocardiographic analysis, and the latter may be equally undesirable with respect to optimal diagnostic accuracy.

This study is intended to provide sample statistics derived from real data to complement the theoretical considerations which have heretofore been the primary basis of performance standards for electrocardiographic processing systems. The effects of sampling rate and a variety of analog and digital filters upon measurements were analyzed using the Veterans Administration electrocardiographic analysis system.

* This research was supported by the Medical Research Service of the Veterans Administration and by Research Grant HL 15047 from the National Heart and Lung Institute, National Institutes of Health, Bethesda, Maryland.

Copyright © 1977 by Academic Press, Inc. All rights of reproduction in any form reserved. Printed in Great Britain. ISSN 0010-4809.
MATERIALS AND METHODS
Frank-lead electrocardiograms (1) were recorded on three channels of FM magnetic tape with a bandwidth of 0.05 to 1250 Hz from patients in the Veterans Administration Hospital in Washington, D.C. All patients were recorded in the supine position. The set of 433 adult males included cardiovascular normals and a cross section of various abnormalities, as follows: normal, 79; coronary artery disease, 104; hypertensive cardiovascular disease, 83; valvular and/or congenital heart disease, 16; pulmonary disease, 35; and noncardiac abnormalities, 116.

The records were reproduced from analog FM tape three times each for analog-to-digital conversion, as follows: (1) low-pass analog filter (one for each data channel) with 200-Hz bandwidth and 12 dB/octave roll-off between tape output and analog-to-digital converter input; (2) as above, but using analog filters with 100-Hz bandwidth and 12 dB/octave roll-off; and (3) as above, but using analog filters with 50-Hz bandwidth and 12 dB/octave roll-off. The three leads were digitized simultaneously for 10 sec at 500 samples per sec (sps) per lead with 12-bit precision. For each of the three digitizing operations, the same 10-sec portion was digitized for each record. Three additional sampling rates of 250, 167, and 100 sps were derived from the original 500 sps digitized records by selecting every second, third, and fifth sample, respectively.

The maximum sampling rate used for this study was 500 sps because this has been the rate most often quoted as providing the necessary minimum requirements for equal interval sampling of electrocardiographic signals. Barr and Spach (2) have suggested this value as the minimum rate for body surface recording, and the American Heart Association's recommendations (3) quote the 500 sps figure. The lower sampling rates were selected for obvious convenience as submultiples of the original samples.
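The derivation of the lower sampling rates, and the straight-line interpolation used elsewhere in the study to restore a 2-msec grid, can be sketched as follows. This is an illustrative sketch with a synthetic signal, not the authors' code:

```python
# Sketch: derive 250, 167, and 100 sps from a 500-sps record by taking
# every 2nd, 3rd, and 5th sample, then restore the 2-msec (500 sps) time
# base by straight-line interpolation. The sine is a stand-in for one lead.
import numpy as np

fs = 500                                  # reference rate, samples/sec
t = np.arange(0, 1.0, 1.0 / fs)           # 1 sec of 2-msec samples
x = np.sin(2 * np.pi * 5.0 * t)           # synthetic signal, one lead

# Decimation by integer factors gives the three lower rates.
subsets = {250: x[::2], 167: x[::3], 100: x[::5]}

# Linear interpolation back onto the original 2-msec time base,
# as required by the measurements routine.
x100 = np.interp(t, t[::5], x[::5])
print(len(subsets[100]), len(x100))       # 100 samples in, 500 samples out
```

Note that interpolation restores the sample count but not any frequency content lost by decimation; that loss is exactly what the study measures.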
The selection of analog filter characteristics was based in part upon the sampling rates chosen and in part upon the previous literature pertaining to frequency content of the electrocardiogram. Scher and Young (4), Berson and Pipberger (5), Golden et al. (6), and Berson et al. (7), among others, have reported that the main deflections in
electrocardiographic waveforms are contained within the frequency spectrum below 100 Hz. On the other hand, extending the bandwidth to 200 Hz could conceivably provide useful information in a small number of cases which would be retrievable at the highest sampling rate of 500 sps.

Those records filtered at 200 Hz and sampled at 2-msec intervals were further operated upon by using two different low-pass digital filters of the weighted moving-average type. Both filter characteristics were identical except for half-power points (3 dB down), which were chosen to be approximately 40 and 50 Hz. As shown in Fig. 1, the nominal 50-Hz filter is actually 3 dB down at 53 Hz.

[FIG. 1. Frequency response characteristics of digital filters with bandwidths of 40 and 53 Hz. Both filters are symmetric, moving-average filters using 27 and 31 points, respectively, from 2-msec interval digital data. Horizontal axis: frequency (Hz), 0 to 100.]

The principal reasons for
selecting these filters were: (1) they reduce 60-Hz noise by 10-20 dB because of their sharp cutoff characteristics; (2) they introduce minimum phase shifts within the usable bandwidth; and (3) they provide data for comparing the effects of analog vs digital filtering on measurements. Figure 1 illustrates the digital filter frequency characteristics. Details of filter construction of this type have been published previously (8). After digital filtering, sampling at 6- and 10-msec equal intervals was performed on each record.

Each electrocardiographic record, then, was treated as summarized in Table I. In the remainder of this report, Treatment No. 0 will often be referred to as the reference, and data obtained for other treatments will be compared to the reference data for purposes of determining errors caused by various treatments.

TABLE 1
FILTER AND SAMPLING INTERVAL COMBINATIONS FOR PROCESSING ELECTROCARDIOGRAMS^a

Treatment   Analog filter     Digital filter    Sampling
no.         bandwidth (Hz)    bandwidth (Hz)    interval (msec)
0           200               --                 2
1           100               --                 4
2            50               --                10
3            50               --                 6
4           200               40                10
5           200               40                 6
6           200               50                10
7           200               50                 6

^a Separate analog filter channels for each of the three leads were used. The three filtered channels were multiplexed into one analog-to-digital converter with sample/hold circuitry to provide essentially simultaneous conversion at 2-msec intervals.

All electrocardiograms were analyzed with the Veterans Administration electrocardiographic analysis program using a Control Data Corporation 3200 digital computer at the Research Center for Cardiovascular Data Processing, Veterans Administration Hospital, Washington, D.C. Over 300 measurements are routinely computed in this program (9), but in the present study, 13 of those, including amplitudes and time interval measurements, were selected as a set which would adequately reflect the magnitude of changes.

The Veterans Administration electrocardiographic analysis program samples the electrocardiographic record for 10 sec and, after wave recognition, determines the number of usable cardiac beats. Individual beat measurements are then averaged to provide a single set of measurements which characterize the electrocardiogram. In addition, measurements from a single beat were obtained for all eight treatments. Statistics are presented for both the averaged measurements and individual beat measurements. In the remainder of this report, the measurements obtained from 10-sec averaging will be referred to as "best-beat" measurements, and those obtained from the single cardiac cycle will be referred to as "single-beat" measurements.

It should also be noted that the Veterans Administration electrocardiographic analysis program requires that the input data to the measurements routine be a set of 2-msec sampled data. This is properly satisfied when the 500 sps digitized data are used. In the case of lower sampling rates, straight-line interpolated values were used to satisfy the program requirements for 500 sps data.

RESULTS
Table II presents data for 13 electrocardiographic measurements, based on single-beat values as described above. Arbitrary values of 4 and 8 msec and 0.1 and 0.2 mV for time and amplitude differences were used for determination of the proportion of records for which measurement differences exceeded stated limits. The data are based upon differences between the measurements of the filtered record under consideration and those of the reference record of 200-Hz bandwidth and 2-msec sampled intervals.

[TABLE II. Percentages of records with differences between filtered and reference values outside of stated limits. Rows: P duration (±4/±8 msec), P-R interval (±8 msec), QRS duration (±4/±8 msec), ST-T interval (±8 msec), Q duration (±8 msec), and Q, R, S, J, QRS max, and T max amplitudes. Columns: Treatments 1-7 as defined in Table I. For all amplitudes, the two numbers listed are percentages of records with differences greater than 0.1 and 0.2 mV, respectively. All data are from measurements of a single cardiac cycle. Abbreviations: F = Filter, SI = Sample Interval. Numerical entries illegible in this copy.]

[TABLE III. Same layout as Table II, for best-beat (10-sec averaged) measurements. Numerical entries illegible in this copy.]

Overall, Treatment No. 1 introduces the least effects of the seven treatments on measurements. This is followed in order of increasing effects by Treatment Nos. 7, 5, 3, 6, 4, and 2. Treatment Nos. 2 and 6 produce approximately similar effects. However, these latter two, filtered at 50 Hz but sampled at 10-msec intervals, are drastically different from Treatment Nos. 7 and 3. The effects of increasing the sampling interval from 6 to 10 msec can be seen in the much greater ranges of differences for both durations and amplitudes. When Treatment Nos. 4 and 5 are compared, the greater differences for the former again confirm the adverse effects of increasing the sampling interval from 6 to 10 msec. A comparison of Treatment Nos. 5 and 7 indicates greater differences for 40- than for 50-Hz filtering, although these differences are not so drastic as those caused by reduced sampling rate.

Although the above comments hold generally, some measurements, particularly durations and intervals, are affected differently. For example, QRS durations are hardly affected, even when filtering at 40 Hz, as may be seen by comparing the data for Treatment Nos. 1, 3, 5, and 7. For P-wave measurements, Treatment Nos. 5 and 7, with 50- or 40-Hz digital filtering and 6-msec sampling intervals, produced better results than the 100-Hz filtered records. The most plausible explanation for this is the improvement in P-wave recognition that occurs when 60-Hz interference is reduced.

Measurement results using the best beat are shown in Table III. Comparison of these data with those in Table II leaves no doubt that the use of averaged values reduces measurement differences regardless of which filter/sample interval treatment is considered. Otherwise, the effects observed for best-beat measurements follow almost the same trends as those for single-beat measurements.
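The tabulation behind Tables II and III can be sketched as follows; the difference values here are hypothetical, used only to show the computation of the paired percentage entries:

```python
# Sketch: for each measurement, compute the percentage of records whose
# difference from the 200-Hz, 2-msec reference exceeds a stated limit.
# For amplitudes, the paired limits are 0.1 and 0.2 mV.

def pct_exceeding(diffs, limit):
    """Percentage of records with |difference| greater than the limit."""
    hits = sum(1 for d in diffs if abs(d) > limit)
    return 100.0 * hits / len(diffs)

# Hypothetical amplitude differences (mV) for a handful of records.
diffs = [0.02, -0.15, 0.08, 0.25, -0.05, 0.11, -0.01, 0.03, 0.19, -0.30]
cell = (pct_exceeding(diffs, 0.1), pct_exceeding(diffs, 0.2))
print("%g/%g" % cell)   # prints 50/20: the paired entries in one table cell
```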
DISCUSSION
There is general agreement that the major deflections in the human electrocardiogram can be reproduced to a sufficient degree of accuracy and precision using a bandwidth extending to 100 Hz. This does not apply to the small-amplitude, high-frequency notches and slurs often observed during QRS, for which several hundred hertz of bandwidth is required (10-13). Previous studies (5, 7) have already described the effects on measurements when bandwidth is reduced below 100 Hz. In part, therefore, some of the results presented here serve to confirm these findings.

For automated electrocardiographic data processing, the additional consideration of sampling rate has great importance because of the impact it may have on computer memory requirements or channel capacity if digital data transmission over telephone lines is desirable.

Classical sampling theory provides theoretical guidelines for sampling continuous data. Stated simply, the theory predicts aliasing errors when analog signals are
sampled at a rate which is less than twice the repetition rate of the highest significant frequency component. These aliasing errors occur regardless of the fact that the aliased frequencies may be of no particular interest in the analysis of the signal. To avoid aliasing, the sampling rate must be high enough to preclude the "folding" of significant frequencies. The reader is referred to several references on this subject generally (14, 15) and for biomedical applications specifically (16, 17).

When an analog signal is bandwidth-limited by an ideal filter, i.e., a filter with uniform gain from dc to f and zero gain above f, the sampling rate may be chosen arbitrarily close to, but not less than, 2f in order to properly reconstruct the original signal from the sampled data, provided that an ideal interpolation filter is also used. As a practical matter, however, ideal filters are never achieved, and the choice of sampling rate then becomes dependent upon the filter characteristics and the reconstruction errors one is willing to tolerate. In most cases, the sampling rate chosen is three to five times f, where f is the bandwidth (3 dB down) of the actual filter used. There is no better way to arrive at this choice than to experiment with real data.

Preliminary experiments (unpublished) comparing 1000- and 500-Hz sampling rates for 200-Hz bandwidth data revealed that the lower sampling rate could be used without introducing serious aliasing effects, and this combination was therefore selected for the reference data. Based on previous studies of electrocardiographic bandwidth (4, 7), 100-Hz bandwidth data, if sampled properly, should cause minimal problems as compared with the reference data. Tables II and III show that amplitude changes for Treatment No. 1 as compared to the reference are small. Reducing bandwidth to 50 Hz, regardless of sampling rate, results in increased numbers of records with significant amplitude differences from the reference records.
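The "folding" described above can be demonstrated numerically. In this sketch (synthetic frequencies chosen for illustration), a 70-Hz component sampled at 100 sps is indistinguishable from a 30-Hz component, since a component at frequency f sampled at rate fs reappears at |f - fs·round(f/fs)|:

```python
# Demonstration of aliasing: a 70-Hz cosine sampled at 100 sps folds to
# 30 Hz, so the two sequences of samples are identical.
import math

fs = 100.0                       # sampling rate (sps), below 2 * 70 Hz
f_true = 70.0
f_alias = abs(f_true - fs * round(f_true / fs))
print(f_alias)                   # 30.0

# The sampled values of the two cosines agree at every sample instant.
for n in range(10):
    t = n / fs
    s70 = math.cos(2 * math.pi * 70.0 * t)
    s30 = math.cos(2 * math.pi * 30.0 * t)
    assert abs(s70 - s30) < 1e-9
```

This is why a 50-Hz filter sampled at 10-msec intervals (100 sps) sits at the theoretical borderline: any residual energy above 50 Hz folds back into the band of interest.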
Durations and intervals were not adversely affected, and indeed, improvements in some instances occurred with the 50-Hz digital filter. These improvements were seen in the reduced ranges and can probably be attributed to better wave recognition with reduced noise.

The sampling interval of 6 msec for 40- or 50-Hz data was judged to be adequate for effective elimination of aliasing errors, since amplitude errors were of the same order of magnitude as those reported previously in which a 1-msec sampling interval was used (5). However, when the sampling rate is reduced still further, a step jump in performance occurs, as shown in both Tables II and III. This should not be surprising, since even for the 50-Hz digital filter, with characteristics approaching those of an ideal filter, this sampling interval is, at best, borderline according to sampling theory.

The use of averaged measurements from several beats may be likened to a filter and could obscure some of the effects which this study was intended to determine. On the other hand, averaging of several beats provides a more reliable measure of the "typical" beat because it minimizes effects of beat-to-beat variability (18) usually caused by respiratory or other influences. The effects described previously for single-beat measurements apply generally to the data for best-beat measurements. However, the data clearly show how beat averaging can improve both amplitudes and durations. Of course, when an inadequate sampling rate is used, the benefits of beat averaging are not realized because of the predominating errors caused by sampling. (See columns for Treatment Nos. 2, 4, and 6 in the tables.)

All existing automated electrocardiographic analysis systems require sampling of incoming analog electrocardiographic data, followed by wave recognition and measurement routines. Although differences exist among these programs in details of wave identification, measurement definitions, and number of beats analyzed, the basic processes are similar. The results of this study are thus applicable generally to all such programs, although obviously the specifics of the statistics presented apply only to the Veterans Administration program.

Although effects on conventional electrocardiographic measurements were used as the criteria for the proper choice of sampling rate, other criteria may be used. For example, Barr and Spach (19) have studied sampling rates using as criteria the mean error between original analog waveforms and the waveforms reconstructed from digital samples. They suggest that a minimum sampling rate of 500 sps for body surface waveforms is necessary to reduce mean error below 1%. The study of effects on conventional electrocardiographic measurements, on the other hand, has more direct application to those analysis programs which use these measurements as inputs to diagnostic routines.
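The benefit of beat averaging noted above follows from elementary statistics: averaging N independent noisy estimates of the same amplitude reduces the standard error roughly by a factor of sqrt(N). The sketch below uses synthetic, hypothetical values (true amplitude, noise level, and beat count are assumptions for illustration):

```python
# Sketch: beat-to-beat variability as additive noise on a fixed "true"
# R-wave amplitude. Averaging 12 beats shrinks the spread of the estimate
# by roughly sqrt(12) relative to a single-beat measurement.
import random
import statistics

random.seed(7)
true_r = 1.20                     # hypothetical R-wave amplitude, mV
noise_sd = 0.05                   # hypothetical beat-to-beat variability, mV

def beat_measurements(n):
    return [random.gauss(true_r, noise_sd) for _ in range(n)]

single = [beat_measurements(1)[0] for _ in range(2000)]
averaged = [statistics.fmean(beat_measurements(12)) for _ in range(2000)]

ratio = statistics.stdev(single) / statistics.stdev(averaged)
print(round(ratio, 1))            # typically near sqrt(12), about 3.5
```

This also explains why the benefit disappears at inadequate sampling rates: aliasing errors are not independent zero-mean noise, so they do not average away.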
LIMITATIONS OF THE STUDY
The results of this study are limited in several important ways. First, the Veterans Administration electrocardiographic analysis program required an x, y, z sample every 2 msec, necessitating interpolation whenever other sampling intervals were tested. Ideally, a measurements program should have been used that was tailored for each sampling interval under test. This, however, would have necessitated rewriting and testing of algorithms for wave recognition, a very formidable task. Linear interpolation was used in this study based on published data (15, 19) suggesting that neither quadratic nor (sin x)/x interpolation is more advantageous than linear interpolation for reconstructing electrocardiographic signals.

Second, the effects of quantizing level were omitted from this study in that 12-bit precision was used throughout. An independent study of effects of different quantizing levels on electrocardiographic measurements was recently completed (20), concluding that 8-bit resolution is probably adequate. The possible interaction between this reduced bit resolution and reduced sampling rate was not treated.

Third, no attempt has been made in this report to relate electrocardiographic diagnostic changes with measurement changes caused by filtering and sampling. This is an obvious final step which must be investigated separately for each automated
electrocardiographic analysis program. How the Veterans Administration electrocardiographic analysis program, in particular, is affected will be the subject of a future report.

Finally, one particularly promising area which was not investigated concerns digital data compression as applied to electrocardiographic signals. After arriving at a minimum acceptable sampling rate using methods similar to those reported here, a further reduction in the average number of bits per sample or bits per second may be obtained by manipulating the digital data in various ways. For example, one can code amplitude differences from one sample to the next rather than the actual sample values, as suggested by Stewart and co-workers (21). Further, one can make use of correlations between adjacent sampled values and use predictive coding for transmitting or storing predicted values. Preliminary results applying this technique to electrocardiographic data compression have been reported recently (22). Additional methods for reducing bit rate could make use of variable-step analog-to-digital conversion and variable-length encoding schemes, e.g., Huffman codes (23). These types of data manipulation require varying degrees of complexity in implementation and were beyond the scope of the present study.
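The two ideas mentioned above, coding first differences and variable-length (Huffman) coding, can be combined in a short sketch. The sample values below are hypothetical 12-bit readings, not data from this study; the point is only that differences cluster near zero and therefore admit short codes:

```python
# Sketch: delta coding followed by Huffman code-length assignment.
# Differences between adjacent samples span a much narrower range than
# raw 12-bit values, so their average code length is well under 12 bits.
import heapq
from collections import Counter

samples = [2048, 2050, 2053, 2053, 2050, 2046, 2045, 2046, 2050, 2055]
deltas = [samples[0]] + [b - a for a, b in zip(samples, samples[1:])]

def huffman_lengths(symbols):
    """Code length per symbol from a standard heap-based Huffman tree."""
    freq = Counter(symbols)
    if len(freq) == 1:
        return {next(iter(freq)): 1}
    # Heap entries: (count, unique tiebreak id, {symbol: depth-so-far}).
    heap = [(n, i, {s: 0}) for i, (s, n) in enumerate(freq.items())]
    heapq.heapify(heap)
    next_id = len(heap)
    while len(heap) > 1:
        n1, _, c1 = heapq.heappop(heap)
        n2, _, c2 = heapq.heappop(heap)
        merged = {s: d + 1 for s, d in {**c1, **c2}.items()}
        heapq.heappush(heap, (n1 + n2, next_id, merged))
        next_id += 1
    return heap[0][2]

lengths = huffman_lengths(deltas[1:])      # code only the differences
freq = Counter(deltas[1:])
avg_bits = sum(freq[s] * lengths[s] for s in freq) / sum(freq.values())
print(avg_bits)                            # well under the raw 12 bits
```

A real compressor would also need to transmit the code table (or use a fixed table built from typical ECG statistics) and the initial sample.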
CONCLUSIONS
Electrocardiographic measurement changes associated with 100-Hz bandwidth data and 4-msec sampling intervals are relatively small compared with 200-Hz, 2-msec sampled data. Loss of accuracy for amplitude measurements should be well within tolerable limits for purposes of normal electrocardiographic interpretive methods. Based upon single-beat measurements, reducing bandwidth to 50 Hz with a moving-average digital filter increases the number of amplitude errors slightly if the threshold for error is taken as 100 µV, and 40-Hz bandwidth increases errors by a factor of about two. Amplitude errors increase by a factor of approximately two when a passive analog filter of 50-Hz bandwidth is compared to a 50-Hz digital filter. Based upon best-beat measurements, amplitude errors occurred significantly more often with 50-Hz bandwidth than with 100-Hz bandwidth.

For a 40- or 50-Hz digital or analog filter, a sampling interval of 6 msec appears adequate to avoid aliasing errors, but a 10-msec sampling interval results in extremely large errors in electrocardiographic measurements. It must be concluded that a 10-msec sampling interval is clearly unacceptable.

Averaging of measurements from several cardiac beats in a 10-sec period improves performance substantially with respect to electrocardiographic measurement accuracy so long as an adequate sampling rate is used. The effects of filtering and sampling parameters upon the crucial electrocardiographic interpretation must be individually tested with the specific automated electrocardiographic analysis system under consideration.
ACKNOWLEDGMENT

The authors wish to express their deep appreciation for the help of Mrs. Renata Babuska, who dedicated much time and effort in obtaining the statistical results presented in this report.
REFERENCES

1. FRANK, E. An accurate, clinically practical system for spatial vectorcardiography. Circulation 13, 737 (1956).
2. BARR, R. C., AND SPACH, M. S. Minimum sampling rates required for measuring extracellular cardiac potentials. Circulation 50 (Suppl. III), III-160 (1974).
3. PIPBERGER, H. V., et al. Recommendations for standardization of leads and of specifications for instruments in electrocardiography and vectorcardiography. Report of the Committee on Electrocardiography, American Heart Association. Circulation, August 1975. News from the American Heart Association, pp. 11-31.
4. SCHER, A. M., AND YOUNG, A. C. Frequency analysis of the electrocardiogram. Circ. Res. 8, 344 (1960).
5. BERSON, A. S., AND PIPBERGER, H. V. Electrocardiographic distortions caused by inadequate high-frequency response of direct-writing electrocardiographs. Amer. Heart J. 74, 208 (1967).
6. GOLDEN, D. P., WOLTHUIS, R. A., AND HOFFLER, G. W. A spectral analysis of the normal resting electrocardiogram. IEEE Trans. Bio-med. Eng. 20, 366 (1973).
7. BERSON, A. S., LAU, F. Y. K., WOJICK, J. M., AND PIPBERGER, H. V. Distortions in infant electrocardiograms caused by inadequate high-frequency response. Amer. Heart J. 93, 730 (1977).
8. STALLMANN, F. W., AND PIPBERGER, H. V. Automatic recognition of electrocardiographic waves by digital computer. Circ. Res. 9, 1138 (1961).
9. CORNFIELD, J., DUNN, R. A., BATCHLOR, C. D., AND PIPBERGER, H. V. Multigroup diagnosis of electrocardiograms. Comput. Biomed. Res. 6, 97 (1973).
10. LANGNER, P. H., JR. The value of high fidelity electrocardiography using the cathode ray oscillograph and an expanded time scale. Circulation 5, 249 (1952).
11. GESELOWITZ, D. B., LANGNER, P. H., JR., AND MANSURE, F. T. Further studies on the first derivative of the electrocardiogram, including instruments available for clinical use. Amer. Heart J. 64, 805 (1962).
12. FLOWERS, N. C., HORAN, L. G., TOLLESON, W. J., AND THOMAS, J. R. Localization of the site of myocardial scarring in man by high-frequency components. Circulation 40, 927 (1969).
13. FLOWERS, N. C., AND HORAN, L. G. Diagnostic import of QRS notching in high-frequency electrocardiograms of living subjects with heart disease. Circulation 44, 605 (1971).
14. SUSSKIND, A. K. "Notes on Analog-Digital Conversion Techniques." MIT Press, Cambridge, Mass., 1957.
15. MCRAE, D. D. "Interpolation Errors," Advanced Telemetry Study Technical Report 1, Parts 1 and 2. Radiation, Inc., Melbourne, Fla., 1961.
16. MACY, J., JR. Analog-digital conversion systems. In "Computers in Biomedical Research" (R. W. Stacy and B. D. Waxman, Eds.), Vol. 2, pp. 3-34. Academic Press, New York, 1965.
17. BERSON, A. S. Analog-to-digital conversion. In "Computer Application on ECG and VCG Analysis" (C. Zywietz and B. Schneider, Eds.), pp. 57-72. North-Holland/American Elsevier, Amsterdam, Oxford, and New York, 1973.
18. FISCHMANN, E., COSMA, J., AND PIPBERGER, H. V. Beat to beat and observer variation of the electrocardiogram. Amer. Heart J. 75, 465 (1968).
19. BARR, R. C., AND SPACH, M. S. Sampling rates required for digital recording of intracellular and extracellular cardiac potentials. Circulation 55, 40 (1977).
20. BERSON, A. S., WOJICK, J. M., AND PIPBERGER, H. V. Precision requirements for electrocardiographic measurements computed automatically. IEEE Trans. Bio-med. Eng. 24, 382 (1977).
21. STEWART, D., DOWER, G. E., AND SURANYI, O. An ECG compression code. J. Electrocardiol. 6, 175 (1973).
22. RUTTIMANN, U. E., BERSON, A. S., AND PIPBERGER, H. V. ECG data compression by linear prediction. In "Computers in Cardiology," pp. 313-315. IEEE Computer Society, Long Beach, Calif., 1976.
23. HUFFMAN, D. A. A method for the construction of minimum redundancy codes. Proc. IRE 40, 1098 (1952).