Statistical modeling and feature selection for seismic pattern recognition


Pattern Recognition, Vol. 18, No. 6, pp. 441-448, 1985. Printed in Great Britain.
0031-3203/85 $3.00+.00 Pergamon Press Ltd. Pattern Recognition Society.



STATISTICAL MODELING AND FEATURE SELECTION FOR SEISMIC PATTERN RECOGNITION

R. F. KUBICHEK and E. A. QUINCY
Department of Electrical Engineering, University of Wyoming, Laramie, WY 82071, U.S.A.

(Received 7 December 1984; received for publication 5 March 1985)

Abstract--Application of pattern recognition techniques to reflection seismic data is difficult for several reasons. The amount of available training data is limited by the degree of well control in the area and may not be sufficient. In contrast, seismic data sets are often extremely large, necessitating the use of the smallest possible feature set to allow quick and efficient processing. In this paper, a method to generate synthetic training data is described, which alleviates the problem of insufficient training data. A means is provided for injecting a priori geologic knowledge into the classifier, including well logs. Finally, a feature evaluation algorithm using a performance metric related to the Bayes probability of error is outlined and applied to the training data to identify effective feature sets.

Seismic pattern recognition    Feature selection    Statistical modeling    Synthetic training data

1. INTRODUCTION

Stratigraphic traps containing hydrocarbon deposits are normally difficult to locate using seismic data. Consequently, it is estimated that traps of this form contain much of the world's remaining undiscovered hydrocarbon reserves.(1) The ineffectiveness of the usual seismic prospecting methods can be attributed to the subtle nature of stratigraphic traps. In many cases, the reservoir is a sandstone layer only tens of meters thick, a fraction of the seismic wavelength and below the ordinary limits of resolution.(2) Furthermore, reservoirs bounded by small lateral changes of porosity may be discernible only by minute changes in the character of the reflection waveform.

The research described in this paper addresses several problems associated with applying pattern recognition techniques to reflection seismic data. Among these is the difficulty of designing a classifier which utilizes the a priori knowledge of the geologist about the characteristics of expected hydrocarbon traps. A second problem stems from the small number of traces measured in the vicinity of a control well, which is normally required to associate seismic reflections with hydrocarbon or non-hydrocarbon bearing strata. These traces may not be available in sufficient quantity to adequately train the classification algorithm or to select an effective feature set. Finally, careful selection of the feature set and application of a Bayes classifier are critical for discriminating between subtle changes in waveform. Feature selection must therefore be based on a metric related to the Bayes probability of error.

In the following sections, we describe a statistical modeling procedure for generating training data which includes a priori geological information. A feature selection method is also described which uses a metric related to the Bayes performance. A flowchart depicting the major processing steps is given in Fig. 1. Statistical generation of seismic training data is not discussed elsewhere. A thorough treatment of feature performance metrics and feature selection techniques is available in the literature.(3,4) The measurement and selection of seismic attributes for linear classifiers is detailed in Khattri et al.(5-7) Discussions on selection of seismic attributes applicable to a non-linear Bayes classifier were not found in other literature.

2. STATISTICAL MODELING

In order to incorporate a priori geological knowledge into the classifier design, synthetic seismic traces are generated based on well control and the statistics of anticipated properties of the various rock layers. This modeling procedure also alleviates the problem of inadequate training data by creating as many synthetic traces as are needed.

Statistical modeling consists of generating a set of 1-D normal incidence seismic traces according to a statistical model. Specifically, the layered earth is described by inputting the probability density function (pdf) for the rock velocity, density, and depth (or thickness) of each layer. The density functions represent a priori geologic knowledge and can be arrived at either from theoretical considerations(8,9) or by sketching out pdfs which describe intuitive feelings about the expected layering. A computer program generates a set of synthetic velocity and density well logs which vary randomly according to the input pdfs. Next, a synthetic seismic trace is created for each velocity/density well log using a method described by Wyatt.(10)
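The log-generation step just described can be sketched in a few lines. This is a minimal illustration, not the authors' program; the layer list below is a hypothetical subset of the Table 1 (well 2) model, and the function and variable names are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical subset of the Table 1 (well 2) model: depth is either a known
# constant or a (lo, hi) pair for a uniform U(lo, hi) pdf; velocity (fps) and
# density (g/cm^3) entries are the means of normal pdfs.
layers = [
    (0,          10638, 2.45),  # Belle Fourche Shale
    (152,        11364, 2.45),  # Mowry Shale
    (332,        10526, 2.41),  # Shell Creek Shale
    ((371, 375), 13699, 2.58),  # Muddy Sandstone, uniform depth
    (501,        14493, 2.64),  # Dakota Sandstone
]

VEL_STD, DEN_STD = 100.0, 0.025  # standard deviations quoted in the paper

def sample_log(layers, rng):
    """Draw one random velocity/density well log from the input pdfs."""
    depths, vels, dens = [], [], []
    for depth, v_mean, d_mean in layers:
        if isinstance(depth, tuple):           # U(a, b) depth
            depths.append(rng.uniform(*depth))
        else:                                  # known constant depth
            depths.append(float(depth))
        vels.append(rng.normal(v_mean, VEL_STD))
        dens.append(rng.normal(d_mean, DEN_STD))
    return np.array(depths), np.array(vels), np.array(dens)

# One suite of random logs, one log per desired training trace.
suite = [sample_log(layers, rng) for _ in range(50)]
```

Each log in `suite` would then be converted to a synthetic trace, so the amount of training data is limited only by computer time.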


[Fig. 1 flowchart: a priori knowledge and well control -> statistical modeling -> synthetic random traces -> extraction of seismic attributes -> bottom-up feature selection -> feature vectors -> training of Bayes classifier -> best feature set.]

Fig. 1. Major processing steps for statistical modeling and feature selection.

Because these are one-dimensional traces, use of this system is restricted to cases which involve essentially only horizontal layering. Such traps are not uncommon, however, and those due to horizontal changes in porosity, pinchouts and even sandstone lenses can be modeled without incurring unreasonable seismogram error.

To demonstrate this procedure, we generate synthetic traces for a model of the South Glenrock field in north central Wyoming described by Ryder et al.(11) This field produces in part from deposits in a thin sandstone lens, and would be difficult to detect using 40 Hz seismic data. Figure 2 shows a model of the South Glenrock field which includes four artificial lenses (numbered 1, 2, 3 and 4) added to test classifier performance when geological conditions differ from the region near the control wells. The actual field consists of lens 5. Figure 3 shows a synthetic seismogram produced from this model by a ray tracing technique using a 40 Hz Ricker wavelet. Wells 4 through 7 are productive while wells 2 and 3 are dry.

The statistical model is listed in Table 1 and assumes control at well 5 (producing) and at well 2 (non-producing). Layer depths are assumed to be known constants or uniformly varying within a known range, while the velocity and density are normally distributed with standard deviations of 100 fps and 0.025 g cm-3, respectively. The mean values used are the velocities and densities measured from the control well logs. Representative synthetic well logs are shown in Fig. 4 and suites of random traces are presented in Fig. 5. The traces on the bottom correspond to the unproductive well 2 while the traces on the top represent the producing well 5. Although there is random variation from trace to trace, differences between the bottom and

Table 1. Statistical model description of South Glenrock field reflecting a priori knowledge

(Control at nonproductive well 2)

Depth          Formation                     Velocity*   Density*
0              Belle Fourche Shale           10638       2.45
152            Mowry Shale                   11364       2.45
332            Shell Creek Shale             10526       2.41
352            Marine Siltstone              12195       2.51
359            Shell Creek Shale             10526       2.45
U(371, 375)    Muddy Sandstone               13699       2.58
U(396, 411)    Skull Creek Shale             11364       2.58
501            Dakota Sandstone              14493       2.64

(Control at productive well 5)

Depth          Formation                     Velocity*   Density*
0              Belle Fourche Shale           10638       2.45
152            Mowry Shale                   11364       2.45
332            Shell Creek Shale             10526       2.41
352            Marine Siltstone              12195       2.51
359            Shell Creek Shale             10526       2.45
U(371, 375)    Muddy Sandstone               13699       2.58
U(396, 411)    Alluvial Channel Sandstone    13300       2.44
+U(5, 25)      Skull Creek Shale             11364       2.58
501            Dakota Sandstone              14493       2.64

* Velocity and density values: in all cases these are normally distributed with the mean values shown above, with standard deviations of 100 fps for velocity and 0.025 g cm-3 for density. Notation: U(a, b) is a uniformly distributed number between a and b; +U(a, b) is a uniformly distributed number between a and b which is added to the previous depth.
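Once a random log has been drawn from a model like Table 1, a trace must be synthesized from it. The paper uses a ray-tracing method after Wyatt(10); the sketch below is a simplified convolutional stand-in, not the authors' procedure, which convolves normal-incidence reflection coefficients with a 40 Hz Ricker wavelet. The impedance profile is invented for the example.

```python
import numpy as np

def ricker(f=40.0, dt=0.002, n=101):
    """Zero-phase Ricker wavelet of peak frequency f Hz, sample interval dt."""
    t = (np.arange(n) - n // 2) * dt
    a = (np.pi * f * t) ** 2
    return (1 - 2 * a) * np.exp(-a)

def synthetic_trace(velocity, density, wavelet):
    """1-D normal-incidence convolutional seismogram.

    velocity and density are layer properties resampled to a uniform grid.
    Reflection coefficients come from acoustic impedance contrasts.
    """
    z = velocity * density                      # acoustic impedance
    rc = np.diff(z) / (z[1:] + z[:-1])          # normal-incidence reflectivity
    return np.convolve(rc, wavelet, mode="same")

# Invented three-layer impedance profile (50 samples per layer).
vel = np.array([10638.0] * 50 + [13699.0] * 50 + [14493.0] * 50)
den = np.array([2.45] * 50 + [2.58] * 50 + [2.64] * 50)
trace = synthetic_trace(vel, den, ricker(40.0))
```

The reflectivity comes only from impedance contrasts at layer boundaries, so a subtle lens produces only a small change in the waveform, which is exactly what the classifier must detect.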

[Fig. 2 graphic: cross-section of the model over about 12 miles, with oil wells and dry wells marked in the legend and velocity/density pairs labeled on each layer.]

top traces are barely discernible. These traces have been used to train a classifier, yielding a 15% error rate when applied to synthetic seismic data similar to that in Fig. 3.(12) The random nature of these training traces allows the classifier to perform well even when the velocity, density, or layer thickness differs from the control well log values. Statistical modeling is also used to generate the data used in the feature selection process described next.

Fig. 2. Model of South Glenrock field. The actual field consists only of sandstone lens 5; lenses 1-4 were added for testing purposes. Velocity (fps) and density (g cm-3) pairs are given for each layer.

Fig. 3. Synthetic seismogram of S. Glenrock model using 40 Hz Ricker wavelet.

3. FEATURE EVALUATION

Computer time requirements for pattern recognition software grow as the number of attributes increases, making it advantageous to reduce the size of the feature space. Feature selection is used to determine a small subset of the 29 available attributes which provides sufficient discriminatory power for a classifier. The features measured and evaluated here are listed in Table 2 and are described in more detail in Kubichek.(13)

Because complex interrelationships can exist between features, the optimal set can be identified only by conducting an exhaustive search of all possible combinations of attributes. Normally, an exhaustive search is impractical due to the enormous number of possible feature combinations. Narendra and Fukunaga(14) have proposed an optimal 'branch and bound' algorithm which substantially reduces the number of set combinations that must be evaluated. This approach requires a feature performance metric which decreases


or increases whenever a feature is removed or added to a given subset. Unfortunately, the metric selected for this research (described below) does not always behave this way, and the branch and bound method cannot be used.

Fig. 4. Representative density and velocity well logs for S. Glenrock statistical model. (a) Non-productive and (b) productive.

Several suboptimal methods exist for finding feature subsets which do not guarantee the best set will be identified. The fact that our main goal is to determine a low dimensional feature set suggests we utilize a 'bottom-up' type search which operates as follows. An initial subset is formed using the single feature with the highest performance score. Each subsequent iteration

Table 2. List of features

Abbreviation                      Description                                       References
NPTTT                             Normalized peak-trough time                       18; 19; 20
A3-1, A3-2                        Coefficients of 3rd order autoregressive model    21; 22; 23; 24
REFC1, REFC2, REFC3               Reflection coefficient or partial correlation     13; 25, pp. 104, 105, 113
                                  coefficients
INSTF1, INSTF2, INSTF3, INSTF4    Principal components of inst. frequency           26; 27
INSTP1, INSTP2, INSTP3, INSTP4    Principal components of inst. phase               26; 27
DPOLAR                            Dominant polarity                                 28; 29; 30, p. 220
FMAX                              Frequency of maximum power                        5; 6; 7
FAVW                              Average power weighted frequency                  5; 6; 7
F25, F50, F75                     25th, 50th and 75th percentile of integrated      5; 6; 7
                                  spectrum
A1/A0, A2/A0, A3/A0, A2/A1,       Ratios of autocorrelations                        5; 6; 7
Amin/A0
TZER1, TZER2, TZER3               Time lags of 1st, 2nd, 3rd zero crossings         5; 6; 7
TMIN                              Lag of first minimum                              5; 6; 7
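Several of the Table 2 attributes are simple signal measurements. The sketch below shows plausible forms for a few of them (dominant polarity, autocorrelation ratio, first zero-crossing lag, frequency of maximum power); the exact definitions are in the cited references, so these formulas are illustrative readings, not the paper's definitions.

```python
import numpy as np

def attributes(trace, dt=0.002):
    """Illustrative (assumed) versions of a few Table 2 attributes."""
    # DPOLAR: dominant polarity -- sign of the largest-magnitude sample.
    dpolar = np.sign(trace[np.argmax(np.abs(trace))])

    # A1/A0 etc.: ratios of autocorrelation lags to the zero-lag value.
    ac = np.correlate(trace, trace, mode="full")[len(trace) - 1:]
    a_ratios = ac[1:4] / ac[0]

    # TZER1: time lag of the first zero crossing of the autocorrelation.
    sign_change = np.nonzero(np.diff(np.sign(ac)) != 0)[0]
    tzer1 = (sign_change[0] + 1) * dt if len(sign_change) else None

    # FMAX: frequency at which the power spectrum peaks.
    spec = np.abs(np.fft.rfft(trace)) ** 2
    fmax = np.fft.rfftfreq(len(trace), dt)[np.argmax(spec)]

    return {"DPOLAR": dpolar, "A1/A0": a_ratios[0],
            "TZER1": tzer1, "FMAX": fmax}
```

Applied to a pure 40 Hz sinusoid, for example, FMAX recovers the 40 Hz peak and A1/A0 is close to the one-lag cosine of the sampling phase step.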


Table 3. Top 5 feature sets for 40 Hz S. Glenrock data

Feature set                     1-NN    Bayes
1. AMIN/A0, DPOLAR, INSTP1      81.3    82.9
2. REFC2, F75, A2/A1            76.6    78.1
3. F75, DPOLAR                  77.0    77.9
4. FMAX                         79.7    76.7
5. TZER3, F25                   73.4    76.4

contains a step to add one feature to the set and a step which may remove one feature. In the add step, all the features not yet included are examined one at a time to see which one will most improve the current set. If the amount of improvement exceeds a preset threshold, the feature is added and the algorithm continues on to the next step; otherwise the algorithm terminates. In the removal step, each feature currently included in the set (unless it has just been added) is temporarily removed, one at a time, to see if that feature is no longer useful. An attribute is deleted from the set if its removal does not reduce the performance beyond a set threshold. In this way, features are taken out when they no longer contribute to the set's discrimination power.

The search procedure requires a metric directly related to Bayes classifier performance. Measures such as the Bayes error rate, divergence, or the Kolmogorov variational distance(3,4) are applicable, but make excessive demands on computer time due to the evaluation of multidimensional integrals and the problem of estimating the conditional density function for each class. The metric used in this paper is the per cent correct classifications for a 1-nearest neighbor (1-NN) decision rule. Using the 1-NN rule, a sample is assigned the same class as the closest training sample. The feature set performance metric is calculated by classifying every sample of the training data set using the 1-NN rule and determining the error rate. Whitney(15) shows that the Bayes risk R* (or probability of error) is related to the 1-NN risk (or probability of error) R(1) by

    [1 - √(1 - 2R(1))]/2 ≤ R* ≤ R(1).

In other words, R(1) provides an asymptotic upper bound on the Bayes classification error and is therefore a reasonable measure of the discriminatory power of a feature set.

The bottom-up search using the 1-NN metric was applied to the data set generated by the statistical modeling method.
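The 1-NN metric and the bottom-up add/remove search can be sketched as follows. This is a minimal illustration, not the authors' implementation: the leave-one-out reading of the metric and the one-percentage-point thresholds are assumptions made for the example, and `bayes_lower_bound` simply applies the Whitney bound quoted above.

```python
import numpy as np

def nn1_score(X, y):
    """Per cent correct under the 1-NN rule (leave-one-out reading)."""
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)        # a sample cannot be its own neighbour
    return 100.0 * float(np.mean(y[np.argmin(d, axis=1)] == y))

def bayes_lower_bound(pct_correct):
    """Whitney bound: R* >= [1 - sqrt(1 - 2 R(1))] / 2."""
    r1 = 1.0 - pct_correct / 100.0
    return (1.0 - np.sqrt(max(0.0, 1.0 - 2.0 * r1))) / 2.0

def bottom_up(X, y, add_thresh=1.0, drop_thresh=1.0):
    """Greedy bottom-up feature selection driven by the 1-NN metric."""
    n_feat = X.shape[1]
    chosen, score = [], -np.inf
    while True:
        remaining = [f for f in range(n_feat) if f not in chosen]
        if not remaining:
            break
        # Add step: take the unused feature that most improves the set.
        best, f_best = max((nn1_score(X[:, chosen + [f]], y), f)
                           for f in remaining)
        if best - score <= add_thresh:
            break                      # improvement too small: terminate
        chosen.append(f_best)
        score = best
        # Removal step: drop any earlier feature that no longer helps.
        for f in list(chosen[:-1]):
            rest = [g for g in chosen if g != f]
            s = nn1_score(X[:, rest], y)
            if score - s <= drop_thresh:
                chosen, score = rest, s
    return chosen, score
```

With the thresholds set to about one percentage point, the search stops as soon as no attribute adds meaningful discrimination, which is what keeps the selected sets small.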
This data was first modified by adding 10% bandpass filtered Gaussian noise in order to identify variables which are effective when conditions are not ideal. The Bayes performance of attributes chosen by the bottom-up procedure was measured in order to verify their effectiveness. The Bayes classifier was implemented with the class probability density function modeled

as a weighted sum of Gaussians.(16,17) The top 5 feature sets are listed in Table 3 along with their 1-NN and Bayes performance values. Examination of these values indicates that the 1-NN measure does not provide a consistent lower bound. This is primarily caused by the inclusion of variables, such as peak-trough time, which contain a number of identical measurement values. In this situation, the determination of the closest neighboring sample depends on the sorted order of the feature vectors in computer memory. The 1-NN metric is thus biased upward, since adjacent vectors often belong to the same data class. Variables which sometimes exhibited this 'clumped' behavior are NPTTT, TMIN, and FMAX.

Fig. 5. Trace ensembles generated by statistical modeling for the S. Glenrock model using 40 Hz. Bottom traces correspond to non-productive well 2 and top traces to productive well 5.

Another reason for the 1-NN metric exceeding the Bayes performance is the type of clustering displayed by the data in the given feature space. Use of ISODATA cluster analysis (used for pdf estimation) and multi-modal Gaussian estimates of the density function implicitly assumes that clusters are approximately hyper-ellipsoidal in shape. When this is not the case, the Bayes classifier can be expected to yield suboptimal results.

Because of these problems, we identify ten feature sets for each data case rather than relying completely on the first selection. After an initial set is found, the first feature chosen by the bottom-up procedure in the previous set is removed from the list of available features. Another set of attributes is then identified from this list and the process continues until ten sets are known. Figure 6 illustrates 9 iterations of the bottom-up procedure, showing the 1-NN metric plotted as a solid line and the Bayes result as a dashed line. As expected,

[Fig. 6 graphic: performance measure comparison; solid line = 1-NN metric, dashed line = Bayes performance, plotted against iteration number.]

Fig. 6. Bayes and 1-NN performance metrics for the bottom-up evaluation example.

the 1-NN metric provides a rough lower bound on Bayes performance, but the two curves do not track each other exactly. This indicates that small changes in the metric may not translate directly into similar small changes in Bayes performance. Furthermore, experience with curves of this type indicates that feature sets containing about 5 or more variables are not significantly better than smaller sets. The feature sets described in Table 3 therefore represent only the top 4 variables and do not include those which add no more than 2 or 3 per cent to the performance measure.

4. CONCLUSIONS

A Bayes classifier using feature set 1 (Table 3) and trained using statistical modeling was able to produce 85% correct classifications on data similar to that in Fig. 3. Even with 20% added noise (signal-to-noise ratio of about 11 dB), performance dropped only 6.4%, demonstrating the robustness of the technique.

In summary, this research has shown that statistical modeling can generate effective training data by using both well control and a priori geological information. The statistical training data can also be used in conjunction with a feature selection procedure and a 1-NN performance metric to identify potent feature sets. The application of these techniques along with Bayes classification and relaxation labeling is discussed in Kubichek and Quincy.(12)

SUMMARY

Stratigraphic traps containing hydrocarbon deposits are normally difficult to locate using seismic data. The research described in this paper addresses several problems associated with applying pattern recognition techniques to reflection seismic data. Among these is the difficulty of designing a classifier which utilizes the a priori knowledge of the geologist about the characteristics of expected hydrocarbon traps. A second problem stems from the small number of traces measured in the vicinity of a control well, which can result in insufficient training data. Finally, careful selection of the feature set and application of a Bayes classifier are critical for discriminating between

subtle changes in waveforms. Feature selection must therefore be based on a metric related to the Bayes probability of error.

In order to incorporate a priori geological knowledge into the classifier design, synthetic seismic traces are generated based on well control and the statistics of anticipated properties of the various rock layers. Probability density functions (pdfs) describing these statistics are used to generate suites of random well logs which describe possible conditions at points remote to the control wells. Synthetic seismic traces are derived from these random well logs and are subsequently used to train the classifier. This modeling procedure alleviates the problem of inadequate training data by creating as many synthetic traces as are needed.

Selection of an effective feature set is accomplished by a 'bottom-up' approach which successively adds attributes that improve performance and removes those which are no longer useful. Since this step is carried out using statistically generated training data, the resulting feature sets should be effective for the range of anticipated geological conditions. The performance metric employed is the per cent correct classifications of a 1-nearest neighbor (1-NN) decision rule applied to the training data. Whitney(15) has shown this measure to provide a lower bound on Bayes classifier performance.

These techniques are demonstrated by generating synthetic training data for a sandstone lens model. Feature selection is applied to this data to identify five effective sets of attributes. In a companion paper(12) this process is used in conjunction with Bayes classification and relaxation labeling to successfully identify stratigraphic traps in synthetic seismograms.
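The Bayes classification step referred to above models each class-conditional density as a weighted sum of Gaussians. A minimal sketch of that decision rule follows; the two-component mixtures, their parameters, and all names are invented for the example and are not the authors' fitted models (which were estimated via ISODATA clustering).

```python
import numpy as np

def gauss(x, mean, cov):
    """Multivariate normal density N(x; mean, cov)."""
    d = len(mean)
    diff = x - mean
    norm = np.sqrt((2 * np.pi) ** d * np.linalg.det(cov))
    return float(np.exp(-0.5 * diff @ np.linalg.inv(cov) @ diff) / norm)

def class_density(x, components):
    """Weighted sum of Gaussians: p(x | class) = sum_i w_i N(x; m_i, C_i)."""
    return sum(w * gauss(x, m, c) for w, m, c in components)

def bayes_classify(x, class_models, priors):
    """Assign x to the class maximizing prior * mixture density."""
    scores = [p * class_density(x, m) for m, p in zip(class_models, priors)]
    return int(np.argmax(scores))

# Invented two-class, 2-D example: each class a two-component mixture.
eye = np.eye(2)
class0 = [(0.5, np.array([0.0, 0.0]), eye), (0.5, np.array([1.0, 1.0]), eye)]
class1 = [(0.5, np.array([5.0, 5.0]), eye), (0.5, np.array([6.0, 6.0]), eye)]
models, priors = [class0, class1], [0.5, 0.5]
```

The mixture form lets each class density follow several clusters at once, though, as noted in Section 3, it still assumes roughly hyper-ellipsoidal clusters.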

Acknowledgements--The authors wish to express their appreciation to Dr Scott Smithson, Department of Geology and Geophysics, University of Wyoming, for his valuable discussions on geologic modeling.

REFERENCES

1. M. T. Halbouty, Rationale for deliberate pursuit of stratigraphic, unconformity, and paleogeomorphic traps, AAPG Memoir 26, 3-7, American Association of Petroleum Geologists, Tulsa, OK (1972).
2. M. B. Dobrin, Seismic exploration for stratigraphic traps, AAPG Memoir 26, 329-351, American Association of Petroleum Geologists, Tulsa, OK (1977).
3. P. A. Devijver and J. Kittler, Pattern Recognition: A Statistical Approach. Prentice-Hall, London (1982).
4. C. H. Chen, Statistical Pattern Recognition. Hayden Book Co., Rochelle Park, NJ (1973).
5. K. Khattri, A. Sinvhal and A. K. Awashthi, Seismic discriminants of stratigraphy derived from Monte-Carlo simulation of sedimentary formations, Geophys. Prospect. 27, 168-195 (1979).
6. A. Sinvhal and K. Khattri, Application of seismic reflection data to discriminate subsurface lithostratigraphy, Geophysics 48, 1498-1513 (1983).
7. K. Khattri and R. Gir, A study of the seismic signatures of sedimentation models using synthetic seismograms, Geophys. Prospect. 24, 454-477 (1976).
8. E. W. McLemore, Probability studies in three variants: velocity, depth, and lithology, Geophysics 28, 46-86 (1963).
9. C. J. Velzeboer, The theoretical seismic reflection response of sedimentary sequences, Geophysics 46, 843-853 (1981).
10. K. D. Wyatt, Synthetic vertical seismic profile, Geophysics 46, 880-891 (1981).
11. R. T. Ryder, M. W. Lee and G. N. Smith, Seismic Models of Sandstone Stratigraphic Traps in Rocky Mountain Basins. American Association of Petroleum Geologists, Tulsa, OK (1981).
12. R. F. Kubichek and E. A. Quincy, Identification of seismic stratigraphic traps using statistical pattern recognition, Pattern Recognition 18, 449-458 (1985).
13. R. F. Kubichek, Identification of stratigraphic traps from 2-D seismic data using non-linear statistical pattern recognition. Ph.D. Dissertation, University of Wyoming (1984).
14. P. M. Narendra and K. Fukunaga, A branch and bound algorithm for feature subset selection, IEEE Trans. Comput. C-26, 917-922 (1977).
15. A. W. Whitney, A direct method of nonparametric measurement selection, IEEE Trans. Comput. C-20, 1100-1103 (1971).
16. A. N. Mucciardi and E. E. Gose, A comparison of seven techniques for choosing subsets of pattern recognition properties, IEEE Trans. Comput. C-20, 1023-1031 (1971).
17. E. A. Quincy, Non-linear correlation receivers for non-Gaussian statistics, Record of 1970 Annual Southwestern IEEE Conf., p. 204 (1970).
18. L. D. Meckel Jr and A. K. Nath, Geologic considerations for stratigraphic modeling and interpretation, AAPG Memoir 26, 417-438, American Association of Petroleum Geologists, Tulsa, OK (1977).
19. M. W. Schramm Jr, E. V. Dedman and J. P. Lindsey, Practical stratigraphic modeling and interpretation, AAPG Memoir 26, 477-502, American Association of Petroleum Geologists, Tulsa, OK (1977).
20. N. S. Neidell and E. Poggiagliolmi, Stratigraphic modeling and interpretation--geophysical principles and techniques, AAPG Memoir 26, 389-416, American Association of Petroleum Geologists, Tulsa, OK (1977).
21. P. Bois, Autoregressive pattern recognition applied to the delimitation of oil and gas reservoirs, Geophys. Prospect. 28, 572-591 (1980).
22. P. Bois, Determination of the nature of reservoirs by use of pattern recognition algorithm with prior learning, Geophys. Prospect. 29, 687-701 (1981).
23. D. Tjostheim and O. Sandvin, Multivariate autoregressive recognition of multichannel waveforms, IEEE Trans. Pattern Anal. Mach. Intell. PAMI-1, 80-86 (1979).
24. L. Marple, A new autoregressive spectrum analysis algorithm, IEEE Trans. Acoust. Speech Signal Process. ASSP-28, 441-454 (1980).
25. J. Makhoul, Linear prediction: a tutorial review, Proc. IEEE 63, 561-580 (1975).
26. K. Huang and K. Fu, Classification of Ricker wavelets and the detection of bright spots using a tree classifier, Proc. 3rd Int. Symp. on Computer Aided Seismic Analysis and Discrimination, pp. 89-97 (1983).
27. D. C. Hagen, The application of principal component analysis to seismic data sets, Proc. 2nd Int. Symp. on Computer Aided Seismic Analysis and Discrimination, pp. 98-109 (1981).
28. M. T. Taner and R. E. Sheriff, Application of amplitude, frequency, and other attributes to stratigraphic and hydrocarbon determination, AAPG Memoir 26, 301-327, American Association of Petroleum Geologists, Tulsa, OK (1977).
29. M. T. Taner, F. Koehler and R. E. Sheriff, Complex trace analysis, Geophysics 44, 1041-1063 (1979).
30. N. A. Anstey, Seismic Interpretation: The Physical Aspects. IHRDC, Boston, MA (1977).

About the Author--R. F. Kubichek received B.S. degrees in Electrical Engineering and Computer Science at the University of Wyoming in 1976. He received an M.S. in Electrical Engineering in 1977 at the same university. He has recently completed the requirements for a Ph.D. degree in Electrical Engineering at the University of Wyoming. He worked for Boeing Computer Services for two years, and has since worked as a Lecturer, Research Assistant and Consultant. Mr Kubichek is currently employed in electrical engineering at the University of Wyoming on seismic signal processing research. His interests are statistical signal processing and pattern recognition. Mr Kubichek is a member of IEEE and the Society of Exploration Geophysicists.

About the Author--Edmund A. Quincy received his B.S.E.E. degree from the University of Nebraska in 1960, an M.S. degree in Engineering from the University of California, Los Angeles in 1962 and a Ph.D. degree in Electrical Engineering from Purdue University in 1966. Both advanced degrees were specialized in Statistical Communication Theory. From 1960 through 1963 he was employed in the aerospace industry, working on radar cross-section design and optical space communications. From 1966 to 1968 Dr Quincy conducted signal design and detection research for the U.S. Department of Commerce, Boulder, CO, and was concurrently a Visiting Lecturer at the University of Colorado. In 1968 he joined the Department of Electrical Engineering at Colorado State University as an Assistant Professor. Dr Quincy accepted the position of Associate Professor with the Department of Electrical Engineering at the University of Wyoming, Laramie, WY, in 1970 and was promoted to Professor in 1977. Since 1968 his research has been directed toward statistical signal processing in geophysics, including pattern recognition, image processing and estimation. Research projects have included induction sounding of

underground coal gasification sites, in situ oil shale burn detection, pattern recognition of ice crystals, prediction of freezing on bridge decks and seismic image enhancement, beam steering and pattern recognition. Dr Quincy has consulted for North American Rockwell, IBM, the U.S. Department of Energy and major oil companies. He is a Senior Member of IEEE, and has served as Secretary of the Denver Section and Chairman of the Communications Society Chapter of IEEE, Denver Section. In addition, he has served as Associate Editor of the IEEE Communications Society Digest. He is also a member of Eta Kappa Nu, Sigma Tau, Tau Beta Pi and Sigma Xi.