Pattern Recognition, Vol. 22, No. 6, pp. 767-774, 1989
0031-3203/89 $3.00 + .00  Pergamon Press plc
Pattern Recognition Society
Printed in Great Britain
A NEW APPROACH TO CLASSIFICATION OF BRAIN WAVES

JOSEPH J. LIANG
Institute for Constructive Mathematics, Department of Mathematics, University of South Florida, Tampa, FL 33620, U.S.A.

and

VIRGINIA CLARSON
E-Systems Inc., ECI Division, 1501 72nd Street, St. Petersburg, FL 33733, U.S.A.

(Received 11 March 1987; in revised form 28 July 1988; received for publication 28 November 1988)

Abstract--In past years statistical pattern recognition has been used in the classification of evoked potential waveforms. The statistical approach has more recently been enhanced, and often replaced, by the structural approach, in which patterns are depicted as being built out of subpatterns in various ways of composition. This paper describes an application of a general approach to pattern recognition in which these two principal approaches are combined. The result is a two-stage classification algorithm which places evoked potential waveforms into clinically significant classes. Fourier descriptors are used as feature representations of each waveform shape. Euclidean distance is used to determine an optimal number of these features for design and testing of the algorithm. The development of a dissimilarity measure follows and is crucial to the overall success of the algorithm. An interdistance matrix of the training sample, which is all that is retained from the original input, is projected into a pseudo-Euclidean vector space, resulting in a minimal vector representation of the sample. Accuracies of 90% correctly classified test waveforms have been obtained.

Unified theory    Brain waves    Pattern recognition    Classification    Evoked potential waveforms

INTRODUCTION

The many different techniques used to solve pattern recognition problems have been grouped into two principal approaches: the statistical approach and the structural approach. In the statistical approach, a set of characteristic measurements, or features, is extracted from patterns, and the classification of each pattern is made by partitioning the feature space into mutually exclusive regions, each region corresponding to a particular pattern class.(1) The features define the basis of the space and an object is represented as a vector in that space.

Structural pattern recognition procedures have rarely been used in EEG evaluation,(14) but there is a growing literature on the analysis of other biologically derived signals.(15,16) This pattern recognition approach involves not only the capability of assigning a pattern to a particular class, but also determining those aspects which make it ineligible for other classes.

This paper describes the result of a research effort to combine the advantages of both of these approaches into an algorithm for the automated classification of normal and abnormal evoked potential waveforms into clinically significant classes. A finite set of features, Fourier descriptors (FDs),(3-5) that characterize each waveform has been derived. FDs have proved to be a useful set of features in problems where the main information for classification is found in the boundary of the object. This property is extended to a line pattern by tracing it once and retracing it so that a closed boundary curve is obtained. Since the feature sets of FDs contain information about the shape of the waveforms, they will be used in the structural stage of the algorithm.

Using this set of features, a dissimilarity measure which identifies general pattern trends of individual EP waveforms is developed using discriminant analysis. The resulting interdistance matrix is all that is preserved in the construction of the vector representation, making the dissimilarity measure crucial to the success of the remainder of the procedure. Finally, the test waveforms are projected into a minimal pseudo-Euclidean vector space defined by the vector representation. Since the dimension of the pseudo-Euclidean space is less than that of the original FD vector descriptions, an additional reduction in the number of features is realized. Using only 5 features for each waveform, the elements of the test set are placed into clinically significant classes.

The classification procedure itself is an application of an approach proposed by Goldfarb(6,7) in which the structural and statistical approaches to pattern recognition are combined to obtain high recognition accuracy with minimum computing effort. It will be shown that this algorithm, using FDs as feature vectors and Euclidean distance to indicate improvement in
per cent of correctly classified waveforms, yields accuracies of at least 90%.

DATA PROCESSING
The brainstem auditory evoked potential (BAEP) has been chosen for classification because it has been shown that the technique of averaging used to recover these tiny signals from background activity and system noise is most reliable for the BAEPs.(8) The most common model used is:
f(t) = s(t) + n(t)

where f(t) is the measured waveform, s(t) is the component of f(t) associated with the evoked potential, and n(t) is the component of f(t) independent of the evoked potential, i.e. noise. Since the noise is independent of the evoked potential and s(t) repeats with each stimulus, simple averaging results in an estimate of the evoked potential waveform.

Waveforms for 90 subjects have been recorded on a Nicolet CA-1000 and transferred to permanent storage in an IBM 3081D system via a Nicolet DC2000. Of the 90 waveforms, 50 are normal and 40 abnormal. Visual evaluation by electroencephalographers has divided 40 of these waveforms into normal and abnormal categories, 20 per category. These waveforms have been used for the training set. Each waveform is assumed to qualify for only one of the categories.

Fourier descriptors have been chosen as feature representations of the waveforms. An advantage of using FDs is the significant data reduction in shape representation and recognition. The relationship between training sample size and optimal number of features has been studied.(17) It has been shown that when a finite number of design samples is used and parameters are estimated, the probability of error approaches 1/2 as the dimensionality increases.(18) Cluster analysis is used to maximize confidence in the optimal number of features to use in the classification system. The L technique,(19) a modification of the U method, uses N - 1 of the data vectors, where N is the number of training samples in each class. The length of the FD feature vectors was varied between 5 and 15 components. Using Euclidean distance to test the Nth vector, a maximum of 55% correctly classified waveforms occurred when the feature vector had 8 components. Since the initial classification results were so low, it was decided that maximizing the number of correctly classified waveforms using Euclidean distance would be the criterion for the success of the remainder of the procedure.
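As a rough illustration of this feature experiment (a sketch, not the authors' code: the FD normalization, the function names and the nearest-class-mean rule are assumptions), closed-trace Fourier features and a leave-one-out Euclidean check could look like this in Python:

```python
import numpy as np

def fourier_descriptors(waveform, n_fd=8):
    """Close the open line pattern by retracing it, then keep the
    magnitudes of the first n_fd Fourier coefficients as shape features.
    (Sketch only; the paper's exact FD normalization is not reproduced.)"""
    closed = np.concatenate([waveform, waveform[::-1]])  # trace, then retrace
    coeffs = np.fft.fft(closed)
    return np.abs(coeffs[1:n_fd + 1])

def leave_one_out_accuracy(X, y):
    """L-technique-style check: classify each left-out feature vector by
    Euclidean distance to the mean of the remaining vectors in each class."""
    X, y = np.asarray(X, dtype=float), np.asarray(y)
    correct = 0
    for i in range(len(X)):
        keep = np.arange(len(X)) != i
        means = {c: X[keep & (y == c)].mean(axis=0) for c in np.unique(y)}
        pred = min(means, key=lambda c: np.linalg.norm(X[i] - means[c]))
        correct += int(pred == y[i])
    return correct / len(X)
```

Sweeping n_fd from 5 to 15 in such a loop mirrors the reported experiment, with the maximum near 8 components.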
DEVELOPMENT OF A DISSIMILARITY MEASURE

An important aspect of the systematic approach that combines the advantages of the structural and statistical approaches is that the shape of the objects to be classified plays an important part in the development of the procedure. In this application the description of the shape of the waveforms is assumed to be contained in the vectors of Fourier coefficients. It should be mentioned that other interesting representations of brain waveform data have been presented. Bourne et al.(20) describe a pattern representation of the physical shape based on measurements of peak frequency, power in the main spectral peak and various absolute and ratio power measures. A purely syntactic description, in which primitives are extracted and sentences that adequately describe the signal are formed, can be found in Madhavan et al.(2) Either of these methods of shape description could serve as an excellent vehicle in the development of a dissimilarity measure.

This section describes the steps taken in the development of an interdistance matrix for the training set. Stage I involves an investigation to find an appropriate discriminant function that will be used to give an initial partition of the waveform data. Stage II combines this partition with a norm which describes the relative distances between FD vectors. The final partition that results from stage II provides the discriminatory information that is required for the remainder of the procedure.

The dissimilarity measure, or pseudometric function, is used to transplant the original sample to a vector space which preserves discriminatory information. See definition 1 in the Appendix at the end of this paper; other useful definitions and statements of theorems and corollaries will also be found there. The mapping Π will be used to represent a pseudometric function.
Stage I: discriminant analysis

The construction of the dissimilarity measure is achieved using the discriminant function described by Lachenbruch:(9)

D_Γ(X) = (X - ½(μ_1 + μ_2))ᵗ Σ⁻¹ (μ_1 - μ_2)    (1)

where μ_i, i = 1, 2, are the maximum likelihood estimates of the mean feature vectors for the normal and abnormal categories, and Σ is the common covariance matrix. Some modifications to the μ_i were necessary because outliers appeared to exist in the individual marginal samples. A problem inherent to brainwave classification is the outlier, which can arise naturally in the generation of noise-infiltrated data. A discordancy test for an upper outlier, x_(n), was performed on the eight marginal samples in each clinical class. The test statistic used was the internally studentized extreme deviation from the mean,

(x_(n) - x̄) / s < 2.56,

where x̄ is the maximum likelihood estimate of the mean of each marginal sample and

s² = (1/(n - 1)) Σ_{j=1}^{n} (x_(j) - x̄)².
Once identified, the outlier is collapsed to its nearest neighbor, which is counted twice in the new unweighted average vector. Using a modified discriminant function, D′_Γ, in which the μ_i of (1) are replaced by estimates computed without outliers, results in a new partition placing 55% of the waveforms into the correct clinical classes and another 25% of the waveforms into a tight cluster containing both normal and abnormal candidates. Thus, the training set was classified into three categories: normal (N), abnormal (A), and questionable (Q). It will be shown that, in spite of these low results at this stage, the final accuracy achieved using the described dissimilarity measure is high.
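A minimal sketch of these Stage I computations, assuming a NumPy setting (the 2.56 threshold is from the text; the helper names are hypothetical):

```python
import numpy as np

def collapse_upper_outlier(sample, threshold=2.56):
    """Discordancy test for an upper outlier using the internally
    studentized extreme deviation; a discordant maximum is collapsed to
    its nearest neighbor, which is then counted twice in the mean."""
    x = np.sort(np.asarray(sample, dtype=float))
    s = x.std(ddof=1)
    if s > 0 and (x[-1] - x.mean()) / s >= threshold:
        x[-1] = x[-2]              # collapse outlier onto nearest neighbor
    return x.mean()                # modified (unweighted) marginal mean

def discriminant_score(x, mu1, mu2, sigma):
    """Eq. (1): Lachenbruch's linear discriminant evaluated at x; the sign
    of the score indicates the category."""
    mu1, mu2 = np.asarray(mu1, dtype=float), np.asarray(mu2, dtype=float)
    return float((x - 0.5 * (mu1 + mu2)) @ np.linalg.solve(sigma, mu1 - mu2))
```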
Stage II: development of the interdistance matrix

Let each waveform X_j be represented by eight FDs, x_{ji}, i = 1, 2, ..., 8. Normalize the FD vector using the following: if x_{jl} is the largest FD for waveform X_j, then

a_{jk} = int(8 x_{jk} / x_{jl} + 0.5)   for k ≠ l,
a_{jk} = 8                              for k = l.

Thus we have the vector

X′_j = (a_{j1}, a_{j2}, ..., a_{j8}).

Next, if a_{jk} is the kth component of waveform X_j, define

y_{jn} := ‖a_{jn} - a_{j,n+1}‖,   n = 1, 2, ..., 7,

and

A := Σ_{i=1}^{7} ‖y_{ji} - y_{ki}‖,   B := Σ_{i=1}^{8} ‖a_{ji} - a_{ki}‖.

The norms ‖·‖ are given one of three fixed values that reflect the relative frequency of occurrence of the difference, according to whether |y_{ji} - y_{ki}| falls in {0, 1}, in {2, 3, 4}, or in {5, ..., 8}, and similarly for ‖a_{ji} - a_{ki}‖. Differences of 0 or 1 are considered to indicate highly similar individual FDs or relative differences in FDs. Differences of 5-8 show significant comparisons and occur most often when comparing different classes. Finally, define

X_{jk} := min(A, B),   Y_{jk} := max(A, B).

The numerical measure of dissimilarity is calculated using the following algorithm.
Step 1. Input two waveforms, X_j and X_k, each represented by 8 FDs.
Step 2. If x_{ji} = x_{ki} for i = 1, 2, ..., 8, then d_{jk} = d(X_j, X_k) = 0.
Step 3. Normalize the FDs and calculate X_{jk}, Y_{jk}.
Step 4. Use D′_Γ to classify X_j and X_k as Normal, Abnormal, or Questionable.
Step 5. Assign the value for d_{jk} = d(X_j, X_k) according to Table 1.
Table 1. Assignment of d_{jk} = d(X_j, X_k) according to the classification of X_j and X_k as Normal (N), Abnormal (A) or Questionable (Q). Entries marked * and † are assigned as follows.

* If X_{jk} ≤ 3, assign X_{jk}·2; otherwise assign X_{jk}·5.
† If Y_{jk} = 4 or |Y_{jk} - X_{jk}| = 2, assign X_{jk}·5; otherwise assign Y_{jk}·3.
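The dissimilarity algorithm can be sketched as follows. Since the printed norm values and the Table 1 cell entries did not survive reproduction, the sketch takes them as assumed parameters; `norm_value` and `table1` are hypothetical callables, not quantities given in the paper:

```python
import numpy as np

def normalize_fds(fd):
    """Stage II normalization: scale the 8 FDs so the largest becomes 8."""
    fd = np.asarray(fd, dtype=float)
    l = int(np.argmax(fd))
    a = np.floor(8.0 * fd / fd[l] + 0.5).astype(int)
    a[l] = 8
    return a

def dissimilarity(fd_j, fd_k, cls_j, cls_k, norm_value, table1):
    """Five-step measure. `norm_value` maps an absolute difference (0..8)
    to its assigned norm, and `table1` encodes Table 1; both are assumed
    here because the printed values are not recoverable."""
    if np.array_equal(fd_j, fd_k):                      # step 2
        return 0
    aj, ak = normalize_fds(fd_j), normalize_fds(fd_k)   # step 3
    yj, yk = np.abs(np.diff(aj)), np.abs(np.diff(ak))
    A = sum(norm_value(abs(int(u) - int(v))) for u, v in zip(yj, yk))
    B = sum(norm_value(abs(int(u) - int(v))) for u, v in zip(aj, ak))
    Xjk, Yjk = min(A, B), max(A, B)
    return table1(cls_j, cls_k, Xjk, Yjk)               # steps 4-5
```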
Selections from the resulting interdistance matrix are shown in Table 2.

VECTOR REPRESENTATION
Once introduced, the dissimilarity measure, and not the set of waveform data, becomes of prime importance. Letting P = {X_i}_{0≤i≤k}, we project (P, Π) into a pseudo-Euclidean space, i.e. α: (P, Π) → R^(n+, n-).
Table 2. Dissimilarity matrix of 9 normal and 9 abnormal waveforms in the training set (lower triangle of the symmetric matrix). X5 (abnormal) is in the questionable category.

            Normal                               Abnormal
       1   2   3   4   5   6   7   8   9    1   2   3   4   5   6   7   8   9
N 1    0
  2    5   0
  3    4   4   0
  4    2   4   6   0
  5    4   1   4   4   0
  6    7   5   7   5   5   0
  7    7   2   5   5   3   3   0
  8    4   2   0   6   2   5   4   0
  9    4   6   8   4   7   7   7   7   0
A 1   30  25  40  35  25  30  10  30  45    0
  2   25  25  35  35  25  40  30  35  45    7   0
  3   35  25  35  25  25  35  30  35  15    8   8   0
  4   35  30  40  25  30  20  25  40  30    8   5   6   0
  5   25  25  35  25  20  35  35  30  35    9  11   5   9   0
  6   20  30  40  30  20  15  20  30  25    5   6   4   6   6   0
  7   20  30  35  25  30  45  35  30  20    8   6   5   8   6   7   0
  8   15  30  40  30  30  45  35  30  30    6   6   5   7   5   5   5   0
  9   50  30  45  45  35  35  35  40  25    8   7   5   7   8   7   7   9   0
The quadratic form is given as

ψ(x) = Σ_{i,j=1}^{k} ½(d_{0i}² + d_{0j}² - d_{ij}²) x_i x_j    (2)

where d_{ij} = Π(X_i, X_j). Note that α(X0) = 0. It is rarely the case that among the elements of P there exists an element whose image coincides with the mean vector of (P, Π). This problem can be eliminated by introducing one new element, p, to the set with the property Π²(p, X_i) = ‖v̄ - v_i‖², where v_i = α(X_i) and α(p) = v̄ is the mean vector of (P, Π) with respect to α. With the introduction of p, the matrix of the symmetric bilinear form (2) becomes

M(Π')_{ij} = ½[(1/(k+1)) Σ_{p=0}^{k} d_{pj}² + (1/(k+1)) Σ_{q=0}^{k} d_{iq}² - (1/(k+1))² Σ_{p=0}^{k} Σ_{q=0}^{k} d_{pq}² - d_{ij}²],   0 ≤ i, j ≤ k.    (3)

The following algorithm results in the vector representation we are looking for. In it, we find the matrix T such that

J_0 = Tᵗ M(Π') T,

where M(Π') is found using (3),

J_0 = [ I_n  0 ]
      [ 0    0 ]  (k × k),

and I_n is the n × n identity matrix (see theorem 1).

Out of the original training samples, an appropriate representation set P is chosen, i.e. those waveforms which were correctly classified into the normal and abnormal categories. These are assumed to represent the two categories sufficiently well. Then a vector representation for the finite pseudometric space (P, Π), P = {X_i}_{0≤i≤k1}, k1 < k, is constructed as follows.

1. Compute the matrix M(Π') of order (k1 + 1) × (k1 + 1) using (3), where d_{ij} = Π(X_i, X_j).
2. Find the characteristic values of M(Π') and the corresponding orthonormal characteristic vectors. Form the diagonal matrix D of order (k1 + 1) × (k1 + 1) with diagonal d_1, ..., d_{n+}, d_{n++1}, ..., d_{n++n-}, 0, ..., 0, where d_i, 1 ≤ i ≤ n+, are the positive characteristic values and d_{n++1}, ..., d_{n++n-} are the negative ones, and compute T = L·D̃^{1/2}, where L is the matrix of the corresponding characteristic vectors and D̃ is obtained from D by replacing each nonzero entry with its absolute value.
3. Take the first n+ + n- elements of the ith row of T as the coordinates of α(X_{i-1}), 1 ≤ i ≤ k1 + 1, with respect to a φ-orthonormal basis (e_i) of R^(n+, n-), for the vector representation α: (P, Π) → R^(n+, n-).

An appealing aspect of this algorithm is that, since the characteristic values of M(Π') can be determined in order of decreasing magnitude, the matrices D, L and T can be computed for any number of characteristic values and vectors. Thus one has complete control over the dimension of the vector representation. In this study, the magnitudes of three positive and
two negative characteristic values were significantly greater than the others. By theorem 2, the pseudo-Euclidean space R^(3,2) is the minimal one in which (P, Π) can be isometrically represented.

Example. The vector signature can be confirmed using Lagrange's algorithm.(13) To illustrate the procedure thus far, six correctly classified waveforms, three normal and three abnormal, have been chosen. Figure 1 shows the normalized FDs for these waveforms. The matrix D of the corresponding interdistances for waveforms X0, X1, X2, X3, X4, X5 is given by:
D = [  0  15   6  15   6  20
      15   0  30   4  45   5
       6  30   0  20   6  35
      15   4  20   0  40   5
       6  45   6  40   0  30
      20   5  35   5  30   0 ]
X0, X2 and X4 are abnormal waveforms and X1, X3, X5 are normal. Substitution of these values into (2) gives the quadratic form, ψ(x), of (P, Π). Lagrange's algorithm gives the following canonical form

Φ(y) = (y_1)² + (y_2)² + (y_3)² - (y_4)² - (y_5)²,

where y = T·x and

T = [ 15   -10.65    6.9      -29.4     10.15
       0     0      14.0275  -20.4818  24.7954
       0     0       0         0       14.6190
       0    -8.799   4.4022  -32.1695  -4.7024
       0     0       0       -14.5939  22.6291 ].

The vector representation α: (P, Π) → R^(3,2) is given by the 0 vector and the columns of T:

α(X0) = v0 = (0, 0, 0, 0, 0)ᵗ
α(X1) = v1 = (15, 0, 0, 0, 0)ᵗ
α(X2) = v2 = (-10.65, 0, 0, -8.799, 0)ᵗ
α(X3) = v3 = (6.9, 14.0275, 0, 4.4022, 0)ᵗ
α(X4) = v4 = (-29.4, -20.4818, 0, -32.1695, -14.5939)ᵗ
α(X5) = v5 = (10.15, 24.7954, 14.6190, -4.7024, 22.6291)ᵗ.

Distance in R^(3,2) is given by the following formula,

d(u, v) = [(u_1 - v_1)² + (u_2 - v_2)² + (u_3 - v_3)² - (u_4 - v_4)² - (u_5 - v_5)²]^{1/2}    (4)

where u = (u_1, u_2, u_3, u_4, u_5)ᵗ and v = (v_1, v_2, v_3, v_4, v_5)ᵗ. Using this formula one can check that distances are preserved in R^(3,2).

Fig. 1. Normalized Fourier descriptors: X1, X3, X5 normal; X0, X2, X4 abnormal.
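A compact sketch of this construction under stated assumptions: it builds the matrix of inner products implied by (2)-(3) with X0 as base point, eigendecomposes it, and keeps the chosen numbers of positive and negative characteristic values (3 and 2 in this study). Function names are illustrative, and the exact scaling of T in the paper may differ:

```python
import numpy as np

def pseudo_euclidean_embedding(D, n_pos, n_neg):
    """Return rows alpha(X0), ..., alpha(Xk) for the interdistance matrix D."""
    D2 = np.asarray(D, dtype=float) ** 2
    # inner products of form (2): m_ij = 0.5 (d_0i^2 + d_0j^2 - d_ij^2)
    M = 0.5 * (D2[0][:, None] + D2[0][None, :] - D2)[1:, 1:]
    w, L = np.linalg.eigh(M)                  # characteristic values/vectors
    order = np.argsort(w)
    idx = np.concatenate([order[::-1][:n_pos], order[:n_neg]])
    V = L[:, idx] * np.sqrt(np.abs(w[idx]))   # scaled coordinates
    return np.vstack([np.zeros(n_pos + n_neg), V])  # alpha(X0) = 0

def pseudo_distance(u, v, n_pos):
    """Eq. (4): distance in R^(n+, n-); real whenever u, v come from an
    embedded pseudometric space."""
    d2 = (np.asarray(u, dtype=float) - np.asarray(v, dtype=float)) ** 2
    return np.sqrt(d2[:n_pos].sum() - d2[n_pos:].sum())
```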
REPRESENTING A NEW OBJECT

The orthogonal projection is used to represent an element, q, in the pseudo-Euclidean space R^(n+, n-). A basis for R^(n+, n-) is chosen from among the elements v_i = α(X_i) which are obtained from the construction of the vector representation of (P, Π) during the training stage of this procedure. Let G denote the Gram matrix of the basis (v_i)_{1≤i≤n}. The Gram matrix of the form φ with respect to the basis (v_i)_{1≤i≤n} contains the complete metric information about the vectors v_1, ..., v_n. Then, if A_0 is the matrix whose columns are the coordinate columns of the v_j, the
orthogonal projection, q̃, of q is found:

q̃ = A_0 · G⁻¹ · b    (5)

where b = (b_1, ..., b_n)ᵗ is the coordinate vector whose components are computed by

b_j = ½[Π²(q, X0) + Π²(X_j, X0) - Π²(q, X_j)],   α(X_j) = v_j.

Thus, since G⁻¹ and A_0·G⁻¹ are computed once for all new elements, q, in the test set of evoked potential waveforms, the only on-line computations necessary are those that find the b_j and those in (5). It is necessary to perform only O(n²) calculations to find each new projection.

Returning to our earlier example, let X6 represent a new waveform in the test set. We wish to represent X6 in R^(3,2) and classify it there. The distances between X6 and the six waveforms used to construct the vector representation are given by:

Π(X0, X6) = 30,   Π(X1, X6) = 4,   Π(X2, X6) = 25,
Π(X3, X6) = 4,   Π(X4, X6) = 35,   Π(X5, X6) = 2.
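A hedged sketch of eq. (5) for a new object; the ordinary Euclidean Gram product is used here because that is what the printed example appears to use, while the general theory would weight coordinates by the signature. Names are illustrative:

```python
import numpy as np

def project_new_object(V, dist_q, dist_0):
    """V holds the basis v1, ..., vn as rows; dist_q holds Pi(q, X0), ...,
    Pi(q, Xn); dist_0 holds Pi(X1, X0), ..., Pi(Xn, X0)."""
    A0 = np.asarray(V, dtype=float).T          # columns are the basis v1..vn
    G = A0.T @ A0                              # Gram matrix of the basis
    dq = np.asarray(dist_q, dtype=float)
    d0 = np.asarray(dist_0, dtype=float)
    b = 0.5 * (dq[0] ** 2 + d0 ** 2 - dq[1:] ** 2)   # b_j from the formula
    return A0 @ np.linalg.solve(G, b)          # eq. (5): A0 G^{-1} b
```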
The only possible basis, in this case, is X1, X2, X3, X4, X5. Therefore,

A_0 = [ 15   -10.65    6.9      -29.4     10.15
         0     0      14.0275  -20.4818  24.7954
         0     0       0         0       14.6190
         0    -8.799   4.4022  -32.1695  -4.7024
         0     0       0       -14.5939  22.6291 ]

and

G = [  225      -159.75    103.5     -441       152.25
      -159.75    190.8449 -112.22     596.1694  -66.7211
       103.5    -112.22    263.7601  -631.785   397.1516
      -441       596.1694 -631.785   2531.7228 -985.2374
       152.25   -66.7211   397.1516  -985.2374 1465.7383 ].

With b = (554.5, 155.5, 554.5, -144.5, 648)ᵗ, substituting into (5) gives

q̃ = (36.3, -4.4314, 11.729, -35.9568, 1.6962)ᵗ.

Using (4), the distance in R^(3,2) between q̃ and the mean vectors for the normal and abnormal categories is found. The test waveform, q̃, is classified as normal which, indeed, it is.

The remaining test waveforms were projected into R^(3,2) and classified there; 90% of the test set was correctly placed into normal and abnormal categories. Thus, this classification algorithm reduces the total number of features to five and gives very promising results.

CONCLUSION

In this paper a new approach to pattern recognition has been applied to the classification of evoked potential waveforms. In it, Fourier descriptors are used as feature vectors to describe each waveform. Using FDs and the described dissimilarity measure has given results of 90% correctly classified waveforms, indicating that the theoretical approach proposed by Goldfarb has considerable merit.

Feature selection is an important phase in the development of any pattern recognition algorithm. It has been indicated that the features used in this classification procedure could be enhanced by incorporating those described by Bourne and Madhavan. Future studies will pursue improvements in feature selection.

Finally, it is important to realize that this algorithm can be extended to other applications. A natural extension would be a two-class problem with objects described by their boundary using FDs.
REFERENCES
1. K. S. Fu, Sequential Methods in Pattern Recognition and Machine Learning. Academic Press, London (1968).
2. G. P. Madhavan, H. DeBruin, A. R. M. Upton and M. E. Jernigan, Classification of brain-stem auditory evoked potentials by syntactic methods, Electroenceph. clin. Neurophysiol. 65, 289-296 (1986).
3. C. T. Zahn and R. Z. Roskies, Fourier descriptors for plane closed curves, IEEE Trans. Comput. C-21, 269-281 (1972).
4. E. Persoon and K. S. Fu, Shape discrimination using Fourier descriptors, IEEE Trans. Syst. Man Cybern. SMC-7, 170-179 (1977).
5. T. R. Crimmins, A complete set of Fourier descriptors for two-dimensional shapes, IEEE Trans. Syst. Man Cybern. SMC-12 (1982).
6. L. Goldfarb, A new approach to pattern recognition, Progress in Pattern Recognition, L. N. Kanal and A. Rosenfeld, Eds, Vol. 2. North-Holland, Amsterdam (1985).
7. L. Goldfarb, A unified approach to pattern recognition, Pattern Recognition 17, 575-582 (1984).
8. J. J. Stockard, J. E. Stockard and F. W. Sharbrough, Brainstem auditory evoked potentials in neurology: methodology, interpretation and clinical application, Electrophysiologic Approaches to Neurological Diagnosis, M. J. Aminoff, Ed. Churchill Livingstone, New York (1980).
9. P. A. Lachenbruch, Discriminant analysis when the initial samples are misclassified, Technometrics 8, 657-662 (1966).
10. V. Barnett and T. Lewis, Outliers in Statistical Data. John Wiley, New York (1978).
11. W. Greub, Linear Algebra. Springer, Berlin (1974).
12. I. J. Schoenberg, Remarks to Maurice Frechet's article, Ann. Math. 36, 724-732 (1935).
13. A. Mal'cev, Foundations of Linear Algebra. Freeman, San Francisco (1963).
14. D. A. Giese, J. R. Bourne and J. W. Ward, Syntactic analysis of the electroencephalogram, IEEE Trans. Syst. Man Cybern. SMC-9, 429-435 (1979).
15. S. L. Horowitz, A syntactic approach for peak detection in waveforms with applications to cardiology, Communs ACM 18, 281-285 (1975).
16. G. Stockman, L. Kanal and M. C. Kyle, Structural pattern recognition of carotid pulse waves using a general waveform parsing system, Communs ACM 19 (1976).
17. L. Kanal and B. Chandrasekaran, On dimensionality and sample size in statistical pattern classification, Pattern Recognition 3, 225-234 (1970).
18. G. V. Trunk, A problem of dimensionality: a simple example, IEEE Trans. Pattern Anal. Mach. Intell. PAMI-1, 306-307 (1979).
19. J. I. Aunon, C. D. McGillem and R. O'Donnell, Comparison of linear and quadratic classification of event-related potentials on the basis of their exogenous or endogenous components, Psychophysiol. 19, 531-537 (1982).
20. J. R. Bourne, V. Jagannathan, B. Hamel, B. H. Jansen, J. W. Ward, J. R. Hughes and C. W. Erwin, Evaluation of a syntactic pattern recognition approach to quantitative electroencephalographic analysis, Electroenceph. clin. Neurophysiol. 52, 57-64 (1981).
APPENDIX

The dissimilarity measure (pseudometric function) is used to transplant the training sample to a vector space which preserves discriminatory information.

Definition 1. A pseudometric space is a set P together with a non-negative real-valued mapping

π: P × P → R⁺

which satisfies the following:

(a) π(p_1, p_2) = π(p_2, p_1)   ∀ p_1, p_2 ∈ P
(b) π(p, p) = 0   ∀ p ∈ P.

The mapping π is called a pseudometric function.

Definition 2. A pseudo-Euclidean vector space is a vector space V together with an inner product (·,·) satisfying the following:

(1) (x_1 + x_2, y) = (x_1, y) + (x_2, y)   ∀ x_1, x_2, y ∈ V
(2) (cx_1, y) = c(x_1, y)   ∀ c ∈ R
(3) (x_1, y) = (y, x_1)
(4) (x_1, y) = 0 ∀ x_1 ∈ V ⟹ y = 0.

Properties 1-4 tell us that (·,·) is a non-degenerate indefinite symmetric bilinear form. Pseudo-Euclidean vector spaces may have vectors x ≠ 0 such that (x, x) = 0 (isotropic vectors) and vectors y such that (y, y) < 0. In what follows R^(n+, n-) denotes the real n-dimensional (n = n+ + n-) pseudo-Euclidean vector space.

Definition 3. If φ is any non-degenerate symmetric bilinear form, then, given any basis (a_i)_{1≤i≤n}, the square matrix

m(φ) := (φ(a_i, a_j))_{1≤i,j≤n}

is called the Gram matrix of φ with respect to that basis.

Definition 4.

J_{n+, n-} := [ I_{n+}   0
                0     -I_{n-} ],

where I_m is the m × m identity matrix.

Definition 5. A pair of non-negative numbers (n+, n-) will be called the vector signature of a finite pseudometric space (P, Π) if there exists a distance preserving mapping

α: (P, Π) → R^(n+, n-)

such that, for any other isometric mapping of (P, Π) into R^(n1, n2), we have n1 ≥ n+ and n2 ≥ n-. α is called the vector representation of (P, Π) of signature (n+, n-).

Theorem 1. For every symmetric bilinear form φ (or its corresponding quadratic form) on a vector space V of dimension k, there exists a basis of V with respect to which the matrix of φ has the form

m(φ) = [ I_{n+}   0      0
          0     -I_{n-}  0
          0      0       0 ]  (k × k)

and n+, n- are uniquely determined by φ.

This theorem is proved in Goldfarb.(6) It is important to note the uniqueness of the vector signature. The next theorem and its corollary show that the class of Euclidean inner product spaces is not large enough to accommodate every finite pseudometric space. The theorem is due to Schoenberg(12) and provides the quadratic form needed to construct the vector representation. It should also be noted that a consequence of the next theorem is that the signature of (P, Π) is independent of the ordering of the set P.

Theorem 2. Let P = {p_i}_{0≤i≤k}. A finite pseudometric space (P, Π) has vector signature (n+, n-) if and only if the quadratic form

ψ(x) = Σ_{i,j=1}^{k} ½(d_{0i}² + d_{0j}² - d_{ij}²) x^i x^j,

where x = (x^1, ..., x^k) and d_{ij} = Π(p_i, p_j), has signature (n+, n-). Here R^k denotes the k-dimensional space of k-tuples with no inner product.

Corollary. A finite pseudometric space (P, Π) can be isometrically represented in Euclidean n-dimensional space if and only if the corresponding quadratic form is positive and of rank n.
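Theorem 2 can be checked numerically; the following small sketch (assuming NumPy; not part of the original paper) reads the signature off the eigenvalues of ψ, with the corollary's Euclidean case corresponding to n- = 0:

```python
import numpy as np

def vector_signature(D, tol=1e-9):
    """Signature (n+, n-) of a finite pseudometric space with interdistance
    matrix D, via the eigenvalues of the quadratic form psi of theorem 2."""
    D2 = np.asarray(D, dtype=float) ** 2
    M = 0.5 * (D2[0][:, None] + D2[0][None, :] - D2)[1:, 1:]  # form of psi
    w = np.linalg.eigvalsh(M)
    return int((w > tol).sum()), int((w < -tol).sum())
```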
About the Author--VIRGINIA H. CLARSON is with E-Systems, ECI Division, St. Petersburg, Florida. She is the principal investigator in the development of mathematical foundations for military communications systems. Her research includes analysis of the effectiveness of neural networks for solutions to media selection and routing strategy problems. She has also developed techniques for measuring undetected message error rate, sizing, and timing problems in large fault-tolerant communications systems. This work has included analysis using pattern recognition techniques, statistical decision-making systems, fuzzy sets and neural networks for large-scale and complex communications operations. Virginia Clarson holds an M.A. and a Ph.D. in Mathematics from the University of South Florida, where she specialized in statistical and syntactic pattern recognition. This work included the development and implementation of analytical algorithms for classification applications, feature extraction and optimization. Dr Clarson is a member of MAA, AAAI, and ACM.

About the Author--JOSEPH J. LIANG is a Professor and Associate Director of the Institute for Constructive Mathematics at the University of South Florida. He is directly responsible for major research projects in the development of computational mathematics and algebraic manipulation for computer algorithms, programming and coding. Dr Liang is a recognized authority on computer coding and pattern recognition, and has directed major research projects on fractal imagery, neural networks and artificial intelligence. Dr Liang was Visiting Research Professor, Institute of Computer and Decision Sciences, National Tsing Hua University, Hsinchu, Taiwan. He has participated in 30 major national and international research conferences on electronics, communications and computer systems. These include the conference on Computers and the Computer Science Colloquium in Tainan, Taiwan, in 1986, and conferences at Beijing University and the Tianjin Institute of Technology in 1987. He presented lectures at the AMS special session on Finite Field Theory in Knoxville, Tennessee, in 1988 and at the International Conference on Algebraic Number Theory in Oberwolfach, Germany, in 1986, 1979, 1977, 1975 and 1973. Dr Liang has authored 21 professional papers, and his work has been cited in eleven fundamental textbooks on number theory, coding, and algebraic and computational concepts. He is presently writing papers on mathematical applications of computer graphics and on parallel algorithms for problems in number theory. He has served as a referee for the Journal of Number Theory, IEEE Transactions on Information Theory, IEEE Transactions on Education, the International Journal of Mathematics, the AMS Monthly, and the Crelle Journal. Dr Liang received his Ph.D. from Ohio State University. He is a member of the MAA, AMS, and Sigma Xi.