Parametric images as a tool for quantitative normative evaluation


Michael L. Goris

Features extracted from static or dynamic scintigraphic images can be used to produce parametric or functional images. Those images typically map a dynamic parameter (temporal encoding) or a quantitative measure of the tracer distribution. In this report, we consider the cases in which the nature of the extracted feature is well suited for a quantitative normative evaluation, ie, where the values obtained can be directly or indirectly compared with expected normal values. The major difference between this approach and the more common heuristic or synoptic interpretation of "images" lies in the underlying modeling: the model "predicts" a minimal washout rate, a match between ventilation and perfusion rates in the lungs, homogeneous contraction in the left ventricle, an expected angular distribution of thallium in the myocardium, or the absence of an additional kinetic feature. The quantitative aspect of the analysis is based in all cases on an approach that overcomes or is less sensitive to morphological or structural biological variability: in some cases the

patient provides the normalizing data, as in ventilation-perfusion ratios. In other cases, the model predicts homogeneous results (as in phase analysis) or a range of normal physiological values (for xenon washout). Less commonly, the analysis requires a transformation of the data, as is the case in the analysis of myocardial perfusion, which follows a polar transformation of the image and an analysis based on angular coordinates. In a normative approach, pathology is defined as a deviation (in this case a quantitative deviation) from the norm. The next step is to model specific abnormalities: early right-sided reappearance of the bolus in a left-to-right shunt, the appearance of "abnormal" kinetic factors, or a comparison with the expected distribution of blood flow (or location of defects) in certain coronary lesions. However, in all cases, one should note that the analysis does not compromise the major feature of "imaging," that is, to recognize regional, rather than exclusively global, malfunction. © 1987 by Grune & Stratton, Inc.

N G E N E R A L , parametric images are under-

scintigraphy. The functional characteristic follows from the mechanism that controls the measured tracer distribution. Even in liver scintigraphy, one can claim that the spatial tracer distribution, and hence the spatial mapping of count rate densities in the static image, is a reflection of blood flow and reticuloendothelial function. Liver scintigraphy is truly functional rather than structural or morphological imaging. Parametric imaging is a form of information extraction, and by necessity is associated with information loss. The advantage discussed here is the possibility it gives to exclude interpatient variability due to structure or morphology and not to functional differences. The result is an image that is more easily evaluated in a quantitative, rather than synoptic fashion. Consider liver scintigraphy. In the limiting situation, the variability of count rate densities can be due either to a (normal) variation in liver mass projecting in different points of the imaging plane, or to a local decrease in either flow or reticuloendothelial function. The data themselves do not provide the information to distinguish between those possibilities. Judgment is based on mental images of "normal" liver mass distribution and morphology and takes biological

stood to be images in which a particular Ifeature of dynamic data is mapped in a manner that maintains the regional or morphological relationships, eg, one can map the times at which maximum count rate densities are reached in different points of the detection plane. In that sense, the advantage of parametric images is that they allow the representation or description of dynamic phenomena in a single static image, rather than in a series of frames or, alternatively, in cinematic mode. The term "parametric imaging" has often been taken to be synonymous to functional imaging. This duality of meaning masks the fact that most scintigraphic images in nuclear medicine are in fact functional images. This is so even when they are no more than the mapping of count rate densities at equilibrium in a single static image, as in liver scintigraphy or skeletal

From the Department of Nuclear Medicine, Stanford University School of Medicine, CA. Address reprint requests to Michael L. Goris, MD, PhD, Nuclear Medicine Department, Stanford University Medical Center, Stanford, CA 94305. © 1987 by Grune & Stratton, Inc. 0001-2998/87/1701-0002$05.00/0


Seminars in Nuclear Medicine, Vol XVII, No 1 (January), 1987: pp 18-27


variation into account in a heuristic manner. There is no a priori count rate density distribution to which the actual count rate density distribution can quantitatively be compared. There are three general methods by which the natural morphological variation can be discounted. In the first, one dynamic feature about which normal ranges are known is computed and mapped. The prototype of this study is the computation of fractional ventilation rates in lung ventilation scintigraphy. In general, the derived parameter has a dimension of time, independent of the organ's spatial distribution of mass. In the second, internal standards are used to eliminate individual morphological variations. The prototype is the computation of perfusion-ventilation ratios. In the third, abnormal features, ie, features known not to be present in the normal case, are sought. The prototype is found in factorial analysis with oblique factors.

PARAMETERS WITH TEMPORAL ENCODING

Alveolar Gas Turnover

Pulmonary ventilation studies were the first parametric images in nuclear medicine. The principle is simple enough at the alveolar level. During each respiratory cycle, a fraction V̇/V of alveolar gas is exchanged with the outside. If at time t = 0 the alveolus contains a tracer in quantity A(0), the kinetics of the gas during subsequent breathing can be described by the state equation (eq):

dA/dt = −kA (eq 1)

where A represents the alveolar activity. The solution to this differential equation is given by:

A(t) = A(0)e^(−kt) (eq 2)

The relationship between k and V̇/V is given by the respiratory rate R:

k = (V̇/V) × R (eq 3)

Equation 3 makes the link between the physical reality, which is cyclic (inspiration-expiration), and the mathematical model described in equation 1, which assumes continuous kinetics. The transition is reasonable to the extent that A(t) is observed at a slow sampling rate in relation to the respiration rate R.
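The single-compartment model of equations 1 through 3 can be sketched numerically. The following Python fragment is illustrative only (the paper prescribes no implementation): it simulates a noiseless regional washout curve per equation 2, recovers k by a log-linear least-squares fit, and converts it to V̇/V via equation 3. The sampling interval, respiratory rate, and choice of fitting method are assumptions.

```python
import math

def fit_washout_k(counts, dt):
    """Estimate k in C(t) = C(0) * exp(-k t) by a least-squares line fit to
    log C(t). The fitting method is an assumption; the paper prescribes none."""
    ts = [i * dt for i in range(len(counts))]
    logs = [math.log(c) for c in counts]
    n = len(ts)
    mean_t = sum(ts) / n
    mean_y = sum(logs) / n
    slope = (sum((t - mean_t) * (y - mean_y) for t, y in zip(ts, logs))
             / sum((t - mean_t) ** 2 for t in ts))
    return -slope

# Simulated noiseless regional washout curve (equation 2).
k_true = 0.5            # per time unit (illustrative)
dt = 0.1                # sampling interval, slow relative to breathing (illustrative)
counts = [1000.0 * math.exp(-k_true * dt * i) for i in range(50)]
k_est = fit_washout_k(counts, dt)

# Equation 3: k = (V-dot/V) * R, hence V-dot/V = k / R for respiratory rate R.
R = 15.0                # breaths per time unit (illustrative)
vdot_over_V = k_est / R
```

On noiseless data the fit recovers k exactly; with counting noise, a weighted fit or the residence-time approach described below is more robust.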

In the case of external detection, the count rate density at any location in the detector plane is assumed to be linearly related to the activities in volumes projecting into that detector location. However, those activities are present in multiple alveoli, hence:

C(t) = Σ Ai(t) (eq 4)

To assume that the count rate density follows the functional form of equation 2 is to assume that all the projecting alveoli have equal fractional ventilation rates V̇/V, which is in fact what the early investigators seemed to assume.1,2 Nevertheless, the normal case is characterized by nearly uniform values of k across the pulmonary fields. Regions of decreased alveolar ventilation are easily recognized, either as regions of heterogeneous values for k, or because average normal values for k and V̇/V are known from the physiological literature. The parametric image mapping of k or V̇/V is normative in two senses: the expected distribution of values is known from independent data, and the image has to yield homogeneous values in the pulmonary fields, independent of the size and shape of the lungs.

However, the connection between k and V̇/V is not always straightforward. We noted above that the count rate densities are linearly related to the activities in multiple alveoli. If the values of V̇/V are not identical in all concerned alveoli, then the temporal evolution of C(t) cannot adequately be described by a single exponential mimicking the one in equation 2. Furthermore, the state equation (equation 1) itself is not entirely correct, because the bronchial tree works as a series of mixing volumes: well-ventilated alveoli will pick up activity from the bronchial tree that came from lesser ventilated alveoli. All that can be said is that C(t) is probably better fitted by a sum of exponentials, each having its own exponent k, but that the relation between any k value and V̇/V is not simple. To accommodate those deviations from the simple model would require better data than we have, and the result would somehow defeat the original purpose, which was to recognize easily regions with abnormal ventilatory kinetics. Generally, however, one can state that the average time spent by the tracer in a group of alveoli is inversely proportional to the average fractional ventilation rate in those alveoli. The relationship between the residency time, or average time spent by the tracer in the system, and the area/dose was originally described by Zierler.3 Alpert et al4 applied the principle in ventilatory studies. For discrete data points, the average residency time t̄ is derived from a sum of the counts, normalized by the initial value:

t̄ = Σ C(n)/C(0) (eq 5)

where C(n) is the count density in interval n, and C(0) the count density in interval n = 0. For a single exponential, t̄ = 1/k or, in our case, 1/t̄ = (V̇/V) × R. In the complex case described above, t̄ is a weighted average, as follows:

t̄ = Σ(Ai/ki)/ΣAi (eq 6)

Equation 6 illustrates that the value of t̄ at any given point is determined by the degree of abnormality of k (and, by derivation, of V̇/V), but also by the relative fraction of alveoli in which the abnormality is present.
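As a sketch of equation 5, the mean residence time can be computed directly from a sampled curve. This Python fragment is illustrative; the unit sampling interval and the noiseless single-exponential test curve are assumptions, and the comment notes the small bias of the discrete sum relative to the continuous value 1/k.

```python
import math

def mean_residence_time(counts):
    """Equation 5: t-bar = sum C(n) / C(0), with the sampling interval as time unit."""
    return sum(counts) / counts[0]

# Noiseless sampled single exponential C(n) = C(0) * exp(-k n) (illustrative).
k = 0.01
counts = [1000.0 * math.exp(-k * n) for n in range(5000)]
t_bar = mean_residence_time(counts)
# For a geometric series the discrete sum is 1/(1 - exp(-k)), which approaches
# the continuous value 1/k as the sampling gets fine relative to the washout.
```

No curve fitting is required, which is the practical appeal of the residence-time approach over multi-exponential fits.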

Ventricular Kinetics and Phase Analysis

In the previous example, there were two normative standards, the first based on normal physiological ranges, the second on the expectation of homogeneity across the pulmonary field, irrespective of the underlying morphological variations. It is this second normative criterion that is used in so-called phase analysis. The underlying mathematical model is simple. All continuous and bounded functions can be fitted by a Fourier series, which is a sum of sinusoidal functions:

F(t) = A(0) + Σn A(n) sin[nat + ph(n)] (eq 7)

In this expression t is time, n is the order of the harmonic, A(n) is the amplitude of the nth harmonic, and ph(n) is the phase shift of the nth harmonic. The term "a" is a constant that converts T, the outer boundary of the function, to a multiple of 360°. The term "A(0)" is the amplitude of the zero harmonic and is proportional to the average value of the function.

It is the merit of Geffers5 to have realized that a ventricular volume curve is fairly well fitted by using the zero and first harmonics only. The approximation would have been too crude for global ventricular analysis, but for regional analysis of kinetics, the nature of the approximation had interesting consequences. We have external normative criteria for the global behavior of the left ventricle, usually expressed as the ejection fraction and ejection rate. But we cannot expect to find the normal values in all projection points of the ventricle in the image. Indeed, at the edge of the ventricular projection, the changes in count rate densities that are synchronous with the cardiac cycle are in fact due more to displacement than to contraction. At those sites, one would expect a local ejection fraction of 100%. However, near the base of the ventricle, the count rate densities do not change that much, since contraction and motion seem to be directed towards the base. Knowledge of normal global values gives us no knowledge of normal regional values, except to the extent that we expect high values towards the free edges. On the other hand, we know that the ventricle contracts in a nearly synchronous fashion. Geffers' insight was that the first harmonic's phase shift was a good measure of the synchronicity of contraction. The problem with what was now commonly called phase analysis was twofold: it did not appear that regional ventricular pathology could sensibly be detected by temporal criteria only,6 and it was not immediately obvious what degree of homogeneity could indeed be expected. The second problem was ingeniously solved by Bacharach,7 who defined a normal distribution and weighted aberrant values by proximity criteria.
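A minimal sketch of per-pixel phase analysis, assuming the truncated series of equation 7: the zero- and first-harmonic coefficients are computed by explicit discrete Fourier sums over one cycle, and the phase difference between two synthetic "pixels" is recovered. The frame count, amplitudes, and 30° shift are illustrative assumptions, not values from the paper.

```python
import math

def first_harmonic(curve):
    """Zero and first harmonic over one cycle of p frames (equation 7 truncated):
    F(t) ~ A(0) + A(1) * sin(2*pi*n/p + ph). Returns (A(0), A(1), ph)."""
    p = len(curve)
    a0 = sum(curve) / p
    # Discrete Fourier sums for the first-harmonic coefficients.
    c = 2 / p * sum(v * math.cos(2 * math.pi * n / p) for n, v in enumerate(curve))
    s = 2 / p * sum(v * math.sin(2 * math.pi * n / p) for n, v in enumerate(curve))
    return a0, math.hypot(c, s), math.atan2(c, s)

# Two synthetic "pixels" contracting with a 30 degree phase shift (illustrative).
p = 64
pix_a = [100 + 20 * math.sin(2 * math.pi * n / p) for n in range(p)]
pix_b = [100 + 20 * math.sin(2 * math.pi * n / p + math.radians(30)) for n in range(p)]
ph_a = first_harmonic(pix_a)[2]
ph_b = first_harmonic(pix_b)[2]
phase_shift_deg = math.degrees(ph_b - ph_a)   # the quantity mapped in a phase image
```

Mapping `phase_shift_deg` for every pixel of a gated study yields the phase image; homogeneity of that map is the internal normative criterion.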

Time of Arrival and Shunt Detection

In the two previous examples, the internal normative criterion was homogeneity. In both cases, the image expressed temporal characteristics of the kinetics, rather than amplitudinal ones, precisely to avoid the modulation of the expected amplitudinal characteristics by the underlying structure. In the next example, the external normative criterion is not exactly quantitative, but physiological nevertheless. We assume a "normal" continuity in the central circulation, which defines the sequence for a dynamic image of the first transit of a radioactive bolus. In the normal case, the bolus appears first on the venous side, progresses from right atrium to right ventricle, and from there through the pulmonary artery to the lungs. From the lungs it reconcentrates in the left atrium to progress from there through the left ventricle and aorta to the systemic vascular system. Eventually the bolus reappears, after a complete circulation time, in the left atrium (Fig 1). In the case of a left-to-right shunt, this normal progression is disturbed by an early reappearance of the bolus on the right side (Fig 2).

Fig 2. Time of maximum activity mapping in first pass angiocardiography: atrial septal defect. The figure is organized in the manner of Fig 1. (A and D) One can see that the bolus reenters the right atrial region in the postpulmonary phase. (Reprinted with permission from Goris ML, Briandet PA: The Clinical and Mathematical Introduction to Computer Processing of Scintigraphic Images. New York, Raven, 1983.)

This analysis requires only the determination of the pulmonary phase of the bolus' progression and the demonstration of a

postpulmonary peak in the region of right-sided structures.8 In this case, the normative aspect is not simple, but requires underlying logic and still an amount of pattern recognition. The latter limitation is compounded by the fact that left- and right-sided structures always overlap to some extent. However, this application is included in this section because the kinetics have been reduced to temporal parameters to the exclusion of all amplitude modulations, which are expressions of size and shape variations.

Fig 1. Time of maximum activity mapping in first pass angiocardiography: normal case. In this display, the times are encoded in gray shades, with darker shades indicating later times. The bolus is followed from left subclavian to abdominal aorta. Times are regrouped in three prepulmonary phases, one pulmonary, and two postpulmonary (B). (A) Regions sharing a post- and prepulmonary maximum are blanked out. At bottom, the analysis is restricted to the pre- and postpulmonary phases, respectively (C and D). Note that the region of the right heart does not have a postpulmonary maximum within the first passage of the bolus. (Reprinted with permission from Goris ML, Briandet PA: The Clinical and Mathematical Introduction to Computer Processing of Scintigraphic Images. New York, Raven, 1983.)

INTERNAL AND EXTERNAL NORMALIZATION

In some circumstances, there exists independent information concerning the expected distribution of the count rate density. This normative information can be either internal or external. The prototype of parametric imaging with internal normalization is the mapping of ventilation-perfusion ratios.

Ventilation-Perfusion Ratios

It is interesting to note that in this case, as in the previous one, the analysis is based on a model for pathology. One assumes that pulmonary


embolism in particular, or primary vascular pulmonary disease in general, is characterized by a discrepancy between the regional distribution of perfusion (rates) and ventilation rates. Ventilation rates should not be confounded with the fractional ventilation rates mentioned above. They are symbolically represented by V̇, or (V̇/V) × V, and represent the fractional ventilation rate multiplied by the ventilated volume.9 Regional ventilation rates can be obtained by computing (1/t̄) × C(0) in the manner described in equation 5, or by multiplying k × R by the equilibrium count rate density in the case of xenon washout studies. In the case of krypton-81m studies, the equilibrium count rate densities can be shown to be proportional to the regional ventilation rates.10 In both cases, one obtains a value proportional to regional values of V̇, with the proportionality determined by the dose (administered activity) and the counting efficiency. Static perfusion scintigraphy yields regional count rate densities proportional in the same manner to Q̇, the regional perfusion rate. A comparison between both images requires some normalization for the dose and the counting efficiency. Normalization is easy on the basis of the integrated value over the lung fields in both images (but there are some problems, as we will see) and yields both regional ventilation and perfusion rates as fractions of the total ventilation and perfusion rates. One expects that the ratio of the relative regional perfusion rates and relative regional ventilation rates computed in this manner will be near unity in all points of the lung field, and that a perfusion deficit would be easy to demonstrate, either by a ratio image or a difference image. However, there are three major problems following the simple normalization described above. First, if there is an important perfusion defect, after normalization by integration, values higher than unity would be found in the normal regions. Second, the normalization assumes that nontarget background is equal in the original perfusion and ventilation images. Third, and most importantly, if the aim is normative, one would need a measure of the significance of the discrepancy. All three problems are handled by a method in which the images are normalized to each other,


not by integration, but by linear regression. Let Q(I) be the values of the net counts in the perfusion image and V(I) the values of the ventilation rates, including nonpulmonary values due to background. Paired points in both images are related by the equation:

V(I) = a × Q(I) + b ± see′ (eq 8)

where b represents nonpulmonary background, usually in excess in the ventilation image, and "a" the normalization factor. The term see′ represents the standard error of the estimate for a particular value of Q(I). The parameters a, b, and see′ are defined by linear regression analysis, in which the paired data are represented by the values of Q and V in corresponding points of both images. Precise normalization is now obtained by recomputing:

V(I)′ = [V(I) − b]/a (eq 9)

and equation 8 can then be rewritten as:

V(I)′ − Q(I) = ± see′/a (eq 10)

The term see′/a is important, since it tells us to what extent we expect both values to be equal, and thus yields a measure of aberration. A comparison image can now be constructed either as a ratio image or as a difference image in which, however, the differences are expressed in units of see′/a (Fig 3). Furthermore, a receiver operating characteristic curve can be constructed in which the sensitivity and nonspecificity are defined for aberrations in units of see′/a.12,13 In this example, we note two important features. The final result is normative in terms of a pathological model (primary vascular pulmonary disease), and the criteria are internal, computed as an expected degree of correspondence in the case under study. Structural or morphological modulation is thus overcome, not by temporal encoding, but by a comparison that cancels structural variation. The method is still subject to a limitation that is not clinically relevant: if the discrepancy is too large (multiple large pulmonary emboli), this approach tends to overestimate the normal values for see′.
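The normalization of equations 8 through 10 can be sketched as follows (Python, illustrative only): paired pixel values are fitted by least squares, the ventilation image is rescaled per equation 9, and the discrepancy map V′ − Q is formed. The toy images are noiseless, so see′ is essentially zero; real data would yield a nonzero see′ in which to express the discrepancies.

```python
def regress(q, v):
    """Least-squares fit of V(I) = a*Q(I) + b (equation 8), returning a, b,
    and the standard error of the estimate see'."""
    n = len(q)
    mq, mv = sum(q) / n, sum(v) / n
    a = (sum((x - mq) * (y - mv) for x, y in zip(q, v))
         / sum((x - mq) ** 2 for x in q))
    b = mv - a * mq
    see = (sum((y - (a * x + b)) ** 2 for x, y in zip(q, v)) / n) ** 0.5
    return a, b, see

# Toy paired pixel values: ventilation = 2 * perfusion + background of 50
# (noiseless and illustrative; real images would have a nonzero see').
q = [10.0, 20.0, 30.0, 40.0, 50.0, 60.0, 70.0, 80.0]
v = [2.0 * x + 50.0 for x in q]
a, b, see = regress(q, v)

v_prime = [(y - b) / a for y in v]                   # equation 9
discrepancy = [vp - x for vp, x in zip(v_prime, q)]  # equation 10, in counts
```

In a full implementation, `discrepancy` would be divided by `see / a` so each pixel reads directly in standard errors of the estimate.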

Circumferential Profiles

In all previous cases, the original data were dynamic, except perhaps when V̇ is derived from equilibrium data in krypton-81m ventilation


Fig 3. V/Q ratio mapping. (A) The perfusion image is on the top left. At the bottom, the ratios are shown as black if they are within 2 (C) or 3 (D) standard errors of the estimate. (C) has the highest sensitivity; (D) is more specific for primary vascular disease. The gray shade is not linear, but illustrates two (arbitrary) normative limits. (Reprinted with permission from Goris ML, Briandet PA: The Clinical and Mathematical Introduction to Computer Processing of Scintigraphic Images. New York, Raven, 1983.)

images. But normative parametric imaging is not restricted to dynamic data. Our original definition requires only that it allow one to minimize or eliminate variations due to structure or morphology, and not to function. The prototype of this approach is found in quantitative (thallium-201) myocardial perfusion scintigraphy. Two questions need to be answered: Is there a region with a significant deficit of 201Tl uptake when the tracer is administered during maximal stress, and is there significant redistribution of the tracer within the myocardium in the three to four hours following the administration during stress? The term "significant" is loaded, and presumes that we know in one case what the normal distribution is, and in the second case that distributions can be quantitatively compared. The measurement does not allow uninterrupted time sequencing, since multiple views are necessary and the patient could not be kept unmoved for the time during which redistribution occurs. A direct (point-by-point) comparison of the stress images with the redistribution images would therefore be nearly


impossible due to problems of image registration. We already discussed how count rate distributions that are not defined exclusively by function (as in the case of fractional ventilation rates), but also by morphology (size and shape), are not easy to calibrate quantitatively. A solution is provided by the particular morphology of the myocardium, which surrounds the left ventricular cavity. It is possible to sample the count rate distribution radially from the geometrical center of the cavity to the periphery of the myocardium. Along an arbitrary number of equally spaced radii, one can sample any attribute of the count rate density distribution. The original methods14-16 for which the term "circumferential profile method" was coined consisted of sampling for the maximum value along the radii. In this case the term is well chosen, since the set of maximal values tends to be located circumferentially along the cavity. If the number of radii is kept constant, there is an implicit size normalization: regardless of the size (and shape) of the myocardium, the sample will always contain a number of values equal to the number of radii. Further morphological variation is accommodated by starting the sampling angle in reference to the apex, either by indicating the apex or by aligning the myocardial image at the time of acquisition. All the data in the image are thus reduced to an n-dimensional vector, where n is the number of sampling angles or radii. This data reduction is extreme, and we did propose an alternative method17 in which not only the maximum count rate density, but also the average count rate density and total count rate are recorded.

Regardless of the sampling value, it remains that the n-dimensional sample vector is normalized independently of morphological variations and thus strictly comparable, not only between two recordings of the same projection in the same patient (stress image versus redistribution image), but also between any given patient and a set of normal values obtained from other recordings. It should be noted that interpatient comparison still requires a normalization for dose, counting efficiency, and counting time. Intrapatient comparison (stress vs redistribution) can be


obtained, pari passu, by counting time normalization. The so-called washout analysis is based on the latter approach. By extension, one can consider this n-dimensional vector as a functional image. Normal values can be empirically established, either for the relative distribution or for washout. The relation between this method and classic parametric imaging is well illustrated by the so-called idealized images that we introduced (Fig 4). In this approach, the sampling value is not reduced to the maximum count rate density (M), but includes, as stated above, information about the average (A) and total (T) count rate densities. The model assumes a function C(x) representing the count rate densities along any radius, where x is the distance from the origin. The function C(x) is parameterized so that T is the integral of C(x) from x = 0 to x = X, where X is the outer border of the myocardium; the maximum value of C(x) is equal to M; and C(x) is zero, except over a distance defined by T/A.
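The implicit size normalization of the circumferential-profile sampling can be sketched as follows. This Python fragment is illustrative: nearest-pixel sampling and the ring phantom are assumptions, and apex alignment is omitted. It extracts the maximum value along each of a fixed number of radii, yielding a vector whose length is independent of the size and shape of the "myocardium."

```python
import math

def circumferential_profile(image, center, n_radii=60, n_samples=40):
    """Sample the maximum count value along each of n_radii equally spaced
    radii from the cavity center: the classic circumferential-profile vector.
    Nearest-pixel sampling is used for brevity (an assumption)."""
    h, w = len(image), len(image[0])
    cx, cy = center
    r_max = min(cx, cy, w - 1 - cx, h - 1 - cy)
    profile = []
    for i in range(n_radii):
        theta = 2 * math.pi * i / n_radii
        best = 0.0
        for s in range(1, n_samples + 1):
            r = r_max * s / n_samples
            x = int(round(cx + r * math.cos(theta)))
            y = int(round(cy + r * math.sin(theta)))
            best = max(best, image[y][x])
        profile.append(best)
    return profile

# Toy "myocardium": a uniform bright ring of radius 5 to 7 around (10, 10).
size = 21
image = [[0.0] * size for _ in range(size)]
for y in range(size):
    for x in range(size):
        if 5.0 <= math.hypot(x - 10, y - 10) <= 7.0:
            image[y][x] = 100.0

profile = circumferential_profile(image, (10, 10))
```

Whatever the size of the ring, the result is a fixed-length vector, which is what makes inter- and intrapatient comparison possible.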


The values of M, A, and T for each radius are used to define C(x) for each angle, from which one can reconstruct an image that is identical (in shape and size) from patient to patient (for each projection), but varies only according to the sampling values. If such an image can be reconstructed for an individual image, it can also be reconstructed from the normal profiles or vectors obtained from a set of control cases. Hence, a stress (idealized) image can be compared point-by-point with the same patient's redistribution image, or with the normal average image, and this with perfect registration. Thallium myocardial scintigraphy data therefore can be analyzed in a manner similar to ventilation/perfusion studies, where the acquisition methods did not make registration and point-by-point comparison difficult. In the case of myocardial perfusion studies, a comparison to external standards is added at the cost, one must remember, of (more or less extreme) data reduction.

SEARCH FOR ABNORMAL FEATURES

Factorial Analysis

Fig 4. Idealized thallium myocardial perfusion images. The original (or real) data are shown in (A) and (C). They are background-subtracted and filtered 45° left anterior oblique projections from two patient studies. (C) was zoomed by a factor of 2. The images differ in size, shape, and densities. In (B) and (D) they are represented in the parametric form (as "idealized" images). Size and shape are identical, since the analysis goes through a step where count rate densities and angular locations are the only values extracted. One recognizes the similarity of the density distribution between "real" and "ideal," and the differences in densities between the two cases are, if anything, more apparent for being isolated in the parametric images.

In the previous sections, most methods were characterized by the fact that the resulting image was used to detect a deviation from the norm in the statistical sense. The time mapping for shunt detection was the only case where a specific feature (early reappearance of the bolus in the right-sided region) was revealed. In all other cases, the "quantitative analysis" simply revealed that the values were more or less different from the expected value. In our analysis of thallium myocardial perfusion studies, we complemented the analysis by a stochastic one that took into account the size, location, and distribution of the abnormalities, which were compared with the features of abnormal cases.17 In this method, we assumed that a particular deviation from the norm in one of nine sites could be considered as a quantitative symptom, with a given sensitivity for any of 30 possible combinations of coronary lesions. The analysis was sequential Bayesian. It was an attempt to bridge the gap between purely normative and diagnostic analysis. However, there exists an alternative approach in which abnormality is recognized by the



appearance of a kinetic feature that is known not to be present in nonpathological cases. This approach was originally described by Bazin et al as the method of oblique or physiological factor analysis18 and later applied to equilibrium-gated cardiac blood pool studies.19 The principle of oblique factor analysis is simple enough, even though the descriptions have been complicated by nonstandard terminology. The data set consists of a dynamic image with p frames, with resolution n × n. One can consider the data set as a set of n-squared vectors of length p:

Xij = [xij(1), xij(2), ..., xij(p)] (eq 11)

where ij defines the matrix element, called indifferently a dixel (dynamic pixel) or trixel (tridimensional pixel). The n-squared vectors represent the raw data set. The method assumes that all the vectors in the set can be described by a linear combination of a limited number of physiological factors that represent the basic (or physiological) kinetics underlying the observed count rate density changes:

Xij = vij·V + aij·A + bij·B + Nij (eq 12)

In this particular case, it is assumed that V represents the kinetics of the ventricles, A that of the atria, B the background, and Nij the random noise at location ij. The terms vij, aij, and bij represent the relative contributions, respectively, of ventricular, atrial, and background kinetics to the count rate density (changes) at location ij. The interesting aspect of this approach is the following. First, at a particular location ij, the term aij represents the contribution of atrial activity independently of the contribution of other overlapping structures. Second, if in any location there is a contribution of abnormal kinetics (ie, neither ventricular, atrial, nor background, eg, by dyskinesis), then the vector Xij is described by:

Xij = vij·V + aij·A + bij·B + pij·P + Nij (eq 13)

where pij represents the contribution at location ij of the pathological kinetic factor P. What we have then is the appearance of an abnormality as a discrete value. In our opinion, this is the major

potential of the method. However, there are a few limitations that have to be kept in mind. This author does not believe that the term B can properly be called background, but that it may in fact represent structure. Furthermore, the number of factors that could potentially contribute to a (more exact) description of all the vectors Xij is dependent on the noise level, as we shall see. Hence, in noiseless data one could expect that the factor A could in fact be decomposed into an atrial factor proper and a vascular factor A′. At the limit, one cannot a priori state that the ventricular factor must be unique (since we know that there is a time delay in the contraction of various parts of the ventricle), nor can one necessarily expect the pathological kinetics to be sufficiently homogeneous to be well described by a single factor P. Oblique, or physiological, factor analysis is an extension of the analysis in principal components, the clearest description of which can be found in the report by Oppenheim and Appledorn.20 The theory again is simple: a set of vectors of dimension p can be described unequivocally by a linear combination of p vectors E(i), called eigenvectors:

Xij = eij(1)E(1) + eij(2)E(2) + ... + eij(p)E(p) (eq 14)

In actuality, only a few (m < p) of the eigenvectors are necessary to describe the original vectors with a high degree of confidence. As m increases, the noise contribution becomes more dominant; but for m = p, the duplication is exact. The method thus results in a data compression, since instead of the n x n original vectors (one for each trixel), each described by p values, we now have n x n sets of m < p values of eij describing the original data set. Furthermore, if the eigenvectors are ranked according to the significance of their contribution and are not used when their contribution cannot be distinguished from noise (by statistical criteria), then the method results in a noiseless description of the data. It is for precisely that purpose that Oppenheim and Appledorn use the analysis. In addition, they create an m-dimensional look-up table, in which various parameters can be found for any given set of values of eij.
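The truncated eigenvector expansion can be sketched on synthetic data (a minimal illustration, not the implementation of Oppenheim and Appledorn; the curve shapes, image size, and noise level below are invented). The expansion is computed here with a singular value decomposition, and only m significant components are kept:

```python
import numpy as np

rng = np.random.default_rng(0)
p = 20                      # frames per time-activity curve
n = 16                      # the image is n x n elements
t = np.linspace(0.0, 1.0, p)

# two underlying kinetic patterns, mixed with random weights per element, plus noise
k1 = np.exp(-3.0 * t)                 # washout-like curve
k2 = np.sin(np.pi * t)                # phasic curve
w = rng.random((n * n, 2))
X = w @ np.vstack([k1, k2]) + 0.01 * rng.normal(size=(n * n, p))

# eigenvector expansion via SVD of the mean-centered curves
Xc = X - X.mean(axis=0)
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)

m = 2                                 # keep only the significant eigenvectors
X_hat = X.mean(axis=0) + (U[:, :m] * s[:m]) @ Vt[:m]

# m coefficients per element instead of p frame values: data compression,
# and the discarded components carry mostly noise
compression = (n * n * m + m * p) / (n * n * p)
err = np.linalg.norm(X - X_hat) / np.linalg.norm(X)
```

With m = p the reconstruction would be exact; truncating to the two significant components here reduces storage to roughly a tenth while the relative reconstruction error stays near the noise level.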



In contrast, Schmidlin attempted to extract features directly from the relative contribution of each factor,21 but, in contradistinction to Bazin's approach, there was no underlying reason to expect that the eigenvectors would correspond to specific physiological kinetic features. We have not yet described the method by which eigenvectors are defined. Schmidlin defines them from the covariance matrix of a library of vectors obtained from cases believed to represent all the pathological and normal variations one would expect. Oppenheim derives them from the covariance matrix of a set of vectors derived from a mathematical model of pathological and normal kinetics. Bazin, who uses the eigenvectors to derive the physiological factors, defines the eigenvectors from the observed vectors in the image under study. In the two former cases, there is a potential lack of representativeness: a particular pathology could be unrepresented in the set from which the eigenvectors were derived. In Bazin's method, there is a potential lack of generality: in each case, "pathology" is redefined. It should be noted that the method tends to be noise sensitive. The number of factors contributing significantly is a function of the underlying kinetic features, but also of the noise in the data; higher data densities yield more normal factors. Nevertheless, this approach has the merit of distinguishing, or attempting to distinguish, between normal and abnormal features.
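The decomposition of eq 13 itself can be sketched as an ordinary least-squares fit of each element's time-activity curve on a set of assumed factor curves, with the size of the residual flagging a kinetic contribution that none of the normal factors can explain. This is only an illustration of the principle, not any of the cited implementations; all factor shapes and weights are invented.

```python
import numpy as np

t = np.linspace(0.0, 1.0, 32)

# assumed "normal" factor curves (illustrative shapes only)
V = np.sin(np.pi * t) ** 2            # ventricular kinetics
A = np.cos(np.pi * t / 2) ** 2        # atrial kinetics
B = np.ones_like(t)                   # background
F = np.column_stack([V, A, B])        # one column per factor

def decompose(x, F):
    """Least-squares factor coefficients and relative residual for one curve."""
    coef, *_ = np.linalg.lstsq(F, x, rcond=None)
    resid = np.linalg.norm(x - F @ coef) / np.linalg.norm(x)
    return coef, resid

normal = 0.7 * V + 0.2 * A + 0.1 * B      # lies in the span of the normal factors
P = np.exp(-10.0 * t)                     # a pathological kinetic feature
abnormal = 0.5 * V + 0.1 * B + 0.4 * P

c_n, r_n = decompose(normal, F)           # residual ~ 0, coefficients recovered
c_a, r_a = decompose(abnormal, F)         # a large residual exposes the P term
```

In a full factor model the residual itself would be fit by an additional factor P, yielding the pij map of eq 13; here it simply signals that the normal factors do not suffice.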

Alternative Approach

The oblique factor approach is not the first method based on the search for discrete abnormal

features. Bell and DeNardo described such an approach for pulmonary ventilation studies.22 Their approach has the elegance of simplicity. They found empirically that xenon 133 washout data could be well described by a two-exponential model: c(t) = A·e^(-at) + B·e^(-bt). The first term represents alveolar kinetics; the second, in the normal case, represents the contribution from nonpulmonary (background) activity (due to the partial absorption of the tracer in blood and tissues). In the normal case, the ratio A/(A + B) is close to unity. If poorly ventilated alveoli are present, this ratio decreases. The method is not pure, since B is nonzero even in the most normal cases. But the principle was nevertheless there: abnormality was defined as the predominance of a discrete feature (B).

We have attempted to describe functional or parametric images in terms of their contribution to normative analysis. Normative analysis is generally made possible if structural, or morphological, variation can be either eliminated (by temporal encoding) or accounted for (by internal or external normalization), or if pathology can be defined in terms of a specific pathological kinetic feature. It is worthwhile to reflect on the common features of all the methods described. They are all heavily dependent on data reduction and on modeling. Data reduction follows from the need to extract particular features; modeling in all cases lies at the basis of the normative judgment that will follow.
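As a numeric illustration of the two-exponential washout model (a sketch on noiseless synthetic data, not Bell and DeNardo's actual implementation; all parameter values are invented), the classical curve-stripping procedure recovers A, a, B, and b: fit the slow component on the tail of the log washout curve, subtract it, and fit the fast component on the early points.

```python
import numpy as np

def two_exp_strip(t, c, tail_start, early_end):
    """Curve stripping for c(t) = A*exp(-a*t) + B*exp(-b*t), with a >> b."""
    tail = t >= tail_start                       # fast term is negligible here
    slope, intercept = np.polyfit(t[tail], np.log(c[tail]), 1)
    b, B = -slope, np.exp(intercept)
    early = t <= early_end                       # strip the slow term, fit the rest
    resid = c[early] - B * np.exp(-b * t[early])
    slope, intercept = np.polyfit(t[early], np.log(resid), 1)
    a, A = -slope, np.exp(intercept)
    return A, a, B, b

t = np.arange(0.0, 60.0, 0.5)                    # minutes (illustrative)
c = 0.9 * np.exp(-0.5 * t) + 0.1 * np.exp(-0.05 * t)   # synthetic washout curve
A, a, B, b = two_exp_strip(t, c, tail_start=30.0, early_end=5.0)
ratio = A / (A + B)                              # near unity in the normal case
```

A poorly ventilated compartment would lower A relative to B and drive the ratio further below unity.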

REFERENCES

1. McIntyre WJ, Inkley SR: Functional lung scanning with xenon-133. J Nucl Med 10:355, 1969 (abstr)
2. De Roo MJK, Goris M, Cosemans J, Gyselen A, Billiet L, Vander Scheuren G: Computerized dynamic scintigraphy of the lung. J Belge Radiol 51:339-348, 1968
3. Zierler KL: Equations for measuring blood flow by external monitoring of radioisotopes. Circ Res 16:309-321, 1965
4. Alpert NA, Correia JA, McKusick KA, Shea W, Brownell GL, Potsaid MS: A simple functional image of lung ventilation--The mean transit time. Proceedings, Fifth Symposium on Sharing of Computer Programs and Technology in Nuclear Medicine. Salt Lake City, 1975, pp 161-171
5. Geffers H, Adam WE, Bitter F, Sigel H, Kampmann H: Data processing and functional imaging in radionuclide ventriculography. Proceedings, Fifth International Conference on Information Processing in Medical Imaging. Oak Ridge National Laboratory, Oak Ridge, TN, 1977, pp 322-332
6. Walton S, Yiannikas J, Jarrett PH, Brown NJG, Swanton RF, Ell PJ: Phasic abnormality of LV emptying in coronary artery disease. Br Heart J 46:245-253, 1981
7. Bacharach SL, Green MV, Bonow RD, deGraaf CN, Johnston GS: A method for objective evaluation of functional images. J Nucl Med 23:285-290, 1982
8. Goris ML, Wallington J, Baum D, Kriss JP: Nuclear angiocardiography: Automated selection of regions of interest for the generation of time-activity curves and parametric image display and interpretation. Clin Nucl Med 1:99-106, 1976
9. Goris ML: Scintigraphic evaluation of pulmonary ventilation. Teaching Editorial. Clin Nucl Med 7:142, 1982


10. Fazio F, Jones T: Assessment of regional ventilation by continuous inhalation of radioactive krypton-81m. Br Med J 3:673-676, 1975
11. Goris ML, Daspit SG: Krypton-81m ventilation scintigraphy for the diagnosis of pulmonary emboli. Clin Nucl Med 6:207-212, 1981
12. Goris ML, Daspit SG: Lung ventilation studies with Kr-81m, in Guter M (ed): Progress in Nuclear Medicine: New Radiogasses in Practice. Basel, Karger, 1978, pp 69-92
13. Baumert JE, Lantieri RL, Fawcett HD, Goris ML: The diagnostic value of a fully automated thresholding algorithm for ventilation-perfusion mismatches. J Nucl Med 20:613, 1979 (abstr)
14. Vogel RA, Kirch D, LeFree MT, Rainwater PO, Jensen DP, Steele RP: Thallium-201 myocardial perfusion scintigraphy: Results of standard and multipinhole tomographic techniques. Am J Cardiol 43:787-793, 1979
15. Burow RD, Pond M, Schafer AW, Becker L: "Circumferential profiles": A new method for computer analysis of thallium-201 myocardial perfusion images. J Nucl Med 20:771-777, 1979
16. Maddahi J, Garcia EV, Berman DS, Waxman A, Swan HJC, Forrester J: Improved noninvasive assessment of


coronary artery disease by quantitative analysis of regional stress myocardial distribution and washout of thallium-201. Circulation 64:924-934, 1981
17. Goris ML, Gordon E, Kim D: A stochastic interpretation of thallium myocardial perfusion scintigraphy. Invest Radiol 20:253-259, 1985
18. Bazin JP, DiPaola R, Gibaud B, Rougier P, Tubiana M: Factor analysis of dynamic scintigraphic data as a modelling method: An application to the detection of metastases. Les Colloques de l'INSERM: Information Processing in Medical Imaging. INSERM 88:345-366, 1979
19. Cavailloles F, Bazin JP, DiPaola R: Factor analysis in gated cardiac studies. J Nucl Med 25:1067-1079, 1984
20. Oppenheim BE, Appledorn CR: Functional renal imaging through factor analysis. J Nucl Med 22:417-423, 1981
21. Schmidlin P: Factor analysis of sequence scintigrams. Proceedings, Fifth International Conference on Information Processing in Medical Imaging. Oak Ridge National Laboratory, Oak Ridge, TN, 1977
22. Bell RL, DeNardo GL: Enhanced scintigraphic information display using computer-generated ratio techniques. J Nucl Med 11:655-659, 1970