Electroencephalography and clinical Neurophysiology, 79 (1991) 227-240
© 1991 Elsevier Scientific Publishers Ireland, Ltd. 0013-4649/91/$03.50 ADONIS 1101346499100133X EEG 911177

Methodological considerations for the evaluation of spatio-temporal source models *

André Achim, François Richer and Jean-Marc Saint-Hilaire

Hôpital Notre-Dame, Montreal (Canada), Université de Montréal, Montreal (Canada), and Université du Québec, Montreal (Canada)

(Accepted for publication: 11 February 1991)

Summary. The effects of physiological noise and modelling procedure on spatio-temporal source modelling (STSM) solutions were examined by adding EEG noise (3-4% of total energy) to synthetic signals from 3 dipoles producing topographies that partially cancelled at the surface of a homogeneous sphere. Three patterns of source activation profiles were each associated with two EEG records. The STSM solutions were subjected to statistical tests that detect signal left in the residuals. All models substantially accounted for noise. Essentially correct models left no consistent signal in the residuals, but other statistically acceptable models were also occasionally found. Unambiguous correct solutions were found for moderately correlated source activations. Statistically adequate 2-source and 3-source models were found for the 4 data sets incorporating nearly parallel activations of 2 sources. When all 3 sources had comparable amplitudes, the 2-source models represented compromises containing gross mislocalizations but correct 3-source models were found. When one of the parallel sources was attenuated by 75%, only 2-source models satisfactorily approximated the 2 main sources. The localization errors in the "correct" models ranged from 2.5% to 13% of the sphere radius. Thus, even with a perfect signal propagation model, STSM could at best claim approximate localizations from data containing structured noise.

Key words: Spatio-temporal source modelling; EEG; Electrophysiological noise

Spatio-temporal source modelling (STSM; Scherg and Von Cramon 1985, 1986; Achim et al. 1988a; De Munck 1989; Scherg 1989) interprets the successive EEG or MEG scalp topographies of an electrophysiological event as the summed activity of a few fixed neuroelectric generators in the brain. STSM solutions provide localizations, orientations and wave forms describing generators that could explain the spatio-temporal data. For many research and clinical applications, STSM appears as an important improvement over topographic mapping of the scalp-recorded neuroelectric activity because it directly provides an electrophysiological interpretation of the data. Such progress could be particularly significant, given that the visual interpretation of scalp topographies in terms of their underlying cerebral sources is extremely difficult when a number of distinct brain areas are simultaneously active.

* This research was supported by grants from the Medical Research Council of Canada, the Fonds de la Recherche en Santé du Québec, the Molson Foundation, and by an Epilepsy-Canada/Parke-Davis Fellowship to A.A. Correspondence to: A. Achim, Service de Neurologie, Hôpital Notre-Dame, 15611 Sherbrooke Est, Montreal, Que. H2L 4M1 (Canada).

The reliability of STSM models, however, is still an open question, and there are currently few guidelines for appreciating how compelling a given STSM solution is. When an STSM model is reported as providing the best fit for the data, given the number of sources allowed, there is no proof that this is the best possible fit, as opposed to only the best fit reached by the search procedure. Furthermore, there is usually no proof that the model is sufficient to completely account for the signal part of the data, leaving only noise as residuals. Finally, it is questionable that the best fit from the correct number of signal sources should necessarily be the correct solution, given that the assumption of independent errors in measuring the signal is clearly violated by physiological noise. Thus, besides the anatomo-physiological plausibility of a source model, its assessment should ideally consider two essential questions: does this source model appropriately account for all the signal in the data and, if so, are there substantially different models that are also compatible with the data, perhaps providing only trivially inferior fits?

The precision of the source position estimates, although an important consideration, appears secondary to having correctly identified the number and gross localizations of the sources. Indeed, any confidence interval specification can only indicate the precision of the estimates provided that the source parameters are correctly approximated. The present report directly addresses the problem of finding the best fit and that of assessing whether the resulting model appropriately accounts for the signal part of the data, in the presence of structured electrophysiological noise. The results further bear on the precision of the source localizations in the presence of structured electrophysiological noise and, indirectly, on the question of alternate models also compatible with the data.

Modelling basis for STSM

STSM can be viewed as an extension of single source localization that specifically deals with the problem of simultaneously active brain areas. Occasionally, the scalp topography of a neuroelectric signal at a fixed time predominantly reflects the activity of only one active brain region. The gross localization of the source may then be achieved by visual inspection, taking into account the principles of current propagation from the source to the electrodes (e.g., Gloor 1985). This can also be achieved more precisely by static single dipole localization methods (reviewed by Wood 1982 and Fender 1987). The simplifications made about the geometry of the source and of the head as a volume conductor are the main factors affecting the accuracy of quantitative single-source localization. When a single restricted brain area is active at a given time, the accuracy of source localization is generally considered to lie within acceptable limits for a large class of applications, provided that an adequate signal-to-noise ratio can be achieved (e.g., Fender 1987). When, however, the topography at a specific latency results from the simultaneous activity of a number of cerebral areas, the assumption that the source of the signal is restricted enough to be adequately represented by a dipole is violated. A single dipole may sometimes account for a large portion of the data variance and yet be localized far from any of the active brain areas (e.g., Achim et al. 1988a). On the other hand, fitting multiple dipoles to a single scalp topography tends to introduce errors because the number of free parameters approaches the number of data values and, in such cases, even low-amplitude residual noise will be accounted for, making the localizations very unreliable. The problem of localizing the origin of distributed brain activity is more adequately dealt with by fitting a number of sources simultaneously on the data from a number of consecutive scalp topographies. This is the basis of the STSM approach.

A typical STSM procedure starts with a crude initial approximation of the source configuration. As for single dipoles, an appropriate model of signal propagation in the volume conductor is used to calculate the scalp topography corresponding to each specified source. Linear regression then associates a wave shape to each of these topographies. The initial approximation is iteratively modified to maximize the fit between the recorded spatio-temporal data and the values produced by the model. For phasic neuroelectric activity, STSM is typically more plausible physiologically than the alternate interpretation that models time-varying topographies as the displacement of a unique focus of activity whose position, orientation and intensity vary across time, and which often accounts successfully for only a fraction of the observed topographies. STSM is also more parsimonious than the moving single source model, requiring fewer parameters to account for a complete spatio-temporal data matrix. Furthermore, the notion that brain activity could be spatially distributed in many situations, and thus inappropriately represented by a dipole, does not, in principle, constitute a problem for STSM. The various brain areas contributing to a distributed process are likely to show some asynchrony due to axonal conduction delays. For STSM, the various areas of a distributed source would normally appear as distinct sources having their own activation profiles that may be strongly correlated. The asynchrony between active brain areas is not strictly required by STSM, but it contributes to making STSM a privileged technique for distributed processes, since much of the power of STSM comes from the fact that it capitalizes on both spatial and temporal information, i.e., on fluctuations in the shape of the scalp topography across time.
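The regression step just described is linear once the source topographies are fixed: given the topographies, the wave shapes and the goodness of fit follow directly. A minimal sketch (our own illustration in NumPy; the variable names are not from the paper) is:

```python
import numpy as np

def fit_wave_shapes(topographies, data):
    """Least-squares wave shape for each fixed source topography.

    topographies : (n_channels, n_sources) scalp pattern of each source
    data         : (n_channels, n_timepoints) average spatio-temporal matrix
    Returns the wave shapes, the modelled data and the residual sum of
    squares used as the fit index during the iterative search.
    """
    wave_shapes, *_ = np.linalg.lstsq(topographies, data, rcond=None)
    model = topographies @ wave_shapes
    return wave_shapes, model, np.sum((data - model) ** 2)
```

Only the non-linear part of the problem, the positions and orientations of the sources, then needs to be handled by the iterative optimization.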

Factors affecting STSM solutions

The variables affecting the reliability of STSM solutions have not been systematically investigated. Some important hurdles reside in the required non-linear global optimization, namely local minima effects and sensitivity to the initial approximation. One objective of the present work was to verify whether the procedures we developed to manage these optimization problems would generally find the intended solutions from deliberately misleading data sets. Data-related factors could also be important in determining the reliability of STSM solutions. These include the spatial overlap between source fields, the temporal overlap between their activation profiles, the signal-to-noise ratios and the spatio-temporal patterning of electrophysiological noise. Our main purpose was to explore current limits of STSM related to these factors.

We approached this by seeking existence proofs of the possible misleading effects of noise structure and of spatio-temporal overlap between source potentials. The acknowledgement of such possible effects would provide an important perspective for appreciating specific STSM solutions and could seriously question the appropriateness of relying on a unique solution.

Factors related to optimization

STSM is normally described as a global optimization problem, i.e., one in which the single best possible fit is required. For practical reasons, STSM has only been implemented as a local optimization problem, i.e., one in which a reasonably correct initial approximation is required and iteratively improved. Unless restrictions are placed on source order (e.g., by forcing the sources to maintain a specified rank ordering on some parameter, which could prevent simple transitions from a good initial approximation to the target solution) or polarity, STSM optimization problems necessarily contain a multiplicity of equivalent solutions (obtained by exchanging the parameter values of one source with those of another source or by inverting any source polarity). This point is important to realize because it annihilates any hope that the error function would be globally concave (i.e., that wherever you start, there is only one minimum safely reached by "sliding down" the error surface, in which case optimization algorithms would differ only in their speed of reaching the minimum). The result thus often depends on the initial approximation and there is no guarantee that an absolute best fit has been found, unless the fit is perfect. With few parameters to optimize, e.g., the position parameters of a single dipole, the error function might be concave over a large portion of the search space. Then, the choice of an initial approximation and of a specific optimization algorithm will mostly affect the time it takes to find the bottom of the concavity. In general, however, the more numerous the parameters of the fitted model, the more the local minima of the error function proliferate and the more precise the initial approximation must be. The large number of parameters simultaneously optimized in STSM makes the iterative improvement approach very susceptible to being trapped in a local minimum. We have previously observed that large localization errors may occur with STSM when the optimization procedure strictly improves on the initial approximation, without efficient means of escaping local minima (Achim et al. 1988a). The problem of escaping local minima is less severe when a good initial approximation can be provided. In a sense, this begs the question, requiring one to know the solution in order to find it correctly. There may be some procedures that are more reliable than others for producing a good initial approximation.

A common practice consists in developing the initial approximation by a series of steps, each keeping the number of optimized parameters low. This includes optimizing one or a few sources, fixing them temporarily, and then adding and optimizing more sources. A convenient step, described by Scherg and Von Cramon (1986), consists in introducing the sources as a triplet of orthogonally oriented dipoles that share a common location; in this way, no parameter is required for source orientation while all distinct temporal patterns of activity originating from the shared location are simultaneously captured using only 3 parameters for the position of the triplet. The effects of different strategies for developing an initial approximation on the accuracy of the final solution are currently unknown and are explored here, along with the efficacy of a technique implemented to escape local minima of the error function.

Factors related to the data structure

In STSM, a source refers to a spatially restricted set of neuroelectric generators that share (on a statistical basis) a common temporal activation profile. The degree of temporal and spatial overlap among the potentials from different sources should have a strong effect on the difficulty of decomposing the data into the correct source activities. When the activation profiles of two spatially distinct sources are essentially parallel, their net topography changes only in intensity and polarity over time. If topographic ambiguities are produced by source field overlap, the more the source activation profiles differ, the less the incorrect interpretations of the ambiguity should constitute a good fit across all the topographies, and thus the easier it should be for STSM to find the correct source interpretation. It appears likely that the severity of the effect of a given amount of temporal overlap should depend on noise. For instance, Achim et al. (1988a) created noise-free simulated MEG data in which the outward magnetic signal from one source essentially cancelled the inward signal from another source whose temporal pattern correlated 0.78 with that of the first source. A number of the resulting consecutive topographies could be explained at about 95% by single dipoles at a distance from the actual two sources, but STSM found the correct solution that accounted for 100% of the signal variance. Another data-related factor is physiological noise. STSM procedures have provided acceptable localizing information for high signal-to-noise ratio evoked potential recordings (Scherg and Von Cramon 1985, 1986). Recently, however, STSM has been applied to data likely to contain non-negligible physiological noise, such as late evoked potentials (Scherg et al. 1989; Simpson et al. 1989) and MEG or EEG interictal spikes (Barth et al. 1989; Achim et al. 1990).


It is generally acknowledged that the amount of noise is reflected in the size of the confidence interval of the various source parameters. Little attention, however, has been paid to the effects of the spatio-temporal organization of the noise on STSM solutions, even though it is known that the "noise" from multichannel EEG or MEG recordings is not spatio-temporally random but is rather well organized topographically and temporally (e.g., Hjorth and Rodin 1988). Whenever EEG noise in the data is non-negligible, STSM should be expected to exploit the noise regularities to further minimize the residuals, in much the same way that multiple regression solutions exploit sampling error. The STSM solution obtained is thus likely to contain spurious or displaced sources that account for topographically organized noise. Although overall such solutions can account for a portion of the variance larger than the relative energy of the signal in the data, they could also leave true signal unaccounted for in some channels (local undermodelling) in favour of accounting for more noise in other channels (local overmodelling). Besides this possible tradeoff of neglecting some signal in favour of noise, the presence of spatio-temporally structured noise also likely hinders STSM by increasing the number of local minima of the error function. Also, while incorrect solutions should produce imperfect fits of the signal part of the data, the mere presence of noise, by forcing a tolerance for imperfect fits, makes it hard to distinguish perhaps imprecise but essentially correct solutions from erroneous solutions that explain the data similarly well. As a minimal safeguard against incorrect STSM interpretations, a requirement of completeness can be imposed on any source model, i.e., to be considered adequate a model should minimally explain satisfactorily all the signal content of the data. This is more usefully expressed as the constraint that there should be no signal, i.e., only noise, in the residuals. This constraint specifically means that models should be rejected if either they do not completely account for the signal portion of the data or they predict signal that is not present in the data. In those two cases, subtracting the model from the data respectively leaves or injects signal in the residuals. Thus residuals that contain signal are indicative of incorrect models. When the data matrix to which STSM is applied is an average of independent instances of the electrophysiological event, objective statistical tests can be applied to detect signal in the averaged residuals. The residual orthogonality test (ROT) was devised specifically for this purpose (Achim et al. 1988b); it uses the residuals from the individual instances of the event being modelled and evaluates whether these residuals depart systematically from mutual orthogonality.


A significant positive correlation among the residuals from independent trials indicates that signal is still present in these residuals. While the ROT cannot positively identify the correct solution, it should identify as incorrect many solutions that would otherwise appear acceptable, thus contributing importantly to reducing the set of acceptable source models for a given data set. The above considerations suggest that a number of variables could affect STSM models. On this basis we designed an STSM procedure which incorporates features to escape numerous local minima, multiple initial approximations, and the ROT to detect the presence of signal in the residuals. We examined the behaviour of this STSM procedure in modelling deliberately misleading data sets containing simulated signal from overlapping sources and low levels of EEG noise. Our aims were to document the potential problems discussed above, including patterned noise, source overlap and source signal-to-noise ratio, and to investigate the capacity of our STSM procedure to identify the sources in such adverse conditions.

Methods

Wave form simulations

The signal for all simulations was computed from perfectly dipolar sources in a homogeneous 1-sphere volume conductor (using the formula described by Fender 1987, p. 363), and the same physical model was used for STSM analysis, such that any localization inaccuracy cannot be attributed to the source and volume conduction models. The activation profile of each source was derived from a cosine function and incorporated amplitude and latency signal variability between independent instances of the simulated phenomenon. Two EEG records obtained on separate days from an adult subject were sampled at irregular intervals to provide realistic noise epochs. Developing the deliberately misleading data sets started by positioning 3 sources to produce partial mutual cancellation of their respective topographies. Three problems were created with the same 3 sources by varying the temporal overlap in their activation profiles or their relative peak amplitudes. Fig. 1 shows the positions and orientations of the sources and, for each problem, the average activation profile of each source. Details of wave form generation are given in Note 1. Problem A was built so that a single source was active at early and at late latencies, but one source (b) was never active alone; the peak amplitudes of the sources were in the ratio of 2 : 2 : 1. In problems B and C, the activation profiles of the 3 sources overlapped a great deal. In problem B, the 3 sources had equal peak amplitudes whereas in problem C, one of the strongly overlapping sources (c) had its peak amplitude decreased by 75%. Two versions of each problem were generated, corresponding to the 2 EEG records used as noise; A1, B1, and C1 incorporated the first set of noise epochs and problems A2, B2, and C2, the second set. A seventh problem (A1') was developed to investigate the differential robustness of the various STSM exploration methods to noisier data. Problem A1' was identical to problem A1 except that it had a lower signal-to-noise ratio. Fig. 2 shows the wave form distribution for one of the problems (A1) to illustrate the difficulty of visual inspection to resolve the contributions from different brain regions.

For each problem, 96 signal trials and 96 noise (EEG) trials were prepared. Each trial was composed of the computed signal and the recorded EEG values for the 19 channels of the 10-20 system (transformed to average reference for analysis) over an interval of 30 time points (150 msec). For noise epochs, the baseline at each channel was set to the channel average amplitude over the 96 trials before applying the average reference. The signal was scaled such that the mean of the 96 trials had a sum of squares across sites and latencies of 10,000; then the channel with maximal signal energy was determined and the noise was scaled to represent a specified amplitude signal-to-noise ratio at that channel. These ratios ranged from 7 to 13 depending on the problem and were selected so as to produce data in which the spatio-temporal cumulated standard variance of the mean (the estimated noise energy content) was between 3% and 4% of the total variance (7% in problem A1').
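A rough sketch of how such test data could be assembled is given below. The half-cosine window, the jitter ranges and the helper names are our own assumptions (the paper defers the exact wave form parameters to Note 1), but the scaling follows the description above: the trial average is normalized to a sum of squares of 10,000 and the EEG noise is rescaled to give a chosen amplitude signal-to-noise ratio (taken here as an RMS ratio) at the channel of maximal signal energy.

```python
import numpy as np

rng = np.random.default_rng(0)
N_TRIALS, N_CHAN, N_TIME = 96, 19, 30

def activation(peak, width, amplitude, n_time=N_TIME):
    """Half-cosine activation profile, zero outside its window (assumed shape)."""
    phase = (np.arange(n_time) - peak) / width * np.pi
    wave = amplitude * np.cos(phase)
    wave[np.abs(phase) > np.pi / 2] = 0.0
    return wave

def make_trials(topographies, peaks, widths, amps, eeg_noise, snr=10.0):
    """topographies: (N_CHAN, n_sources); eeg_noise: (N_TRIALS, N_CHAN, N_TIME)."""
    trials = np.empty((N_TRIALS, N_CHAN, N_TIME))
    for k in range(N_TRIALS):
        # amplitude and latency jitter between instances (assumed ranges)
        src = np.stack([activation(p + rng.integers(-2, 3), w,
                                   a * rng.uniform(0.8, 1.2))
                        for p, w, a in zip(peaks, widths, amps)])
        trials[k] = topographies @ src                          # noise-free signal
    trials *= np.sqrt(10_000.0 / np.sum(trials.mean(0) ** 2))   # mean SS = 10,000
    mean_signal = trials.mean(0)
    ch = np.argmax(np.sum(mean_signal ** 2, axis=1))            # max-energy channel
    sig_rms = np.sqrt(np.mean(mean_signal[ch] ** 2))
    noise_rms = np.sqrt(np.mean(eeg_noise[:, ch, :] ** 2))
    return trials + eeg_noise * (sig_rms / (snr * noise_rms))   # baseline handling omitted
```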


Fig. 1. Locations and orientations of the 3 dipoles in all problems (top). Average wave forms of the sources in problems A, B, and C (bottom).

Modelling procedures

In general, STSM consists of the following steps: (a) describe the approximate position and orientation of a small number of sources that could account for the data; (b) calculate the topography of each source (polarity and relative intensity at each recording site for the given montage); (c) estimate a wave shape for each source such that the combination of all sources (topographies and wave shapes) best reproduces the spatio-temporal average data matrix; (d) evaluate a goodness-of-fit index, typically the sum of squares of the residuals (differences between the data and the signals produced by the model); (e) slightly modify the current values of the parameters describing the sources (through an optimization algorithm) and repeat from step (b) as long as the goodness-of-fit index can be improved. In our applications, developing a model always started by selecting the best-fitting source triplet (3 orthogonal co-located sources) from 16 fixed initial positions (8 in each hemisphere), followed by optimization of its position.
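Steps (a)-(e) can be condensed into a short sketch in which the wave-shape regression of the earlier example reappears as step (c). The dipole formula below is only a stand-in (an unbounded homogeneous medium rather than the bounded sphere of Fender 1987), and the 6-parameter-per-source layout and the use of SciPy's Nelder-Mead are our own choices, not the authors' implementation.

```python
import numpy as np
from scipy.optimize import minimize

def dipole_topography(pos, ori, electrodes, sigma=0.33):
    """Potential of a current dipole in an unbounded homogeneous medium,
    used only as a stand-in for the bounded-sphere formula of Fender (1987).
    electrodes: (n_channels, 3) electrode positions."""
    ori = ori / np.linalg.norm(ori)            # unit orientation
    r = electrodes - pos                       # dipole-to-electrode vectors
    d = np.linalg.norm(r, axis=1)
    v = (r @ ori) / (4 * np.pi * sigma * d ** 3)
    return v - v.mean()                        # average reference

def residual_ss(params, electrodes, data, n_sources):
    p = params.reshape(n_sources, 6)           # x, y, z, ox, oy, oz per source
    # (b) topography of every source for the current parameter values
    G = np.column_stack([dipole_topography(p[i, :3], p[i, 3:], electrodes)
                         for i in range(n_sources)])
    # (c) wave shapes by linear regression on the average data matrix
    W, *_ = np.linalg.lstsq(G, data, rcond=None)
    # (d) goodness-of-fit index: sum of squared residuals
    return np.sum((data - G @ W) ** 2)

def fit_stsm(initial_params, electrodes, data, n_sources):
    # (e) iteratively modify the source parameters (Nelder-Mead simplex here)
    return minimize(residual_ss, np.ravel(initial_params),
                    args=(electrodes, data, n_sources), method="Nelder-Mead")
```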

Fig. 2. Distribution of average wave forms of problem A1 at 19 scalp sites. Wave forms are averages of 96 trials and comprise simulated signal and noise from EEG referenced to linked mastoids. Amplitude signal-to-noise ratio in the wave form of highest amplitude (C4) is 8.


When the position of a triplet had been optimized, its sources were reoriented using principal component analysis such that the first source had the orientation that accounted for the largest possible portion of the total variance explained by the triplet, and the third source had the orientation that accounted for the least variance. The resulting triplet constituted an early model which was transformed into an initial approximation of the source structure by combinations of the following steps: (1) removal of a source not clearly contributing to explain signal in the data (R), (2) global optimization, i.e., simultaneous optimization of all the parameters of all the sources currently in the model (O), (3) addition and optimization of a single source (S) or of a source triplet (T) while maintaining the previous sources fixed. Different combinations of these steps lead to different initial approximations that were used as starting points for a global non-linear optimization. We refer to the different sequences for producing initial approximations as models. Optimized models that are not rejected by any statistical test are called admissible solutions.

Two families of models were explored: models that adequately accounted for the signal in the data with as few sources as possible, and models that were allowed to contain one additional source, provided that their solution was not simply a near superset of a solution with fewer sources. Not surprisingly, it turns out that if N sources constitute an admissible solution, then a solution with the same N sources plus one source anywhere will also adequately account for the signal; the localization of the extra source could thus be quite unreliable. For our problems, solutions containing 2, 3, or 4 sources were investigated. Three models were derived for all problems. All models started with the estimation of the best-fitting triplet (T) and ended with a global optimization (O). In model TRO (Triplet-Remove-Optimize), the least significant source in the triplet was removed and the remaining two sources were globally optimized. Model TROSO (Triplet-Remove-Optimize-Source-Optimize) added a single source to model TRO (selected from the same 16 standard positions as for the addition of a source triplet), optimized it while the other two were maintained fixed, and then optimized all 3 sources together. Finally, another 3-source model (model TO) was obtained by directly optimizing together the 3 sources of the best-fitting source triplet.

For those problems in which the 2-source solution (model TRO) was found inadequate, a 4-source model was estimated by optimizing the 2 most contributing sources from each of 2 triplets without intermediate optimization (model TRTRO). In all cases, this model showed one source that did not clearly contribute to the solution. That source was therefore removed and the 3 remaining sources were re-optimized.
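The paper does not spell out how the principal-component reorientation of an optimized triplet was computed; one plausible way to implement it (our assumption, not the authors' code) is to decompose the triplet's modelled scalp contribution by SVD and re-express each component as a single oriented dipole at the shared location, ordered by explained variance:

```python
import numpy as np

def reorient_triplet(G_triplet, W_triplet, orientations):
    """Re-orient 3 co-located orthogonal dipoles along principal components.

    G_triplet    : (n_channels, 3) topographies of the orthogonal dipoles
    W_triplet    : (3, n_timepoints) their fitted wave shapes
    orientations : (3, 3) unit orientation vectors of the triplet (one per row)
    Returns variance-ordered orientations, topographies and wave shapes that
    span exactly the same modelled contribution as the original triplet.
    """
    M = G_triplet @ W_triplet                        # modelled contribution
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    new_G = U[:, :3]                                 # component topographies
    new_W = s[:3, None] * Vt[:3]                     # component wave shapes
    # each component topography lies in the span of the triplet, so it maps
    # to a fixed linear combination of the 3 orthogonal dipole orientations
    coeffs, *_ = np.linalg.lstsq(G_triplet, new_G, rcond=None)   # (3, 3)
    new_ori = coeffs.T @ orientations
    new_ori /= np.linalg.norm(new_ori, axis=1, keepdims=True)
    return new_ori, new_G, new_W
```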

Optimization

Optimization was performed by a modified simplex algorithm (Nelder and Mead 1965). The solution (local minimum) to which the simplex converged was used as a starting point for a new simplex having an orientation in multidimensional space different from those of the previous initial simplexes. This was repeated until 4 consecutive optimizations converged to the same solution. Each initial simplex occupied a large volume, so that the parameter space would be broadly sampled. This procedure typically results in escaping a number of local minima of the optimization criterion, particularly when the number of parameters to optimize is larger than 5. The optimization criterion (fit index) used was the sum of squared residuals. Occasionally in our procedure, a restriction was introduced that penalized the fit index associated with tentative solutions having one source within a specified spherical region in the head: this prevents any source from remaining within that spherical area and is sometimes effective in preventing convergence on a local minimum that is otherwise hard to escape. Moreover, all the optimizations penalized the fit index for nearly superimposed sources (i.e., two sources with very similar positions and fairly parallel orientations). For reasons still not understood, but possibly related to sparse spatial sampling, nearly superimposed sources associated with huge, nearly symmetrical wave shapes frequently contribute to fairly good fits of the data. These represent special local minima at which even a very slight perturbation of a single parameter may cause a disproportionate increase of the fit index.
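A minimal sketch of this restart-and-penalty scheme, built on SciPy's Nelder-Mead implementation rather than the authors' own simplex code, is given below; the penalty weights, simplex scale and tolerances are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import minimize

def penalized_fit_index(params, base_ss, n_sources,
                        forbidden_center=None, forbidden_radius=0.0,
                        min_separation=0.05, weight=1e3):
    """Residual sum of squares plus the two penalties described above
    (the superimposed-source check is simplified to positions only)."""
    p = params.reshape(n_sources, 6)
    penalty = 0.0
    if forbidden_center is not None:                 # keep sources out of a sphere
        d = np.linalg.norm(p[:, :3] - forbidden_center, axis=1)
        penalty += weight * np.sum(np.maximum(forbidden_radius - d, 0.0))
    for i in range(n_sources):                       # discourage superimposed sources
        for j in range(i + 1, n_sources):
            if np.linalg.norm(p[i, :3] - p[j, :3]) < min_separation:
                penalty += weight
    return base_ss(params) + penalty

def optimize_with_restarts(fit_index, x0, required_repeats=4,
                           tol=1e-6, scale=0.3, seed=0):
    """Re-initialize the simplex from its own solution, with a fresh,
    differently oriented large simplex each time, until 4 consecutive
    runs converge to the same minimum of the fit index."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    best, prev_fun, repeats = None, None, 0
    while repeats < required_repeats:
        simplex = x + scale * rng.standard_normal((len(x) + 1, len(x)))
        res = minimize(fit_index, x, method="Nelder-Mead",
                       options={"initial_simplex": simplex})
        repeats = repeats + 1 if (prev_fun is not None and
                                  abs(res.fun - prev_fun) < tol) else 1
        prev_fun, x = res.fun, res.x
        if best is None or res.fun < best.fun:
            best = res
    return best
```

In practice `fit_index` would be a closure over the data, e.g. `lambda p: penalized_fit_index(p, base_ss, n_sources)`, with `base_ss` the residual-sum-of-squares function of the previous sketch.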

Statistical procedures

Models were evaluated and compared using a number of statistical indices. The models were fitted on the average spatio-temporal matrix. All the statistical procedures, however, used the data from individual trials grouped into 16 sub-averages of 6 trials each. The expected sum of squared residuals (the estimated noise energy) was computed by cumulating over latencies and recording sites the squared standard error of the mean, which was then expressed as a percentage of the sum of squared values (the total energy) of the average spatio-temporal matrix. The remaining statistical procedures were based on the residual orthogonality test (ROT). For the purpose of this statistical test, signal is functionally defined as whatever pattern is constant from trial to trial. If a model exactly reproduces this signal, not predicting any signal not present in the data and not selectively accounting for the residual noise, then subtracting the signal of the model from independent trials should leave essentially independent noise samples.

The expected correlation between these noise samples is zero, as is the sum of their cross-products. The ROT is simply a t test comparing the average sum of cross-products over all possible pairs of trials to zero. Sub-averages may be used instead of individual trials when the number of trials is large; this reduces computation time and keeps the number of sums of cross-products smaller than the number of data points. For a given model, the spatio-temporal signal produced by crossing each source topography with its associated wave shape and summing across the sources was subtracted from each of the 16 independent sub-average matrices. For a global ROT, these matrices of residuals were transformed into vectors. For testing individual latencies or recording sites, the appropriate row or column vectors were extracted from the matrices of residuals. For each test, the sums of cross-products were calculated for all 16 × 15/2 possible pairs of vectors of residuals. The t test comparing the average of these 120 sums of cross-products to zero thus had 119 degrees of freedom.

The signal in our simulations was made variable (see Note 1) because this likely occurs to some extent in many natural situations of event-related potentials and interictal spikes. This has no importance for the modelling proper, which uses only the averaged data, but could adversely affect the ROT, which uses the individual trials and implicitly assumes that the signal is constant. It thus appeared useful to verify that the ROT remains sensitive when the signal fluctuates across trials. Signal still present in the residuals (undermodelling) biases the sums of cross-products positively. On the other hand, subtracting a portion of the average noise from the residuals (overmodelling) biases the sums of cross-products negatively. Our ROT results are reported as cumulative probabilities, i.e., the area under the curve to the left of the t score produced by the test. ROT cumulative probabilities less than 0.05 (large negative t scores) are taken to indicate overmodelling, while undermodelling is identified by ROT probabilities greater than 0.95 (large positive t scores). We used the global ROT as well as the latency-specific and channel-specific ROTs to evaluate the models. A model was rejected as inadequate if it showed undermodelling on any of these tests. Since least-squares solutions account as closely as possible for the whole data, not only their signal portion, some degree of overmodelling can be expected and therefore models were not rejected despite showing significant overmodelling.
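The ROT itself reduces to a few lines. The sketch below (our own rendering) assumes the residuals of the 16 sub-averages have already been arranged as vectors, globally or per latency / per channel, and uses SciPy only for the t distribution; with 16 sub-averages it yields the 120 cross-products and 119 degrees of freedom mentioned above.

```python
import numpy as np
from itertools import combinations
from scipy.stats import t as t_dist

def residual_orthogonality_test(residuals):
    """Residual orthogonality test (ROT).

    residuals : (n_subaverages, n_values) array; each row is the residual
    vector of one independent sub-average after subtraction of the model
    (a flattened channel x latency matrix for the global test, or a single
    row/column of it for the latency- and channel-specific tests).
    Returns the cumulative probability of the t score: values > 0.95
    indicate undermodelling (signal left in the residuals), values < 0.05
    indicate overmodelling.
    """
    n = residuals.shape[0]
    cross = np.array([residuals[i] @ residuals[j]
                      for i, j in combinations(range(n), 2)])   # n*(n-1)/2 pairs
    t_score = cross.mean() / (cross.std(ddof=1) / np.sqrt(len(cross)))
    return t_dist.cdf(t_score, df=len(cross) - 1)
```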

Finally, a statistic was devised to test whether a source contributes significantly to the model, i.e., whether it significantly accounts for signal. This source test is similar to the ROT, but without subtraction of a model. It consists in verifying that the wave shape associated with the source over the trials (with the remaining sources present) carries some signal that biases the sums of cross-products positively. This is done by a t test comparing the mean sum of cross-products of the wave shapes for all pairs of trials to zero. When the wave shapes are derived from the average data by least-squares procedures, this source test is biased in favour of detecting signal even when only noise is present (because the positively correlated noise samples contribute to build up the average residual noise while negatively correlated noise samples tend to cancel each other). Sources that failed to provide strong evidence (probabilities > 0.99) of signal in their average wave shape despite this positive bias were systematically excluded from the final models.

Before applying STSM proper, two reference solutions were determined for each problem. The first reference model (Exact Solution) contained the exact position and orientation of each source, but their wave shapes were derived linearly from the data. Secondly, the Exact Solution served as the best possible initial approximation for a global optimization on each problem. The resulting Optimized Exact Solutions (OES), not the Exact Solutions, represent the optimal solutions that STSM is expected to find and therefore constitute the proper reference for all the solutions derived below. Solutions differing from one another only by the order in which the sources are described (e.g., the first 5 parameters describing source b rather than source a) or by the polarity attributed to a given source were considered identical.

To observe the effect of signal variability on the ROT, a copy of problem A1 was constructed that contained the same noise and the same average signal across trials, but in which the signal was constant from trial to trial. The ROTs on this constant-signal version were compared with those on the corresponding variable-signal version (A1) at the Exact Solution and at the OES.

Because of the noise structure and because the generating sources were deliberately selected to produce ambiguous scalp topographies that may be approximated by different models, we did not assume a priori that the OES would necessarily constitute the absolute best fit. We expected, though, that both the Exact Solution and the OES would satisfy all the ROTs. Whether other local minima, better or worse than the OES, would also constitute admissible solutions remained an open question. The success of our STSM procedures was to be evaluated according to whether the OES would systematically be among a small number of identified admissible models.
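Since solutions are compared up to source order and polarity, it is convenient to put each solution into a canonical form before comparing them. The helper below is our own construction (the paper does not describe this bookkeeping), assuming each source is summarized by a position and a unit orientation whose sign can be absorbed by its wave shape.

```python
import numpy as np

def canonical_form(positions, orientations):
    """Return source parameters in an order- and polarity-independent form.

    positions, orientations : (n_sources, 3) arrays.  Each orientation is
    flipped to a fixed half of the unit sphere (the polarity inversion is
    absorbed by the wave shape) and sources are sorted lexicographically,
    so two solutions differing only in source order or polarity compare equal.
    """
    ori = orientations / np.linalg.norm(orientations, axis=1, keepdims=True)
    signs = np.where(ori[:, 0] < 0, -1.0, 1.0)       # fix sign by first component
    params = np.hstack([positions, ori * signs[:, None]])
    order = np.lexsort(params.T[::-1])               # sort sources lexicographically
    return params[order]

def same_solution(sol_a, sol_b, tol=1e-3):
    """True if two solutions are identical up to source order and polarity."""
    return np.allclose(canonical_form(*sol_a), canonical_form(*sol_b), atol=tol)
```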


TABLE 1

Residuals expected from the noise level and observed at the Exact and Optimized Exact Solutions (OES), and localization errors at the OES expressed as a fraction of the head radius, for problems A, B and C illustrated in Fig. 1. Problems with the same initial letter differ only by the noise added.

                      Sum of squared residuals obtained           Error at Optimized Solution
                      (as a % of data sum of squares)             for source
Problem               From standard  At Exact   At Optimized      a        b        c
                      variance       Solution   Solution
A1                    3.1%           0.8%       0.6%              0.031    0.036    0.032
A1'                   7.1%           1.9%       1.1%              0.043    0.058    0.040
A2                    3.5%           0.8%       0.4%              0.025    0.035    0.030
B1                    3.3%           0.9%       0.6%              0.047    0.068    0.059
B2                    3.9%           0.8%       0.5%              0.060    0.074    0.130
C1                    3.9%           1.0%       0.6%              0.073    0.101    0.345
C1 (2 main sources)                  2.6%       2.0%              0.043    0.028
C2                    3.9%           0.8%       0.4%              0.274    0.070    0.336
C2 (2 main sources)                  2.6%       1.5%              0.077    0.049

Results

Exact and optimized exact solutions

Table I shows the squared noise estimates for each problem, i.e., the residuals expected from the variation of the sub-averages about their grand mean. These can be compared to the residuals obtained at the Exact Solution and at the OES. It can be seen that both solutions systematically produce substantially smaller residuals than the noise estimate, which means that they account for some of the noise present in the data. Table I also shows the localization errors of the sources in the OES, expressed as a fraction of the head radius. Optimized sources were reasonably close to their exact positions (localization errors between 2.5% and 13% of head radius), except for problems C1 and C2, in which one source had a much smaller amplitude than the other two sources. For those two problems, the exact 2-source solution (excluding the source with the smaller amplitude) left significant signal in the residuals of two channels for C1 and of one channel for C2. The corresponding 2-source OES, however, passed all ROTs.

At the Exact Solution, the probability associated with the global ROT was less than 0.025 for all 7 problems, clearly demonstrating that significant overmodelling should not be taken as a reason to reject a solution when the wave shapes are least-squares estimates.

Versions of problems A, B and C without any EEG added as noise all required 3 sources in the solution to pass all ROTs. The Exact Solution, leaving no residuals, was the only solution found that passed all tests. Even in these noise-free data, however, the first few minima on which the simplex converged did not represent the correct solution. For instance, in the optimization of model TO for problem A without noise (0.4% expected residuals due to signal variability), the first simplex converged on a solution that left 3.4% residuals and had source b seriously mislocalized; the Exact Solution was reached at the convergence point of the third simplex. For the same problem, the final optimization in model TROSO converged consecutively at 1.9%, 0.8% and 0.7% residuals and could not escape the latter local minimum; the ROTs indicated significant signal unaccounted for at 6 latencies and at 5 channels. When noise was present, the number of simplex re-initializations before no further improvement could be found was usually much higher, often above 10.

In the comparison of the ROTs in problem A1 with and without signal fluctuation from trial to trial, variability of the signal was found to generally decrease the ROT probability: the global ROT probability went from 0.015 to 0.011 at the Exact Solution and from 0.010 to 0.007 at the OES when the signal acquired variability. Although ROT increases were seen at a few channels, the channel-by-channel ROT probabilities generally decreased, the average changing from 0.158 to 0.139 at the Exact Solution and from 0.132 to 0.119 at the OES. Thus trial-to-trial variability in the signal wave shapes slightly biases against detecting signal in the residuals.

Problem A1

After optimizing the best-fitting source triplet on problem A1, it was found that only two of its sources accounted significantly for signal in the data. The global optimization of these two sources (model TRO) left 6.0% residuals and the ROTs showed significant residual signal at several channels and latencies. When a third source was optimized (model TROSO), the solution obtained was identical to the OES presented in Table I. The signal was completely accounted for at all channels and latencies (ROT probabilities < 0.22). The global optimization from the 3 sources of the optimized source triplet (model TO) also converged to the OES.



In model TRTRO, the two significantly contributing sources from the best-fitting triplet were kept and maintained fixed while a second triplet was optimized; its non-significant source was then removed and the 4 remaining sources were optimized. This resulted in 0.3% residuals accompanied by overmodelling at several latencies and recording sites. One of the sources did not account for signal in the data as clearly as the remaining 3 (P = 0.97 compared to P > 0.995 for the others). Removing that source and optimizing the remaining 3 sources gave the OES.


Problem A1'

In this noisier version of problem A1, the 2-source model TRO left 7.4% residuals with significant residual signal at several channels and latencies. Model TROSO yielded a statistically adequate model with 2.1% residuals and a maximal ROT probability of 0.28. The localization errors for the 3 sources were respectively 0.082, 0.262 and 0.040, source b being mislocalized mostly because of a grossly overestimated eccentricity. That the OES was not found in this case indicates that a local minimum was reached that could not be escaped using our current procedures. Model TO, however, converged to the OES, which accounted for all the signal (P < 0.23 for all ROTs). Model TRTRO had one weakly contributing source (P = 0.97). When it was removed and the solution reoptimized, the OES was again obtained.

Fig. 3. Average (thick line) and individual (thin lines) residual wave shapes obtained in problem A2 by subtracting the wave shape produced by model TROSO for channel P4 from each of 16 independent replications of the response at that channel. The horizontal axis represents the zero level. The ROT detected a significant pattern of deviation from baseline in those residuals (1-tailed t(119) = 2.23, P = 0.014), which appears to be due to a negative (downward) bias during the first half of the epoch.

Problem A2

In problem A2, model TRO left 5.9% residuals with several ROTs showing significant signal in the residuals. Model TROSO, with one additional source, yielded a solution with 1.0% residuals; the ROT probabilities were below 0.24 at all sites except at P4 (P = 0.99), which led to rejection of this solution as inadequate. Fig. 3 shows the superimposed residuals from the 16 independent sub-averages at that channel; the significant ROT corresponds to the detection of a common pattern of deviation from zero among the independent residuals. In this model, the mislocation distances were respectively 0.307, 0.042 and 0.019. Source a was close to the midline in the left hemisphere at an eccentricity of 0.43. Placing a restriction sphere of radius 0.2 centred on the position of this source and reoptimizing globally permitted escape from the local minimum in which the previous optimization was trapped and resulted in the OES (0.4% residuals). When the sources of the best-fitting source triplet were optimized globally (model TO), the residuals were at 1.1%; the ROT probabilities were below 0.30 at all channels except, again, at P4 (P > 0.995), indicating an inadequate model. The distances between the obtained and actual source positions were respectively 0.319, 0.039 and 0.030. The 2-triplet model (model TRTRO), with the subsequent removal of its non-contributing fourth source (P = 0.67) and re-optimization, converged to the OES.


Problem B1

In problems B1 and B2, sources b and c had almost parallel activations. In problem B1, optimizing the best two sources from the original optimized source triplet (model TRO) left 1.5% residuals with a maximal ROT probability of 0.84 across channels, thus constituting a marginally admissible solution. The localization errors to the nearest sources (a and c) were 0.038 and 0.249; the latter source in the model thus constituted a compromise between sources b and c. Clearly, the approximation of source c in the OES (3 sources) is much better than in model TRO (2 sources). Model TROSO converged to the OES. Model TO produced 0.62% residuals (compared to 0.59% in the OES) with signal completely explained at all channels and all latencies (maximal ROT probability of 0.20). However, the localization errors were respectively 0.023, 0.562 and 0.301. This model was clearly not a superset of model TRO. This relationship is less clear for the earlier 3-source solution (TROSO), in which the source approximating source c is at a distance of 0.195 from the corresponding approximation in model TRO.

Problem B2

For problem B2, model TRO produced an admissible 2-source solution with 1.1% residuals, a global ROT probability of 0.038 and a maximal channel ROT probability of 0.45. In this model, the localization errors to the nearest sources (a and c) were 0.101 and 0.229. Adding one optimized source and re-optimizing globally (model TROSO) placed the new source in the chin area and left 0.6% residuals, now with a maximal channel ROT of 0.18; the source test detected significant signal (P > 0.995) associated with each source, even the unrealistic third source.


When we did not optimize this third source separately before the global optimization, the model reproduced the OES (0.5% residuals), as did model TO. This 3-source solution appears as a near superset of the TRO 2-source solution: the distances between the estimates of the corresponding sources in the two solutions were 0.075 and 0.103 for sources a and c respectively. Yet, as in problem B1, the 3-source solution is substantially more accurate than the 2-source solution for localizing source c.

Problem C1

In problems C1 and C2, the activation of source c was again nearly parallel to that of source b, but its amplitude was decreased by 75%. In problem C1, model TRO produced the 2-source OES described in Table I, which had a maximum ROT probability of 0.85. This marginally admissible 2-source solution showed smaller errors than the 3-source OES in localizing the two strong sources of problem C. The 3-source OES, however, can be considered a near superset of model TRO, with distances of 0.053 and 0.121 between the two approximations of sources a and b respectively. Model TROSO produced quantitative indices quite similar to those of the OES, with residuals at 0.567% (compared to 0.591% for the OES, showing that the latter was not the least-squares solution) and a maximal ROT probability of 0.19. The fit index of this model is thus better than that of the OES; yet only source a is acceptably localized, the distances from the exact positions of the sources being respectively 0.031, 0.306, and 0.413. This model is not a superset of model TRO, a distance of 0.322 separating the two approximations of source b.

Of relevance to the discussion of the subset-superset relationship, we noted that adding a single optimized source to model TRO, without the global optimization, left the new source at a distance of 0.078 from source c. The residuals were at 0.9%, the global ROT probability was 0.009 and the maximal channel ROT probability was 0.31. This is a strict superset of the admissible TRO solution and yet the extra source was appropriately placed; this characteristic was lost by the global optimization. Thus, in this case, the initial approximation was much closer to the Exact Solution than were the least-squares solution or the OES. To verify whether this already successful initial approximation was at or near a local minimum (for all sources optimized together), we repeated its global optimization with a very small initial simplex. This did not stay in the immediate vicinity of the initial approximation. It did not converge, however, to the least-squares solution but to the OES.


Model TO also yielded the OES, rather than the least-squares solution, with a maximal channel ROT probability of 0.19. Contrary to what was seen in problems B1 and B2, here neither of the 3-source solutions (OES and least-squares) constituted an improvement in the accuracy of localization of the two sources of model TRO; they rather constituted decrements in localization accuracy.

Problem C2

In C2, the 2-source model TRO converged to the 2-source OES, with a maximal channel ROT probability of 0.61. Model TROSO yielded a solution very similar to the 3-source OES, with a maximal channel ROT probability of 0.16. The 3-source solution was not a near superset of the 2-source solution, as the approximation of source a in model TROSO was at a distance of 0.335 from the approximation of the same source in model TRO. Model TO did not find the OES: it left 0.9% residuals with a maximal ROT probability of 0.25 across channels; the localization errors were respectively 0.421, 0.110 and 0.376. Again, this model was not a superset of model TRO, with the two approximations of source a at 0.358 from each other. Model TRTRO was also tried on this problem, because it had proved successful in all 3 problems in which it was needed. The source test showed that all 4 sources contributed significantly to explaining some signal. The sources, however, were all misplaced. Source a was the most closely approximated of all 3 sources, with a localization error of 0.144. As for problem C1, the 3-source solutions (and, here, a 4-source solution) constituted decrements in localization accuracy for sources a and b, without even correctly localizing the weak source c.

Discussion

If problems C1 and C2 are considered as containing only two significant sources, we observe that, in all cases examined, the correct solution was among the admissible solutions found by our STSM procedures, despite our attempts to fool the system when we synthesized the data. This required the use of several initial approximations and, for each of them, a multiplicity of optimizations (initializing a new simplex with the solution at which the previous simplex converged) to escape local minima. In some problems, however, the modelling procedures also led to other statistically acceptable solutions that accounted for nearly as much of the data variance.

For problems A1 and A2, which did not have extreme overlap in the activation profiles of the sources, no 2-source solution satisfied the statistical tests. With 3 sources, the different initial procedures all converged on the least-squares solution. When the noise level was increased (problem A1'), only model TROSO did not achieve the least-squares solution.

Problems B1, B2, C1 and C2, containing substantial temporal overlap between the activation profiles of two sources, were deficient in an important source of information guiding STSM in localizing the sources, namely the temporal fluctuation of their combined topography. Nevertheless, a correct decomposition was among the small set of admissible solutions found for these problems, which included 2-source and 3-source solutions. As anticipated, the large temporal overlap between two of the sources permitted the ambiguous pattern of their combined scalp topographies to be maintained over time. This resulted in a poor signal-to-noise ratio for the disambiguating information (i.e., for the mismatch over time resulting from the incorrect interpretation of the ambiguous topographies). The structured noise explained by the model compensated for this small signal in the residuals, at least enough to prevent the ROT from detecting a significant mismatch.

For problems B1 and B2, the least-squares 3-source solutions provided essentially correct localizations while the 2-source solutions mislocalized one of the sources. For problems C1 and C2, the opposite was seen; the 2-source solutions provided correct localizations for the two significant sources, and the various admissible 3-source solutions produced serious mislocalizations. The incorrect 2-source solutions were similar in B1 and B2, reflecting the fact that there were too few sources and that one incorrectly localized source represented a compromise position to account simultaneously for the topographic effects of more than one source. On the contrary, the incorrect 3-source solutions differed between C1 and C2, reflecting the fact that there were only two significantly contributing sources and that the 3-source models were greatly influenced by residual noise. Failure of a solution to replicate in the same subject thus appears an indication of unreliability and could constitute a rejection criterion when an alternate admissible solution exists. Good replication of a solution, however, is not a proof of reliability, for the unreliability may come from the ambiguous structure of the signal rather than from the noise.

In all cases, these so-called correct localizations had to tolerate deviations from the perfect localizations by distances from 2.5% to 13% of the head radius. This is clearly an effect of the spatio-temporally organized residual noise, since perfect solutions were found when no EEG was added to the signal. Although we did not compare structured noise with independent noise at each data point, it seems clear that the effects of independent noise would tend to cancel while those of correlated noise tend to reinforce one another.

While it seems that STSM can be expected to provide a correct interpretation of the gross structure of the spatio-temporally recorded signal, claiming precise localizations appears unjustified unless extremely favourable signal-to-noise ratios are obtained. This correct interpretation may, however, be one of a number of admissible alternatives.

The experience gained from running these simulations can be expressed as a number of observations about STSM, most of which should apply to a much broader class of problems than those studied here.

(1) The model consisting of the exact positions and orientations of the sources of the signal and of associated temporal activation profiles obtained by linear regression incorporates much noise in the wave shapes, leaving systematically smaller residuals than the estimated noise content of the data.

(2) In the presence of spatio-temporally organized residual noise, the exact localization solution is not a minimum of the error function. The least-squares optimization produces an opportunistic compromise between perfectly explaining the signal and explaining as much of the noise as it can. As seen in Table I, this opportunistic repositioning of the sources led, on average, to a further 40% reduction of the cumulated squared error index. Overmodelling thus appears hardly avoidable. Furthermore, since significant global overmodelling occurs even at the exact positions and orientations of the sources, it should certainly not be considered a valid cause of model rejection when source localization is the prime interest.

(3) Even in the absence of noise, the least-squares criterion function may present many local minima. Accepting the first point of convergence of the optimization procedure would often produce serious localization errors. When the simplex algorithm is used, reinitializing the simplex with its previous solution is an effective means of escaping many local minima, and thus of finding much better solutions. This approach, however, did not necessarily lead to the least-squares solution: in many instances our procedure stopped at suboptimal local minima that it could not escape. In those cases, the further use of a variety of initial approximations led to solutions that fitted the data more closely, identifying the former solutions as suboptimal.

(4) The specific sequence of steps taken to elaborate an initial approximation can make a substantial difference in the best solution produced by the global optimization procedure. Although it is often efficient, in terms of computing time, to solve the problem in parts (e.g., first optimizing a subset of the sources, then adding sources and optimizing them while maintaining fixed the previously introduced sources, and finally using this result as the initial approximation for the global optimization), we reached the conclusion that this approach favours suboptimal global solutions.

using this result as the initial approximation for the global optimization), we reached the conclusion that this approach favours suboptimal global solutions. As seen in particular in A1', the optimization of a subset of the sources may force one dipole into an intermediate position of compromise between two of the actual sources, so that it simultaneously accounts for a reasonable part of the activities of those two sources. Also, as was the case with B2, the separate optimization of an added source may lead it to a spurious position where it accounts well for noise, because one of the earlier sources already accounts simultaneously, though imperfectly, for a pair of sources. In both cases, the 2-source initial approximations derived by this approach constituted hard-to-escape local minima when the third source was introduced, thus preventing the correct solution from being identified. Although the opposite approach of elaborating an initial approximation containing more sources than required (model TRTRO with problems A1, A1' and A2) generally led to the least-squares solution in the present applications, experience with other problems (and with problem C2 of this study) showed that this is not a general rule.

(5) Theoretical discussions of source modelling (e.g., Fender 1987; De Munck 1989; Scherg 1989) suggest that the least-squares solution for the correct number of sources, if it were known, would be the best achievable estimate of the source structure. Experience with the present problems raises doubts about this on 2 grounds. First, the concept of the correct number of sources is not as clear-cut as it first sounds. Problem C, with no noise, has 3 sources, but this reduces to only 2 significant sources when noise whose energy constitutes between 3% and 4% of the energy of the spatio-temporal data matrix is added. It seems obvious that there should nearly always be small sources that do not contribute significantly to the data and are therefore not expected to be localized by the STSM procedures (e.g., brain-stem sources in studies of long latency event-related cognitive responses). Second, although in our problems the least-squares solution was in the vicinity of the Exact Solution and was the best possible choice among the set of admissible solutions obtained for each problem, it is far from obvious that this is necessarily the case. In problem C1 analysed with 3 sources, a solution yielding a lower error criterion than the OES could be found and yet was much worse than the OES. Moreover, the Exact Solution is not even a local minimum; its fit index tends to be more than 50% higher than that of the least-squares solution. Thus, at least one possible solution (the Exact Solution) is much more accurate than the OES yet does not reproduce the signal-plus-noise data quite as well. Finally, one is never sure to have found the absolute least-squares fit for the spatio-temporal data matrix.
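A minimal numerical sketch, which is not the procedure actually used in this study, may help make observations (1)-(3) concrete. It assumes a NumPy/SciPy environment, substitutes the unbounded-homogeneous-medium dipole potential for the single-sphere forward model, and uses illustrative names, tolerances and parameterizations throughout: the activation wave forms are obtained by linear regression for candidate dipole positions and orientations, the cumulated squared error of the resulting residuals serves as the criterion, and the Nelder-Mead simplex search is repeatedly reinitialized from its previous solution.

```python
# Minimal sketch (not the authors' code) of the two-stage least-squares fit and
# simplex-restart strategy of observations (1)-(3).  The forward model is the
# infinite-homogeneous-medium dipole potential, a stand-in for the single-sphere
# model; all names and settings are illustrative assumptions.
import numpy as np
from scipy.optimize import minimize

def gain_matrix(dipole_params, sensors, sigma=0.33):
    """Columns = scalp topography of each unit dipole.

    dipole_params: (n_sources, 6) array of [x, y, z, mx, my, mz]
    sensors:       (n_channels, 3) electrode positions
    """
    gains = []
    for x, y, z, mx, my, mz in dipole_params:
        r = sensors - np.array([x, y, z])            # vectors dipole -> electrodes
        d = np.linalg.norm(r, axis=1)
        v = (r @ np.array([mx, my, mz])) / (4 * np.pi * sigma * d ** 3)
        gains.append(v)
    return np.column_stack(gains)                    # (n_channels, n_sources)

def fit_waveforms(G, data):
    """Stage 1: activation wave forms by linear regression (least squares)."""
    W, *_ = np.linalg.lstsq(G, data, rcond=None)     # (n_sources, n_samples)
    return W

def error_index(flat_params, sensors, data, n_sources):
    """Cumulated squared error left once the wave forms are fitted linearly."""
    P = flat_params.reshape(n_sources, 6)
    G = gain_matrix(P, sensors)
    W = fit_waveforms(G, data)
    return np.sum((data - G @ W) ** 2)

def optimize_with_restarts(init_params, sensors, data, n_sources, max_restarts=20):
    """Stage 2: Nelder-Mead simplex, reinitialized from its previous solution
    until no further improvement, as one way of escaping local minima."""
    x, best = np.asarray(init_params, dtype=float).ravel(), np.inf
    for _ in range(max_restarts):
        res = minimize(error_index, x, args=(sensors, data, n_sources),
                       method="Nelder-Mead")
        if res.fun >= best - 1e-12:                  # restart brought no improvement
            break
        x, best = res.x, res.fun
    return x.reshape(n_sources, 6), best
```

Because the wave forms are eliminated analytically at every function evaluation, the nonlinear search operates only on the 6 position and orientation parameters per dipole; even so, as noted above, nothing guarantees that the restarts reach the absolute least-squares fit.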

Thus, it does not seem prudent to reject solutions that otherwise appear reasonable simply on the grounds that another solution was found that leaves slightly smaller residuals. Whenever the context of application of STSM permits, if several reasonably good but substantially different solutions can be found, they should be considered as parts of a "differential diagnosis" for the data set. The concept of differential diagnosis, or the temporary acceptance of alternate solutions compatible with the data currently available, implies a recognized need for other sources of localizing information to further restrict the set of possible solutions. For example, with EEG data, selective MEG recordings should often be able to eliminate most of the alternatives.

(6) If the strategy shifts from finding a unique solution, presented as the only possible explanation for the data, to finding a set of substantially different solutions that are compatible with the data, the ROT (or another similar statistical test) becomes an essential tool because it constitutes a strong constraint with which to objectively restrict the class of models that may be described as compatible with the data. As expected, even in the presence of some trial-to-trial variability of the signal, the ROT applied latency by latency and, more informatively, channel by channel proved useful in rejecting many inadequate models that left or introduced signal in the residuals. Whether the ROT will generally restrict the admissible solutions to a very small number remains to be seen. In the present applications, particularly in problems A1 and A2, a number of apparently good solutions could be rejected because they left significant signal in at least one channel. When the physical model (i.e., dipoles in a sphere) used in STSM is a simplification of the actual situation in which the data were generated (i.e., extended sources in the head of a real subject), the ROTs cannot be expected to be as low as they were here; it would be surprising, however, if the small differences between the actual topography of a physical source and the computed topography systematically resulted in the detection of unaccounted-for signal; it is rather expected that this would partly counteract the negative bias of the ROT due to the fact that STSM solutions also account for much of the noise, as well as the further negative bias associated with variability of the signal from trial to trial. A generic sketch of such a channel-by-channel screening of the residuals is given below.
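The exact form of the ROT is not restated in this section. Purely as a generic stand-in, the fragment below screens single-trial residuals channel by channel and latency by latency for signal that a model failed to remove, using an ordinary one-sample t test of the mean residual against zero; the array layout, the choice of test and the significance threshold are illustrative assumptions, not the test used in this study.

```python
# Generic stand-in for a channel-by-channel, latency-by-latency screening of the
# residuals (not the ROT itself): if the model removed all the signal, the
# single-trial residuals should average to zero across trials at every
# channel and latency.
import numpy as np
from scipy import stats

def screen_residuals(trials, model_fit, alpha=0.01):
    """trials:    (n_trials, n_channels, n_samples) single-trial data
    model_fit: (n_channels, n_samples) modelled average (e.g., G @ W)
    Returns a boolean (n_channels, n_samples) map of significant residual signal
    and the indices of channels containing at least one significant latency."""
    residuals = trials - model_fit                       # broadcast over trials
    t, p = stats.ttest_1samp(residuals, 0.0, axis=0)     # test mean residual = 0
    significant = p < alpha
    flagged_channels = np.where(significant.any(axis=1))[0]
    return significant, flagged_channels
```

A real application would also have to deal with the large number of simultaneous tests; the threshold used here is illustrative only.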

(7) We speculated that a solution which is a near superset of another that already satisfies all the ROTs (with fewer dipoles) should be of little interest, since no information should be gained by the addition of a source when all the signal appears already explained. Although we still lack a strict definition of a near superset, we observed in problems B1 and B2 that a 3-source solution having similarities with a 2-source solution already satisfying all the statistical criteria constituted a significant improvement in localization accuracy. In problems C1 and C2, however, the opposite was seen: the new positions of the first two sources were further away from the exact locations and the added third source was erroneously positioned. Furthermore, the added source in a strict superset of an acceptable solution in problem C1 was seen to be quite accurately localized. Our initial speculations about the near subset-superset relationship thus appear to be of little practical use in deciding which models should be retained when such a relationship holds. As an exception to this, we observed that, in an admissible solution, sources that do not contribute significantly to explaining the signal can be removed, and this still leaves an admissible model that can be further optimized. There appears to be no justification for retaining sources that seem to account only for noise in the data.

The above conclusions and observations are mostly based on existence proofs from deliberately misleading data sets. Although the number of sources and the signal-to-noise ratios were selected to correspond to actual epileptic spike problems (Achim et al. 1990), the prevalence, in real data sets, of the difficulties encountered here remains unknown, except for the large number of local minima. All questions associated with faithfully representing the geometry and electrical properties of the sources and of the head as a volume conductor were deliberately ignored, except to acknowledge that geometrical simplifications should force the presence of some signal in the residuals, which should partially compensate for the noise being accounted for. Another effect of ignoring the actual geometry of the brain has been called to our attention, namely that removing the smearing effect of the skull causes the wave shapes at neighbouring channels to be less correlated than they should be. This, however, should not affect the validity of our simulations for the purpose of demonstrating possible pitfalls of STSM, because very similar problems could have been produced with a 3-shell model of the head by correcting the eccentricities of our arbitrarily selected sources.

We applied STSM to deliberately misleading test data to learn about what could go wrong. One objective was to develop prudence in interpreting STSM models derived from our own data or published in the literature. Another objective was to identify important directions for the methodological development of STSM. The general conclusion from the present work is that STSM can be reliable, although not extremely precise, at finding the sources responsible for the signal, but that it should not, in isolation from other sources of information, be expected to provide a unique unambiguous solution in all cases. STSM methodology, however, is still very much in evolution. Further
progress will come from the systematic use of restrictions on where in the head the optimization may place the sources, coupled with optimization criteria modified to incorporate the ROTs. By forbidding sources from an area in which a source is believed to lie, either alternate admissible solutions will be identified, or the failure to find any admissible solution will constitute strong evidence that the area does contribute to the data.
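One way such a spatial restriction could be folded into the error criterion is sketched below, under the same illustrative assumptions as the earlier fragments; the penalty form, the spherical exclusion region and the weight are assumptions, and how the ROTs would best enter the criterion is left open here.

```python
# Illustrative sketch of a spatially restricted error criterion: dipoles are kept
# out of a forbidden spherical region by penalizing the least-squares index.
# The region, penalty weight and parameter layout are assumptions, not the
# authors' implementation.
import numpy as np

def restricted_error_index(flat_params, base_error, n_sources,
                           forbidden_centre, forbidden_radius, weight=1e6):
    """base_error: callable returning the unrestricted least-squares index
    (e.g., the error_index of the earlier sketch) for flat_params."""
    P = flat_params.reshape(n_sources, 6)
    d = np.linalg.norm(P[:, :3] - np.asarray(forbidden_centre), axis=1)
    penetration = np.clip(forbidden_radius - d, 0.0, None)   # depth inside sphere
    return base_error(flat_params) + weight * np.sum(penetration ** 2)
```

Minimizing such a penalized index with the same restart procedure would then either yield an admissible model with every dipole outside the excluded area, or fail to do so, which is the kind of evidence referred to above.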

Note 1

In all problems, the 3 sources had monophasic activation profiles controlled by 3 parameters: peak amplitude (PA), onset latency (OL), and duration (DU). Between onset and offset the activation profiles were generated by:

Amplitude(t) = PA (0.5 - 0.5 cos((t + 0.5 - OL) 2π/DU))

On successive epochs, the signal was generated by drawing the 3 parameters from normal distributions with the means and nominal deviations given below. For each parameter separately, a correlation of 0.89 was introduced between the 3 sources (by multiplying the nominal deviation by two-thirds of a common N(0, 1) distributed random number plus one-third of an individual random number). The same pseudo-random number sequence was used for all problems to exclude random number effects as a possible cause of differences among the problems. The mean parameter values are given as [PA, OL, DU]. For problem A, they were [20, 3.5, 15], [20, 7, 15] and [10, 15, 30] for sources a, b and c respectively. For problem B, they were [20, 4, 15], [20, 8, 20] and [20, 7, 20]. For problem C, they were as in problem B except for the amplitude of source c, which was reduced from 20 to 5 units. The nominal deviations for all sources in all problems were 3 sampling points for onset latency and 1 sampling point for duration; the nominal deviation for peak amplitude was 2 units for sources a and b and 1 unit for source c, in all problems.
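As a concrete illustration of this generation scheme, the sketch below (again not the original code) draws one epoch's parameters and builds the corresponding activation profiles; the half-sample offset follows the formula as reconstructed above, and the epoch length, random seed and unit conventions are assumptions.

```python
# Illustrative sketch of the epoch-by-epoch activation profiles of Note 1:
# raised-cosine profiles with per-epoch parameters drawn from normal
# distributions sharing a common random component across the 3 sources.
# Epoch length, seed and the unit of "sampling points" are assumptions.
import numpy as np

rng = np.random.default_rng(0)

def activation(PA, OL, DU, n_samples):
    """Monophasic raised-cosine profile; zero outside [OL, OL + DU]."""
    t = np.arange(n_samples)
    a = PA * (0.5 - 0.5 * np.cos(2 * np.pi * (t + 0.5 - OL) / DU))
    a[(t + 0.5 < OL) | (t + 0.5 > OL + DU)] = 0.0
    return a

def epoch_parameters(means, nominal_devs):
    """means, nominal_devs: (3 sources, 3 parameters) arrays ordered [PA, OL, DU].
    For each parameter, the deviation mixes two-thirds of a random number common
    to the 3 sources with one-third of a source-specific random number."""
    common = rng.standard_normal(3)                    # one per parameter
    individual = rng.standard_normal((3, 3))           # per source and parameter
    return means + nominal_devs * (2 / 3 * common + 1 / 3 * individual)

# Problem A means (sources a, b, c) and nominal deviations, as listed above.
means_A = np.array([[20, 3.5, 15], [20, 7, 15], [10, 15, 30]], dtype=float)
devs_A = np.array([[2, 3, 1], [2, 3, 1], [1, 3, 1]], dtype=float)

params = epoch_parameters(means_A, devs_A)             # one epoch's parameters
profiles = np.array([activation(*p, n_samples=64) for p in params])
```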

References

Achim, A., Richer, F. and Saint-Hilaire, J.-M. Methods for separating overlapping sources of neuroelectric data. Brain Topogr., 1988a, 1: 22-28.

Achim, A., Richer, F., Alain, C. and Saint-Hilaire, J.-M. A test of model adequacy applied to the dimensionality of multichannel average auditory evoked potentials. In: D. Samson-Dollfus et al. (Eds.), Statistics and Topography in Quantitative EEG. Elsevier, Paris, 1988b: 161-171.

Achim, A., Richer, F., Wong, P.K.H. and Saint-Hilaire, J.-M. Application of spatio-temporal dipole modelling and model adequacy test to averaged inter-ictal spikes in rolandic epilepsy. Electroenceph. clin. Neurophysiol., 1990, 75: S1 (abstract).

Barth, D.S., Baumgartner, C. and Sutherling, W.W. Neuromagnetic field modeling of multiple brain regions producing interictal spikes in human epilepsy. Electroenceph. clin. Neurophysiol., 1989, 73: 389-402.

De Munck, J.C. A Mathematical and Physical Interpretation of the Electromagnetic Field of the Brain. Ph.D. Dissertation. University of Amsterdam, 1989.

Fender, D.H. Source localization of brain electrical activity. In: A.S. Gevins and A. Rémond (Eds.), Methods of Analysis of Brain Electrical and Magnetic Signals. Elsevier, Amsterdam, 1987: 355-403.

Gloor, P. Neuronal generators and the problem of localization in electroencephalography: application of volume conductor theory to electroencephalography. J. Clin. Neurophysiol., 1985, 2: 327-354.

Hjorth, B. and Rodin, E. Extraction of "deep" components from the scalp EEG. Brain Topogr., 1988, 1: 65-69.

Nelder, J.A. and Mead, R. A simplex method for function minimization. Computer J., 1965, 7: 308-313.

Scherg, M. Fundamentals of dipole source potential analysis. In: F. Grandori and G. Romani (Eds.), Auditory Evoked Electric and Magnetic Fields. Topographic Mapping and Functional Localization. Karger, Basel, 1989.

Scherg, M. and Von Cramon, D. Two bilateral sources of the late AEP as identified by a spatio-temporal dipole model. Electroenceph. clin. Neurophysiol., 1985, 62: 32-44.

Scherg, M. and Von Cramon, D. Evoked dipole source potentials of the human auditory cortex. Electroenceph. clin. Neurophysiol., 1986, 65: 344-360.

Scherg, M., Vajsar, J. and Picton, T.W. A source analysis of the late human auditory evoked potentials. J. Cogn. Neurosci., 1989, 1: 336-355.

Simpson, G.V., Scherg, M., Ritter, W. and Vaughan, H.G. Localization and temporal activity functions of brain sources generating the human visual ERP. In: EPIC IX Int. Conf. on Event-Related Potentials of the Brain. Poster session 1, 1989: 12-13.

Wood, C.C. Application of dipole localization methods to source identification of human evoked potentials. Ann. NY Acad. Sci., 1982, 388: 139-155.