NeuroImage 101 (2014) 610–624
Wedge MUSIC: A novel approach to examine experimental differences of brain source connectivity patterns from EEG/MEG data

Arne Ewald a,b,c,*, Forooz Shahbazi Avarvand d,b, Guido Nolte a

a Dept. of Neurophysiology and Pathophysiology, University Medical Center Hamburg-Eppendorf, Martinistr. 52, 20246 Hamburg, Germany
b Berlin Institute of Technology, Machine Learning Laboratory, Franklinstr. 28/29, 10587 Berlin, Germany
c NIRx Medizintechnik GmbH, Baumbachstr. 17, 13189 Berlin, Germany
d Fraunhofer FOKUS, Kaiserin Augusta Allee 31, 10589 Berlin, Germany

Article history: Accepted 8 July 2014; Available online 17 July 2014

Keywords: Electroencephalography, Magnetoencephalography, Functional connectivity, Source localization, Imaginary part of coherency, Volume conduction, Class differences

Abstract

We introduce a novel method to estimate bivariate synchronization, i.e. interacting brain sources at a specific frequency or band, from MEG or EEG data that is robust to artifacts of volume conduction. The data-driven calculation is solely based on the imaginary part of the cross-spectrum, as opposed to the imaginary part of coherency. In principle, the method quantifies how strongly a synchronization between a distinct pair of brain sources is present in the data. As input to the method, all pairs of pre-defined locations inside the brain can be used, which is exhaustive and computationally demanding. Alternatively, reference sources can be used that have been identified by any source reconstruction technique in a prior analysis step. We introduce different variants of the method and evaluate their performance in simulations. As a particular advantage of the proposed methodology, we demonstrate that the novel approach is capable of investigating differences in brain source interactions between experimental conditions or with respect to a certain baseline. For measured data, we first show the application on resting state MEG data, where we find locally synchronized sources in the motor cortex based on the sensorimotor idle rhythms. Finally, we show an example on EEG motor imagery data where we contrast hand and foot movements. Here, we also find local interactions in the expected brain areas. © 2014 Elsevier Inc. All rights reserved.

Introduction

Investigating neuronal communication is crucial to understand cognitive processing and its links to behavior (e.g. Dosenbach et al., 2007; Siegel et al., 2011; Vincent et al., 2006, 2008). A fundamental mechanism of communication within the brain is the synchronization of oscillatory signals generated by large populations of neurons (e.g. Engel et al., 2001; Fries, 2005; Hipp et al., 2012; Schoffelen et al., 2005; Singer, 1999; Varela et al., 2001). Measurement techniques such as functional magnetic resonance imaging (fMRI) or near-infrared spectroscopy (NIRS) are spatially highly precise but do not measure neural activity directly. Instead, hemodynamic changes linked to neural activity are observed. These occur on a relatively long time scale, i.e. in the range of seconds, and are therefore unable to pick up the rich underlying neuronal dynamics. In contrast to that, recording local field potentials (LFP) or electrocorticography (ECoG) captures neuronal activation directly inside the brain but requires neurosurgery. Hence, magnetoencephalography (MEG) and electroencephalography (EEG) are

⁎ Corresponding author at: Dept. of Neurophysiology and Pathophysiology, University Medical Center Hamburg-Eppendorf, Martinistr. 52, 20246 Hamburg, Germany. E-mail address: [email protected] (A. Ewald).

http://dx.doi.org/10.1016/j.neuroimage.2014.07.011 1053-8119/© 2014 Elsevier Inc. All rights reserved.

well-suited tools to examine brain dynamics due to their noninvasiveness and high temporal resolution, which has been widely demonstrated in the past years (e.g. Colgin et al., 2009; Fries et al., 2001; Larson-Prior et al., 2013; Stam et al., 2007a; Womelsdorf et al., 2007). However, there are intrinsic challenges in inferring synchronized brain sources from EEG or MEG data measured on the scalp. Due to volume conduction inside the head, each electromagnetic signal generated by a single brain source (i.e. a population of neurons) is recorded by multiple measurement sensors (Nunez et al., 1997). Hence, data observed at scalp level are a mixture of the underlying source activities (Baillet et al., 2001). Unfortunately, the mathematical inversion, i.e. calculating brain sources from EEG or MEG data, is an ill-posed problem (Sarvas, 1987). In the context of synchronization between brain sources another problem arises due to volume conduction. As a single source signal is captured by many sensors, any connectivity metric between these sensors would detect a relationship. Although this issue has been addressed methodologically more and more over the past years (Avarvand et al., 2012; Ewald et al., 2012, 2013; Gonzalez et al., 2011; Haufe and Nikulin, 2012; Marzetti et al., 2013; Nolte and Müller, 2010; Nolte et al., 2004, 2006; Pascual-Marqui, 2007; Schoffelen and Gross, 2009; Stam et al., 2007b; Vinck et al., 2011), it is still common practice to calculate source activity under additional assumptions and to apply connectivity metrics in source space.


Nonetheless, these artifacts of volume conduction¹ do not seem to vanish by solving the inverse problem. For example, Haufe et al. (2012) and Sekihara and Nagarajan (2013) show that spurious interactions can arise from artifacts of volume conduction on the source level as well. In principle, one can differentiate two analysis pipelines for functional brain source connectivity analysis. One is the projection of the sensor time series to source space and the subsequent analysis of relationships between the estimated source time series. However, this approach might be affected by artifacts of volume conduction as described above. The other is a decomposition of sensor signals into statistically dependent spatial source patterns which afterwards can be localized (see e.g. Nolte et al., 2006; Haufe et al., 2014). Here, the question remains whether the source activity meets the statistical assumptions. In the present paper we present a combined and data-driven approach to directly estimate synchronized sources. We introduce a novel method to localize pairwise synchronized brain sources in the frequency domain that is robust to the problem of volume conduction. It is based on the concept of the imaginary part of coherency (ImCoh) introduced in Nolte et al. (2004). The ImCoh neglects zero-phase synchronization between two time series, incorporating the assumption that such synchronization is mainly evoked by instantaneous volume conduction effects. In contrast to that, significantly non-vanishing values of ImCoh cannot be explained by volume conduction effects and must arise from true interactions between distinct sources. However, ImCoh is a quantity normalized by power, which itself also depends on non-interacting sources. Hence, the interpretation especially of differences of ImCoh between conditions may be misleading. In general, it is possible that such a difference is caused by effects which are not related to the interaction. The main idea behind the presented method is to estimate the contribution of two sources at distinct locations inside the brain to the imaginary part of the cross-spectrum (ImCs) only, and to quantify this contribution properly. In contrast to ImCoh, independent sources have no systematic effect on the ImCs, and any difference between two ImCs must be due to a difference of the interacting system. A different way to motivate the use of the cross-spectrum is to consider it as a frequency representation of the covariance matrix. The novel algorithm is related to the Multiple Signal Classification (MUSIC, see Mosher et al., 1999b) method. For MUSIC, a scan over all brain voxels is executed and for each voxel it is evaluated how well a source at that voxel coincides with the data, i.e. with a subspace of the covariance matrix. Here, we propose a similar procedure, but for interacting sources in the frequency domain. Furthermore, due to the described artifacts of volume conduction, we utilize the imaginary part of the cross-spectrum. The paper is organized as follows: In section Methodology we briefly recapitulate the role of the imaginary part of coherency and the concepts behind other methods used in this paper, such as RAP MUSIC. Furthermore, we introduce the novel methodological approach, which we term Wedge MUSIC, in three different variants.
In section Simulations we present results of several simulations: first, to evaluate the performance of Wedge MUSIC in all three variants, second, to evaluate the effect of a mislocation of a given reference source, and third, to compare the investigation of class differences with the maximized imaginary part of coherency as introduced in Ewald et al. (2012). In section Results for real MEG and EEG data we show the application to real data in two single subject cases: a) on resting state MEG data and b) on EEG data obtained from imagined hand and foot movements to demonstrate the investigation of class differences. Finally, the results are discussed in section Discussion and advantages and pitfalls of the new method are outlined.

1 By “artifact of volume conduction” we only mean that a (significantly) non-vanishing coupling measure may be caused by a mixing of non-interacting sources. However, if there is true source interaction then the actual values also of measures which are robust to artifacts of volume conduction still depend on how brain signals are mapped to sensors. We emphasize that the dependence on volume conduction also of ‘true’ interaction measures is really the basis of any localization.


Methodology

In the current section we review the basic concept of the imaginary part of coherency and its maximization, which includes a modified beamforming approach. Furthermore, we describe the idea of RAP MUSIC as it is used in this paper to find synchronous sources as a prior step for two Wedge MUSIC variants. Finally, we introduce the novel methodology termed Wedge MUSIC in all three variants.

The imaginary part of the cross-spectrum and coherency

Coherence is a well-known and established signal processing tool to detect bivariate synchronization in terms of phase-locking in the frequency domain. Also in the field of EEG and MEG data analysis it has been widely used and discussed (e.g. Fries, 2005; Nunez et al., 1997, 1999; Schoffelen et al., 2005). Mathematically, coherence is based on the complex cross-spectrum, which is defined for two Fourier-transformed time series $x_p(f) = r_p e^{i\phi_p}$ and $x_q(f) = r_q e^{i\phi_q}$ by

$$C_{pq}(f) = \left\langle x_p(f)\, x_q^*(f) \right\rangle = \left\langle r_p r_q\, e^{i(\phi_p - \phi_q)} \right\rangle. \tag{1}$$

Here, f denotes the frequency, * the complex conjugate, r the signal amplitudes, ϕ the signal phases, and ⟨·⟩ denotes the expectation value, which is usually approximated by averaging over a large number of trials (Bendat and Piersol, 1971). One can observe from Eq. (1) that the cross-spectrum expresses the phase difference at f weighted by the signal amplitudes. Coherency² is defined by normalizing the complex cross-spectrum by the total signal power in each channel, which leads to

$$\mathrm{Coh}_{pq}(f) = \frac{\left\langle r_p r_q\, e^{i(\phi_p - \phi_q)} \right\rangle}{\sqrt{\left\langle r_p^2 \right\rangle \left\langle r_q^2 \right\rangle}}. \tag{2}$$
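To make Eqs. (1) and (2) concrete, the following minimal numpy sketch computes the cross-spectrum and coherency from epoched multichannel data; the ImCs and ImCoh used below are simply their imaginary parts. Variable names and the Hanning window are illustrative choices, not taken from the authors' implementation.

```python
import numpy as np

def cross_spectrum(epochs, fs):
    """Average cross-spectrum C(f); returns (n_freqs, n_chan, n_chan) and the frequency axis."""
    n_epochs, n_chan, n_samp = epochs.shape
    window = np.hanning(n_samp)
    X = np.fft.rfft(epochs * window, axis=-1)               # (epochs, channels, freqs)
    C = np.einsum('epf,eqf->fpq', X, X.conj()) / n_epochs   # <x_p(f) x_q*(f)>, Eq. (1)
    return C, np.fft.rfftfreq(n_samp, d=1.0 / fs)

def coherency(C):
    """Eq. (2): cross-spectrum normalized by the signal power of each channel."""
    power = np.real(np.einsum('fpp->fp', C))                 # <r_p^2> per channel
    return C / np.sqrt(power[:, :, None] * power[:, None, :])

# ImCs and ImCoh are the imaginary parts of the two quantities above.
rng = np.random.default_rng(0)
epochs = rng.standard_normal((100, 5, 200))                  # toy data: 100 epochs, 5 channels
C, freqs = cross_spectrum(epochs, fs=200.0)
ImCs, ImCoh = np.imag(C), np.imag(coherency(C))
```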

In Nolte et al. (2004) it has been shown that an instantaneous mixing of neuronal sources into sensors, i.e. a phase relation ϕ_p − ϕ_q = 0, only contributes to the real part of the cross-spectrum. This is also true for coherency, as the normalization in Eq. (2) is purely real-valued. Hence, any significant deviation from zero of the imaginary part of the cross-spectrum (ImCs) or of coherency (ImCoh) can be interpreted as a true interaction between brain sources, because volume conduction effects occur approximately instantaneously for the frequencies of interest in EEG and MEG (below ∼2 kHz). On the contrary, synchronization between brain sites usually requires some time (Rodriguez et al., 1999; Varela et al., 2001), which may lead to a non-instantaneous synchronization with ϕ_p − ϕ_q ≠ 0. Please note that a phase delay exactly equal to 2π or any multiple of 2π would not be detected by the ImCs or the ImCoh. This and other issues are extensively discussed in Nolte et al. (2004). In the following, we base all further calculations on the imaginary part of the cross-spectrum (ImCs) as a complete second order statistic of the data robust to volume conduction effects.

² Please note that we term the complex quantity 'coherency', whereas the absolute value of coherency we call 'coherence'. This is in line with the literature, as usually the absolute value is considered when referring to coherence.

RAP MUSIC

The proposed methodology, outlined below, can be applied to pre-defined brain sources. For example, these can be the result of a prior source localization or regions of interest based upon prior knowledge from other studies or measurement modalities. In the present paper, we use Recursively Applied and Projected Multiple Signal Classification (RAP MUSIC, Mosher and Leahy, 1999) as a prior source reconstruction technique. RAP MUSIC is a variant of the MUSIC algorithm first described in Schmidt (1986) and applied to EEG in e.g. Mosher et al. (1999a).



The principal idea behind MUSIC is to separate the data into a p ≪ N dimensional signal subspace and an N − p dimensional noise subspace, where N is the number of measurement sensors and p the size of the pre-defined signal subspace A. Usually, the signal subspace is obtained by applying a singular value decomposition to the covariance matrix of the data. As we are interested in synchronized sources instead of sources with strong power, we here utilize the imaginary part of the cross-spectrum (ImCs) to define the signal subspace. This idea has been introduced and demonstrated in simulations in Avarvand et al. (2012) and Ewald et al. (2013). The signal subspace A is spanned by the first p orthonormal left-singular vectors, i.e. the first p columns, of the matrix U from the decomposition

$$\Im\big(C(f)\big) = U(f)\, S(f)\, V^T(f), \tag{3}$$

where C is the cross-spectrum defined in Eq. (1), U is a unitary matrix containing the left-singular vectors as columns, S is a diagonal matrix with the singular values on the diagonal and V^T is again a unitary matrix, with (·)^T denoting the matrix transpose and ℑ(·) the imaginary part of a complex quantity. Due to utilizing ImCs instead of the covariance matrix, we employ the proposed methodology in the frequency domain and all equations depend on f. This is achieved by choosing a specific frequency or, alternatively, by averaging the cross-spectrum over frequencies in a certain band, as rhythmic EEG and MEG activity is characterized by frequency bands (Nunez and Srinivasan, 2006). For ease of reading we omit the frequency dependency in the following. After having established the N × p dimensional signal subspace A, the consistency of a source at voxel j on a pre-defined grid inside the brain with the signal subspace is quantified. This procedure is iterated over all possible brain voxels j = 1…M, which we call a scan. In the case of EEG and MEG the consistency can be specified by the angle ϑ between the corresponding lead field L_j ∈ ℝ^{N×3} at a particular voxel and the signal subspace. The leadfield L_j describes how a three-dimensional dipolar source (see e.g. De Munck et al., 1988) at voxel j is mapped into the measurement sensors and is also often called the forward model. The angle can be expressed as

$$\cos^2\vartheta\left(L_j, A\right) = \frac{L_j^T A A^T L_j}{L_j^T L_j}. \tag{4}$$
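A schematic illustration of Eqs. (3) and (4): the signal subspace is taken from the SVD of the imaginary part of the cross-spectrum, and each voxel is scored by the squared cosine of the angle between its lead field and that subspace. For simplicity, this sketch treats each voxel's lead field as a fixed-orientation vector in ℝ^N; the function and variable names are hypothetical.

```python
import numpy as np

def imcs_subspace(C, p):
    """Signal subspace A: first p left-singular vectors of Im(C) (Eq. 3)."""
    U, S, Vt = np.linalg.svd(np.imag(C))
    return U[:, :p]

def music_scan(A, leadfields):
    """cos^2 of the angle between each (fixed-orientation) lead field and A (Eq. 4)."""
    return np.array([float(l @ A @ A.T @ l) / float(l @ l) for l in leadfields])

# Toy usage with a random Hermitian 'cross-spectrum' and 50 hypothetical voxels.
rng = np.random.default_rng(1)
C = rng.standard_normal((20, 20)) + 1j * rng.standard_normal((20, 20))
C = (C + C.conj().T) / 2
scores = music_scan(imcs_subspace(C, p=2), rng.standard_normal((50, 20)))
```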

From a MUSIC scan one can easily identify the voxel whose forward model has the smallest angle with the subspace. The identification of further sources is not trivial, because a) it requires the identification of local minima, and b) such local minima may arise as 'ghost sources', i.e. reflections of the first source which can also be observed at wrong locations. RAP MUSIC is a way to handle this issue: After an initial MUSIC scan, i.e. after calculating the angle ϑ according to Eq. (4) for each voxel j, the most prominent source is projected out of the signal subspace A and the MUSIC scan is repeated. This projection procedure is repeated for a pre-defined number of sources n_s to suppress the influence of the different sources on each other within the localization procedure.

Wedge-MUSIC

In the following we present a novel method to identify synchronous neuronal sources from EEG or MEG data. The idea behind it is to determine how much an interaction between two sources at specific voxels i and j inside the brain contributes to the imaginary part of the cross-spectrum (ImCs). Therefore, we first derive how a synchronization of two single sources is reflected in the ImCs. Let us consider two single brain source time series transformed into the frequency domain, $\tilde{y} = (\tilde{y}_1\ \tilde{y}_2)^T \in \mathbb{C}^{2\times 1}$. These are mapped into sensor space according to the linear source mixing model by their respective topographies via $\tilde{x} = T\tilde{y}$ with $T = (t_1, t_2) \in \mathbb{R}^{N\times 2}$. The source topographies t_1 and t_2 depend on the leadfields of the particular brain voxels i and j and on the dipole moments, i.e. orientations, as will be described in more detail in the following.

Now, the cross-spectrum introduced in Eq. (1) reads

$$C_{\tilde{x}} = \left\langle \tilde{x}\tilde{x}^H \right\rangle = \left\langle T\tilde{y}\,(T\tilde{y})^H \right\rangle = T\left\langle \tilde{y}\tilde{y}^H \right\rangle T^H = T C_{\tilde{y}} T^T, \tag{5}$$

with (·)^H denoting the Hermitian conjugate, i.e. the matrix version of the complex conjugate * that was used previously. As the topographies included in T are real-valued quantities, we use the matrix transpose (·)^T here. In this bivariate scenario, the imaginary part of the cross-spectrum on source level, ℑ(C_ỹ), only depends on a single number a due to its antisymmetric nature and can be written as

$$\Im\left(C_{\tilde{y}}\right) = \begin{pmatrix} 0 & a \\ -a & 0 \end{pmatrix}. \tag{6}$$

Hence, we can reformulate the imaginary part of the cross-spectrum for two sources on sensor level as

$$\begin{aligned}
\Im\left(C_{\tilde{x}}\right) &= T \begin{pmatrix} 0 & a \\ -a & 0 \end{pmatrix} T^T
= \begin{pmatrix} -t_{21}\,a & t_{11}\,a \\ \vdots & \vdots \\ -t_{2N}\,a & t_{1N}\,a \end{pmatrix} (t_1, t_2)^T \\
&= a\,(-t_2,\ t_1)\,(t_1,\ t_2)^T
= a\left(-t_2 t_1^T + t_1 t_2^T\right)
= a\left(t_1 t_2^T - t_2 t_1^T\right)
= a\,(t_1 \wedge t_2).
\end{aligned} \tag{7}$$
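As a quick numerical sanity check of Eq. (7), the following hypothetical toy example (not part of the original derivation) verifies the identity for two arbitrary topographies:

```python
import numpy as np

rng = np.random.default_rng(2)
t1, t2 = rng.standard_normal(8), rng.standard_normal(8)    # two topographies, N = 8
a = 0.7                                                     # coupling strength
T = np.column_stack([t1, t2])
lhs = T @ np.array([[0.0, a], [-a, 0.0]]) @ T.T             # left-hand side of Eq. (7)
rhs = a * (np.outer(t1, t2) - np.outer(t2, t1))             # a * (t1 wedge t2)
assert np.allclose(lhs, rhs)
```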

Eq. (7) describes the influence of two synchronized sources on the imaginary part of the cross-spectrum on sensor level. It happens to be equivalent to the wedge product or exterior product of the brain source topographies in T, denoted by "∧", introduced by Grassmann (1844) and further discussed in Clifford (1878). To quantify the effect of a bivariate source synchronization on the ImCs, we suggest defining a signal subspace in analogy to the MUSIC procedure described in section RAP MUSIC. Therefore, we also use a low-rank approximation of the ImCs with p dimensions obtained from a singular value decomposition as described in Eq. (3). Hence, the subspace we term E_p is defined by

$$\Im(C) \approx \sum_{k=1}^{p} u_k s_k v_k^T \equiv E_p. \tag{8}$$

To evaluate whether the synchronization of a pair of brain sources is contained in the ImCs, we propose to subtract the wedge product of the two source topographies from E_p and to determine whether the matrix E_p loses rank. If a synchronization between sources at voxels i and j with their respective topographies t_i and t_j is contained in E_p, subtracting (t_i ∧ t_j), with the scale a of Eq. (7) absorbed into these topographies, from E_p would lead to a rank decline in the resulting matrix. If no phase synchronization between sources at voxels i and j were present in the measured data, the rank of

$$\hat{E}_p = E_p - \left(t_i t_j^T - t_j t_i^T\right) \tag{9}$$

would be unchanged and equal to the rank of E_p. To quantify the change of rank, we propose to look at the product of all singular values of Ê_p. If a synchronization between sources with given topographies is perfectly contained in the ImCs, one singular value will be equal to zero, and therefore so will the whole product.



Mathematically, this approach leads to a function

$$g_{i,j} = \prod_{k=1}^{p} SV_k\left(\hat{E}_p\right) = \prod_{k=1}^{p} SV_k\left(E_p - \left(t_i t_j^T - t_j t_i^T\right)\right), \tag{10}$$

where $SV_k(\hat{E}_p)$ denotes the k-th singular value of the matrix $\hat{E}_p$. In summary, Eq. (10) describes a function to estimate how much a particular synchronization between two sources at voxels i and j is contained in the measured data, or rather in the imaginary part of the cross-spectrum.
As stated before in this section, an individual source topography, e.g. t_j, depends on the particular pre-calculated leadfield at the corresponding location L_j ∈ ℝ^{N×3} and the dipole orientation α_j ∈ ℝ^{3×1}. This can be expressed as

$$t_j = L_j \alpha_j. \tag{11}$$

The dipole moment α_j is usually not known, which leads to the three different applications of the proposed method. The first one is to optimize over the dipole moment α_j = (α_{j1} α_{j2} α_{j3})^T describing the dipole strength in each direction of the three-dimensional Cartesian coordinate space. Assuming that a given topography t_i originated from a prior source reconstruction at voxel i, e.g. from RAP MUSIC, we define the measure of interaction between two sources at voxels i and j as

$$h_{i,j} = \frac{g_{i,j}(0)}{\min_{\alpha_j} g_{i,j}\left(\alpha_j\right)}. \tag{12}$$

Please note that the function g introduced in Eq. (10) now depends on the three-dimensional dipole moment, which is determined by minimizing g with respect to α_j. The interaction measure h described in Eq. (12) quantifies the decrease of rank of the matrix $\hat{E}_p$ by taking the ratio between no assumed synchronization between sources at voxels i and j and synchronization for the best possible dipole orientation at j. If a phase synchronization is perfectly contained in the ImCs, the denominator in Eq. (12) will be close to zero, leading to a large value of h, in fact tending towards infinity. The measure h defined in Eq. (12) can be evaluated over all voxels j = 1 … M, which leads to a scan over the entire brain space in order to investigate with which source another source with topography t_i is interacting by means of phase synchronization robust to volume conduction effects. We will call the source at i a 'seed voxel' or 'reference voxel' in the following, as a scan is executed to determine an interaction with respect to this reference voxel. Due to the analogy of a brain scan and the application of subspaces, we assign this method to the family of MUSIC algorithms. Including the wedge product as derived in Eq. (7), we term this method Wedge MUSIC. In particular, we term this variant, with a given reference voxel and the optimization over the dipole moment, a Wedge MUSIC scan.
A second possible variant of applying Wedge MUSIC is to determine the interaction between two given source topographies or, equivalently, two sources with known dipole directions and locations. These could again be identified by a prior source localization technique. Given a set of brain sources at a certain frequency or frequency band, the question arises 'Which source is interacting with which?' in a bivariate sense. A conceivable scenario would, for example, be a system consisting of two interacting source pairs, which will also be used in the simulations in section Simulations. Here, only the relative strength between the two (normalized) topographies t_i and t_j matters, i.e. whether a source with given dipole direction is stronger with respect to the other. This can be expressed by a single number α̂, which leads to a slightly different product of singular values

$$\hat{g}_{i,j}(\hat{\alpha}) = \prod_{k=1}^{p} SV_k\left(E_p - \hat{\alpha}\left(t_i t_j^T - t_j t_i^T\right)\right), \tag{13}$$

and, finally, to the interaction measure we term scalar Wedge MUSIC (scWM) that can be expressed by

$$\hat{h}_{i,j} = \frac{\hat{g}_{i,j}(0)}{\min_{\hat{\alpha}} \hat{g}_{i,j}(\hat{\alpha})}. \tag{14}$$
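The following sketch summarizes the core computations of Eqs. (8), (10) and (12): the rank-p subspace E_p, the singular-value product g after subtracting the wedge of two topographies, and the scan measure h obtained by optimizing over the dipole moment. It is a schematic re-implementation under stated assumptions (generic optimizer, a few restarts, no logarithmic scaling of the singular-value product), not the authors' code.

```python
import numpy as np
from scipy.optimize import minimize

def low_rank_imcs(C, p):
    """E_p: rank-p approximation of Im(C) from its SVD (Eq. 8)."""
    U, S, Vt = np.linalg.svd(np.imag(C))
    return U[:, :p] @ np.diag(S[:p]) @ Vt[:p, :]

def g(Ep, t_i, t_j, p):
    """Product of the first p singular values after subtracting the wedge (Eq. 10)."""
    E_hat = Ep - (np.outer(t_i, t_j) - np.outer(t_j, t_i))
    return np.prod(np.linalg.svd(E_hat, compute_uv=False)[:p])

def wedge_music_h(Ep, t_i, L_j, p):
    """h_{i,j} of Eq. (12): g without wedge divided by g at the best dipole moment alpha_j."""
    cost = lambda alpha: g(Ep, t_i, L_j @ alpha, p)
    best = min(minimize(cost, a0, method='Nelder-Mead').fun for a0 in np.eye(3))
    return g(Ep, t_i, np.zeros_like(t_i), p) / best

# A Wedge MUSIC scan loops wedge_music_h over all voxels j (lead fields L_j, N x 3)
# for a reference topography t_i, e.g. obtained from a prior RAP MUSIC step.
```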

A third variant of Wedge MUSIC is not to use any prior knowledge or source localization algorithm to obtain reference voxels, but to scan over all pairs of voxels i and j. In this case the dipole orientations α_i ∈ ℝ^{3×1} and β_j ∈ ℝ^{3×1} for both topographies t_i = L_i α_i and t_j = L_j β_j are unknown. This can again be solved with the optimization introduced before. Although intuitively 6 parameters would have to be optimized, namely the dipole strength in x-, y- and z-direction for each source, the absolute strength of the individual topographies is irrelevant for the present calculations as long as the product remains constant. Therefore, the optimization can be expressed by only 5 variables in polar coordinates. This variant we term complete Wedge MUSIC in the following.

Modified beamforming and maximizing the ImCoh

A different method that is also capable of investigating the relationships between given sources is a beamforming approach that includes maximizing the imaginary part of coherency on source level (Avarvand et al., 2012; Ewald et al., 2012). In this section we briefly review the idea, as we compare it with Wedge MUSIC in a simulation. In general, classical beamforming (Capon, 1969; Gross et al., 2001; Robinson and Vrba, 1999; Sekihara and Scholz, 1996; Van Veen et al., 1997) can be considered a two-step procedure. In the first step, spatial filters B ∈ ℝ^{M×3×N} (M: number of voxels on a pre-defined grid; N: number of measurement channels) are determined such that the signal of a source at a particular voxel is passed with unit gain to the measurement sensors whereas all activity from other sources is minimized, i.e. suppressed. Applying these filters to the measured data leads to source activity for each pre-defined voxel in x-, y- and z-direction. Second, a dipole direction has to be chosen, which is usually done by taking the direction with maximum power. In the approach presented in Avarvand et al. (2012) the dipole direction is chosen such that the imaginary part of coherency between a dipole at source location i and a given reference voxel j is maximized. Considering the beamformer weights B_i ∈ ℝ^{3×N} at voxel i and a given source topography t_j ∈ ℝ^{N×1} for a source at voxel j, the dipole moment d_i ∈ ℝ^{3×1} maximizing the imaginary part of coherency between two sources at voxels i and j can be expressed in the frequency domain by

$$d_i = \left(B_i\, \Re(C)\, B_i^T\right)^{-1} B_i\, \Im(C)\, t_j. \tag{15}$$

Finally, the maximized imaginary part of coherency is given by

$$\mathrm{maxImCoh}_{ij} = \frac{t_j^T\, \Im(C)\, B_i^T d_i}{\sqrt{t_j^T C\, t_j}\ \sqrt{d_i^T B_i C B_i^T d_i}} \tag{16}$$

as derived in more detail in Avarvand et al. (2012). Please note that the quantities in Eqs. (15) and (16) still depend on the real part of the cross-spectrum ℜ(C). In Ewald et al. (2012) it has been shown that, for maximizing the ImCoh, the real part of the cross-spectrum serves essentially as a whitening matrix. In contrast to that, Wedge MUSIC calculations are exclusively based on the imaginary part of the cross-spectrum.
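A compact sketch of Eqs. (15) and (16), assuming the beamformer weights B_i and the reference topography t_j are given; since ℑ(C) is antisymmetric, the quadratic forms in the denominator are evaluated with ℜ(C), which yields the same real value.

```python
import numpy as np

def max_imcoh(C, B_i, t_j):
    """d_i from Eq. (15) and the maximized ImCoh from Eq. (16)."""
    ReC, ImC = np.real(C), np.imag(C)
    d_i = np.linalg.solve(B_i @ ReC @ B_i.T, B_i @ ImC @ t_j)      # Eq. (15)
    num = t_j @ ImC @ B_i.T @ d_i
    den = np.sqrt(t_j @ ReC @ t_j) * np.sqrt(d_i @ B_i @ ReC @ B_i.T @ d_i)
    return num / den                                               # Eq. (16)
```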



Constructing surrogate data for statistical testing

In order to statistically test the results of Wedge MUSIC for a single class, we will use an approach of constructing surrogate data to compare with. The procedure is explained in detail in Shahbazi et al. (2010). The general idea is to destroy all source interactions present in the data but to maintain all other data properties. This is achieved by generating surrogate data from the original measured data. In a first step, an independent component analysis (ICA) is performed to obtain non-correlated and, even more importantly, statistically maximally independent time series. The resulting N time series are then shifted by time (N − 1) ∗ T to wipe out all possibly remaining dependencies between the time series of the ICA components. The time T is chosen to be five times the length of a single data epoch in the implementation used for this work, which is far beyond the auto-correlation time within each time series, as a necessary prerequisite. As a particular ICA implementation we used Temporal Decorrelation source Separation (TDSEP; Ziehe, 1998). After having constructed independent time series from the original data on source level, the data are projected back to sensor space by the mixing matrix obtained from the ICA. On these surrogate data, all subsequent calculations are performed in the exact same fashion as for the original data. Finally, the results for RAP and Wedge MUSIC applied on a subspace of the imaginary part of the cross-spectrum are statistically compared for the real and the surrogate data.
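A schematic version of this surrogate construction is given below. The demixing matrix W (square, full rank) is assumed to come from an ICA such as TDSEP; one plausible reading of the shifting scheme, namely shifting component k by k·T so that relative shifts between components are multiples of T, is used here and is an assumption.

```python
import numpy as np

def make_surrogate(data, W, epoch_len):
    """data: (n_channels, n_samples); W: ICA demixing matrix (assumed given)."""
    sources = W @ data                                   # ICA component time series
    n_comp, n_samp = sources.shape
    T = 5 * epoch_len                                    # T = five epoch lengths (see text)
    shifted = np.stack([np.roll(s, (k * T) % n_samp)     # assumed scheme: shift component k by k*T
                        for k, s in enumerate(sources)])
    return np.linalg.inv(W) @ shifted                    # project back to sensor space
```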

Simulations

In this section we investigate the feasibility of applying Wedge MUSIC in different simulations. To this end, we modeled dipoles in different regions of the brain as shown in Fig. 1.A. For visualization purposes the sources have been modeled in a common plane such that only a single MRI slice has to be shown. The underlying neuronal system consisting of the four sources displayed in Fig. 1.A has two synchronized pairs, one in each hemisphere. The simulations are treated as EEG recordings with an electrode set of 56 channels according to the standardized 10–20 system. As a head model we used a standardized MNI head consisting of 152 averaged brains (Fonov et al., 2011). The MNI brain was parceled into a continuous grid with 1956 voxels. Leadfields were generated as described in Nolte and Dassios (2005). The dynamics of the bivariate underlying neuronal system were modeled with a 5th-order autoregressive (AR) model with the following non-zero coefficients: $A^1_{1,1} = 0.45$, $A^1_{2,2} = 0.45$, $A^2_{2,2} = -0.9$, $A^1_{2,1} = 1$, $A^4_{1,1} = -0.65$, where the subscript denotes the matrix element and the superscript the time delay. From the coefficients one can observe that a single non-diagonal element is non-zero, which indicates a connection between the two sources. Noise was modeled on source level as Gaussian white noise for each brain voxel, projected to the scalp. Data and noise were normalized by their respective power (mean over all channels at peak frequency) and summed with a fixed signal-to-noise ratio of SNR = $1/\sqrt{10}$. The described way of simulating the data was common to all the following simulations. Individual differences are outlined in the respective sections.
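A rough sketch of this data generation, assuming the AR coefficient indexing reconstructed above (superscript = time delay, subscript = matrix element); the number of samples and the topographies are arbitrary placeholders.

```python
import numpy as np

rng = np.random.default_rng(3)
n_samp, order = 20000, 5
A = np.zeros((order, 2, 2))                              # A[k] = coefficient matrix at delay k+1
A[0, 0, 0], A[0, 1, 1], A[1, 1, 1] = 0.45, 0.45, -0.9    # diagonal (within-source) terms
A[0, 1, 0], A[3, 0, 0] = 1.0, -0.65                      # coupling (source 1 -> 2) and delay-4 term
y = np.zeros((n_samp, 2))
for t in range(order, n_samp):
    y[t] = sum(A[k] @ y[t - k - 1] for k in range(order)) + rng.standard_normal(2)

topographies = rng.standard_normal((56, 2))              # placeholder forward model (56 channels)
sensor_signal = topographies @ y.T                       # noise-free sensor-level signal
```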

Fig. 1.B shows the imaginary part of coherency between all pairs of sensors over frequency. One can identify a strong signal peak at 12 Hz, which is picked as the frequency of interest for Wedge MUSIC. For the automated selection of a frequency of interest we employed the Global Interaction Measure (GIM) introduced in Ewald et al. (2012). The frequency-dependent and univariate function GIM(f) can be seen as the maximized imaginary part of coherency with respect to an optimally chosen spatial filter. In the simulations as well as in the examples for real data we will use an SNR-optimized version of the GIM to select a particular frequency bin within a given band. The signal-to-noise optimization is performed as follows: For each frequency bin f a noise level $G_n(f) = \mathrm{GIM}(f + \delta f) + \mathrm{GIM}(f - \delta f)$ and a pseudo-signal $G_s(f) = \mathrm{GIM}(f) - G_n(f)$ are estimated. Finally,

$$G(f) = \frac{G_s(f)}{G_n(f)} = \frac{\mathrm{GIM}(f) - G_n(f)}{G_n(f)} \tag{17}$$

is computed to identify frequencies with strongly synchronized signals. This measure G(f) is also displayed in Fig. 1.B and shows a strong peak at 12 Hz. Fig. 1.C depicts the connection topography of the underlying neuronal system at 12 Hz in a so-called head-in-head plot. For each sensor, the connections to all other sensors are color coded. Warm and cold colors arise due to the antisymmetric nature of the imaginary part of coherency. Hence, both a strong red and a strong blue color indicate a connection of the respective brain sites. However, the modeled neuronal sources and their respective dynamics cannot be clearly recovered by visualizing the ImCoh on sensor level. This example further illustrates the need for methods making the transition to source space for investigating neuronal interactions.
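As a small illustration, the SNR optimization of Eq. (17) above could be computed from an already available GIM(f) array as follows (the helper name and the single-bin offset are assumptions):

```python
import numpy as np

def snr_optimized_gim(gim, delta_bins=1):
    """G(f) of Eq. (17) for an array GIM(f); boundary bins are left at -inf."""
    g = np.full(len(gim), -np.inf)
    for k in range(delta_bins, len(gim) - delta_bins):
        g_n = gim[k + delta_bins] + gim[k - delta_bins]  # noise level G_n(f)
        g[k] = (gim[k] - g_n) / g_n                      # pseudo-signal relative to noise
    return g                                             # pick the argmax within the band of interest
```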

Fig. 1. Simulated data. A.: Modeled dipoles and their interaction profile. Two synchronized, i.e. interacting sources in each hemisphere. There exists a connection between sources within but not across hemispheres. B.: The imaginary part of coherency for all 56 × 56 sensor pairs and the SNR optimized Global Interaction Measure (GIM). C.: The imaginary part of coherency on sensor level at 12 Hz. Each small circle represents the connection (ImCoh) of a particular measurement channel to all other measurement channels.


Scalar Wedge MUSIC

We investigated the behavior of the scalar Wedge MUSIC variant (scWM) defined by Eq. (14) in realistically simulated EEG data as described above. The data included two interacting source pairs, one in each hemisphere, as illustrated in Fig. 1.A. The result of an initial RAP MUSIC scan based on the imaginary part of the cross-spectrum for four sources is shown in Fig. 2.A. One can observe that the sources coincide well with the modeled dipoles, which are also shown in Fig. 2.A. Please note that the order of sources is essentially arbitrary and further calculations such as Wedge MUSIC depend on the outcome of the RAP MUSIC results. For all possible pairs of the four sources obtained by the RAP MUSIC scan, scalar Wedge MUSIC has been evaluated according to Eq. (14). The results can be found in Table 1. One can observe a high value of scWM for the source pair consisting of sources 1 & 2 and for the source pair consisting of sources 3 & 4. This is expected and in line with how the interaction was modeled, as shown in Fig. 1.A. Diagonal elements, i.e. the interaction of a source with itself, are equal to 1, as numerator and denominator of Eq. (14) are identical and the optimization does not change the function ĝ_{i,j}(α̂) defined in Eq. (13). The magnitude of scalar Wedge MUSIC for truly interacting source pairs is about 20 times bigger than for the non-interacting source pairs. To distinguish these results from chance, we chose two random voxels inside the brain and repeated the calculation of scWM n = 50,000 times. Each time a new AR model with the same properties was generated. The result can be found in Fig. 2.B, showing the distribution of scWM in the case of drawing random voxels. Approximately, the results for random voxels decay exponentially from 0 to around 2. The maximum value obtained in this simulation was 7.14, demonstrating that values around 40 for scWM are unlikely to occur by chance.

Wedge MUSIC scan

An alternative to calculating scWM between given source pairs is to perform a Wedge MUSIC scan over all pre-defined voxels inside the brain with a given reference voxel. One advantage is a plausibility check. Let us consider a single synchronized source pair. Then, in each scan with one of the two sources as a reference voxel, the respective other source is supposed to be found. This effect can be observed in the simulation depicted in Fig. 3.A. All four simulated sources are found by RAP MUSIC as shown in the left column. As the simulation is identical to the one for scalar Wedge MUSIC, the RAP MUSIC sources are identical (see Fig. 2.A). Again, the order is arbitrary and has to be taken into account for further analysis. Each one of these four RAP MUSIC


Table 1
The results of scalar Wedge MUSIC (scWM) for all source pairs. The labeling of the sources corresponds to the results of the RAP MUSIC scan shown in Fig. 2.A.

scWM       Source 1   Source 2   Source 3   Source 4
Source 1   1.00       41.47      2.20       1.14
Source 2   41.47      1.00       1.02       1.27
Source 3   2.20       1.02       1.00       38.68
Source 4   1.14       1.27       38.68      1.00

sources serves as a reference voxel for a Wedge MUSIC scan. The results of each scan, shown in the second column in Fig. 3.A, illustrate that the underlying simulated structure of two interacting pairs has been correctly revealed by this procedure. Additionally, we computed statistics by employing a bootstrap procedure with n = 500 resampling runs (Efron and Tibshirani, 1994). Because of the arbitrary order of sources, we sorted them in each run by the magnitude of the voxel with the maximum RAP MUSIC value. These sorted sources then served as reference voxels for Wedge MUSIC, which was also re-calculated in each bootstrap run. The bootstrap results for RAP MUSIC can be found in Fig. 3.B, for Wedge MUSIC in Fig. 3.C, both colored in blue. The same procedure has been applied to the surrogate data that were generated in order to destroy all data dependencies but to maintain other data properties (see section Constructing surrogate data for statistical testing). The results of RAP MUSIC and Wedge MUSIC for the surrogate data can be found in Fig. 3.B and C, colored in red. Additionally, the observed results from the original non-bootstrapped data are indicated by vertical lines. For RAP MUSIC as well as for Wedge MUSIC the distributions for the original and for the surrogate data are only slightly overlapping, indicating statistical significance for all four sources and both methods. A p-value was calculated by counting the results of all bootstrap runs on the surrogate data that are larger than or equal in magnitude to the value observed on the original data. Denoting by r the number of cases out of the n runs where the surrogate data result is bigger than the observed result on the original data, we can calculate a p-value with

$$p = \frac{r+1}{n}. \tag{18}$$
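Eq. (18) amounts to the following small helper, comparing the surrogate (or permutation) results against the value observed on the original data (names are illustrative):

```python
import numpy as np

def p_value(observed, null_values):
    """Eq. (18): r = number of surrogate/permutation results exceeding the observation."""
    null_values = np.asarray(null_values)
    r = int(np.sum(null_values >= observed))
    return (r + 1) / len(null_values)
```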

With respect to an alpha level of 0.05, all sources deviate significantly from the surrogate data.

Fig. 2. Simulation results. A.: The results of RAP MUSIC applied on a subspace of the imaginary part of the cross-spectrum for all voxels inside a single brain slice are colored in red. The originally modeled dipoles are shown in blue. A colorbar for the same sources can be found in Fig. 3.A in the right column. B.: Distribution of scalar Wedge MUSIC for random voxel pairs within the simulated data.



Fig. 3. Results of a Wedge MUSIC scan for simulated data. A.: Results for RAP MUSIC applied on a subspace of the ImCs in the left column. Each RAP MUSIC source (the topography of the dipole at the maximum RAP MUSIC value) served as a reference voxel for a Wedge MUSIC scan. The results of Wedge MUSIC are illustrated in the right column next to the corresponding RAP MUSIC source. B.: Distribution for each single RAP MUSIC source estimated with bootstrap, color-coded in blue. The same estimation for independent surrogate data, color-coded in red. The vertical lines show the results observed for the original data. A p-value was calculated according to Eq. (18). C.: Same as in B., but for Wedge MUSIC.

Wrong input location

We conducted a further simulation for a Wedge MUSIC scan to evaluate the sensitivity of the method with respect to a wrong location of the reference voxel. To this end, we used the originally modeled dipole location as the input instead of the results of a RAP MUSIC scan and shifted the dipole in steps of 0.1 cm in a randomly chosen x-, y- or z-direction. For each shifting distance the complete simulation was repeated 50 times. The localization error of the scan is obtained by calculating the Cartesian distance between the modeled source and the voxel with the maximum Wedge MUSIC value. Fig. 4 shows the result of this simulation. One can observe that a false location of the reference voxel has only little impact below approximately 1 cm. Exceeding this distance, the localization error rises almost linearly. Starting from about 2 cm for one source and 3 cm for the other source, the standard deviation of the 50 runs increases drastically. It is obvious that the method cannot find a synchronized source reliably if the reference location is too far off. However, due to the rather coarse spatial resolution of EEG and MEG, some signal is still contained in neighboring voxels, which can be sufficient for a correct Wedge MUSIC scan even if the reference

voxel is slightly mislocated. In the present simulation we observe that the method is well capable of dealing with a mislocation of the reference voxel of approximately one centimeter. One can also see from this simulation that the method does not break down completely for larger reference voxel errors, in the sense that synchronous activity would then be found in arbitrary brain regions.

Class differences

In an additional simulation, we explore the behavior of a Wedge MUSIC scan including RAP MUSIC for class differences. Both methods only depend on the imaginary part of the cross-spectrum (ImCs) as the data-driven input argument. For two different data classes, the ImCs can simply be subtracted from each other such that the proposed procedure can be executed on the respective difference ΔImCs = ImCs_1 − ImCs_2. For the imaginary part of coherency, class differences are not so trivial to access. Recalling the definition of coherency in Eq. (2), the cross-spectrum is normalized by power. Calculating coherency from differences of cross-spectra is meaningless


Fig. 4. The impact of a wrong reference location on a Wedge MUSIC scan: Two dipoles have been modeled and both dipole locations served as the reference for one Wedge MUSIC scan each. The reference source locations were then shifted in an arbitrary direction by zero to 5 cm, as displayed on the x-axis. The error of the corresponding Wedge MUSIC scan, in terms of the Euclidean distance between the original source location and the maximum of the resulting Wedge MUSIC distribution, is displayed on the y-axis. The transparent patches show the standard deviation of the error estimated over 50 simulation repetitions for each shifting distance.

because ‘power’ could become negative. Furthermore, the normalization by power for coherency also depends on non-interacting sources. This may lead to a misinterpretation of the results for class differences. For example, differences in overall source power or the noise level between classes would lead to a different ImCoh (see section Comparison of Wedge MUSIC with the maximized ImCoh). These effects are avoided for the methodology presented in this paper. Please note that the two classes do not have to originate from two different experimental conditions. It is also conceivable to test for an altered source connectivity profile with respect to a certain baseline. This could be, for example, a pre-stimulus baseline or an estimation of background noise obtained at neighboring frequencies. For this simulation we modeled two different conditions. In the first condition, the same two pairs of interacting sources were generated as described above and shown in Fig. 1.A. In the second condition, only the source pair in the left hemisphere was present. The parameters of the simulation were the same as before. Fig. 5.A shows the results for the RAP MUSIC scan and the Wedge MUSIC scan both performed on the difference of the imaginary part of the cross-spectrum. One can observe that the source pair on the left hemisphere has vanished in


the difference, as it was present in both conditions. Only the source pair in the right hemisphere remained in the difference and has correctly been revealed by the combination of RAP and Wedge MUSIC. We tested for significance by performing a permutation test (Welch, 1990). For n = 1000 runs the data epochs were randomly assigned to one of the two classes and the calculation was repeated. Again, sources were sorted by the highest RAP MUSIC value to order the source pairs over the runs. The results for the permutation statistics can be found in Fig. 5.B. For the first source pair not a single Wedge MUSIC result obtained by the permutation was higher than the value obtained for the true class labels. This indicates that the Wedge MUSIC value for the true class membership is not likely due to some random effect. Denoting by r the number of runs where the permutation result is larger than the original result, we can again calculate a p-value according to Eq. (18). For the second source pair we obtain one permutation value greater than the Wedge MUSIC result for the true class memberships, which leads to a p-value of p = 0.001. Hence, we obtain highly significant results for both source pairs, demonstrating the applicability of the proposed analysis procedure to class differences.

Comparison of Wedge MUSIC with the maximized ImCoh

As theoretically motivated and demonstrated above, Wedge MUSIC and RAP MUSIC can both be directly applied on a difference of two ImCs belonging to different data classes, e.g. stemming from different experimental conditions. This is an advantage over the classical ImCoh, where power differences of non-interacting sources can lead to significant results although source interactions are identical in both classes. This effect is demonstrated in the following simulation. We constructed a scenario with the same two interacting sources in each condition. These are the leftmost and the rightmost dipoles shown in Fig. 1.A. Source dynamics were modeled as before and were identical for both data classes. Only the noise level varied between the conditions, by a factor of 20. As before, we applied RAP MUSIC and Wedge MUSIC on the difference of the imaginary parts of the cross-spectra of both conditions and used a permutation test (i.e. shuffled the class labels of the two conditions for n = 1000 runs) to estimate statistical validity. We compared Wedge MUSIC with the maximized imaginary part of coherency (maxImCoh) as described in section Modified beamforming and maximizing the ImCoh. For the latter, the dipole direction at each voxel is determined such that the ImCoh with respect to a given reference voxel, here the RAP MUSIC source, is maximized. The statistical results of these simulations are shown in Fig. 6. The red histograms show the results of Wedge MUSIC (upper row) and for

Fig. 5. Results of RAP MUSIC and a Wedge MUSIC scan based on the difference of two imaginary parts of cross-spectra for simulated data. A.: The resulting RAP MUSIC sources are shown in the left column. The Wedge MUSIC sources using each of the two RAP MUSIC sources as reference sources are shown in the right column. Hence, each row illustrates a synchronized source pair. The modeled dipoles that differ between both conditions are shown in blue. B.: Results of a permutation test. The histogram shows the distribution for randomly assigned class memberships. The Wedge MUSIC value for the true class membership is indicated by a blue vertical line. The indicated p-value is calculated according to Eq. (18).



maxImCoh (lower row) for the resulting source maxima. In each permutation run, the maximum of the scan over all voxels has been selected. Comparing the results for the permuted data to the result observed for the true class memberships (p-value obtained according to Eq. (18)), we find that the combination of RAP MUSIC and Wedge MUSIC does not produce a result for the difference that is significant at the 0.05 level. This is in line with the modeled data, as no interaction difference between the classes has been modeled. In contrast to that, the maximized imaginary part of coherency leads to a significant finding for both source pairs. Hence, this simulation demonstrates the methodological advantage of using the ImCs over the ImCoh for class differences.
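A sketch of this class-difference analysis: the difference of the class-wise ImCs is formed from per-epoch cross-spectral matrices, and the permutation null distribution is obtained by shuffling epoch labels. The `analysis` callable, standing for the RAP + Wedge MUSIC chain, and all names are assumptions.

```python
import numpy as np

def delta_imcs(epoch_cs_a, epoch_cs_b):
    """Difference of class-wise ImCs from per-epoch cross-spectra (n_epochs, N, N)."""
    return np.imag(epoch_cs_a.mean(axis=0)) - np.imag(epoch_cs_b.mean(axis=0))

def permutation_null(epoch_cs_a, epoch_cs_b, analysis, n_perm=1000, seed=0):
    """Null distribution of analysis(Delta ImCs) under shuffled class labels."""
    rng = np.random.default_rng(seed)
    pooled = np.concatenate([epoch_cs_a, epoch_cs_b])
    n_a = len(epoch_cs_a)
    null = []
    for _ in range(n_perm):
        idx = rng.permutation(len(pooled))                # random re-assignment of epochs
        null.append(analysis(delta_imcs(pooled[idx[:n_a]], pooled[idx[n_a:]])))
    return np.array(null)                                 # compare the observed value against this
```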

Complete Wedge MUSIC

The third variant of applying Wedge MUSIC is to calculate the interaction measure for all pairs of voxels without employing any prior source reconstruction technique, as described in section Wedge-MUSIC. In this case, an optimization over five variables has to be performed for the squared number of voxels, making the application quite time consuming. Therefore, we used a grid containing only 247 voxels for the simulation presented in the following, in contrast to the finer grid used for the previous simulations. Even in this case, the calculation of voxel-wise Wedge MUSIC needed more than 10 h distributed on a computer with 8 cores (2.8 GHz each) and a total of 16 GB of RAM. Besides the smaller grid, the simulation of the data was equivalent to the simulations described above. As a result, an approximately symmetric matrix is obtained, linking each pair of voxels on the given grid to a connection strength. To visualize the strongest components, we took the row-wise maximum values of the matrix as references and plotted the connections of these voxels to all other voxels. Fig. 7 shows the six strongest components obtained by complete Wedge MUSIC. It becomes clear that the two modeled interactions (see Fig. 1.A) are represented within the results. In the ideal case one would expect four strongly interacting sources, one for each modeled dipole, with an interaction maximum at the location of the respective other dipole. However, the strongest source interactions unraveled by complete Wedge MUSIC are repetitive. For example, the source pair in the upper row on the right in Fig. 7 is basically identical to the source pair in the middle row on the left, though slightly mislocated. This is a consequence of the blurred inverse solutions, in general leading to the phenomenon that neighboring voxels may also show large interactions. Solutions to this, as could possibly be given by a clustering approach, are beyond the scope of the present paper. Nevertheless, and despite the limitations described, the present simulation

Fig. 7. The results of complete Wedge MUSIC, i.e. the application of Wedge MUSIC without applying any prior source localization. From the connectivity matrix linking each voxel to all other voxels, the maximum in each row is selected. The largest six of these row maxima are chosen as reference sources and are shown as blue circles. For each of these reference sources, the connection strength to all other voxels is color-coded.

indicates that complete Wedge MUSIC is capable of providing a meaningful picture of true interactions on source level.

Results for real MEG and EEG data

In this section we illustrate the application of the combination of RAP MUSIC and a Wedge MUSIC scan based on the imaginary part of the cross-spectrum to measured MEG and EEG data. The first data set consists of MEG resting state data. In the second data set, the subject had to perform imagined hand and foot movements. Here, we focus on the class difference. For both cases, we show single subject examples. Possible approaches to perform group statistics are outlined in the section Discussion.

Idle motor rhythms in MEG resting state data

Fig. 6. Comparison of Wedge MUSIC and the maximization of the imaginary part of coherency applied on class differences. In the two data classes the same interacting sources are present. Only the noise level differs for each class. All figures show the results of a permutation test. The results of the permutation runs are indicated in red. The blue vertical lines show the results obtained with the true class membership of epochs. In contrast to maxImCoh, Wedge MUSIC does not produce a significant result which is consistent with the way the data were modeled.

MEG data were recorded continuously in a magnetically shielded room with a 275-channel system covering the whole head (Omega 2000, CTF-Systems). Two SQUIDs were malfunctioning, such that the analysis comprised 273 MEG channels. The recording session lasted approximately 10 min, during which the subject had to fixate a fixation cross displayed on a screen. No further task was required. The head position relative to the MEG sensors was measured with a set of head coils (nasion, left and right ears) and mapped to an individual anatomical MRI. The head model obtained from the individual MRI was then mapped to a standard MNI template head to prepare for potential group analyses and to assure comparability between subjects (Fonov et al., 2009, 2011). The standard MNI brain was parceled into 5003 grid points or voxels. With respect to these voxels, leadfields were generated as described in Nolte (2003). MEG signals were recorded with a sampling frequency of 1200 Hz and downsampled to 200 Hz with the FieldTrip toolbox (Oostenveld et al., 2010). An online low-pass filter with a cutoff frequency of 300 Hz was applied during recording and, furthermore, the data were detrended offline, i.e. the mean of each


Fig. 8. The imaginary part of coherency for all MEG sensor pairs over frequency. By choosing the maximum SNR optimized GIM (see Eq. (17)) within the alpha band (8 ≤ fα ≤ 13 Hz), a frequency of f = 13 Hz (indicated as a red vertical line) has been selected for further source analysis.


channel time series was subtracted. Cardiac, muscle-related and eye artifacts were removed by visual inspection of 30 ICA (independent component analysis, TDSEP algorithm; see Ziehe (1998)) components obtained from a prior PCA (principal component analysis (Hotelling, 1936)) decomposition of the sensor data. To compute the cross-spectrum as described in Eq. (1), the data were divided into segments of 1 s length with a 50% overlap between segments, leading to a frequency resolution of 1 Hz. Fig. 8 shows the imaginary part of coherency, i.e. of the normalized cross-spectrum, for all sensor pairs over frequency. The frequency f = 13 Hz was chosen according to the maximum value of the SNR optimized GIM within the alpha band, here defined as 8 Hz ≤ f_α ≤ 13 Hz. Fig. 9 shows the four source pairs obtained by RAP MUSIC and a corresponding Wedge MUSIC scan that has been performed for each one of the RAP MUSIC sources as reference voxels. The upper row in each source pair plot shows the RAP MUSIC source, i.e. the reference voxel for the Wedge MUSIC scan. To display the results in as few pictures as possible, each source is shown in three different views: sagittal, coronal and axial. Red lines indicate the slice cuts in each view, placed at the maximum of each source. One can observe that two interacting source pairs arise, one in each hemisphere. All sources are located in motor-related areas. Apparently, the rhythmic activity used for localization at 13 Hz is the sensorimotor idle rhythm. This rhythm is strongly present in the alpha range if no

Fig. 9. The results for RAP MUSIC and Wedge MUSIC applied on the imaginary part of the cross-spectrum for four sources. The upper row in each plot for one source pair is the RAP MUSIC scan. The voxel with maximum RAP MUSIC value served as a reference for the Wedge MUSIC scan. The respective Wedge MUSIC scan is displayed in the lower row. Each source is shown in sagittal, coronal and axial views. The red line indicates the cut within each brain defined to be at the maximum for each MUSIC scan.



movement is executed or imagined by the subject (Pfurtscheller and Neuper, 1997). This is the case for the present data, where the subject was at rest. The sources also correspond to source locations of the somatosensory–motor default mode network revealed by fMRI (see e.g. Raichle et al., 2001; Engel et al., 2013). Hence, the methodological procedure introduced in this paper provides reasonable source locations in this case. A sanity check is that the source locations found by RAP MUSIC are consistent with the results obtained by the Wedge MUSIC scan. In other words, Wedge MUSIC finds exactly those sources within a scan over the entire brain that have been unraveled before by RAP MUSIC. This can be seen from the fact that the source pairs 1 and 3 and the pairs 2 and 4 are basically identical. Only the respective seed voxels differ. The source resulting from Wedge MUSIC in source pair 1 is the same source as the RAP MUSIC one in source pair 3 and vice versa. The same picture as obtained by the Wedge MUSIC scan is provided by scalar Wedge MUSIC (see section Wedge-MUSIC). Table 2 shows the results for scalar Wedge MUSIC for all combinations of the RAP MUSIC sources shown in the top rows in Fig. 9. It is apparent that out of all four sources only two source pairs show a strong scalar Wedge MUSIC result. These are the source pairs consisting of source 1 and source 3 and, additionally, of source 2 and source 4. Considering the source locations of each individual source as shown in Fig. 9, this is exactly the same picture as provided by the Wedge MUSIC scan. To test for random effects, we constructed surrogate data where all dependencies within the data have been destroyed (see section Constructing surrogate data for statistical testing). The surrogate data were resampled according to a bootstrap approach and compared to the value of Wedge MUSIC observed for the original data. Fig. 10 shows the results for all four Wedge MUSIC sources. One can observe a narrow distribution of the Wedge MUSIC surrogate results over the n = 500 bootstrap runs. Furthermore, the result for the real data completely differs from the results for the surrogate data, yielding a p-value of p = 0.002 for all four sources.

Class differences in EEG motor imagery data

In a second example, we investigated class differences in interaction for motor imagery data. In a training session for a later brain computer interface (BCI) application, the subject was instructed to imagine the movement of either the right hand, the left hand or the foot, but not to execute it. Previous research has shown that the brain activity for actually executed movements and imagined movements is essentially the same in the motor-related areas (Pfurtscheller and Neuper, 1997). Hence, we expect interacting sources in the hand and foot movement related brain areas such as the primary sensorimotor areas. EEG data were recorded with a multi-channel BrainAmp EEG amplifier using 118 Ag/AgCl electrodes in an extended 10–20 system. We cut the data into trials lasting 3.5 s, beginning with the movement instruction onset. This procedure leads to 70 trials for each individual class (left hand, right hand, foot). The data were downsampled to 100 Hz and each trial was segmented into 25% overlapping windows of 1 s duration. These segments were then Hanning windowed and Fourier transformed to compute the cross-spectrum as described in Eq. (1).
To investigate the difference in interaction between hand and foot movements, and to avoid any bias due to a larger amount of data in one condition, we combined left and right hand movements by randomly choosing a total of 70 epochs from these two classes.

Fig. 10. Statistics for the Wedge MUSIC scan (see Eq. (12)) at the voxel pair with the maximum RAP MUSIC and Wedge MUSIC results. Red histograms show the distribution obtained from the surrogate data; the blue vertical lines indicate the Wedge MUSIC result of the true data. The clear separation between measured and surrogate data indicates statistical significance. The p-value is calculated according to Eq. (18).

The imaginary part of the cross-spectrum obtained from the imagined foot movement data was then subtracted from the imaginary part of the cross-spectrum of the newly generated hand movement class. Fig. 11 shows the imaginary part of coherency of the class difference for all channels over frequency. As before, the frequency with the maximum SNR-optimized GIM in the alpha range was chosen automatically. Finally, RAP and Wedge MUSIC were applied to the difference of the imaginary parts of the cross-spectra at 13 Hz (a code sketch of this class-difference computation is given below).

Fig. 12 shows the results of both RAP and Wedge MUSIC in the same style as for the MEG resting state data (see section Idle motor rhythms in MEG resting state data). For each source pair, the RAP MUSIC source is shown in the upper row and the corresponding Wedge MUSIC source in the lower row. Each source space scan is presented in three different views, cut at the maximum of each scan. The results shown in Fig. 12 indicate that two source pairs differ between imagined hand and foot movements. One of them (shown as source pair 1 and source pair 3) is a local interaction within foot movement related brain regions in the right hemisphere.
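As a minimal sketch of this class-difference computation, assuming the hypothetical cross_spectrum helper from the previous sketch and placeholder arrays epochs_left, epochs_right and epochs_foot of shape (70, n_channels, n_samples) holding the epoched data of the three imagery classes:

import numpy as np

def class_difference_imcs(epochs_left, epochs_right, epochs_foot, seed=0):
    """Balance the hand classes and return ImCs(hand) - ImCs(foot)."""
    rng = np.random.default_rng(seed)
    hand_pool = np.concatenate([epochs_left, epochs_right], axis=0)  # 140 hand epochs
    # Randomly draw as many hand epochs as there are foot epochs (70 here).
    idx = rng.choice(len(hand_pool), size=len(epochs_foot), replace=False)
    cs_hand = cross_spectrum(hand_pool[idx])   # helper from the previous sketch
    cs_foot = cross_spectrum(epochs_foot)
    # The slice of this difference at the selected frequency (13 Hz in the
    # analysis above) is what RAP and Wedge MUSIC are applied to.
    return cs_hand.imag - cs_foot.imag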

Table 2
The results of scalar Wedge MUSIC (scWM) for all combinations of the four sources obtained by RAP MUSIC.

scWM        Source 1    Source 2    Source 3    Source 4
Source 1    1.00        1.01        9.37        1.04
Source 2    1.01        1.00        1.00        4.43
Source 3    9.37        1.00        1.00        1.13
Source 4    1.04        4.43        1.13        1.00

Fig. 11. The imaginary part of coherency for the difference of hand vs. foot movement.


Fig. 12. The results for RAP and Wedge MUSIC scans applied on the difference of the ImCs belonging to imagined hand and foot movement. For each source pair, the upper row shows the RAP MUSIC scan and the lower row the Wedge MUSIC scan. The results have been taken to the power of 4 to enhance the visibility of the source maxima.

The other one (shown as source pair 2 and source pair 4) represents a functional synchronization between a brain region associated with right hand movement (the RAP MUSIC source in source pair 2) and the foot movement region in the same hemisphere. Due to the nature of a Wedge MUSIC scan, one can judge the plausibility of the result: Fig. 12 shows that the sources obtained with RAP MUSIC are found again with Wedge MUSIC.

The same picture is provided by applying the scalar Wedge MUSIC variant. Table 3 presents the numeric results of scalar Wedge MUSIC for all combinations of the RAP MUSIC sources (labeled consistently with the upper rows of Fig. 12). Interactions are found between RAP MUSIC sources 1 and 3 and between sources 2 and 4, but not between any other pair of sources.

Fig. 13 presents statistics obtained by permutation testing. Epochs from both data classes (hand and foot) were randomly assigned to two new classes and the whole computational procedure was repeated for n = 500 runs. Fig. 13A shows the statistical results for the Wedge MUSIC scan.

Table 3
The results of scalar Wedge MUSIC (scWM) between all RAP MUSIC sources for the class difference between imagined hand and foot movements.

scWM        Source 1    Source 2    Source 3    Source 4
Source 1    1.00        1.06        6.87        1.01
Source 2    1.06        1.00        1.51        6.34
Source 3    6.87        1.50        1.00        1.34
Source 4    1.01        6.34        1.34        1.00

Although a significant result (p < 0.05) is not obtained for all source pairs, the original values lie at the outer limits of the distributions, indicating that the effect was not found by chance. To assess statistical significance for the scalar Wedge MUSIC result, only the scalar Wedge MUSIC computation was repeated, not the frequency selection and RAP MUSIC, resulting in much faster computation times. Fig. 13B shows that the results for the scalar Wedge MUSIC variant are highly significant.
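The permutation procedure described above can be sketched as follows. Here statistic is a placeholder for a function that runs the relevant computation (the full pipeline, or only scalar Wedge MUSIC for the faster variant) on two sets of epochs, and the p-value estimator is a common choice that may differ in detail from Eq. (18).

import numpy as np

def permutation_test(epochs_hand, epochs_foot, statistic, n_perm=500, seed=0):
    """Shuffle class memberships of epochs and recompute a statistic n_perm times.

    statistic : callable taking (epochs_a, epochs_b) and returning a scalar,
                e.g. a wrapper around the scalar Wedge MUSIC computation.
    """
    rng = np.random.default_rng(seed)
    observed = statistic(epochs_hand, epochs_foot)

    pooled = np.concatenate([epochs_hand, epochs_foot], axis=0)
    n_a = len(epochs_hand)
    null = np.empty(n_perm)
    for i in range(n_perm):
        perm = rng.permutation(len(pooled))            # random reassignment of epochs
        null[i] = statistic(pooled[perm[:n_a]], pooled[perm[n_a:]])

    # One-sided p-value; a common estimator, possibly differing from Eq. (18).
    p = (1 + np.sum(null >= observed)) / (1 + n_perm)
    return observed, null, p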

Discussion

In the present paper we introduced a new computational method, called Wedge MUSIC, to estimate synchronized, i.e. interacting, brain sources from EEG or MEG data. The main property of the method is that the interaction measure is constructed entirely from the imaginary part of the cross-spectrum. Specifically, the method does not depend on signal power, as is the case for the imaginary part of coherency through its normalization. Such a dependency can cause conceptual problems when studying differences between two conditions: a difference in the imaginary part of coherency may be caused by the presence or absence of non-interacting sources, which modify power but not the imaginary part of the cross-spectrum. In contrast, the imaginary part of the cross-spectrum itself is not affected systematically by non-interacting sources. Of course, random effects still exist; they are suppressed by a factor of √K for K independent averages but cannot be removed completely.
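The statement that non-interacting sources do not systematically contribute to the imaginary part of the cross-spectrum can be illustrated with a small simulation: two independent oscillatory sources are mixed instantaneously (at zero lag) into two sensors, mimicking volume conduction, and the cross-spectrum at the oscillation frequency is estimated over K = 600 segments. All parameter values below are illustrative.

import numpy as np

rng = np.random.default_rng(1)
n_seg, fs = 600, 100
t = np.arange(fs) / fs

# Two 10 Hz sources with independent random phases per segment (non-interacting).
phi1 = rng.uniform(0, 2 * np.pi, n_seg)
phi2 = rng.uniform(0, 2 * np.pi, n_seg)
s1 = np.sin(2 * np.pi * 10 * t + phi1[:, None])   # (n_seg, fs)
s2 = np.sin(2 * np.pi * 10 * t + phi2[:, None])

# Instantaneous (zero-lag) mixing into two sensors, as caused by volume conduction.
x1 = 0.8 * s1 + 0.6 * s2 + 0.1 * rng.normal(size=s1.shape)
x2 = 0.4 * s1 + 0.9 * s2 + 0.1 * rng.normal(size=s2.shape)

# Cross-spectrum at the 10 Hz bin from Hanning-windowed 1 s segments.
win = np.hanning(fs)
X1f = np.fft.rfft(x1 * win, axis=1)[:, 10]
X2f = np.fft.rfft(x2 * win, axis=1)[:, 10]
cs = np.mean(X1f * np.conj(X2f))

print(abs(cs.real), abs(cs.imag))

The real part of the estimated cross-spectrum is expected to be large, because both sensors see both sources at zero lag, whereas the imaginary part should fluctuate around zero at the level implied by the √K suppression mentioned above.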


Fig. 13. Permutation statistics for the difference of imagined hand vs. imagined foot movement. The red histograms show the results for the permuted data (class memberships of the epochs were randomly assigned) and the vertical lines the results for the true class memberships. A: Wedge MUSIC scan (see Eq. (12)); B: scalar Wedge MUSIC (see Eq. (14)).

We have shown the feasibility of applying Wedge MUSIC both in simulated data and in real-world examples. However, some questions remain open. The novel method belongs to the family of MUSIC algorithms, and one general drawback of MUSIC algorithms is that the number of brain sources assumed to be present in the data has to be defined a priori. Usually, this number is not known and is difficult to estimate. It is a common problem in signal processing to differentiate between meaningful signal components and noise components when constructing a signal subspace, e.g. by a PCA, and many different approaches have been suggested and discussed. A valuable general comparison can be found in Peres-Neto et al. (2005). In the field of neuroscience the issue has been investigated for fMRI data in Hansen et al. (1999) and for EEG/MEG data in Maris (2003). A procedure we have found useful in this context is to, first, pre-whiten the ImCs as described in Ewald et al. (2012) and, second, compare the magnitudes of the eigenvalues of the pre-whitened ImCs to the maximum eigenvalue obtained from surrogate data constructed as described in the present paper (a sketch of this comparison is given below). Here, a resampling approach such as the bootstrap also seems useful to estimate the largest noise eigenvalue obtained from the surrogate data. However, a detailed investigation of how to determine the number of sources is beyond the scope of the present paper. In principle, the situation is common to all MUSIC algorithms. As a rule of thumb for practical application, we hypothesize that a reasonable choice lies between 2 and 8.

A further issue beyond the scope of this paper is group analysis. Here again, the situation is common to MUSIC, RAP MUSIC and Wedge MUSIC. As stated before (see section Idle motor rhythms in MEG resting state data), the results of such scans cannot necessarily be interpreted as distributed source activity. Nevertheless, it might be practically meaningful to average the results over all measured subjects for a certain experiment. However, this approach might not lead to satisfying results, as EEG and MEG data usually contain a large subject-dependent variability and the resulting picture might become too blurry on average. The same is true for the selection of a frequency of interest. Here, we suggest choosing (or better, automatically determining) subject-specific frequency peaks of the imaginary part of the cross-spectrum within a frequency band of interest and proceeding with RAP and Wedge MUSIC, as presented in the examples in this paper. The most convenient way to obtain a reasonable picture of interacting brain sources within a population of measured subjects would be to apply an appropriate clustering approach.
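The eigenvalue comparison referred to above could be sketched as follows. Here im_cs_white stands for the pre-whitened imaginary part of the cross-spectrum (the pre-whitening of Ewald et al. (2012) is assumed to be available and is not re-implemented), and make_surrogate is a placeholder for the surrogate construction described in this paper; both names are hypothetical.

import numpy as np

def estimate_n_sources(im_cs_white, make_surrogate, n_boot=100, seed=0):
    """Count eigenvalue magnitudes of the pre-whitened ImCs above the surrogate noise level.

    im_cs_white    : (n_channels, n_channels) real antisymmetric matrix.
    make_surrogate : callable taking a numpy Generator and returning one surrogate
                     ImCs matrix with the interactions destroyed (placeholder).
    """
    rng = np.random.default_rng(seed)
    # For a real antisymmetric matrix the singular values equal the magnitudes
    # of its (purely imaginary) eigenvalues.
    sv = np.linalg.svd(im_cs_white, compute_uv=False)

    # Largest singular value observed across the surrogate realizations.
    noise_level = max(np.linalg.svd(make_surrogate(rng), compute_uv=False)[0]
                      for _ in range(n_boot))

    n_above = int(np.sum(sv > noise_level))
    return n_above  # used as the signal-subspace dimension for the MUSIC scans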

An effect that can be observed in the results presented in this paper is the sensitivity of the method with respect to local interactions. Especially for the resting state data, quite local interactions are found. In principle, one could even think of applying Wedge MUSIC to a single voxel: because of the blurred source localization, a single voxel could contain activity from two interacting sources, such that two dipoles with different orientations are found to be interacting within that voxel. In contrast to other methods that use a fixed dipole orientation, Wedge MUSIC does not have any bias towards remote or long-distance interactions.

To assess the practical feasibility of complete Wedge MUSIC, i.e. Wedge MUSIC without prior source localization, one has to consider computation time. As an example, on a 64 bit Windows 8 computer with an Intel i7 processor at 2.00 GHz, the evaluation of the Wedge MUSIC measure for a single voxel pair, including the minimization, takes approximately t = 0.5 s. For a grid with M = 1000 voxels, the total computation time amounts to t · M² = 138.9 h, or 5.8 days. Hence, scanning all voxel pairs, e.g. to obtain better source localization results or to cluster sources, becomes infeasible. Furthermore, any kind of permutation statistics, group analysis or exploratory data analysis over many different frequencies does not make sense with currently available hardware. Therefore, we recommend using a prior source localization technique such as RAP MUSIC in combination with Wedge MUSIC.

Acknowledgments

This project was funded by grants from the EU (ERC-2010-AdG269716), the DFG (SFB 936/A3), the BMBF (031A130), and the EU project 'Extending sensorimotor contingencies to cognition — eSMCs', IST270212, esmcs.eu.

References

Avarvand, F.S., Ewald, A., Nolte, G., 2012. Localizing true brain interactions from EEG and MEG data with subspace methods and modified beamformers. Comput. Math. Methods Med. 2012. http://dx.doi.org/10.1155/2012/402341 (Article ID 402341, 11 pages).
Baillet, S., Mosher, J., Leahy, R., 2001. Electromagnetic brain mapping. IEEE Signal Proc. Mag. 18, 14–30. http://dx.doi.org/10.1109/79.962275.
Bendat, J.S., Piersol, A.G., 1971. Random Data; Analysis and Measurement Procedures. Wiley-Interscience, New York.
Capon, J., 1969. High-resolution frequency–wave number spectrum analysis. Proc. IEEE 57, 1408–1418. http://dx.doi.org/10.1109/PROC.1969.7278.

A. Ewald et al. / NeuroImage 101 (2014) 610–624 Clifford, P., 1878. Applications of Grassmann's extensive algebra. Am. J. Math. 1, 350–358. http://dx.doi.org/10.2307/2369379 (URL: http://www.jstor.org/stable/2369379, ArticleType: research-article/Full publication date: 1878/.). Colgin, L.L.,Denninger, T.,Fyhn, M.,Hafting, T.,Bonnevie, T.,Jensen, O.,Moser, M.B.,Moser, E. I., 2009. Frequency of gamma oscillations routes flow of information in the hippocampus. Nature 462, 353–357. http://dx.doi.org/10.1038/nature08573 (URL: http:// www.nature.com/nature/journal/v462/n7271/abs/nature08573.html). De Munck, J., van Dijk, B., Spekreijse, H., 1988. Mathematical dipoles are adequate to describe realistic generators of human brain activity. IEEE Trans. Biomed. Eng. 35, 960–966. http://dx.doi.org/10.1109/10.8677. Dosenbach, N.U.,Fair, D.A.,Miezin, F.M.,Cohen, A.L.,Wenger, K.K.,Dosenbach, R.A.,Fox, M.D. , Snyder, A.Z., Vincent, J.L., Raichle, M.E., Schlaggar, B.L., Petersen, S.E., 2007. Distinct brain networks for adaptive and stable task control in humans. Proc. Natl. Acad. Sci. U. S. A. 104, 1107–11078. Efron, B., Tibshirani, R.J., 1994. An Introduction to the Bootstrap. Chapman and Hall/CRC (URL: http://www.worldcat.org/isbn/0412042312. published: Hardcover). Engel, A.K., Fries, P., Singer, W., 2001. Dynamic predictions: oscillations and synchrony in top-down processing. Nat. Rev. Neurosci. 2, 704–716. Engel, A.K., Gerloff, C., Hilgetag, C.C., Nolte, G., 2013. Intrinsic coupling modes: multiscale interactions in ongoing brain activity. Neuron 80, 867–886. http://dx.doi.org/10. 1016/j.neuron.2013.09.038 (URL: http://www.cell.com/neuron/abstract/S08966273(13)00869-6). Ewald, A., Marzetti, L., Zappasodi, F., Meinecke, F.C., Nolte, G., 2012. Estimating true brain connectivity from EEG/MEG data invariant to linear and static transformations in sensor space. NeuroImage 60, 476–488. http://dx.doi.org/10.1016/j.neuroimage.2011.11. 084 (URL: http://www.sciencedirect.com/science/article/pii/S1053811911013668). Ewald, A., Avarvand, F.S., Nolte, G., 2013. Identifying causal networks of neuronal sources from EEG/MEG data with the phase slope index: a simulation study. Biomed. Tech. 58, 165–178. Fonov, V.S., Evans, A.C., McKinstry, R.C., Almli, C.R., Collins, D.L., 2009. Unbiased nonlinear average age-appropriate brain templates from birth to adulthood. NeuroImage 47. http://dx.doi.org/10.1016/S1053-8119(09)70884-5 (URL: http://www.sciencedirect. com/science/article/pii/S1053811909708845). Fonov, V., Evans, A.C., Botteron, K., Almli, C.R., McKinstry, R.C., Collins, D.L., 2011. Unbiased average age-appropriate atlases for pediatric studies. NeuroImage 54, 313–327. http://dx.doi.org/10.1016/j.neuroimage.2010.07.033 (URL: http://www. sciencedirect.com/science/article/pii/S1053811910010062). Fries, P., 2005. A mechanism for cognitive dynamics: neuronal communication through neuronal coherence. Trends Cogn. Sci. 9, 474–480. Fries, P., Reynolds, J.H., Rorie, A.E., Desimone, R., 2001. Modulation of oscillatory neuronal synchronization by selective visual attention. Science 291, 1560–1563. http://dx. doi.org/10.1126/science.1055465 (URL: http://www.sciencemag.org/content/291/ 5508/1560). Gonzalez, J.J.,Manas, S.,Vera, L.D.,Mendez, L.D.,Lopez, S.,Garrido, J.M.,Pereda, E., 2011. Assessment of electroencephalographic functional connectivity in term and preterm neonates. Clin. Neurophysiol. 122 (4), 696–702. Grassmann, H., 1844. 
Die lineale Ausdehnungslehre ein neuer Zweig der Mathematik: dargestellt und durch Anwendungen auf die übrigen Zweige der Mathematik, wie auch auf die Statik, Mechanik, die Lehre vom Magnetismus und die Krystallonomie erläutert. O. Wigand. Gross, J., Kujala, J., Hamalainen, M., Timmermann, L., Schnitzler, A., Salmelin, R., 2001. Dynamic imaging of coherent sources: studying neural interactions in the human brain. Proc. Natl. Acad. Sci. U. S. A. 98, 694–699. Hansen, L.K., Larsen, J., Nielsen, F.A., Strother, S.C., Rostrup, E., Savoy, R., Lange, N., Sidtis, J., Svarer, C.,Paulson, O.B., 1999. Generalizable patterns in neuroimaging: how many principal components? NeuroImage 9, 534–544. http://dx.doi.org/10.1006/nimg.1998.0425. Haufe, S., Nikulin, V., Müller, K., Nolte, G., 2012. A critical assessment of connectivity measures for EEG data: a simulation study. NeuroImage. Haufe, S., Meinecke, F., Görgen, K., Dähne, S., Haynes, J.D., Blankertz, B., Bießmann, F., 2014. On the interpretation of weight vectors of linear models in multivariate neuroimaging. NeuroImage 87, 96–110. http://dx.doi.org/10.1016/j.neuroimage.2013.10.067 (URL: http://www.sciencedirect.com/science/article/pii/S1053811913010914). Hipp, J.F., Hawellek, D.J., Corbetta, M., Siegel, M., Engel, A.K., 2012. Large-scale cortical correlation structure of spontaneous oscillatory activity. Nat. Neurosci. 15, 884–890. Hotelling, H., 1936. Relations between two sets of variates. Biometrika 28, 321–377. Larson-Prior, L.J., Oostenveld, R., Penna, S.D., Michalareas, G., Prior, F., Babajani-Feremi, A., Schoffelen, J.M., Marzetti, L., Pasquale, F.d, Pompeo, F.D., Stout, J.,Woolrich, M., Luo, Q., Bucholz, R.,Fries, P.,Pizzella, V.,Romani, G.L.,Corbetta, M.,Snyder, A.Z., 2013. Adding dynamics to the human connectome project with MEG. NeuroImage 80, 190–201. http:// dx.doi.org/10.1016/j.neuroimage.2013.05.056 (URL: http://www.sciencedirect.com/ science/article/pii/S1053811913005508,\textlessce:title\textgreaterM apping the Connectome\textless/ce:title\textgreater). Maris, E., 2003. A resampling method for estimating the signal subspace of spatiotemporal EEG/MEG data. IEEE Trans. Biomed. Eng. 50, 935–949. http://dx.doi.org/ 10.1109/TBME.2003.814293. Marzetti, L., Della Penna, S., Snyder, A., Pizzella, V., Nolte, G., de Pasquale, F., Romani, G., Corbetta, M., 2013. Frequency specific interactions of MEG resting state activity within and across brain networks as revealed by the multivariate interaction measure. NeuroImage 79, 172–183. http://dx.doi.org/10.1016/j.neuroimage.2013.04.062 (URL: http://www.sciencedirect.com/science/article/pii/S1053811913004096). Mosher, J., Leahy, R., 1999. Source localization using recursively applied and projected (RAP) MUSIC. IEEE Trans. Signal Process. 47, 332–340. Mosher, J., Leahy, R., Lewis, P., 1999a. EEG and MEG: forward solutions for inverse methods. IEEE Trans. Biomed. Eng. 46, 245–259. http://dx.doi.org/10.1109/10.748978. Mosher, J.C., Baillet, S., Leahy, R.M., 1999b. EEG source localization and imaging using multiple signal classification approaches. J. Clin. Neurophysiol. 16, 225–238.


Nolte, G., 2003. The magnetic lead field theorem in the quasi-static approximation and its use for magnetoencephalography forward calculation in realistic volume conductors. Phys. Med. Biol. 48, 3637–3652. Nolte, G., Dassios, G., 2005. Analytic expansion of the EEG lead field for realistic volume conductors. Phys. Med. Biol. 50, 3807–3823. http://dx.doi.org/10.1088/0031-9155/ 50/16/010. Nolte, G.,Müller, K.R., 2010. Localizing and estimating causal relations of interacting brain rhythms. Front. Hum. Neurosci. 4. http://dx.doi.org/10.3389/fnhum.2010.00209. Nolte, G., Bai, O., Wheaton, L., Mari, Z., Vorbach, S., Hallett, M., 2004. Identifying true brain interaction from EEG data using the imaginary part of coherency. Clin. Neurophysiol. 115, 2292–2307. http://dx.doi.org/10.1016/j.clinph.2004.04.029. Nolte, G.,Meinecke, F.C.,Ziehe, A.,Müller, K.R., 2006. Identifying interactions in mixed and noisy complex systems. Phys. Rev. E. 73, 051913. Nunez, P.L., Srinivasan, R., 2006. Electric Fields of the Brain: The Neurophysics of EEG. Oxford University Press. Nunez, P.L., Srinivasan, R., Westdorf, A.F., Wijesinghe, R.S., Tucker, D.M., Silberstein, R.B., Cadusch, P.J., 1997. EEG coherency. I. statistics, reference electrode, volume conduction, laplacians, cortical imaging, and interpretation at multiple scales. Electroencephalogr. Clin. Neurophysiol. 103, 499–515. Nunez, P.L.,Silberstein, R.B.,Shi, Z.,Carpenter, M.R.,Srinivasan, R.,Tucker, D.M.,Doran, S.M., Cadusch, P.J., Wijesinghe, R.S., 1999. EEG coherency II: experimental comparisons of multiple measures. Clin. Neurophysiol. 110, 469–486. http://dx.doi.org/10.1016/ S1388-2457(98)00043-1 (URL: http://www.sciencedirect.com/science/article/ B6VNP-43MKB85-G/2/8d76d5c1 9888028c181615c315a68fc5). Oostenveld, R., Fries, P., Maris, E., Schoffelen, J.M., 2011. FieldTrip: open source software for advanced analysis of MEG, EEG, and invasive electrophysiological data. Comput. Intell. Neurosci. http://dx.doi.org/10.1155/2011/156869 (URL: http://www.hindawi.com/ journals/cin/2011/156869/abs/). Pascual-Marqui, R.D., 2007. Instantaneous and Lagged Measurements of Linear and Nonlinear Dependence Between Groups of Multivariate Time Series: Frequency Decomposition. ArXiv e-prints (URL: http://arxiv.org/pdf/0711.1455v1). Peres-Neto, P.R., Jackson, D.A., Somers, K.M., 2005. How many principal components? stopping rules for determining the number of non-trivial axes revisited. Comput. Stat. Data Anal. 49, 974–997. http://dx.doi.org/10.1016/j.csda.2004.06.015 (URL: http://www.sciencedirect.com/science/article/pii/S0167947304002014). Pfurtscheller, G., Neuper, C., 1997. Motor imagery activates primary sensorimotor area in humans. Neurosci. Lett. 239, 65–68. http://dx.doi.org/10.1016/S0304-3940(97)00889-6 (URL: http://www.sciencedirect.com/science/article/pii/S0304394097008896). Raichle, M.E.,MacLeod, A.M., Snyder, A.Z., Powers, W.J.,Gusnard, D.A., Shulman, G.L., 2001. A default mode of brain function. Proc. Natl. Acad. Sci. 98, 676–682. http://dx.doi.org/ 10.1073/pnas.98.2.676 (URL: http://www.pnas.org/content/98/2/676). Robinson, S., Vrba, J., et al., 1999. Functional neuroimaging by synthetic aperture magnetometry (SAM). Recent Advances in Biomagnetism, pp. 302–305. Rodriguez, E., George, N., Lachaux, J.P., Martinerie, J., Renault, B., Varela, F.J., 1999. Perception's shadow: long-distance synchronization of human brain activity. Nature 397, 430–433. http://dx.doi.org/10.1038/17120. Sarvas, J., 1987. 
Basic mathematical and electromagnetic concepts of the biomagnetic inverse problem. Phys. Med. Biol. 32, 11–22. Schmidt, R., 1986. Multiple emitter location and signal parameter estimation. IEEE Trans. Antennas Propag. 32, 276–280. Schoffelen, J.M., Gross, J., 2009. Source connectivity analysis with MEG and EEG. Hum. Brain Mapp. 30 (6), 1857–1865. Schoffelen, J.M., Oostenveld, R., Fries, P., 2005. Neuronal coherence as a mechanism of effective corticospinal interaction. Science 308, 111–113. http://dx.doi.org/10.1126/ science.1107027 (URL: http://www.sciencemag.org/content/308/5718/111). Sekihara, K., Nagarajan, S., 2013. Residual coherence and residual envelope correlation in MEG/EEG source-space connectivity analysis. 2013 35th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), pp. 4414–4417. http://dx.doi.org/10.1109/EMBC.2013.6610525. Sekihara, K.,Scholz, B., 1996. Generalized wiener estimation of three-dimensional current distribution from biomagnetic measurements. IEEE Trans. Biomed. Eng. 43, 281–291. http://dx.doi.org/10.1109/10.486285. Shahbazi, F.,Ewald, A., Ziehe, A., Nolte, G., 2010. Constructing surrogate data to control for artifacts of volume conduction for functional connectivity measures. In: Supek, S., Sušac, A. (Eds.), 17th International Conference on Biomagnetism Advances in Biomagnetism — Biomag2010. No. 28 in IFMBE Proceedings. Springer, Berlin Heidelberg, pp. 207–210. Siegel, M., Engel, A.K., Donner, T.H., 2011. Cortical network dynamics of perceptual decision-making in the human brain. Front. Hum. Neurosci. 5. http://dx.doi.org/10. 3389/fnhum.2011.00021. Singer, W., 1999. Neuronal synchrony: a versatile code for the definition of relations. Neuron 24, 49–65. Stam, C.J., Jones, B.F., Nolte, G., Breakspear, M., Scheltens, P., 2007a. Small-world networks and functional connectivity in alzheimer's disease. Cereb. Cortex 17, 92–99. http://dx.doi.org/10.1093/cercor/bhj127 (URL: http://cercor.oxfordjournals. org/content/17/1/92). Stam, C.J., Nolte, G., Daffertshofer, A., 2007. Phase lag index: assessment of functional connectivity from multi channel EEG and MEG with diminished bias from common sources. Hum. Brain Mapp. 28, 1178–1193. http://dx.doi.org/10.1002/hbm. 20346. Van Veen, B.D., van Drongelen, W., Yuchtman, M., Suzuki, A., 1997. Localization of brain electrical activity via linearly constrained minimum variance spatial filtering. IEEE Trans. Biomed. Eng. 44, 867–880. http://dx.doi.org/10.1109/10.623056. Varela, F., Lachaux, J.P., Rodriguez, E., Martinerie, J., 2001. The brainweb: phase synchronization and large-scale integration. Nat. Rev. Neurosci. 2, 229–239. http://dx.doi.org/ 10.1038/35067550.


Vincent, J.L., Snyder, A.Z., Fox, M.D., Shannon, B.J., Andrews, J.R., Raichle, M.E., Buckner, R.L., 2006. Coherent spontaneous activity identifies a hippocampal–parietal memory network. J. Neurophysiol. 96, 3517–3531. Vincent, J.L., Kahn, I., Snyder, A.Z., Raichle, M.E., Buckner, R.L., 2008. Evidence for a frontoparietal control system revealed by intrinsic functional connectivity. J. Neurophysiol. 100, 3328–3342. Vinck, M., Oostenveld, R., van Wingerden, M., Battaglia, F., Pennartz, C.M.A., 2011. An improved index of phase-synchronization for electrophysiological data in the presence of volume-conduction, noise and sample-size bias. NeuroImage 55, 1548–1565. http://dx.doi.org/10.1016/j.neuroimage.2011.01.055.

Welch, W.J., 1990. Construction of permutation tests. J. Am. Stat. Assoc. 85, 693–698. http://dx.doi.org/10.1080/01621459.1990.10474929. Womelsdorf, T., Schoffelen, J.M., Oostenveld, R., Singer, W., Desimone, R., Engel, A.K., Fries, P., 2007. Modulation of neuronal interactions through neuronal synchronization. Science 316, 1609–1612. http://dx.doi.org/10.1126/science.1139597 (URL: http:// www.sciencemag.org/content/316/5831/1609). Ziehe, A., 1998. TDSEP — an efficient algorithm for blind separation using time structure. 8th International Conference on Artificial Neural Networks 1998. Proceedings. vol. 2, pp. 675–680.