Neurocomputing 120 (2013) 597–609
Performance of different metaheuristics in EEG source localization compared to the Cramér–Rao bound

Diana I. Escalona-Vargas (a), David Gutiérrez (b,*), Ivan Lopez-Arevalo (a)

(a) Center for Research and Advanced Studies (Cinvestav) at Tamaulipas, 87130, Mexico. (b) Cinvestav at Monterrey, 66600, Mexico.
Article history: Received 16 May 2012; received in revised form 1 April 2013; accepted 15 April 2013; available online 21 May 2013. Communicated by I. Bojak.

Abstract
The problem of electroencephalographic (EEG) source localization involves an optimization process that can be solved through metaheuristics. In this paper, we evaluate the performance of simulated annealing (SA), a genetic algorithm (GA), particle swarm optimization (PSO), and differential evolution (DE) in localizing EEG dipole sources. The evaluation is performed in terms of the metaheuristics' operational parameters and the signal-to-noise ratio (SNR). Here, the objective function in the optimization problem is the concentrated likelihood function (CLF), while the Cramér–Rao bound (CRB) is proposed as the benchmark. Under these conditions, a metaheuristic's best performance is achieved when the variance of its dipole source estimates is close to the CRB. Therefore, we exhaustively evaluate the variances of the source estimates obtained through each metaheuristic for the case of realistically simulated EEG data. Our results show that changes in the metaheuristics' operational parameters introduced no significant variations in performance, especially in one-dipole estimation. For two-dipole estimation, GA and DE performed better than the other metaheuristics, but DE required a fine adjustment of its parameters in order to work. In all cases, performance decayed as the SNR decreased, while SA and PSO appeared to be very sensitive to the correlation between the sources. Overall, GA was the most attractive technique in terms of performance and computational cost.
Keywords: Electroencephalography; dipole source localization; metaheuristics; maximum likelihood; Cramér–Rao bound
1. Introduction

Localization of neural brain sources is important in neuroscience, especially in the study of cortical organization and integration [1], and in some areas of clinical neuroscience, such as preoperative planning [2] and epilepsy [3]. In electroencephalographic (EEG) data-based techniques, measurements of the electric potential on the scalp are used to infer the location of the underlying neural activity. This process is known as solving the inverse problem. The solution of the inverse problem often involves an iterative solution of the forward problem (i.e., computing the electric potentials given a current source), so it is important to have efficient analytical and numerical solutions of the forward problem in order to minimize the computational burden [4]. In general, when nonlinear optimization procedures are used to solve the EEG inverse problem, the objective function exhibits multiple local solutions, especially under low signal-to-noise ratio (SNR) conditions and in high-dimensional search spaces. Hence,
* Corresponding author. Tel.: +52 81 1156 1740x4513; fax: +52 81 1156 1741.
E-mail addresses: [email protected] (D.I. Escalona-Vargas), [email protected], [email protected] (D. Gutiérrez), [email protected] (I. Lopez-Arevalo).
0925-2312 © 2013 Elsevier B.V. All rights reserved. http://dx.doi.org/10.1016/j.neucom.2013.04.010
metaheuristic algorithms have been proposed as promising candidates for solving this kind of difficult optimization task. Some of the algorithms used for this purpose so far are simulated annealing (SA), the genetic algorithm (GA), particle swarm optimization (PSO), differential evolution (DE), and tabu search (TS). Moreover, comparative studies of these techniques for solving the inverse problem have been performed in the past; a summary of the most relevant ones is provided in Table 1, which also includes some magnetoencephalographic (MEG) data-based solutions and one based on event-related potentials (ERP). While all those studies provide valuable information regarding the performance of the individual metaheuristics, their evaluation is limited to the mean-squared error (MSE) of the localization of one or multiple dipoles and, in most cases, it is computed under very specific noise conditions. So far, a strict statistical study of these metaheuristic-based optimization methods under more realistic conditions and for different values of their operational parameters has not been performed. Furthermore, there is no agreed-upon measure for comparing the performance of different algorithms, though the absolute objective value and the number of function evaluations are two widely used measures [5]. However, a formal theoretical analysis is yet to be developed. In this work, we propose to evaluate the performance of some of these metaheuristics using the Cramér–Rao bound (CRB) as a
Table 1. Summary of different optimization algorithms used in the localization of dipole sources, listing for each study the optimization algorithm, the MSE (mm or %), the modality (EEG, MEG, or ERP), the noise level, and the number of sources. The studies summarized are Haneishi et al. [6], Gerson et al. [7], McNay et al. [8], Khosla et al. [9], Uutela et al. [10], Nagano et al. [11], Jiang et al. [12], Zou et al. [13], Li et al. [14], Sequeira et al. [15], Qiu et al. [16], Jiang et al. [17], Jiang et al. [18], Alp et al. [19], and Parsopoulos et al. [20], covering SA, SIMPLEX, GA (including clustering and hybrid variants), TS, DE, and PSO.
generalized benchmark, rather than concentrating only on the mean-squared error of a specific algorithm. The CRB provides a lower bound on the variance of any unbiased estimator of a predefined set of parameters, and it is independent of the algorithm used for the estimation [21]. Hence, the CRB establishes a limit on the performance and, from such a limit, it is possible to define confidence regions [22]. The CRB also makes it possible to evaluate the effects of variations in different parameters which, in our case, correspond to the metaheuristics' operational settings, the SNR, the number of estimated sources, and the effect of the sources being correlated. This paper is organized as follows: in Section 2, we pose the optimization problem in terms of the concentrated likelihood function (CLF) and define its CRB. In the same section, we describe the metaheuristics used throughout this paper. In Section 3, we show the applicability of the proposed evaluation method through numerical examples using simulated EEG data. Finally, in Section 4 we discuss our results, and in Section 5 we give our concluding remarks.
2. The proposed methods

In this section, we first introduce the EEG measurement model and define its corresponding CRB. Next, we provide a brief introduction to each of the metaheuristics we are interested in evaluating, with special emphasis on their operational parameters. Finally, this section explains the process by which we propose to evaluate the performance of those metaheuristics.

2.1. Measurement model

Let us consider that our EEG measurements can be represented by a classical linear measurement model with additive noise. Then, define Y_k as the matrix of measurements obtained from k = 1, 2, …, K independent experiments, using an array of m = 1, 2, …, M sensors, and acquiring t = 1, 2, …, N time samples. Under these conditions, the measurement model is given by

Y_k = A(θ) q(t) + E_k,   (1)

where q(t) is the time-varying dipole moment, θ is the vector of dipole parameters (which, in our case, corresponds to the dipole's position), A(θ) is the lead-field matrix of size M × 3 whose m-th row corresponds to the kernel vector, and E_k is the noise matrix. The kernel relates the electric potential at an observation point to the current source, the position of the m-th electrode, and the volume conductor's geometric and electrical properties (for a more detailed description, see [4,23,24] and references therein). In the case of p distinct dipoles, (1) holds with q(t) and A(θ) substituted by q(t) = [q_1(t), q_2(t), …, q_p(t)]^T and A(θ) = [A(θ_1), A(θ_2), …, A(θ_p)], respectively, where θ_1, …, θ_p are the parameter vectors of each of the p dipoles.

2.2. Concentrated likelihood function

Estimating θ and q(t) from (1) can be cast as the more general problem of estimating deterministic parameters in a Gaussian model, so the maximum likelihood (ML) technique can be used for such a task. The likelihood function of the measurements is given by

L(Y(1), …, Y(N)) = (1 / ((2π)^{MN} (σ²/2)^{MN})) exp( −(1/σ²) Σ_{t=1}^{N} (Y(t) − A q(t))^T (Y(t) − A q(t)) ),   (2)

where A = A(θ) is used for notational simplicity and σ² corresponds to the signal's variance. Thus, the log-likelihood function is

ln L = const − MN ln σ² − (1/σ²) Σ_{t=1}^{N} (Y(t) − A q(t))^T (Y(t) − A q(t)).   (3)

Furthermore, this function can be concentrated with respect to σ² and q(t) by replacing these values with their ML estimates [25]:

q̂(t) = (A^T A)^{−1} A^T Y(t),   (4)

σ̂² = (1/M) tr{ (I − A (A^T A)^{−1} A^T) R̂ },   (5)

where tr{·} is the trace operator and R̂ is a consistent estimate of the data's covariance matrix, defined as

R̂ = (1/N) Σ_{t=1}^{N} Y(t) Y(t)^T.   (6)

Inserting (4) and (5) into (3), we obtain the concentrated likelihood function (CLF)

ln L = const − MN ln F(θ),   (7)

where F(θ) is defined as

F(θ) = tr{ (I − A (A^T A)^{−1} A^T) R̂ }.   (8)
The CLF is a commonly used simplification of the ML technique, as it has the main advantage of reducing the dimension of the optimization problem. More details on the derivation of the CLF can be found in [25]. It can be shown that an unbiased estimate of θ (denoted as θ̂) can be obtained by minimizing (8), i.e.,

θ̂ = arg min_θ F(θ).   (9)
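The minimization in (9) is what each metaheuristic actually performs. A minimal numerical sketch of F(θ) from (8) follows; the random lead-field here is only a stand-in for a real forward model A(θ), and the sensor and sample counts are illustrative:

```python
import numpy as np

def clf_objective(A, R_hat):
    """F(theta) = tr{(I - A (A^T A)^{-1} A^T) R_hat}, Eq. (8)."""
    M = A.shape[0]
    # Projector onto the orthogonal complement of the column space of A
    P_perp = np.eye(M) - A @ np.linalg.solve(A.T @ A, A.T)
    return float(np.trace(P_perp @ R_hat))

# Toy usage with M = 37 sensors and N = 100 samples; the random lead-field
# is a placeholder for the forward-model matrix A(theta)
rng = np.random.default_rng(0)
A = rng.standard_normal((37, 3))
Y = rng.standard_normal((37, 100))
R_hat = (Y @ Y.T) / Y.shape[1]      # sample covariance, Eq. (6)
F = clf_objective(A, R_hat)
```

Any of the metaheuristics described below can then be wrapped around `clf_objective`, evaluating it at candidate source positions θ.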
Therefore, the CLF corresponds to the objective function in the optimization problem from which the dipole parameters will be estimated.

2.3. Cramér–Rao bound

The CRB is a theoretical lower limit on the variance that provides a benchmark against which we can compare the performance of any unbiased estimate. Some important features that make the CRB a good candidate for a performance reference in our case of study are: (i) it is independent of the algorithm used for the estimation; (ii) it is asymptotically tight, meaning that, for certain distributions, there exist algorithms that achieve the bound; (iii) it has been widely used to evaluate the efficacy of different estimation techniques and to determine their confidence regions (see, e.g., [22]). From a statistical point of view, the CRB adds certainty to the traditional performance evaluation done in terms of the MSE, as the latter only gives information about the bias of the estimator. Therefore, it is possible to guarantee the efficiency of an estimator as soon as it approaches the CRB, since its MSE will then be the lowest among all unbiased estimates. Also, in the case of biased estimators of given bias (such as those often obtained through metaheuristic-based optimization), approaching the CRB results in both an MSE and a variance below those attainable in an unbiased scenario. In our case, the CRB is defined by [26]

CRB(θ) = σ² [ Σ_{t=1}^{N} Q(t) D^T P_A^⊥ D Q(t) ]^{−1},   (10)

where P_A^⊥ = I − A (A^T A)^{−1} A^T, σ² corresponds to the signal's variance, and Q(t) is a block-diagonal matrix of size 9p × 3p given by

Q(t) = blockdiag( I_3 ⊗ q_1(t), …, I_3 ⊗ q_p(t) ),   (11)

where ⊗ is the Kronecker product; D is a matrix of size M × 3p containing the partial derivatives with respect to the desired parameters, i.e.,

D = [D_1, …, D_p]^T,   (12)

where

D_i = [ ∂A_i(θ)/∂x, ∂A_i(θ)/∂y, ∂A_i(θ)/∂z ],  for i = 1, 2, …, p,   (13)

in the (x, y, z) Cartesian coordinate system.

2.4. Metaheuristics

In this section, we briefly describe the SA, GA, PSO, and DE algorithms, with a special focus on their operational parameters.

2.4.1. Simulated annealing

SA is a stochastic simulation method, originally proposed in [27], in which the process of annealing is analogous to the process of optimization. In SA, the value of the objective function takes the role of the system's energy, and the global optimum corresponds to the "ground" energy of the system. Choosing a suitable anneal
schedule is very important in order to ensure that the SA algorithm converges to a global optimum. This includes the choice of an initial temperature (T0) and a temperature decrement rule. In this work, we evaluate two scheduling strategies:

(a) Temperature decrement rule. To avoid an impractically long computation time, the temperature decrement should be fast enough. A commonly adopted decrement rule is the following [9]:

T_l = α T_{l−1},   (14)

for l = 1, 2, …, L iterations and α < 1.

(b) Selection of initial temperature. The value of T0 affects the trade-off between the computational cost and the possibility of finding a global optimum. T0 should be high enough to "melt" the system at the beginning of the annealing; however, an overly high value will consume too much computing time. In practice, T0 is often determined by [28]

T_0 = ΔE / ln(β^{−1}),   (15)

where ΔE is the average increase in the cost function and β is a predetermined acceptance ratio.
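The two scheduling rules (14) and (15) can be sketched as follows; the one-dimensional quadratic objective and the uniform neighborhood move are illustrative stand-ins, not the EEG objective:

```python
import math
import random

def simulated_annealing(f, x0, T0, alpha=0.85, n_iter=2000, step=0.5, seed=0):
    """SA with the geometric cooling rule T_l = alpha * T_{l-1} of Eq. (14)."""
    rng = random.Random(seed)
    x, fx, T = x0, f(x0), T0
    best_x, best_f = x, fx
    for _ in range(n_iter):
        cand = x + rng.uniform(-step, step)      # simple 1-D neighborhood move
        fc = f(cand)
        # Metropolis rule: always accept improvements; accept worse moves
        # with probability exp(-(fc - fx) / T)
        if fc < fx or rng.random() < math.exp(-(fc - fx) / T):
            x, fx = cand, fc
            if fx < best_f:
                best_x, best_f = x, fx
        T *= alpha                               # Eq. (14)
    return best_x, best_f

# Initial temperature from Eq. (15): T0 = dE / ln(1/beta), with beta = 0.8
dE, beta = 2.0, 0.8
T0 = dE / math.log(1.0 / beta)
x_best, f_best = simulated_annealing(lambda v: (v - 1.0) ** 2, x0=5.0, T0=T0)
```

With geometric cooling, the acceptance of worsening moves quickly becomes negligible, so later iterations refine the current best point locally.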
2.4.2. Genetic algorithm

GA is an iterative optimization technique inspired by the mechanics of genetics and natural selection [29]. The solution candidates are described as binary sequences (chromosomes), which correspond to a generation of a certain population. Operations such as selection, crossover, and mutation are applied to these chromosomes to produce new generations with better fitness. Here, we evaluate GA taking into account the following mechanisms:

(a) Roulette-wheel selection. GA assigns to each individual a selection probability directly proportional to its fitness, so better individuals have better chances of being chosen.

(b) Two-point crossover. A new representation of the children in the next generation is created from two random crossover points taken from two available parents.

(c) Mutation. Genes in an individual are randomly perturbed using a Gaussian distribution or an adaptive feasible mutation (AFM) mechanism. In the latter, directions of mutation are randomly chosen so as to ensure that the generated individuals stay within a feasible region [30].

(d) Hybrid algorithm. This strategy combines GA with constrained nonlinear optimization.

In addition to comparing the performance of all these variations of GA, we evaluate the effect of different population sizes (Ps).

2.4.3. Particle swarm optimization

The PSO algorithm was first proposed in [31]. It is a population-based optimization technique inspired by the collective behavior of animals in swarms, such as fish schools and bird flocks. PSO operates on a population of search points that probe the search space simultaneously. Here, the population is called a swarm, and its individuals are called particles. The swarm is randomly initialized in the search space, and the particles are allowed to move with an adaptable speed so they can visit new and unexplored regions. The movement is based on each particle's own experience and the shared experience of neighboring groups of particles. The search stops as soon as an optimization criterion is attained.
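The GA mechanisms of Section 2.4.2 (roulette-wheel selection and two-point crossover) can be sketched as follows. The chromosome length, population size, and the "count the ones" fitness are illustrative assumptions; selection is shown in maximization form, so minimizing (8) would first require transforming the objective into a fitness:

```python
import random

def roulette_select(population, fitness, rng):
    """Roulette-wheel selection: pick with probability proportional to fitness."""
    total = sum(fitness)
    pick = rng.uniform(0.0, total)
    acc = 0.0
    for individual, f in zip(population, fitness):
        acc += f
        if acc >= pick:
            return individual
    return population[-1]

def two_point_crossover(parent_a, parent_b, rng):
    """Two-point crossover of equal-length binary chromosomes."""
    i, j = sorted(rng.sample(range(1, len(parent_a)), 2))
    return parent_a[:i] + parent_b[i:j] + parent_a[j:]

rng = random.Random(0)
population = [[rng.randint(0, 1) for _ in range(16)] for _ in range(6)]
fitness = [1.0 + sum(c) for c in population]   # toy fitness: number of ones
p1 = roulette_select(population, fitness, rng)
p2 = roulette_select(population, fitness, rng)
child = two_point_crossover(p1, p2, rng)
```

A mutation operator (Gaussian or AFM, as described above) would then perturb `child` before it joins the next generation.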
Fig. 1. Connection topologies implemented in this paper for PSO. (a) Full, (b) ring, (c) star, (d) mesh, (e) tree, (f) hexlattice, (g) toroidal.
In this work, the analyzed parameters for the case of PSO are:

(a) Social and cognitive learning rates. They control the magnitude of the adjustments towards the particles' personal and global best solution candidates. The learning rates enter the velocity update through the constriction coefficient

χ = 2 / | 2 − φ − sqrt(φ² − 4φ) |,

which holds for φ > 4, where φ = c1 + c2, and c1 and c2 are the social and cognitive learning rates, respectively [32].

(b) Connection topologies. They provide a set of abstract communication channels between the particles. Several studies have been performed to determine whether the neighborhood topology affects the convergence of PSO [33–35]. In this work, we implement the different connection topologies shown in Fig. 1.
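The constriction coefficient can be evaluated directly from the formula above; this short sketch computes only χ, not the full PSO loop:

```python
import math

def constriction(c1, c2):
    """chi = 2 / |2 - phi - sqrt(phi^2 - 4*phi)|, valid for phi = c1 + c2 > 4."""
    phi = c1 + c2
    if phi <= 4.0:
        raise ValueError("the constriction formula requires phi = c1 + c2 > 4")
    return 2.0 / abs(2.0 - phi - math.sqrt(phi * phi - 4.0 * phi))

chi_a = constriction(2.05, 2.05)   # phi = 4.1
chi_b = constriction(2.83, 2.83)   # phi = 5.66
```

The coefficient damps the velocity update v ← χ(v + c1 r1 (p − x) + c2 r2 (g − x)): for c1 = c2 = 2.05 it is close to 0.73, and larger learning rates damp more strongly.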
2.4.4. Differential evolution

DE is an optimization method developed in [36] that exploits the same operators as GA, but executes them in a different order: in DE, mutation and crossover modify the solution candidates before selection, the opposite of GA. DE starts with a population of random vectors, which becomes the current generation. The mutation creates a mutant vector V by adding the difference between two population vectors (weighted by a factor Γ) to a third population vector, i.e., the mutant vector is determined by V = V_3 + Γ(V_1 − V_2). The outcome vectors are then mixed with a predetermined target vector (crossover) to generate a trial vector. Finally, the trial vector is accepted for the next generation if it yields a lower objective function value (selection). The search stops as soon as a predetermined optimization criterion is attained.

To represent the different strategies used in DE, the notation DE/s1/s2/s3 was introduced in [37]. There, s1 indicates the vector to be mutated, which can be randomly chosen ("rand" strategy) or can be the vector with the lowest objective function value in the current population ("local-best"); s2 is the number of difference vectors used; and s3 denotes the crossover scheme, which can be a binomial distribution ("bin" strategy) or the case when the mutant vector is calculated by

V = V_3 + (Γ + ν(1 − Γ))(V_1 − V_2),

where ν is a uniformly distributed random number in the range [0, 1]. This last strategy is referred to as the "per-vector-dither" strategy [37,38]. Hence, the analyzed parameters in the case of DE are:

(a) Γ. It influences the diversity of the set of mutant vectors, where small values of Γ can lead to premature convergence [39,40].

(b) Crossover rate (ζ). It can be interpreted as a mutation probability that influences the convergence speed, and its value depends on the problem to be solved [39].

In addition to comparing the performance for all these parameters, we evaluate different variants of the algorithm: DE/rand/1/bin, DE/rand/1/per-vector-dither, and DE/local-best/1/bin. Finally, we also evaluate DE for different values of Ps.

2.5. Analysis of the performance

We propose to evaluate the performance of SA, GA, PSO, and DE in the optimization of the CLF given by (8) when varying their operational parameters under different noise conditions. The evaluation is performed on the solution of the inverse problem by comparing the variances of the dipoles' position estimates against the CRB. Therefore, good performance of a metaheuristic is achieved when the variance of its estimates gets close to the CRB.

3. Numerical examples

In this section, we present a series of numerical examples using simulated EEG data in which the head was modeled with the classical four-shell spherical model, as well as with a realistic model based on boundary elements.

3.1. EEG data from a classical spherical model

In this section, we present the numerical examples for estimating either one or two dipole sources using data generated with the classical spherical head model. We chose a multishell spherical model that includes four concentric layers for the scalp, skull, cerebrospinal fluid (CSF), and brain. The radii of the layers were, respectively, 100, 96, 92, and 89.7 mm. These layers were considered to be isotropic and to have homogeneous conductivities of 0.33, 0.0042, 1, and 0.33 S/m, respectively. Those values of radii and conductivities were chosen in accordance with the model proposed in [41]. The data were generated using an array of M = 37 electrodes located on the surface of the sphere modeling the scalp, with a single sensor at the top position and three rings at elevation angles of π/3, π/6, and π/9, containing, respectively, 6, 12, and 18 sensors equally spaced in the azimuthal direction. This arrangement is similar to an array made commercially by Neuroscan, Inc. (www.neuro.com).

3.1.1. One dipole

The dipole source was located (in spherical coordinates) at an elevation angle of ϑ = 0.5235 rad, an azimuth of φ = −1.2 rad, and an eccentricity of ϱ = 83 mm. The dipole's magnitude was allowed to change in time according to

q_ϑ(t) = 15 e^{−((t−60)/8)²} − 5 e^{−((t−40)/17)²} [nA m],   (16a)

q_φ(t) = 13 e^{−((t−60)/12)²} − 3 e^{−((t−40)/17)²} [nA m],   (16b)

q_ϱ(t) = 0.   (16c)
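The dipole time courses (16) and the noisy measurement model (1) can be sketched as follows. The random lead-field is a stand-in for the spherical-model forward solution, and the SNR scaling shown (mean signal power over noise variance, in dB) is an assumed convention for illustration:

```python
import numpy as np

def dipole_moment(t):
    """Tangential dipole moment components of Eq. (16), in nA m (t in samples)."""
    q_theta = 15 * np.exp(-(((t - 60) / 8) ** 2)) - 5 * np.exp(-(((t - 40) / 17) ** 2))
    q_phi = 13 * np.exp(-(((t - 60) / 12) ** 2)) - 3 * np.exp(-(((t - 40) / 17) ** 2))
    return np.vstack([q_theta, q_phi, np.zeros_like(q_theta)])   # 3 x N

rng = np.random.default_rng(1)
M, N = 37, 100
t = np.arange(N, dtype=float)
q = dipole_moment(t)                       # 3 x N
A = rng.standard_normal((M, 3))            # placeholder for A(theta)
Y_clean = A @ q                            # noiseless part of Eq. (1)

# Additive Gaussian noise scaled toward a target SNR (assumed convention)
snr_db = 0.6
noise_var = Y_clean.var() / 10 ** (snr_db / 10)
Y = Y_clean + rng.normal(0.0, np.sqrt(noise_var), size=(M, N))
```

With a real forward model in place of the random matrix, `Y` would play the role of one noisy trial Y_k.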
These equations have been used in previous studies to approximate the temporal evolution of typical evoked responses [42]. Next, we added uncorrelated random noise, distributed as N(0, σ²), with different values of σ² in order to achieve mean SNR values of −2, −1, 0.2, and 0.6 dB (i.e., ranging from high, through moderate, to low noise
Fig. 2. Average MSE in one-dipole localization as a function of SNR. (a) SA and GA; AFM refers to the case when adaptive feasible mutation was used. (b) PSO using different topologies. (c) DE, where case 1 corresponds to the experiment using the generic strategy (DE/rand/1/bin) with Γ = ζ = 0.9; case 2 corresponds to the generic strategy with Γ = 0.5 and ζ = 0.9; case 3 is the DE/rand/1/per-vector-dither strategy; case 4 corresponds to DE/local-best/1/bin.
conditions, such as those found in real EEG signals). Under these conditions, the estimates of the source position were calculated by minimizing (8) as a function of ϑ and φ (which correspond to the parameters of interest), while ϱ was kept fixed at the surface of the sphere modeling the brain cortex. This simplification is performed without loss of generality in our results, given that both (8) and (10) are computed for all Cartesian coordinates. Also note that the dipole signals can be calculated through (4) using the resulting estimates of the source position, but this is beyond the scope of our study, as it corresponds to the estimation of the linear parameters in (1), which is solved through least squares and not through metaheuristics. SA, GA, PSO, and DE were used in the estimation of the nonlinear parameters in (1) by solving the optimization problem while adjusting the metaheuristics' individual operational parameters defined in Section 2.4. The estimation process was repeated K = 100 times for each SNR value with independent noise realizations. Then, at each trial, we calculated the MSE as

MSE = sqrt( (θ_x − θ̂_x)² + (θ_y − θ̂_y)² + (θ_z − θ̂_z)² ),   (17)

where θ = [θ_x, θ_y, θ_z]^T and θ̂ = [θ̂_x, θ̂_y, θ̂_z]^T are the true and estimated Cartesian coordinates of the dipole's position, respectively. Next, we computed the standard deviations of those estimates (denoted as σ_θ̂x, σ_θ̂y, σ_θ̂z) and compared them against the CRB square roots. The CRB requires the calculation of the partial derivatives in (13), which was accomplished with symbolic algebra software (MAPLE 13). Our simulations were performed using operational values taken from the specialized literature, which have proved to work well on a wide range of problems:

For the case of SA, we varied T0 and the temperature decrement rule. The initial point of the search was ϑ_0 = 0, φ_0 = 0. We introduced upper- and lower-bound constraints on the parameters of interest in all cases. In two of the experiments, the SA initial temperatures were T0 = 50 and 100, while the temperature decrement rule was computed as T_{l+1} = T0 · 0.95^l, for l = 1, 2, …, L iterations. In the third experiment, we used (15) with β = 0.8, and ΔE was approximated from the mean value of 20 representative examples. Then, the optimization was performed using (14) with α = 0.85. Note that, for the values of α and β, we used those suggested in [9].

In GA, we tested its performance for Ps = 20, 50, 100, and 500 chromosomes. In the first experiment, we used the genetic operators described in Section 2.4.2 with a Gaussian mutation mechanism (considered as the generic GA process). In the second, we used a hybrid algorithm with a local constrained
Fig. 3. Standard deviation of θ̂ in one-dipole localization as a function of SNR. In all cases, the colors red, green, and blue indicate that the value corresponds to the standard deviation on the x-, y-, or z-axis, respectively. (a) SA, (b) GA, (c) PSO for c1 = c2 = 2.83, (d) DE. (For interpretation of the references to color in this figure caption, the reader is referred to the web version of this article.)
search based on a nonlinear multivariate algorithm. In the third experiment, we used the same selection and crossover operators as in the generic process, but changed the mutation operator to AFM (see Section 2.4.2). In all the GA experiments, the individuals within the initial population were generated randomly without restrictions.

For PSO, different connection topologies were implemented (see Fig. 1), as well as two different values of the social and cognitive rates: c1 = c2 = 2.05 and c1 = c2 = 2.83, as suggested in [32,43], respectively. The connections for the different topologies were all undirected and unweighted. Furthermore, they did not change throughout the process, and they were defined in an adjacency matrix with index numbers uniquely representing each of the particles. All experiments were performed with Ps = 50, a maximum number of function evaluations of 3000 (as in [20]), and with restrictions on the lower and upper bounds of ϑ and φ.

Finally, for DE, we performed four experiments. In the first, the generic approach of DE (DE/rand/1/bin) was used with Ps = 10, 50, 100, Γ = 0.9, and ζ = 0.9. In the second, we used the generic approach with Ps = 10, 50, Γ = 0.5, and ζ = 0.9. For the third and fourth experiments, we used DE/rand/1/per-vector-dither and DE/local-best/1/bin, respectively, with Ps = 50, Γ = 0.9, and ζ = 0.9. Those values of Γ and ζ were suggested in [14,37], respectively.

Fig. 2 shows the MSE as a function of SNR (averaged over the K = 100 trials) for the SA, GA, PSO, and DE algorithms. There, the MSE is given in millimeters and the SNR in dB. For GA in Fig. 2a, only the results corresponding to Ps = 20 are shown, as no differences were encountered for larger populations. The evaluation of the standard deviation as a function of SNR for the same metaheuristics is shown in Fig. 3, except for the case of PSO with c1 = c2 = 2.05, which was omitted as its behavior is very similar to that with c1 = c2 = 2.83 shown in Fig. 3c.

3.1.2. Two correlated dipoles

We also performed a series of numerical experiments for simulated EEG data corresponding to two dipole sources within the same four-sphere head model as in the previous section. The dipoles were located at an elevation angle of ϑ = 0.5235 rad and an eccentricity of ϱ = 83 mm, while their azimuth angles were φ = −0.6 rad and φ = 0.6 rad, respectively. Both dipoles' magnitudes were calculated with (16), i.e., q(t) = [q_ϑ(t), q_φ(t), q_ϱ(t)]^T = q_1(t) = q_2(t). Note that this implies that the sources are correlated. Again, we added uncorrelated random noise to the EEG data. Finally, the MSE and CRB were calculated in the same way as described in Section 3.1.1 for SA, GA, PSO, and DE.

For SA, GA, and PSO, we used the same criteria to select their parameters as explained in Section 3.1.1. However, in the case of GA, we only evaluated Ps = 100 chromosomes, as the one-dipole localization showed that no significant differences were introduced by increasing the population size; we chose a value of Ps larger than in the one-dipole localization to account for the increased complexity of the two-dipole search. In addition, we added lower- and upper-bound constraints on the values of ϑ and φ for all metaheuristics in all experiments. In the case of DE, the generic approach was employed, with Γ = ζ = 0.9 under all SNR conditions. Furthermore, we decided to perform a simulation (only for the case of SNR = 0.6 dB) where all possible combinations of Γ and ζ in the ranges [0.5, 1] and [0.2, 0.9], respectively, were tested in order to find
Fig. 4. Average MSE of θ̂1 in two-dipole localization as a function of SNR. (a) SA, GA, and DE (best case). (b) PSO for c1 = c2 = 2.05. (c) PSO for c1 = c2 = 2.83.
the best configuration for DE. Those ranges of values were suggested in [37]. Then, we chose the best parameters from this exhaustive search to perform additional examples over all SNR conditions with restrictions on the initial population. Finally, we performed an experiment for the best configuration in which we imposed strict bound constraints on the parameters of interest and the initial population.

Fig. 4 shows the MSE as a function of SNR for the SA, GA, PSO, and DE algorithms. There, the MSE is now given in centimeters (as the error increased ten-fold, in some cases, in comparison to estimating one dipole) and the SNR in dB. The results on the standard deviation for SA and GA are shown in Fig. 5a and b, respectively. Fig. 5c shows the results for PSO when c1 = c2 = 2.83 was used. Fig. 6a shows the results of the exhaustive search for different values of Γ and ζ in DE. From those results, we chose the values Γ = 0.5 and ζ = 0.2, and the results of using these optimized parameters in DE are shown in Fig. 6b. Note that, in all cases, only the results corresponding to one dipole are shown, as the MSE and CRB corresponding to the other dipole were very similar.

3.1.3. Efficiency versus computational cost

Up to this point, the purpose of the numerical examples was to evaluate the efficiency of the metaheuristics as a function of their operational parameters under different estimation conditions. The next series of experiments aims to find out whether the metaheuristics are capable of solving such a problem efficiently and on an acceptable timescale. Similar evaluations have been made individually (i.e., for a single metaheuristic as a function of different operational parameters) in terms of CPU time [44]. However, CPU time is influenced by the choice of hardware, programming language, and implementation style, all of which make it difficult to compare different metaheuristics between each other. Thus, we decided to measure the efficiency of the metaheuristics, again in terms of the MSE and variance of the estimates, but for a fixed number of evaluations of the objective function. This approach has been used previously in [45–47] and is well suited to our purposes, as it links efficiency to a common measure of computational cost which, in our case, corresponds to the minimization problem in (9). Therefore, in this series of numerical examples, the stopping criterion for all algorithms was to reach 1000 and 2500 evaluations of the objective function when estimating one and two dipoles, respectively. These numbers of evaluations were chosen because, in the previous experiments, they provided the metaheuristics with conditions for achieving efficient estimates. Each algorithm was run for 100 independent trials, all with SNR = 0 dB, and for different conditions of the metaheuristics' operational parameters. In all experiments, a PC with an Intel Xeon E3 quad-core processor at 3.10 GHz and 8 GB of RAM was used. Fig. 7 shows the mean MSE and standard deviations for each algorithm under the conditions previously discussed. Note that the hybrid version of the GA was left out of this evaluation since, strictly speaking, hybridizing corresponds to the addition of a secondary method for solution refinement (in our case, constrained nonlinear optimization). Such a strategy is used to improve efficiency at the expense of increased computational cost, so it cannot be included in this evaluation.
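A fixed-evaluation-budget comparison of this kind can be sketched in a few lines. This is a minimal illustration, not the paper's actual Matlab implementation: a toy quadratic objective and a plain random search stand in for the CLF minimization and the metaheuristics, and all names are hypothetical. The only point is the mechanism of counting objective-function calls and halting at the budget.

```python
import random
import statistics

def budgeted(f, budget):
    """Wrap objective f so any call past the budget raises StopIteration."""
    state = {"n": 0}
    def wrapped(x):
        if state["n"] >= budget:
            raise StopIteration
        state["n"] += 1
        return f(x)
    return wrapped

def sphere(x):
    # Toy stand-in for the concentrated likelihood function.
    return sum(v * v for v in x)

def random_search(f, dim):
    """Minimal search loop; runs until the evaluation budget is exhausted."""
    best_x, best_val = [0.0] * dim, float("inf")
    try:
        while True:
            x = [random.uniform(-5, 5) for _ in range(dim)]
            v = f(x)
            if v < best_val:
                best_x, best_val = x, v
    except StopIteration:
        pass
    return best_x

random.seed(1)
# 20 independent trials, each limited to 1000 objective evaluations,
# mirroring the one-dipole budget used in the paper's comparison.
estimates = [random_search(budgeted(sphere, 1000), 3) for _ in range(20)]
mse = statistics.mean(sum(v * v for v in x) for x in estimates)
print(round(mse, 3))
```

Any optimizer plugged into this harness sees exactly the same number of objective evaluations, which is what makes the resulting MSE and variance comparable across methods.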
Fig. 5. Standard deviation of θ^1 in two-dipole localization as a function of SNR. In all cases, the colors red, green, and blue indicate that the value corresponds to the standard deviation on the x-, y-, or z-axis, respectively. (a) SA. (b) GA. (c) PSO for c1 = c2 = 2.83. (For interpretation of the references to color in this figure caption, the reader is referred to the web version of this article.)

Fig. 6. Standard deviation of θ^1 in two-dipole localization using DE as a function of (a) different values of Γ and ζ, and (b) SNR for the case of the optimized parameters found from the analysis in (a). In all cases, the colors red, green, and blue indicate that the value corresponds to the standard deviation on the x-, y-, or z-axis, respectively. (For interpretation of the references to color in this figure caption, the reader is referred to the web version of this article.)

3.2. EEG data from a realistic head model

In this section, we present the numerical examples corresponding to the estimation of two correlated dipole sources using realistically simulated data. In this case, the forward problem was solved by applying the boundary element method (BEM) on tessellated volumes that closely approximate the geometry of the scalp, skull, and brain. Such volumes are shown in Fig. 8. The EEG potentials were calculated for 64 electrodes positioned over the scalp through the BEM forward solution implemented in the OpenMEEG software [48]. Again, noise was added to the EEG data in different amounts in order to evaluate various SNR conditions. Next, the inverse problem was solved using the head model from Section 3.1, but with layer radii of 89.11, 86.07, 82.03, and 80 mm, as these were the values that offered the best fit to our realistic head model in the least-squares sense. The purpose of solving the inverse problem using realistic EEG data while considering a spherical head model is to introduce a modeling error in the estimation which, ultimately, produces a biased estimate of our parameters of interest. This is the closest to a real-life scenario that we can evaluate; hence, our aim is to determine the performance of the best metaheuristics' parameters (as found in our previous CRB analysis) under these sub-optimal modeling conditions. Accordingly, θ^ was computed for the best optimization cases described in Section 3.1.1, while lower- and upper-bound constraints were added to the optimization problem to avoid divergence from the true solution; in this way, we could relate the bias of the estimate from each metaheuristic to its sensitivity to the modeling error. The sources were located at x = 3.97, y = −37.06, z = 102.49 mm and x = 29.74, y = 24.52, z = 103.1 mm, respectively, which correspond to regions where the discrepancy between the realistic and spherical head models is relatively small, so the bias in the estimation was not expected to be large. Fig. 9 shows the estimated dipoles' positions for two different noise conditions.
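The idea of fitting spherical layers to a realistic surface in the least-squares sense can be illustrated with a small sketch. This is not the paper's fitting code; it assumes the simplification of fixing the sphere's center at the vertex centroid (the full problem also optimizes the center), and a synthetic noisy point cloud stands in for a tessellated scalp surface.

```python
import math
import random

def fit_sphere_radius(points):
    """Least-squares radius for a sphere with center fixed at the centroid:
    given the center c, the r minimizing sum(|p - c| - r)^2 is simply the
    mean distance from c to the points."""
    n = len(points)
    c = [sum(p[i] for p in points) / n for i in range(3)]
    dists = [math.dist(p, c) for p in points]
    r = sum(dists) / n
    rms = math.sqrt(sum((d - r) ** 2 for d in dists) / n)  # residual misfit
    return c, r, rms

# Synthetic "scalp": noisy samples of an 85 mm sphere (units in mm).
random.seed(0)
pts = []
for _ in range(2000):
    u = [random.gauss(0, 1) for _ in range(3)]
    s = 85.0 / math.sqrt(sum(x * x for x in u))   # project onto the sphere
    pts.append([x * s + random.gauss(0, 1) for x in u])  # add 1 mm jitter

c, r, rms = fit_sphere_radius(pts)
print(round(r, 1))
```

Repeating such a fit per tissue boundary yields one radius per layer, analogous to the 89.11, 86.07, 82.03, and 80 mm values quoted above.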
4. Discussion

4.1. Consistency of our results

We presented the results of exhaustive performance tests for different metaheuristic methods used in solving the EEG source localization problem. In our study, we evaluated the SA, GA, PSO, and DE algorithms for the cases of one and two correlated dipoles under various noise conditions, and when the data were computed using a head modeled with spherical and realistic geometries. In general, our results show MSE values of 0.1–2 mm for the case of the spherical model (except for PSO, whose error became relatively high when estimating two correlated dipoles), whereas for the realistic model an average accuracy of 3–10 mm was obtained. These results are consistent with previously reported studies such as those shown in Table 1, as well as those in [49]. In that study, an average localization error of 10 mm was found for the case of noiseless simulated data computed from a spherical model, while the error was 2–3 mm in the case of realistically simulated data. In fact, for dipoles located in the upper part of the head, the errors reported in that study for a similar spherical model were between 4 and 5 mm, whereas for two correlated dipoles the error increased to 11.2 mm. Even though the results in [49] were obtained for various conditions, such as dipoles located at different depths, different dipole positions, different numbers and positions of the electrodes, as well as a different method to solve the inverse problem (while we used the maximum likelihood method, the Marquardt algorithm was used in [49]), our study seems consistent with those results. Nevertheless, a great number of situations could be explored in order to further validate the results, but this is far beyond the scope of our study.

4.2. General assessment of the results

The main contribution of this work is to provide an evaluation of the performance of the SA, GA, PSO, and DE algorithms based on the CRB, in such a way that a realistic assessment of the effects of varying the operational parameters of the algorithms is obtained. In our results, we observed only slight changes in the error as a function of the parameters when estimating one dipole using data obtained from the classical spherical head model (Fig. 2), while for realistically simulated data the error was approximately 3 mm without significant variations. In the case of estimating two correlated dipoles, we did not observe significant changes in the error as a function of the metaheuristics' parameters (Fig. 4), except for PSO, whose error became relatively high when its parameters were changed (Fig. 4b and c). For high SNR, the standard deviations of the estimates were close to the square-root of the CRB, without significant differences introduced by variations in the operational parameters when one dipole was estimated (Fig. 3). In the case of two correlated dipoles, we observed slight changes in the standard deviations (Figs. 5 and 6) which can be credited to the variations in the metaheuristics' parameters. The increase in the standard deviations may be explained by the increase in the number of estimated parameters, lower SNR conditions, and correlated dipoles, all of which made the objective function highly non-linear. Under those conditions, it was difficult for the metaheuristics to guarantee global optimality. Therefore, slight changes in the metaheuristics' parameters could be credited for the failure to locate the global optimum (Figs. 5a, c, and 6a).
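The comparison of empirical standard deviations against the square-root of the CRB can be illustrated with a toy estimation problem. This is a minimal sketch under a deliberately simple assumption: estimating the mean of Gaussian samples, whose CRB is σ²/N, rather than the dipole-parameter model used in the paper. The mechanism (Monte Carlo spread of an efficient estimator versus the bound) is the same.

```python
import random
import statistics

random.seed(2)
sigma, n_samples, n_trials = 2.0, 50, 500
true_mu = 1.0

# CRB for the mean of n Gaussian samples with known variance: sigma^2 / n,
# so the bound on the standard deviation is sigma / sqrt(n).
crb_sqrt = sigma / n_samples ** 0.5

# Empirical std of the ML estimator (the sample mean) over many trials.
estimates = []
for _ in range(n_trials):
    data = [random.gauss(true_mu, sigma) for _ in range(n_samples)]
    estimates.append(statistics.fmean(data))
emp_std = statistics.stdev(estimates)

print(round(emp_std / crb_sqrt, 2))
```

A ratio near 1 indicates an efficient estimator; in the paper's setting, the same ratio drifting above 1 at low SNR is what signals that a metaheuristic failed to reach the global optimum of the CLF.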
Fig. 7. MSE and standard deviation of θ^ using SA, GA, PSO, and DE with different initialization parameters, different numbers of evaluations of the objective function, and SNR = 0 dB. SA* corresponds to the case of SA when T0 = ΔE/ln(β⁻¹). For the PSO algorithm, c1 = c2 = 2.83 was used in all cases. In (b), the results only show those corresponding to θ^1 since those for θ^2 are very similar. (a) One-dipole localization (1000 evaluations). (b) Two-dipole localization (2500 evaluations).
4.3. Individual assessment of the evaluated methods

4.3.1. SA

In the case of the SA algorithm, we did not find a significant difference in performance when T0 or the decrement rule were changed in one-dipole estimation (Fig. 3a), whereas SA was more sensitive to changes in T0 for all SNR conditions when estimating two dipoles. In particular, we noticed that SA could benefit from using a very large value of T0 in this situation (see, e.g., Fig. 5a, where T0 = ΔE/ln(β⁻¹)). Unfortunately, this case is not feasible, because it is too slow for practical purposes. We also observed that SA performed poorly under low SNR conditions (solid and hollow violet circles in Fig. 9). We think that hybridizing SA with another metaheuristic might improve its efficiency.
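The initial-temperature rule T0 = ΔE/ln(β⁻¹) sets T0 so that a typical uphill move of size ΔE is accepted with probability β at the start of the annealing schedule. A minimal numeric check (ΔE and β values here are illustrative, not taken from the paper):

```python
import math

def initial_temperature(delta_e, beta):
    """T0 chosen so that exp(-delta_e / T0) = beta, i.e. an uphill move of
    size delta_e is accepted with probability beta at the first iteration."""
    return delta_e / math.log(1.0 / beta)

T0 = initial_temperature(delta_e=1.0, beta=0.8)
accept_prob = math.exp(-1.0 / T0)
print(round(accept_prob, 3))  # recovers beta = 0.8
```

As β approaches 1, ln(β⁻¹) approaches 0 and T0 diverges, which is why the "very large T0" case discussed above is too slow to be practical.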
4.3.2. GA

In the case of GA, the standard deviations did not change between using a small Ps (20 chromosomes) or larger populations (100 and 500 chromosomes) when estimating one dipole. In addition, the standard deviations were very similar for all genetic operators used and when the hybrid version was applied (Fig. 3b). Nevertheless, the results showed that GA was less sensitive to noise than SA and PSO in two-dipole estimation (Fig. 5). Furthermore, GA was outperformed by DE only after the latter was subjected to an exhaustive adjustment of its configuration parameters (Fig. 6). Finally, we observed that the hybrid GA still performed well under all SNR conditions, even when estimating two correlated dipoles (solid and hollow dark green circles in Fig. 9).
4.3.3. PSO

For the PSO algorithm in one-dipole estimation, we did not find significant differences in the standard deviations when either different connection topologies or learning rates were used (Fig. 3c). Nevertheless, the results showed that, in two-dipole estimation, PSO's best performance was achieved for c1 = c2 = 2.83 (Fig. 4b), whereas the case of c1 = c2 = 2.05 corresponded to the worst performance regardless of the topology used (Fig. 4c). In this
Fig. 8. Realistic head model. (a) Head, skull, and brain surfaces. (b) Tessellation of the brain cortex. (c) Head's surface with spheres indicating the electrodes' positions.
Fig. 9. Estimated dipoles' positions for the case of realistically simulated EEG data with different SNR values for our metaheuristics. The solid circles correspond to the case of θ^1 , while hollow circles correspond to θ^2 . (For interpretation of the references to color in this figure caption, the reader is referred to the web version of this article.)
last situation, PSO had good performance only for high SNR. Also note that the topology for which PSO achieved its best performance at high SNR in two-dipole estimation differed from the topology that achieved the best performance at low SNR. The poor performance of PSO may be explained by the connectivity of the topologies: for some topologies, the particles found it more difficult to share information and to explore the space efficiently than for others, especially when correlated dipoles were to be found.
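The learning rates c1 = c2 = 2.05 and c1 = c2 = 2.83 discussed above enter the constriction form of PSO [32] through the constriction coefficient χ. A short check of how χ changes between the two settings (the formula is Clerc and Kennedy's standard one for φ = c1 + c2 > 4; the interpretation of which setting is "better" remains as reported above):

```python
import math

def constriction(c1, c2):
    """Clerc-Kennedy constriction coefficient chi, defined for phi = c1 + c2 > 4:
    chi = 2 / |2 - phi - sqrt(phi^2 - 4*phi)|."""
    phi = c1 + c2
    return 2.0 / abs(2.0 - phi - math.sqrt(phi * phi - 4.0 * phi))

for c in (2.05, 2.83):
    print(c, round(constriction(c, c), 4))
```

The setting c1 = c2 = 2.05 gives χ ≈ 0.73 (the classic value), while c1 = c2 = 2.83 gives a much smaller χ ≈ 0.30, i.e., a more heavily damped swarm, which is consistent with the two settings behaving very differently in the two-dipole problem.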
4.3.4. DE

For the DE algorithm, we did not observe significant differences when the evolutionary operators or the population size were varied in the case of one-dipole estimation (Fig. 3d). In addition, we found that DE failed to localize two correlated dipoles, which is in agreement with the results in [14]. However, our exhaustive simulation allowed us to find a combination of parameters for which a solution was attained (Γ = 0.5, ζ = 0.2), but only for the
case when we used upper and lower constraints (Fig. 6b). Hence, our results suggest that DE can still be applied, but only through an exhaustive search for its optimal configuration parameters, which in a real-life problem might not be advisable. Finally, DE had good performance even when modeling errors were introduced, but it was not better than GA (solid and hollow red circles in Fig. 9).
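An exhaustive search over DE's control parameters of the kind described above can be sketched as follows. This is not the paper's Matlab code: a bare-bones DE/rand/1/bin and the Rastrigin function (a standard multimodal test problem) stand in for the actual optimizer and the CLF, and the grid values are illustrative.

```python
import math
import random

def de_minimize(f, bounds, gamma, zeta, pop=20, gens=100, seed=0):
    """Bare-bones DE/rand/1/bin: gamma is the differential weight and zeta
    the crossover rate, mirroring the paper's Gamma and zeta."""
    rng = random.Random(seed)
    dim = len(bounds)
    P = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop)]
    vals = [f(x) for x in P]
    for _ in range(gens):
        for i in range(pop):
            a, b, c = rng.sample([k for k in range(pop) if k != i], 3)
            j_rand = rng.randrange(dim)
            trial = []
            for j, (lo, hi) in enumerate(bounds):
                if rng.random() < zeta or j == j_rand:
                    v = P[a][j] + gamma * (P[b][j] - P[c][j])
                else:
                    v = P[i][j]
                trial.append(min(max(v, lo), hi))  # enforce bound constraints
            tv = f(trial)
            if tv <= vals[i]:
                P[i], vals[i] = trial, tv
    return min(vals)

def rastrigin(x):
    # Multimodal toy objective standing in for the concentrated likelihood.
    return sum(v * v - 10.0 * math.cos(2.0 * math.pi * v) + 10.0 for v in x)

bounds = [(-5.12, 5.12)] * 3
grid = [(g, z) for g in (0.5, 0.7, 0.9) for z in (0.2, 0.5, 0.9)]
scores = {gz: de_minimize(rastrigin, bounds, *gz) for gz in grid}
best = min(scores, key=scores.get)
print(best)
```

Each grid point costs pop × gens objective evaluations, which is exactly why such a search is expensive and, as argued above, hardly advisable in a real-life localization problem.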
4.4. Efficiency versus computational cost

We found that all algorithms are comparable in terms of solution quality in the estimation of one dipole for a fixed number of evaluations of the objective function. However, we observed that in the case of the generic GA the variance increased, which indicates that GA requires a larger number of evaluations of the objective function to attain good estimates. On the other hand, the results confirmed that GA turned out to be more attractive than SA, PSO, and DE for the case of two correlated dipoles (Fig. 7b), because it provides good estimates in comparison to the other metaheuristics for the same computational effort. We observed that SA with T0 = ΔE/ln(β⁻¹) and PSO with a full topology are feasible methods. DE may require a larger number of evaluations of the objective function in order to obtain efficient estimates in two-dipole localization.

4.5. Summary of results

Our results showed that the metaheuristics were effective in localizing one dipole. GA is particularly efficient given that the values of Ps used in the experiments were rather small. Furthermore, our results showed that the hybrid GA with local search is a suitable strategy. We also noted that GA and DE were the best algorithms for estimating two correlated dipoles. However, DE required a fine adjustment of its parameters, which renders it impractical. All methods reviewed in this paper seemed to be very sensitive to noise, while SA and PSO also seemed to be sensitive to correlated sources. Similar results were obtained when modeling errors were introduced: GA performed better than SA, PSO, and DE, and the accuracy of all of them greatly depended upon noise conditions, correlation between the sources, the dimension of the search space, and constraints. In terms of computational cost, GA had the advantage of obtaining good estimates for a fixed number of evaluations of the objective function compared to SA, PSO, and DE.

5. Conclusions

We presented a scheme to evaluate the performance of metaheuristic- and EEG data-based source localization methods using the CRB as benchmark. Our study showed that no significant variations in performance were introduced by changing the selection of the operational parameters of SA, GA, PSO, and DE in one-dipole estimation, while in general GA and DE had better performance in two-dipole estimation. However, all methods seemed to be very sensitive to the presence of noise, SA and PSO also appeared to be sensitive to the introduction of correlated sources, and DE required a fine and exhaustive adjustment of its configuration parameters. In addition, GA showed better performance when modeling errors were introduced, while DE showed good performance only when strict constraints on the search were enforced.

Acknowledgments

This work was supported by the National Council of Science and Technology (CONACyT-Mexico) under Grant 101374 and by project number 160022 from Fondo Mixto Conacyt-Gobierno del Estado de Tamaulipas.

Appendix A. Computer implementation

Here, we provide further detail on the computer implementation of the different optimization methods used throughout the paper. In all cases, we used Matlab™ as the development platform. With respect to the cost function in (8), its main component is the lead-field matrix, which can be expressed as a collection of M transposed kernel vectors (i.e., row vectors of size 1 × 3). Each of those kernel vectors is a function of the dipole's position and the mth sensor's location (see [4] for more details). Therefore, for the case of our fixed array of sensors, the computation of the kernel vectors as a function of θ was performed through the method proposed in [50]. Once we had the code that implemented (8) in the form of a function of θ, we included it as the cost function in the optimization routines, which were the following:

SA and GA. In the case of these two methods, we directly used the tools already implemented in Matlab's Optimization Toolbox™ (i.e., the SA and GA solvers), 〈http://www.mathworks.com/products/global-optimization/index.html〉.

PSO. Our implementation of this method was based on Clerc's constriction method, which is described in [32]. In order to use this method, we had to define all topologies in the form of connection matrices in which an element with a value of one at the ith row and jth column represented a connection between nodes i and j, while zero values indicated no connection between nodes. Once the connections were defined, all the topologies were constructed using a network analysis toolbox freely available at strategic.mit.edu/docs/matlab_networks/canonical_nets.m.

DE. In this case, Storn's method as described in [37] was used; an implementation of this method is freely available at www1.icsi.berkeley.edu/~storn/code.html#matl.
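The connection matrices described above for the PSO topologies are plain symmetric 0/1 adjacency matrices. A small sketch (in Python rather than the Matlab toolbox actually used) of two of the topologies evaluated in the paper:

```python
def ring_topology(n):
    """0/1 connection matrix for a ring of n particles: entry [i][j] is 1
    when particles i and j are neighbors, 0 otherwise."""
    A = [[0] * n for _ in range(n)]
    for i in range(n):
        for j in ((i - 1) % n, (i + 1) % n):
            A[i][j] = A[j][i] = 1
    return A

def full_topology(n):
    """Fully connected swarm: every pair of distinct particles is linked."""
    return [[int(i != j) for j in range(n)] for i in range(n)]

ring, full = ring_topology(6), full_topology(6)
print(sum(map(sum, ring)), sum(map(sum, full)))  # 12 30
```

In a ring, each particle shares information with only two neighbors (a sparse topology that slows the spread of the best-known position), while the full topology lets every particle see the global best immediately, which relates directly to the topology-dependent behavior discussed in Section 4.3.3.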
References

[1] B. He (Ed.), Neural Engineering, Kluwer Academic/Plenum Publishers, Boston, Massachusetts, 2005.
[2] F. Vatta, P. Bruno, P. Inchingolo, Improving lesion conductivity estimate by means of EEG source localization sensitivity to model parameter, J. Clin. Neurophysiol. 19 (2002) 1–15.
[3] J.S. Ebersole, S. Hawes-Ebersole, Clinical application of dipole models in the localization of epileptiform activity, J. Clin. Neurophysiol. 24 (2007) 120–129.
[4] J. Mosher, R. Leahy, P. Lewis, EEG and MEG: forward solutions for inverse methods, IEEE Trans. Biomed. Eng. 46 (1999) 245–259.
[5] X. Yang, Metaheuristic optimization: algorithm analysis and open problems, in: P. Pardalos, S. Rebennack (Eds.), Experimental Algorithms, Lecture Notes in Computer Science, Springer-Verlag, Berlin, Heidelberg, 2011, pp. 21–32.
[6] H. Haneishi, N. Ohyama, K. Sekihara, T. Honda, Multiple current dipole estimation using simulated annealing, IEEE Trans. Biomed. Eng. 41 (1994) 1004–1009.
[7] J. Gerson, V. Cardenas, G. Fein, Equivalent dipole parameter estimation using simulated annealing, Electroencephalogr. Clin. Neurophysiol. 92 (1994) 161–168.
[8] D. McNay, E. Michielssen, R. Rogers, S. Taylor, M. Akhtari, W. Sutherling, Multiple source localization using genetic algorithms, J. Neurosci. Methods 64 (1996) 163–172.
[9] D. Khosla, M. Singh, M. Don, Spatio-temporal EEG source localization using simulated annealing, IEEE Trans. Biomed. Eng. 11 (1997) 1075–1091.
[10] K. Uutela, M. Hämäläinen, R. Salmelin, Global optimization in the localization of neuromagnetic sources, IEEE Trans. Biomed. Eng. 45 (1998) 716–723.
[11] T. Nagano, Y. Ohno, N. Uesugi, H. Ikeda, A. Ishiyama, Multi-source localization by genetic algorithm using MEG, IEEE Trans. Magn. 34 (1998) 2916–2919.
[12] T. Jiang, A. Luo, X. Li, F. Kruggel, A comparative study of global optimization approaches to MEG source localization, Int. J. Comput. Math. 80 (2003) 305–324.
[13] L. Zou, S. Zhu, B. He, Spatio-temporal EEG dipole estimation by means of a hybrid genetic algorithm, in: Proceedings of the 26th International Conference of the IEEE EMBS, 2004, pp. 1–5.
[14] Y. Li, H. Li, R. He, L. Rao, Q. Wu, G. Xu, X. Shen, W. Yan, EEG source localization using differential evolution method, in: Proceedings of the 26th International Conference of the IEEE EMBS, 2004, pp. 1903–1906.
[15] C. Sequeira, F. Sanchez-Quesada, M. Sancho, I. Hidalgo, T. Ortiz, A genetic algorithm approach for localization of deep sources in MEG, Phys. Scripta T118 (2005) 140–142.
[16] L. Qiu, Y. Li, D. Yao, A feasibility study of EEG dipole source localization using particle swarm optimization, in: IEEE CEC, 2005, pp. 720–726.
[17] C. Jiang, J. Ma, B. Wang, L. Zhang, Multiple signal classification based on genetic algorithm for MEG sources localization, in: D. Liu, S. Fei, Z. Hou, H. Zhang, C. Sun (Eds.), Advances in Neural Networks, Lecture Notes in Computer Science, Springer-Verlag, Berlin, Heidelberg, 2007, pp. 1133–1139.
[18] C. Jiang, B. Wang, L. Zhang, Particle swarm optimization for MEG source localization, in: 3rd International Conference on PRB, 2008, pp. 73–82.
[19] Y.K. Alp, O. Arikan, S. Karakas, ERP source reconstruction by using particle swarm optimization, in: IEEE ICASSP, 2009, pp. 365–368.
[20] K.E. Parsopoulos, F. Kariotou, G. Dassios, M.N. Vrahatis, Tackling magnetoencephalography with particle swarm optimization, Int. J. Bio-Inspired Comput. 1 (2009) 32–49.
[21] H.L. Van Trees, Detection, Estimation, and Modulation Theory, Part I, John Wiley & Sons, New York, 1968.
[22] C.H. Muravchik, A. Nehorai, EEG/MEG error bounds for a static dipole source with a realistic head model, IEEE Trans. Signal Process. 49 (2001) 470–483.
[23] D. Gutiérrez, A. Nehorai, Array response kernels for EEG and MEG in multilayer ellipsoidal geometry, IEEE Trans. Biomed. Eng. 55 (2008) 1103–1111.
[24] M. Sun, An efficient algorithm for computing multishell spherical volume conductor models in EEG dipole source localization, IEEE Trans. Biomed. Eng. 44 (1997) 1243–1252.
[25] P. Stoica, A. Nehorai, MUSIC, maximum likelihood, and Cramér–Rao bound, IEEE Trans. Acoust. 37 (1989) 720–741.
[26] D. Gutiérrez, A. Nehorai, C. Muravchik, Estimating brain conductivities and dipole source signals with EEG arrays, IEEE Trans. Biomed. Eng. 51 (2004) 2113–2122.
[27] S. Kirkpatrick, C.D. Gelatt, M.P. Vecchi, Optimization by simulated annealing, Science 220 (1983) 671–680.
[28] H. Shakouri, K. Shojaee, M. Behnam, Investigation on the choice of the initial temperature in the simulated annealing: a mushy state SA for TSP, in: 17th Mediterranean Conference on Control and Automation, 2009, pp. 1050–1055.
[29] D.E. Goldberg, Genetic Algorithms in Search, Optimization, and Machine Learning, Addison-Wesley, Boston, 1989.
[30] A. Neubauer, Adaptive non-uniform mutation for genetic algorithms, in: Proceedings of the 5th Fuzzy Days International Conference, 1997, pp. 24–34.
[31] J. Kennedy, R. Eberhart, Particle swarm optimization, in: Proceedings of the IEEE International Conference on Neural Networks, 1995, pp. 1942–1948.
[32] M. Clerc, J. Kennedy, The particle swarm—explosion, stability and convergence in a multidimensional complex space, IEEE Trans. Evol. Comput. 6 (2002) 58–73.
[33] R. Mendes, J. Kennedy, J. Neves, The fully informed particle swarm: simpler, maybe better, IEEE Trans. Evol. Comput. 8 (2004) 204–210.
[34] P.N. Suganthan, Particle swarm optimizer with neighborhood operator, IEEE Trans. Evol. Comput. 6 (1999) 1958–1961.
[35] A.J. Reyes Medina, G. Toscano Pulido, J.G. Ramirez Torres, A comparative study of neighborhood topologies for particle swarm optimizers, in: International Conference on Evolutionary Computation, 2009, pp. 152–159.
[36] R. Storn, K. Price, Differential evolution—a simple and efficient heuristic for global optimization over continuous spaces, J. Global Optim. 11 (1997) 341–359.
[37] K.V. Price, R. Storn, J. Lampinen, Differential Evolution: A Practical Approach to Global Optimization, Springer, 2005.
[38] K. Mullen, D. Ardia, D. Gil, D. Windover, J. Cline, DEoptim: an R package for global optimization by differential evolution, J. Stat. Softw. 40 (2011) 1–26.
[39] D. Zaharie, Influence of crossover on the behavior of differential evolution algorithms, Appl. Soft Comput. 9 (2009) 1126–1138.
[40] D. Zaharie, Critical values for the control parameters of differential evolution algorithms, in: Proceedings of the 8th International Conference on Soft Computing, 2002, pp. 62–67.
[41] B.N. Cuffin, D. Cohen, Comparison of the magnetoencephalogram and electroencephalogram, Electroencephalogr. Clin. Neurophysiol. 47 (1979) 132–146.
[42] A. Dogandzic, A. Nehorai, Estimating evoked dipole responses in unknown spatially correlated noise with EEG/MEG arrays, IEEE Trans. Signal Process. 48 (2000) 13–25.
[43] I.C. Trelea, The particle swarm optimization algorithm: convergence analysis and parameter selection, Inf. Process. Lett. 85 (2003) 317–325.
[44] A. Jaszkiewicz, A comparative study of multiple-objective metaheuristics on the bi-objective set covering problem and the Pareto memetic algorithm, Ann. Oper. Res. 131 (2004) 135–158.
[45] J.R. Perez, J. Basterrechea, Comparison of different heuristic optimization methods for near-field antenna measurements, IEEE Trans. Antennas Propag. 55 (2007) 549–555.
[46] K. Hammouche, M. Diaf, P. Siarry, A comparative study of various metaheuristic techniques applied to the multilevel thresholding problem, Eng. Appl. Artif. Intell. 23 (2010) 676–688.
[47] J.S.H. Dominguez, G.T. Pulido, A comparison on the search of particle swarm optimization and differential evolution on multi-objective optimization, in: IEEE CEC, 2011, pp. 1978–1985.
[48] A. Gramfort, T. Papadopoulo, E. Olivi, M. Clerc, OpenMEEG: opensource software for quasistatic bioelectromagnetics, Biomed. Eng. Online 9 (2010).
[49] B. Yvert, O. Bertrand, M. Thévenet, J.F. Echallier, J. Pernier, A systematic evaluation of the spherical model accuracy in EEG dipole localization, Electroencephalogr. Clin. Neurophysiol. 102 (1997) 452–459.
[50] P. Berg, P. Scherg, A fast method for forward computation of multi-shell spherical head models, Electroencephalogr. Clin. Neurophysiol. 90 (1994) 58–64.
Diana Escalona-Vargas received the B.Sc. degree in bionic engineering from the National Polytechnic Institute (IPN) in Mexico City in 2006, and the M.Sc. degree in biomedical engineering and physics from the Center for Research and Advanced Studies (Cinvestav) at Monterrey, Mexico, in 2009. She is a Ph.D. candidate in computer sciences at Cinvestav, Tamaulipas Unit, Mexico. Her research interests are in the area of biomedical signal processing, specifically neuroelectrical signal processing with EEG and MEG data, as well as EEG signal processing for brain–computer interfaces.

David Gutiérrez (a.k.a. Dania Gutiérrez) received the B.Sc. degree (with honors) in electrical engineering from the National Autonomous University of Mexico (UNAM), Mexico, in 1997, and the M.Sc. degree in electrical engineering and computer sciences as well as the Ph.D. degree in bioengineering from the University of Illinois at Chicago (UIC), in 2000 and 2005, respectively. From March 2005 to May 2006, she held a postdoctoral fellowship at the Department of Computer Systems Engineering and Automation, Institute of Research in Applied Mathematics and Systems (IIMAS), UNAM. In June 2006, she joined the Center for Research and Advanced Studies (Cinvestav) at Monterrey, Mexico, where she is an associate professor in the area of medical sciences. Dr. Gutiérrez's research interests are in statistical signal processing and its applications to biomedicine. She is also interested in image processing, neurosciences, and real-time algorithms. Dr. Gutiérrez is a former Fulbright Scholar and has been a fellow of the National Council for Science and Technology (CONACYT), Mexico, since 1998.

Ivan Lopez-Arevalo is an associate professor in the Information Technology Laboratory of Cinvestav-IPN at Ciudad Victoria Campus (Mexico). He received his Ph.D. degree in computing from the Technical University of Catalonia (Barcelona, Spain) in 2006. His research interests include data mining, text mining, as well as knowledge representation and management.