ARTICLE IN PRESS Engineering Applications of Artificial Intelligence 22 (2009) 317–328
Parameter extraction for PSP MOSFET model using hierarchical particle swarm optimization

R.A. Thakker, M.B. Patil, K.G. Anil
Department of Electrical Engineering, Indian Institute of Technology, Mumbai 400076, India
Article history: Received 25 November 2007; received in revised form 28 April 2008; accepted 13 July 2008; available online 28 August 2008.

Keywords: MOSFET parameter extraction; particle swarm optimization; hierarchical particle swarm optimization; memory loss; local search; adaptive inertia; PSP MOSFET model; genetic algorithm.

Abstract

The particle swarm optimization (PSO) algorithm is applied to the problem of MOSFET parameter extraction for the first time. It is shown to perform significantly better than the genetic algorithm (GA). Several modifications of the basic PSO algorithm have been implemented: (a) hierarchical PSO (HPSO), in which particles are hierarchically arranged and influenced by the positions of the local and global leaders; (b) a memory loss operation, due to which a particle forgets its past best position; (c) an intensive local search, in which the solution space around the global leader is searched with a high resolution; and (d) adaptive inertia, which causes the inertia of the particles to change adaptively, depending on the fitness of the population. It is demonstrated that the above features improve the performance of the basic PSO algorithm both for the MOSFET parameter extraction problem and for benchmark functions.

© 2008 Elsevier Ltd. All rights reserved.
Corresponding author: R.A. Thakker. Tel.: +91 9869876883; e-mail: [email protected]. doi:10.1016/j.engappai.2008.07.001

1. Introduction

Circuit simulation is an indispensable element of the modern circuit design process. The accuracy of circuit simulation depends on how accurately the various mathematical models capture the behavior of circuit elements, especially the MOS transistors. MOSFET (metal–oxide–semiconductor field-effect transistor) models have become very complex with the down-scaling of technology to the sub-100-nm regime. Model parameters need to be accurately extracted to predict the behavior of MOSFET circuits precisely. In the following, the MOSFET parameter extraction problem is described. The drain current in an MOS transistor can be represented as

I_d = f(V_bs, V_gs, V_ds, P_1, P_2, ..., P_n),  (1)

where V is the voltage, and the subscripts b, g, d, and s stand for the bulk, gate, drain, and source terminals of the device, respectively. P_1, P_2, ..., P_n denote the parameters of the device, which include (a) physical parameters such as the gate oxide thickness (t_ox), substrate doping density (N_A), polysilicon doping (N_P), etc., and (b) parameters that account for physical effects such as mobility, channel-length modulation (CLM), and drain-induced barrier lowering
(DIBL). The task of parameter extraction involves the determination of the parameters P_1, P_2, ..., P_n, given a set of measured quantities, e.g., I_d versus V_ds for fixed values of V_gs and V_bs, so that the values of I_d predicted by Eq. (1) closely match the actual experimental measurements. For example, in the MOS LEVEL 1 model, the drain current I_d in the linear and saturation regions is described by

I_d = K_n ((V_gs - V_T) V_ds - V_ds^2 / 2)(1 + λ V_ds),  V_gd ≥ V_T (linear region),  (2)

I_d = (K_n / 2)(V_gs - V_T)^2 (1 + λ V_ds),  V_gd < V_T (saturation region),  (3)

where V_T = V_T0 + γ (sqrt(V_sb + Φ_F) - sqrt(Φ_F)). This model is valid for a long-channel device with a channel length of more than 1 μm. The parameters for this model, K_n, V_T0, γ, and Φ_F, can be extracted from I_d–V_g data at low V_ds (50 mV) and different V_bs. Here, K_n, V_T0, Φ_F, and γ are the transconductance parameter, threshold voltage (at V_bs = 0), surface potential, and body bias parameter, respectively. These parameters are extracted at a low V_ds (50 mV) to avoid the effect of the CLM parameter λ. The parameter λ is extracted from I_d–V_d data for V_gs > V_T. The block diagram for parameter extraction is shown in Fig. 1. The optimizer module minimizes the error between measured data and model-generated values using a suitable optimization algorithm.
Fig. 1. Device parameter extraction block diagram.
The error function can be defined as

Error = sqrt( (1/K) Σ ((I_d^exp - I_d^model) / I_d^exp)^2 ),  (4)

where K is the number of measured (experimental) data points, I_d^exp represents the experimental data, and I_d^model denotes the model-generated values. For the LEVEL 1 model, the model-generated values of I_d depend on the parameters K_n, V_T0, γ, Φ_F, and λ. The task of the optimizer is to extract these parameter values so that the error defined in Eq. (4) becomes smaller than a specified value. Basically, the MOSFET parameter extraction problem is a multi-dimensional continuous optimization problem with many local minima (Watts et al., 1999). The complexity of MOS transistor models has increased for deep-submicron devices due to various physical effects, and a large number of parameters are required to represent the device behavior accurately. For example, in the LEVEL 1 model above, only one parameter, λ, is used to model the CLM effect. In comparison, four parameters (ALP, ALP1, ALP2, VP) are used to capture the same effect in the PSP (Pennsylvania surface potential) model, which is valid down to sub-100-nm technology and is considered in this work. The increased complexity of MOSFET models has made the parameter extraction problem very challenging, thus requiring an efficient global optimization algorithm.

A brief review of methods that have been used in the past for MOSFET parameter extraction follows. Gradient-based methods, such as the Levenberg–Marquardt method (Ward and Doganis, 1982; Doganis and Scharfetter, 1983) and the modified Gauss method with steepest descent (Yang and Chatterjee, 1983), are commonly used to extract MOSFET parameters. However, these methods require a very good initial guess for the parameter values, which is difficult to obtain for present MOSFET models. Also, these methods may fail due to singularities in the objective function and redundancy of parameters.
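For illustration (this is our sketch, not the authors' code; the function name and argument types are our own), the RMS relative-error metric of Eq. (4) can be written as:

```python
import math

def rms_relative_error(i_exp, i_model):
    """Eq. (4): RMS of the relative error between measured and
    model-generated drain currents over K data points."""
    if len(i_exp) != len(i_model) or not i_exp:
        raise ValueError("need two equal-length, non-empty data sets")
    k = len(i_exp)
    # Sum of squared relative errors, normalized by the measured current
    s = sum(((e - m) / e) ** 2 for e, m in zip(i_exp, i_model))
    return math.sqrt(s / k)
```

An optimizer then searches the parameter space so that this error falls below the specified tolerance.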
Various global optimization methods, which do not suffer from the difficulties associated with gradient-based algorithms, have been investigated and reported in the literature for the MOSFET parameter extraction problem. Fast simulated diffusion and simulated annealing were explored to extract MOS LEVEL 3 parameters in Sakurai et al. (1992) and Gowda et al. (1994), respectively. The use of the genetic algorithm (GA) for parameter extraction of various state-of-the-art MOSFET models has recently been reported in the literature. In Li (2007), an application of the GA to BSIM3v3 model parameter extraction has been described, with a ‘‘renew operator’’ to maintain diversity in the population. The parameter extraction for a surface potential-based MOSFET model using a GA with a ‘‘niching operator’’ and a non-dominated sorting procedure is discussed in Keser and Joardar (2000). Parameter extraction for BSIM3 and HISIM
MOSFET models using GA is discussed in Watts et al. (1999) and Murakawa et al. (2005), respectively. The hybridization of the GA with a least-squares fit method to enhance local search is used to extract BSIM3v3 parameters in Antoun et al. (2003). The particle swarm optimization (PSO) algorithm proposed by Kennedy and Eberhart (1995) is a recent stochastic algorithm which mimics the behavior of birds flocking in search of food. The PSO algorithm has been shown to perform better than the GA in terms of accuracy and speed for many benchmark functions (Bergh and Engelbrecht, 2004; Riget and Vesterstroem, 2002). It has also been observed to give better accuracy in various applications, such as recurrent network design (Juang, 2004), optimal reactive power dispatch (Zhao et al., 2005), and optimization of electromagnetic devices (Ho et al., 2006). It is the purpose of this paper to explore the effectiveness of the PSO algorithm for the MOSFET parameter extraction problem and to compare it with the GA for devices fabricated using 65-nm technology. The PSP model (Gildenblat et al., 2006a), which is representative of advanced MOS models used in the semiconductor industry, has been considered. In addition to the parameter extraction for the PSP MOSFET model, some new features are proposed for the PSO algorithm. This paper is organized as follows. In Section 2, the GA and PSO algorithms are described, along with some modifications demonstrated in the literature. The implementation of hierarchical PSO and some novel features to improve its performance are discussed in Section 3. An overview of the PSP MOSFET model and the parameter extraction strategy used in this work is given in Section 4. Results on benchmark functions and parameter extraction for the PSP MOSFET model are presented in Section 5. Finally, conclusions are presented in Section 6, along with some issues that need to be addressed in the future.
2. GA and PSO

In this section, the basic features of the GA and the PSO algorithm are reviewed, along with the improvements suggested by various authors.

2.1. Genetic algorithm

The GA was originally developed by Holland (Goldberg, 1989), with inspiration from biological evolution. In this algorithm, a population of ‘‘chromosomes’’ is created in which each chromosome is assigned n values for the n variables involved in the problem. The chromosomes undergo genetic operations, viz., selection, crossover, and mutation, so as to improve their fitness (i.e., closeness to the solution). In the following, the implementations of these operations are described.

2.1.1. Selection

There are various methods for the selection operation. In the tournament selection method, implemented in this work, a random number is generated and compared with a fixed number (typically 0.75). If the random number is smaller, the fitter chromosome is selected; otherwise, the other chromosome is selected.

2.1.2. Crossover

The implementation of the crossover operation depends on the representation used for the chromosomes. In our implementation, a chromosome consists of a set of MOSFET parameters (real numbers) arranged in a certain order. The two-point crossover operation is implemented, as illustrated in Fig. 2(a). In this
example, there are seven parameters, P_1, P_2, ..., P_7. The crossover points are denoted by X and Y, which divide the chromosomes into three segments. The middle segment, consisting of P_3, P_4, and P_5 of the chromosomes A and B, is exchanged, giving rise to new chromosomes A′ and B′.

Fig. 2. Pictorial representation of genetic operations implemented in this work. (a) Two-point crossover operation between chromosomes A and B, resulting in new chromosomes A′ and B′. (b) Mutation operation on chromosome C resulting in chromosome C′.

2.1.3. Mutation

The mutation operation is described in Fig. 2(b). A randomly chosen parameter (P_3 in the figure, denoted by Z) is mutated, i.e., it is assigned a new value which is randomly selected from the applicable range for P_3. Both crossover and mutation operations are performed with certain probabilities. Typically, the mutation probability is chosen to be much smaller than the crossover probability (Goldberg, 1989). Various modifications of the basic GA, such as the compact GA (Harik et al., 1999; Ahn and Ramakrishna, 2003) and the cooperative coevolutionary GA (Potter and Jong, 1994), have been reported in the literature. In this paper, the basic GA, which has been found to be efficient in a variety of applications, is considered, with the crossover and mutation operations implemented as discussed earlier. This implementation will be referred to as ‘‘GA’’ in the following.

2.2. Particle swarm optimization (PSO)

This algorithm uses a cooperative approach among randomly generated ‘‘particles’’ to find the globally optimum solution. The implementation of the basic PSO algorithm is shown in Fig. 3(a). For a problem with n variables, x_1, x_2, ..., x_n, a population of particles is initially generated by randomly assigning positions and velocities to each particle for each dimension. If the position and velocity of the ith particle are denoted by x_i and v_i, respectively, then

x_i = (x_i^1, x_i^2, ..., x_i^n),  (5)

v_i = (v_i^1, v_i^2, ..., v_i^n).  (6)

Each particle in the population is a candidate for the solution, and the particles are moved towards the fittest particle (i.e., the one closest to the solution) in the PSO algorithm. In this process, the algorithm finds better solutions and, over time, is expected to reach the desired solution. Each particle keeps in memory the best position attained by it during its trajectory. The velocity of a particle is updated on the basis of three vectors (see Fig. 3(b)): (i) the particle's own velocity (vector A), (ii) the displacement of the particle from its best position (vector B), and (iii) the displacement of the particle from the fittest (globally best) particle (vector C). The particle moves in a direction which is a weighted addition of these three vectors. The velocity update of a particle at time (t + Δt) is mathematically represented as follows:

v_i(t + Δt) = w v_i(t) + p_1 r_1 (x̄_i - x_i) + p_2 r_2 (x̄_g - x_i),  (7)
where i is the particle index, t is the time, and Δt is the time interval over which the velocity is updated. In actual implementation, t is generally taken to be the same as the iteration number, and therefore Δt is numerically equal to 1. The multiplicative constants w, p_1, and p_2 are parameters of the PSO algorithm, and r_1 and r_2 are random numbers uniformly distributed in the range [0, 1]. The random numbers r_1 and r_2 give the algorithm its stochastic nature. The constant w is called the ‘‘inertia’’ of the particle, since it represents the influence of the previous velocity of the particle on its new velocity. The position x̄_i represents the fittest position of the ith particle up to time t, and x̄_g is the fittest position of the globally best particle at time t. It has been shown in Clerc and Kennedy (2002) that the choices w = 0.7298 and p_1 = p_2 = 1.49618 ensure convergence of the PSO algorithm. The velocities computed with Eq. (7) are used to move the particles as

x_i(t + Δt) = x_i(t) + v_i(t + Δt) Δt.  (8)

If any component x_i^j of x_i goes beyond the specified minimum and maximum values, it is brought within the specified range by randomly assigning a new position between the two limits. The PSO algorithm provides fast convergence because of the cooperative approach among the particles. However, the algorithm could fail or become inefficient if the particles get trapped in local minima. One effective way of avoiding this premature convergence to local minima is to enhance the diversity of the population. Several alternatives have appeared in the literature to achieve this objective:
(a) In Krink et al. (2002) and Monson and Seppi (2006), the particles are treated as spheres with a finite radius, and the collisions and bouncing among particles help to alter their positions and velocities significantly, thus leading to an increased diversity of the population.
(b) In addition to attraction among particles, the feature of repulsion is introduced in the attractive–repulsive PSO of Riget and Vesterstroem (2002) to increase the diversity of the population.
(c) In ‘‘gregarious PSO’’ (Pasupuleti and Battiti, 2006), aggressive exploitation of the space around the globally best particle is carried out; if a solution is not found, the positions and velocities of the particles are reinitialized.
(d) A PSO algorithm with time-varying acceleration coefficients (p_1 and p_2 in Eq. (7)) has been proposed in Ratnaweera and
begin
  /* Initialization */
  randomly initialize the population of particles for position and velocity
  for i = 1 to number of particles do
    calculate fitness of particle
    initialize particle's own best position to current position
    update global best position
    if (termination criteria = TRUE)
      exit
    end if
  end for
  /* Iterative process */
  do
    for i = 1 to number of particles do
      update velocity of particle
      calculate fitness of particle
      update particle's own best and global best positions
    end for
  while (termination criteria = FALSE)
end
Fig. 3. (a) The basic PSO algorithm, (b) two-dimensional representation of the three components involved in velocity update of particles.
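The pseudocode of Fig. 3(a), together with the velocity and position updates of Eqs. (7) and (8) and the boundary re-randomization described above, can be sketched in Python as follows (our illustration, not the authors' code; the population size, iteration count, and test function are arbitrary choices):

```python
import random

def pso(f, bounds, n_particles=40, w=0.7298, p1=1.49618, p2=1.49618,
        max_iter=200, seed=0):
    """Basic (g-best) PSO minimizing f over a box given by `bounds`."""
    rng = random.Random(seed)
    n = len(bounds)
    # Initialization: random positions and velocities in each dimension
    pos = [[rng.uniform(lo, hi) for (lo, hi) in bounds] for _ in range(n_particles)]
    vel = [[rng.uniform(-(hi - lo), hi - lo) for (lo, hi) in bounds]
           for _ in range(n_particles)]
    pbest = [p[:] for p in pos]            # each particle's own best position
    pbest_val = [f(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]

    for _ in range(max_iter):
        for i in range(n_particles):
            for j in range(n):
                r1, r2 = rng.random(), rng.random()
                # Eq. (7): inertia + cognitive + social components
                vel[i][j] = (w * vel[i][j]
                             + p1 * r1 * (pbest[i][j] - pos[i][j])
                             + p2 * r2 * (gbest[j] - pos[i][j]))
                pos[i][j] += vel[i][j]      # Eq. (8) with Δt = 1
                lo, hi = bounds[j]
                if not (lo <= pos[i][j] <= hi):
                    # out-of-range component: re-randomize within the limits
                    pos[i][j] = rng.uniform(lo, hi)
            val = f(pos[i])
            if val < pbest_val[i]:          # update particle's own best
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:         # update global best
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val
```

For example, `pso(lambda x: sum(c*c for c in x), [(-5.0, 5.0)] * 2)` minimizes a two-dimensional sphere function.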
Halgamuge (2004). In this method, p_1 is varied from 2.5 to 0.5, while p_2 is simultaneously varied from 0.5 to 2.5 as the iterations proceed. This approach provides exploration in the initial iterations and exploitation in the final iterations.
(e) In the HPSO algorithm (Janson and Middendorf, 2005), the population is divided into several groups, each with a ‘‘local leader.’’ The velocity update of a particle is significantly influenced by the position of the local leader of its group, which results in increased diversity.
Some other population topologies have been studied in Kennedy and Mendes (2002), where the von Neumann topology was observed to be the most consistent in performance. In this paper, the HPSO algorithm has been adopted for MOSFET parameter extraction, and the effect of some novel modifications on its performance is reported.
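The linear time variation of the acceleration coefficients in scheme (d) can be sketched as follows (a minimal illustration; the endpoint values 2.5 and 0.5 are those quoted above, while the function name and defaults are our own):

```python
def tvac_coefficients(t, t_max, p1_i=2.5, p1_f=0.5, p2_i=0.5, p2_f=2.5):
    """Linearly vary p1 (cognitive) down and p2 (social) up with iteration t."""
    frac = t / t_max                      # fraction of the run completed
    p1 = p1_i + (p1_f - p1_i) * frac      # 2.5 -> 0.5: exploration early
    p2 = p2_i + (p2_f - p2_i) * frac      # 0.5 -> 2.5: exploitation late
    return p1, p2
```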
3. Hierarchical particle swarm optimization (HPSO) In the basic PSO, which is called the globally best or g-best PSO algorithm (Kennedy and Mendes, 2002; Janson and Middendorf, 2005), the position of the globally best particle is used in updating the particle velocities. In the HPSO algorithm, the particles are divided into groups, each group having a ‘‘local leader.’’ This feature of the HPSO algorithm enables better exploration of the search space. In the following, the organization of particles in HPSO and some modifications of the basic HPSO algorithm are described.
3.1. Arrangement of particles in HPSO

Let us assume that the total number of particles is N. In HPSO, these particles are first arranged in ascending order according to their fitness, with the globally best particle (the ‘‘global leader’’) at position N. The next M best particles are designated as the ‘‘local leaders.’’ The remaining (N - M - 1) particles (the ‘‘generic particles’’) are divided into M groups, and the M local leaders are assigned to the M groups (see Fig. 4). The generic particles follow their local leader, and the local leaders follow the global leader. For ease of implementation, the branching degree (the number of particles under a leader) is kept constant in this work. Any particle is allowed to become a local leader or the global leader, depending upon its fitness.
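The arrangement described above can be sketched as follows (our illustration; here ''fitness'' is an error to be minimized, so lower values are fitter, and the grouping rule — round-robin assignment for a constant branching degree — is an assumption, since the paper does not specify how generic particles are distributed):

```python
def arrange_hierarchy(fitness, n_leaders):
    """Split particle indices into global leader, local leaders, and groups,
    ordered by fitness (lower = fitter, as for an error function)."""
    order = sorted(range(len(fitness)), key=lambda i: fitness[i])
    global_leader = order[0]
    local_leaders = order[1:1 + n_leaders]
    generic = order[1 + n_leaders:]
    # Distribute the remaining (N - M - 1) generic particles over the
    # M groups round-robin, giving a constant branching degree.
    groups = {leader: [] for leader in local_leaders}
    for k, particle in enumerate(generic):
        groups[local_leaders[k % n_leaders]].append(particle)
    return global_leader, local_leaders, groups
```

After each iteration the particles are re-sorted, so any particle can become a local or global leader, as stated above.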
3.2. Memory loss (ML) operation The strength of the PSO algorithm is that each particle has a memory in the sense that it remembers the best position attained by it during its trajectory. However, in some cases, this feature makes the population vulnerable to getting stuck in local minima. In the GA, the mutation operation is generally used to bring the population out of a local minimum. The mutation operator has also been applied on the particle’s velocity vector in the PSO algorithm (Ratnaweera and Halgamuge, 2004) and was found to be useful. In this work, the mutation operator has been used to perform a ‘‘ML’’ operation, as follows.
In each iteration, the local leaders are made to undergo the ML operation with a small probability (typically 1%). A random number is generated for each local leader, and if it is less than 0.01, a randomly selected component of its past best position (i.e., its memory) is replaced by a new random value. The following comments may be made about the ML operation.

(1) Due to the ML operation, the fitness of the local leader may reduce, in which case it gets replaced by the fittest generic particle of the corresponding group, thus changing the orientation of the search process and leading to the exploration of a different search region for a better solution.
(2) The ML operation is expected to continuously add diversity to the population at a small rate, thus possibly preventing the algorithm from getting stuck in a local minimum.
(3) From tests conducted on well-known multi-dimensional benchmark functions, it was observed that, when the PSO algorithm was stuck in a local minimum, only a small number of the position components were far from the solution. This observation motivated the choice made in our implementation, viz., only one of the components of the position is changed (i.e., allowed to undergo the ML operation) at a time. If the selected component happens to be the one with a large discrepancy with respect to the solution, the ML operation can be expected to be very effective in bringing the population out of a local minimum over a few iterations.

3.3. Local search (LS) operation

As pointed out in Schutte and Groenwold (2005), the PSO algorithm is not very effective in terms of refinement of the solution. This can be improved by an enhanced local search (Ho et al., 2006) in which the velocity update of the global leader is performed with the acceleration coefficients p_1 and p_2 in Eq. (7) set to zero, i.e., the velocity update of the global leader is governed only by the inertia term. In this work, the following modifications of the strategy reported in Ho et al. (2006) were implemented.

(1) The inertia parameter (w in Eq. (7)) determines the steps taken in the velocity update operation. A large inertia does not allow effective exploration of the immediate neighborhood. A smaller inertia parameter was therefore used in our implementation during local search; an inertia value equal to one-fifth of the value of w for generic particles was found to be efficient.
(2) The local search operation used in Ho et al. (2006) is illustrated in Fig. 5(a). The particle is initially at position A. Its velocity can get updated in two different ways, as shown in the figure: (a) the new positions are on the same side of the local minimum (A → B → C in the figure); (b) the new position of the particle has crossed the minimum (A → B′ in the figure). In the second case, one more step will take the particle to C′, i.e., further away from the minimum, which is undesirable. To circumvent this difficulty, the approach illustrated in Fig. 5(b) is implemented in our work. If the fitness of the particle improves in a given PSO iteration but degrades in the next PSO iteration (see A → B → C in the figure), the new position (C in the figure) is not accepted. Instead, the velocity of the particle is inverted at B and the position update is carried out. This brings the particle to a fitter position (C′ in the figure).
Fig. 4. Arrangement of particles in HPSO.

Fig. 5. Local search operation in HPSO: (a) earlier scheme (Ho et al., 2006) and (b) the scheme implemented in this work.

3.4. Adaptive inertia

The inertia parameter (w in Eq. (7)) is used in updating the velocity of a particle in the PSO algorithm. A larger value of inertia favors exploration, whereas a smaller value favors exploitation. In order to explore and exploit the search space more effectively, the inertia parameter should be assigned a sufficiently large value at the beginning and should be made smaller as the solution is approached. The most commonly used approach for varying the inertia is the linear update approach, first proposed by Shi and Eberhart (1998a), and given by

w(t) = (w_i - w_f) (t_max - t) / t_max + w_f,  (9)
where w_i and w_f represent the initial and final values of w, respectively, t is the current iteration number, and t_max is the maximum number of iterations. The above approach does not take into account the fitness of the population. In this work, a new ‘‘adaptive inertia’’ (AI) approach is proposed in which the particle inertia is updated dynamically, based on the fitness of the population rather than on the current iteration number. The new scheme is given by

w(t) = (w_i - w_f) log(f(x̄_g(t)) / f_0) / log(f(x̄_g(0)) / f_0) + w_f,  (10)

where x̄_g(0) and x̄_g(t) are the positions of the global leader at time 0 and t, respectively, and f(x) is the function value for position x. The parameter f_0 may be considered as the ‘‘reference’’ or ‘‘goal’’ value for the function being optimized. In the MOS parameter extraction problem, f_0 is computed as follows. Let the tolerance for the relative error for any bias condition be ε. Then f_0 = sqrt(Σ ε²), where the summation is over all data points. If the number of data points is K, f_0 is simply sqrt(K) ε. For the benchmark functions considered in this paper, f_0 is taken as the minimum or maximum value of the function (as applicable) from the literature. It should be noted that, for a minimization (maximization) problem with no prior knowledge of the minimum (maximum) value, the AI scheme (Eq. (10)) cannot be used. However, for many practical problems, this restriction does not apply. At t = 0, f(x̄_g(t)) = f(x̄_g(0)) and therefore w = w_i. As the fitness of the global leader improves, f(x̄_g(t)) will approach f_0, and w will progress towards w_f. Note that, in this scheme, the maximum number of iterations can be independently specified as a ‘‘large’’ number, without any impact on the inertia. This is a very convenient feature from the implementation point of view as well.
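The adaptive inertia of Eq. (10) can be sketched as follows (our illustration; the function name and default w_i, w_f values are assumptions, with w_i = 0.7298 and w_f = 0.4 taken from the inertia range discussed later in the paper):

```python
import math

def adaptive_inertia(f_best_now, f_best_init, f_goal, w_i=0.7298, w_f=0.4):
    """Eq. (10): w moves from w_i towards w_f as the global leader's
    fitness f_best_now approaches the goal value f_goal (minimization)."""
    num = math.log(f_best_now / f_goal)   # progress made so far
    den = math.log(f_best_init / f_goal)  # total distance to the goal
    return (w_i - w_f) * num / den + w_f
```

At t = 0 the two logarithms are equal and w = w_i; when the leader reaches the goal the numerator vanishes and w = w_f, matching the limiting behavior described above.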
4. PSP MOSFET model and parameter extraction strategy

Several advanced MOSFET models are currently in use for circuit simulation. The PSP MOSFET model (Gildenblat et al., 2006b), which has approximately 70 parameters to describe the behavior of a single device, is considered in this work. As discussed in Section 1, it is generally not possible to extract all of the desired parameters in one optimization step, because the current–voltage (I–V) characteristics in different regions of device operation are affected by different sets of parameters. For this reason, the parameters have to be extracted in several extraction steps. This step-by-step procedure will be referred to as the ‘‘parameter extraction strategy’’ in the following. There are three types of parameters in the PSP model:

(a) Geometry-independent parameters (e.g., the oxide thickness, t_ox).
(b) Geometry-dependent (local) parameters whose values depend on the device width (W) and length (L) (e.g., the velocity saturation parameter, vsat).
(c) Interpolation parameters, which can be used to compute the local parameters (in (b)) for given device dimensions W and L using an interpolating formula.

This work is restricted to parameters of types (a) and (b); the extraction of the interpolation parameters is currently under investigation. The 34 parameters which affect the DC I–V characteristics are extracted in this work. Of these, 16 parameters are geometry independent (i.e., type (a) above). The extraction steps used are as follows. I–V characteristics of devices with different widths (W_1, W_2, W_3) and different lengths (L_1, L_2, ..., L_8) are measured, with W_1 being the smallest width and L_1 being the smallest length. For a given W and L, the following steps describe our extraction procedure.

(1) Flat-band voltage and doping related parameters are extracted from either C_g–V_g data (V_bs = V_ds = 0 V, V_gs = -1.5 to 1.5 V) or I_d–V_g data (V_ds = 0.05 V; V_bs = 0, -0.45, -0.9 V).
(2) Mobility, series resistance, and body-bias dependent parameters are extracted from I_d–V_g data with V_ds = 0.05 V and V_bs = 0, -0.45, -0.9 V.
(3) DIBL and below-threshold CLM parameters are extracted from the subthreshold region of I_d–V_g data for V_ds = 0.9 V; V_bs = 0, -0.45, -0.9 V.
(4) CLM and velocity saturation parameters are extracted from I_d–V_d data with V_gs = 0.4, 0.65, 0.9 V and V_bs = 0, -0.45, -0.9 V.
(5) Parameters extracted in step 3 are refined using a narrower range for the relevant parameters.
(6) Gate leakage parameters are extracted from I_g–V_g data (V_bs = 0 V; V_ds = 0, 0.45, 0.9 V).

In Table 1, the PSP parameters considered in this work are listed along with the measurement data, extraction step, allowed range, and devices used to extract them. The parameter extraction begins with the device with W = W_3 and L = L_8, using steps 1–6 described above. Next, the parameters for the device with W = W_3 and L = L_1 (shortest channel) are extracted, keeping the type (a) parameters fixed at the values obtained previously. This procedure is repeated for the other devices.

5. Results and discussion

In Section 3, some novel features, viz., memory loss (ML), local search (LS), and adaptive inertia (AI), were proposed for the HPSO algorithm. These abbreviations are used in the following; thus, HPSO (ML, LS), for example, means the HPSO algorithm with the memory loss and local search features.

5.1. Benchmark functions

In order to assess the effectiveness of the proposed algorithm and the novel features, it is appropriate to test them on standard benchmark functions, which are specially designed and routinely used to evaluate the performance of global optimization algorithms. Five such commonly used benchmark functions are given in Table 2. The first four functions are multi-modal, and the fifth function is unimodal. The dimension, initialization range, search space, and target to be attained for each function are given in Table 3. The parameters in Table 3 are taken from Ahn and Ramakrishna (2003), Ratnaweera and Halgamuge (2004), and Pasupuleti and Battiti (2006). The initialization range used to generate the population for each function is selected so as to exclude the globally optimum solution.
5.1.1. Algorithm parameters and performance measures

The population for the PSO algorithm was taken to be 40, and the maximum number of iterations (t_max) was set to 5000. The acceleration coefficients were chosen as p_1 = p_2 = 1.49618 (Clerc and Kennedy, 2002). In HPSO, the population was taken to be 41 (one global leader, 10 local leaders, and 30 generic particles). The mutation probability for the ML operation was set to 0.01.
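Table 2 itself is not reproduced in this excerpt. As an illustration of the kind of functions involved, two widely used benchmarks — the unimodal sphere function and the multi-modal Rastrigin function (whether these coincide with f_1–f_5 of Table 2 is an assumption on our part) — can be written as:

```python
import math

def sphere(x):
    """Unimodal benchmark: global minimum 0 at the origin."""
    return sum(c * c for c in x)

def rastrigin(x):
    """Multi-modal benchmark: many local minima, global minimum 0 at the origin."""
    return sum(c * c - 10.0 * math.cos(2.0 * math.pi * c) + 10.0 for c in x)
```

A run of an algorithm on such a function is counted as successful when the best function value reaches the specified goal.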
Table 1
Detailed description of PSP parameters. Columns: Parameter, Unit, Range (Min : Max), Device.

Step 1 (data: Cg–Vg at Vds = Vbs = 0 V, or subthreshold region of Id–Vg data at low Vds and different Vbs):
  VFB      V        -1.1 : -0.1      L
  NEFF     m^-3     1e20 : 1e26      A
  NP       m^-3     1e20 : 1e26      L
  DPHIB    V        -1e-2 : 1e-2     A

Step 2 (data: Id–Vg data at low Vds and different Vbs):
  BETN     m^2/V/s
  VNSUB    V
  DNSUB    V^-1
  NSLP     V
  CS       -
  XCOR     V^-1

(data: subthreshold region of Id–Vg data at large Vds and different Vbs):
  CT       -        0.0 : 1.0        A
  CFB      V^-1     0.0 : 1.0
  CF       V^-1     0.0 : 1.0        A

(data: Id–Vd data at different Vgs and Vbs):
  ALP2     V^-1     0.0 : 1.0        A
  THESAT   V^-1     0.0 : 10.0
  THESATB  V^-1     -0.5 : 1.0       S
  THESATG  V^-1     -0.5 : 10.0      S
  ALP1     V        0.0 : 1.0        A
  ALP      -        0.0 : 1.0        A
  VP       V        1.0e-10 : 1.0    L
  AX       -        2 : 20           A

(data: Ig–Vg data at different Vds and Vbs = 0 V):
  NOV      m^-3     1e20 : 1e26      L
  IGOV     A        0 : 500          L
  GC2      -        0 : 10           L
  GC3      -        -2.0 : 2.0       L
  TOXOV    m        1e-9 : 3e-9      L
  GCO      -        -10.0 : 10.0     L
  IGINV    A        0.0 : 1.0e6      A

Step 4:
  RSB

Steps 3, 5 (data: Id–Vg data at low Vds and different Vbs):
  MUE      m/V      0.001 : 100
  THEMU    -        0.0 : 10.0
  FETA     -        0.0 : 1.0
  RS       Ohm
  RSG

L, S, A denote long-channel device, short-channel device, and all devices, respectively.
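The staged flow of Table 1 can be sketched as a loop over (parameter subset, data set) pairs, in which each step optimizes only its own parameters and later steps inherit the values fixed earlier; the function and argument names below are hypothetical, not taken from the authors' implementation.

```python
def staged_extraction(steps, optimize, params=None):
    """steps    : ordered list of (param_names, dataset) pairs, as in Table 1.
    optimize : callable(names, dataset, fixed) -> dict of fitted values
               for `names`, with all other parameters held at `fixed`.
    Returns the accumulated parameter dictionary."""
    params = {} if params is None else dict(params)
    for names, dataset in steps:
        fitted = optimize(names, dataset, dict(params))  # fit this step only
        params.update(fitted)  # later steps see the values fixed here
    return params
```

The design point is that each inner `optimize` call is itself a full run of the chosen algorithm (GA, PSO, or an HPSO variant), but over a much smaller search space than fitting all 34 parameters at once.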
Regarding the choice of the inertia factor w, it has been shown in Shi and Eberhart (1998b), with experiments conducted on the Schaffer function, that w should be in the range 0.9–0.4 for the PSO algorithm to be efficient. The optimum value of the inertia parameter was shown to be 0.7298 if w is kept constant (Clerc and Kennedy, 2002). In Janson and Middendorf (2005), an inertia range of 0.7298–0.4 was considered in their HPSO implementation. In this work, two inertia ranges were investigated and compared: 0.9–0.4 (Shi and Eberhart, 1998b) and 0.7298–0.4 (Janson and Middendorf, 2005). To compare the various algorithms, 100 independent runs of each algorithm were carried out for each benchmark function. The "Mersenne Twister" method (Matsumoto and Nishimura, 1998) was used to generate random numbers. A run was considered successful if the algorithm was able to reach the specified goal; the number of successful runs is denoted by S. The absolute error (ε) between the function value obtained at the end of each run and the goal f0 was calculated. The average of the absolute error (denoted by ε̄) and the standard deviation of the absolute error (σ) were computed over all (i.e., successful and unsuccessful) runs. However, the average number of function evaluations (N̄f) is computed over the successful runs only.
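The per-run bookkeeping described above can be written compactly; the sketch below assumes one error value, one evaluation count, and one success flag per run, and follows the text's convention that ε̄ and σ are taken over all runs while N̄f is taken over successful runs only.

```python
import statistics

def run_stats(errors, evals, successes):
    """errors    : |f - f0| at the end of each run
    evals     : function evaluations used by each run
    successes : True for runs that reached the goal
    Returns (eps_bar, sigma, S, nf_bar): mean and (population) standard
    deviation of the error over ALL runs, the success count, and the
    mean evaluations over SUCCESSFUL runs only (None if S == 0)."""
    eps_bar = statistics.fmean(errors)
    sigma = statistics.pstdev(errors)
    S = sum(successes)
    nf_bar = (statistics.fmean(e for e, ok in zip(evals, successes) if ok)
              if S else None)
    return eps_bar, sigma, S, nf_bar
```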
5.1.2. Results
The results of the various algorithms for the benchmark functions are shown in Table 4. For the inertia range 0.9–0.4, the following observations can be made. (a) The HPSO and PSO algorithms are equivalent in performance for all functions except f3; for f3, HPSO is successful in all runs, but the success rate of PSO is substantially smaller. (b) HPSO (ML, LS, LI) is observed to be more successful than HPSO and PSO for all functions except f2. Comparing HPSO (ML, LS, LI) for the two inertia ranges, 0.9–0.4 (Range 1) and 0.7298–0.4 (Range 2), it can be seen that (a) Range 2 performs much better in terms of success rate (S) and average function evaluations (N̄f) for all functions; in particular, for f1, Range 2 gave 75 successes while Range 1 gave 37. (b) Range 2 results in a significantly smaller N̄f for all functions. Based on these results, it was decided to use Range 2 (i.e., 0.7298 to 0.4) in the rest of the work.
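For reference, the five benchmark functions of Table 2 can be written directly from their definitions in a few lines; the form of f3 below follows the printed formula as best it can be read, so it should be checked against the original table.

```python
import math

def rastrigin(x):  # f1, minimize, optimum 0 at the origin
    return 10 * len(x) + sum(xi**2 - 10 * math.cos(2 * math.pi * xi) for xi in x)

def griewank(x):  # f2, minimize, optimum 0 at the origin
    s = sum(xi**2 for xi in x) / 4000
    p = math.prod(math.cos(xi / math.sqrt(i + 1)) for i, xi in enumerate(x))
    return s - p + 1

def schaffer(x):  # f3, maximized; reconstruction of the garbled formula
    r2 = sum(xi**2 for xi in x)
    return math.sin(math.sqrt(r2))**2 / (1.0 + 1e-3 * r2)**2

def ackley(x):  # f4, minimize, optimum 0 at the origin
    n = len(x)
    norm = math.sqrt(sum(xi**2 for xi in x))
    return (20 + math.e - 20 * math.exp(-norm / (5 * math.sqrt(n)))
            - math.exp(sum(math.cos(2 * math.pi * xi) for xi in x) / n))

def rosenbrock(x):  # f5, minimize, optimum 0 at (1, 1, ..., 1)
    return sum(100 * (x[i+1] - x[i]**2)**2 + (x[i] - 1)**2
               for i in range(len(x) - 1))
```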
Table 2
Benchmark functions used in this work. Columns: Function, Globally optimum solution, Type.

f1, Rastrigin (minimize):
  f1(x) = 10n + sum_{i=1}^{n} [x_i^2 - 10 cos(2 pi x_i)]                            Origin          Multimodal

f2, Griewank (minimize):
  f2(x) = (1/4000) sum_{i=1}^{n} x_i^2 - prod_{i=1}^{n} cos(x_i / sqrt(i)) + 1      Origin          Multimodal

f3, Schaffer's binary (maximize):
  f3(x) = sin^2(sqrt(sum_{i=1}^{n} x_i^2)) / (1.0 + 10^-3 sum_{i=1}^{n} x_i^2)^2    Origin          Multimodal

f4, Ackley (minimize):
  f4(x) = 20 + exp(1) - 20 exp(-||x||_2 / (5 sqrt(n)))
          - exp((1/n) sum_{i=1}^{n} cos(2 pi x_i))                                  Origin          Multimodal

f5, Rosenbrock (minimize):
  f5(x) = sum_{i=1}^{n-1} [100 (x_{i+1} - x_i^2)^2 + (x_i - 1)^2]                   [1, 1, ..., 1]  Unimodal

Also, it was observed, by comparing HPSO (LS, LI) and HPSO (ML, LS, LI), that for all functions the algorithm performed poorly when the ML operation was not included. In conclusion, the features of ML, local search, and AI are found to be very effective in most cases.

5.2. MOSFET parameter extraction

Parameter extraction for the PSP MOSFET model was carried out as per the parameter extraction strategy discussed in Section 4. The PSP parameters were extracted using the GA and the various PSO algorithms.

5.2.1. Measurements
To extract the 34 parameters of the PSP model, the following measurements were taken on 65-nm technology NMOS devices with channel lengths (L) in the range 70 nm to 1 µm and widths (W) in the range 175 nm to 10 µm:

1. Cg–Vg at Vbs = Vds = 0 V, Vgs = -1.5 to 1.5 V in steps of 25 mV.
2. Id–Vg at Vds = 0.05, 0.45, 0.9 V, Vbs = 0, -0.45, -0.9 V, Vgs = 0 to 0.9 V in steps of 25 mV.
3. Id–Vd at Vgs = 0.4, 0.65, 0.9 V, Vbs = 0, -0.45, -0.9 V, Vds = 0 to 0.9 V in steps of 25 mV.

The Cg–Vg characteristic was used only for the L = 1 µm device. Ig–Vg data was used only for those devices on which an effect of the gate current (Ig) on the drain current (Id) was observed.
Table 3
Parameters of benchmark functions

Function   Dimension (n)   Initialization range   Search space     Goal (f0)
f1         30              (2.56, 5.12)^n         (-10, 10)^n      10^-9
f2         30              (300, 600)^n           (-600, 600)^n    10^-9
f3         10              (15, 30)^n             (-100, 100)^n    0.994006
f4         30              (15, 32)^n             (-32, 32)^n      10^-9
f5         30              (15, 30)^n             (-100, 100)^n    10^-6
Next, the effect of using AI (Eq. (10)) on the performance of the HPSO (ML, LS) algorithm was studied. The following observations can be made from Table 4 regarding AI. (a) For functions f1, f2, and f5, AI has increased the success rate; also, except for f3, AI results in a smaller number of function evaluations. (b) The use of AI has reduced the average error (ε̄) and the standard deviation of the error (σ) for f1 and f5. The error and inertia for a specific run are plotted as a function of iteration number in Fig. 6 for the linear-inertia and adaptive-inertia HPSO algorithms. In the LI case, the inertia does not depend on the fitness of the population, whereas in the AI case the inertia follows the fitness of the population, thus allowing much better exploration and exploitation of the solution space. Further, HPSO (ML, LI) and HPSO (ML, LI, LS) were compared, and it was found that the local search operation increased the success rate substantially for f1. For f2, the error and the standard deviation of the error increased slightly when the local search operation was included (these results are not shown in Table 4).
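Eq. (10) itself is not reproduced in this excerpt, so the sketch below only illustrates the idea behind adaptive inertia: w is driven by how far the population's best fitness still is from the goal, rather than by the iteration count. The linear mapping chosen here is an assumption, not the paper's exact formula.

```python
def adaptive_inertia(f_best, f_goal, f_start, w_max=0.7298, w_min=0.4):
    """Map the population's progress toward the goal onto [w_min, w_max]:
    far from the goal -> w near w_max (exploration), near the goal ->
    w near w_min (exploitation). Illustrative stand-in for Eq. (10)."""
    progress = (f_start - f_best) / (f_start - f_goal)
    progress = min(max(progress, 0.0), 1.0)  # clamp to [0, 1]
    return w_max - (w_max - w_min) * progress
```

By contrast, linear inertia w(t) = 0.7298 - (0.7298 - 0.4) * t / t_max depends only on the iteration counter and ignores the fitness of the population.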
5.2.2. Algorithm parameters and performance measures
For the PSO and GA algorithms, the population and the maximum number of iterations (t_max) were set to 100 and 1000, respectively. The population in the PSO and GA was randomly initialized within the PSP parameter ranges specified in Table 1. The inertia (w) was varied from 0.7298 to 0.4, since this range was observed to be more efficient than 0.9–0.4, as shown earlier for the benchmark functions. Other algorithmic parameters were set to the same values as given in Section 5.1. For the HPSO algorithm, the population was set to 101 particles (one global leader, 10 local leaders, and 90 generic particles). In the case of AI, the tolerance for the relative error (ε) was taken as 1% to determine the value of f0 (see Eq. (10)). For the GA, the crossover and mutation probabilities were taken as 0.84 and 0.15, respectively, and the tournament selection method was used with a selection parameter k equal to 0.75. A given extraction step was terminated when one of the following criteria was satisfied: (i) the specified relative error with respect to f0 was obtained, (ii) the maximum number of iterations (t_max) was reached, or (iii) the algorithm could not improve the solution for 100 iterations. To examine the consistency of the algorithms, each algorithm was tested over 20 independent runs (trials). In each run, the same parameter extraction strategy (described in Section 4) was used to extract parameters from I–V and C–V measurements taken on devices fabricated in 65-nm technology; the same experimental data for the different devices was employed in each run. The device with the smallest channel length (L = 70 nm), for which most of the model parameters come into play, was taken as the benchmark device for comparing the various algorithms. After completion of parameter extraction, the percentage rms error between the measurement data and the model results was computed for each run.
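The three stopping criteria can be combined into a single predicate; the sketch below assumes a per-iteration history of best errors is available, and the stall test (criterion iii) is our reading of "could not improve the solution for 100 iterations".

```python
def should_stop(best_err, err_goal, t, t_max, history, stall_limit=100):
    """history : best error recorded at each completed iteration.
    Stops when (i) the error goal is met, (ii) t_max iterations are
    reached, or (iii) the best error has not improved over the last
    stall_limit iterations."""
    if best_err <= err_goal or t >= t_max:
        return True
    if len(history) > stall_limit:
        recent = history[-stall_limit:]
        return min(recent) >= history[-stall_limit - 1]
    return False
```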
For each algorithm, the rms error is summed over the following characteristics: (i) Id–Vg at Vds = 0.05, 0.45, 0.9 V and Vbs = 0, -0.45, -0.9 V, (ii) Id–Vd at Vgs = 0.4, 0.65, 0.9 V and
Table 4
Results for benchmark functions showing the average error (ε̄) from the goal f0, standard deviation (σ) of the error, number of successes (S), and average function evaluations (N̄f)

                   Linear inertia (0.9–0.4)                  LI (0.7298–0.4)    AI (0.7298–0.4)
Function           PSO      HPSO     HPSO (ML, LS, LI)       HPSO (ML, LS, LI)  HPSO (ML, LS, AI)
f1    ε̄           43.96    46.51    0.84                    0.28               0.06
      σ            11.05    12.18    1.0                     0.56               0.275
      S            -        -        37                      75                 89
      N̄f          -        -        179331                  149040             147386
f2    ε̄           0.014    0.015    0.016                   0.018              0.013
      σ            0.014    0.018    0.02                    0.023              0.018
      S            32       35       32                      39                 44
      N̄f          69213    75557    79277                   27815              17854
f3    ε̄           0.014    0        0                       0                  0
      σ            0.155    0        0                       0                  0
      S            55       100      100                     100                100
      N̄f          43738    44760    46709                   8901               9336
f4    ε̄           0.152    0.212    0                       0                  0
      σ            0.453    0.544    0                       0                  0
      S            89       86       100                     100                100
      N̄f          82366    93099    94997                   55020              44410
f5    ε̄           16.45    14.89    3.31                    0.56               0.16
      σ            23.11    26.13    8.3                     1.83               0.69
      S            -        -        -                       3                  13
      N̄f          -        -        -                       195430             193774

The entry '-' in the table indicates that there was no successful run.
Table 5
Parameter extraction results for the PSP MOSFET model: minimum rms error (ε_rms^min), maximum rms error (ε_rms^max), average rms error (ε̄_rms), standard deviation (σ_ε) of the rms error, and average CPU time (T̄_cpu) over 20 runs

               GA      PSO     HPSO    HPSO (ML, LI)   HPSO (ML, LS, LI)   HPSO (ML, LS, AI)
ε_rms^min (%)  49.1    40.04   40.01   40.18           37.92               38.2
ε_rms^max (%)  61.61   52.14   45.71   46.58           45.19               44.63
ε̄_rms (%)     55.32   44.47   43.03   42.94           40.28               40.54
σ_ε            3.57    3.37    1.63    1.8             1.78                1.48
T̄_cpu (min)   112     81      85      87              103                 82
Vbs = 0, -0.45, -0.9 V, (iii) gm–Vgs at Vds = 0.05 V and Vbs = 0, -0.45, -0.9 V, and (iv) gds–Vds at Vgs = 0.4, 0.65, 0.9 V and Vbs = 0, -0.45, -0.9 V. The rms error thus obtained is averaged over 20 runs for each algorithm. Here, gm–Vgs and gds–Vds are the first derivatives of Id–Vg and Id–Vd, respectively. For the discussion to follow, the average rms error will be denoted by ε̄_rms, and the standard deviation of the rms error computed over the different runs (for each algorithm) by σ_ε. The minimum and maximum rms errors are denoted by ε_rms^min and ε_rms^max, respectively. We have also computed the standard deviation of the extracted parameter values generated by each algorithm over the 20 runs. All runs of parameter extraction were carried out on a computer with a 2.2 GHz AMD processor and 16 GB RAM. The average CPU time for one run (for a given algorithm) is denoted by T̄_cpu in the following.

Fig. 6. Plot of error (ε) and inertia (w) versus iteration number for the best run of HPSO (ML, LS, LI/AI) for function f4.
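The error metric and the derivative characteristics can be computed as below. The point-by-point normalization of the percentage rms error by the measured values is one common convention and is our assumption, since the excerpt does not spell out the exact formula; the central-difference gm is likewise illustrative.

```python
import math

def pct_rms_error(measured, modeled):
    """Percentage rms error between measured and modeled samples,
    normalized point-by-point by the measured values (points with a
    zero measurement are skipped). Assumed convention, not the paper's
    stated formula."""
    terms = [((s - m) / m) ** 2 for m, s in zip(measured, modeled) if m != 0.0]
    return 100.0 * math.sqrt(sum(terms) / len(terms))

def transconductance(i_d, v_g):
    """gm = dId/dVgs from tabulated Id-Vg data by central differences
    (gds = dId/dVds is obtained the same way from Id-Vd data)."""
    return [(i_d[k + 1] - i_d[k - 1]) / (v_g[k + 1] - v_g[k - 1])
            for k in range(1, len(i_d) - 1)]
```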
5.2.3. Results Table 5 shows the results for the various algorithms for PSP MOSFET parameter extraction. The following observations can be made:
(a) The PSO algorithm and all its variants perform better than the GA in all respects, including CPU time. This is the first time the PSO algorithm has been applied to the MOS transistor parameter extraction problem, and the results clearly show that it is more effective than the GA, which has been used so far for this problem. In particular, the GA takes nearly 40% more CPU time than the PSO algorithm (112 versus 81 min) yet gives an ε̄_rms that is 25% higher. (b) The HPSO algorithm performs better than the PSO algorithm with respect to the deviation of the rms error over several
runs (see the σ_ε column in Table 5), with a slight increase in CPU time. In terms of the standard deviation of the individual MOSFET parameters over all 20 runs, HPSO performed better for most of the parameters. (c) The HPSO with ML, LS, and LI shows the lowest average rms error ε̄_rms, but takes about 25% more CPU time (T̄_cpu) than the PSO algorithm. (d) For the HPSO (ML, LI) algorithm (i.e., without local search), the rms error ε̄_rms increases but the CPU time reduces. Comparing the standard deviations of the individual parameters, HPSO (ML, LS, LI) gave a smaller standard deviation for 28 parameters versus 6 for HPSO (ML, LI), indicating the importance of the LS operation.
Thus, the additional CPU time of the local search operation is certainly justified. (e) The performance of HPSO (ML, LS, AI) is comparable to that of HPSO (ML, LS, LI) with respect to the average rms error (ε̄_rms) and the deviation of the rms error (σ_ε). However, the CPU time (T̄_cpu) is significantly smaller with AI (82 min) than with linear inertia (103 min), as shown in Table 5. The reduction in CPU time is consistent with the benchmark-function results, where a reduction in N̄f was observed when adaptive inertia was used (Section 5.1). (f) The standard deviation of the individual parameters was better for 26 of the 34 parameters when linear inertia was used. Thus, from the standpoint of the standard deviation of the individual parameters, LI was observed to be better than AI.
Table 6
RMS error for various characteristics of the benchmark device (W/L = 10 µm/70 nm) for the best run of HPSO with ML, LS, and AI

Characteristic                                                RMS error (%)
Id–Vg (Vds = 0.05 V; Vbs = 0, -0.45, -0.9 V)                  8.16
Id–Vg (Vds = 0.45 V; Vbs = 0, -0.45, -0.9 V)                  5.83
Id–Vg (Vds = 0.9 V; Vbs = 0, -0.45, -0.9 V)                   7.31
Id–Vd (Vgs = 0.4, 0.65, 0.9 V; Vbs = 0, -0.45, -0.9 V)        3.15
gm–Vg (Vds = 0.05 V; Vbs = 0, -0.45, -0.9 V)                  7.71
gds–Vd (Vgs = 0.4, 0.65, 0.9 V; Vbs = 0, -0.45, -0.9 V)       5.74

The rms errors for the six characteristics of the benchmark device are presented in Table 6 for the best run of HPSO (ML, LS, AI). The error values are significantly smaller than those reported in the literature on modern MOS model parameter extraction using the GA (Watts et al., 1999). Figs. 7–9 show the current and derivative plots comparing the experimental data with the results obtained with the HPSO (ML, LS, AI) algorithm. The importance of accurate extraction of the parameters related to the gate current can be observed in Fig. 8(b) for the L = 1 µm device. The dashed line shows the drain current before the gate current correction, and the solid line shows the
Fig. 7. Measured and modeled characteristics for a 10 µm/70 nm (W/L) NMOS device using parameters extracted with the HPSO (ML, LS, AI) algorithm. (a) Id–Vgs at Vds = 0.05 V, (b) logarithmic plot of Id–Vgs at Vds = 0.9 V, (c) Id–Vgs at Vds = 0.45 V, (d) Id–Vds, (e) gm–Vgs at Vds = 0.05 V, and (f) gds–Vds.

Fig. 8. Measured and modeled characteristics for a 10 µm/1 µm (W/L) NMOS device using parameters extracted with the HPSO (ML, LS, AI) algorithm. (a) Id–Vgs at Vds = 0.05 V, (b) logarithmic plot of Id–Vgs at Vds = 0.9 V, (c) Id–Vds, and (d) Ig–Vg; panel (b) shows the matching before and after extraction of the gate current (Ig) parameters.

Fig. 9. Measured and modeled characteristics for a 175 nm/175 nm (W/L) NMOS device using parameters extracted with the HPSO (ML, LS, AI) algorithm. (a) Id–Vgs at Vds = 0.05 V, (b) logarithmic plot of Id–Vgs at Vds = 0.9 V, (c) gm–Vgs at Vds = 0.05 V, and (d) gds–Vds.
same after the gate current correction. The improvement in the fit to the Id–Vg characteristic, particularly at low values of Vgs, is clearly seen.
For the various geometries, (a) L = 70 nm, W = 10 µm (Fig. 7), (b) L = 1 µm, W = 10 µm (Fig. 8), and (c) L = 175 nm, W = 175 nm (Fig. 9), excellent agreement with the experimental data is
observed, thus pointing to the effectiveness of the algorithm. Even for a device with width W = 1 µm, good matching was observed between the measured and model data. Apart from the I–V characteristics, the derivative characteristics are also fitted very well by the extracted parameters in all cases, which is a stringent test for any parameter extraction strategy.
6. Conclusions and future work

In conclusion, the PSO algorithm has been applied to the problem of MOS model parameter extraction for the first time, and its performance has been shown to be substantially better than that of the genetic algorithm. Further, the memory loss and local search operations have been introduced into the HPSO algorithm, improving its performance remarkably both for benchmark functions and for the MOSFET parameter extraction problem. It has been shown through tests with the benchmark functions that varying the inertia parameter (w) within the range 0.7298–0.4 improves the performance of HPSO significantly, both in terms of success rate and number of function evaluations. A novel adaptive inertia scheme based on population fitness has been demonstrated and shown to reduce the CPU time and the number of function evaluations. In this work, we have described the extraction of the local and geometry-independent parameters of MOS transistors. For VLSI designers, the interpolation parameters for a specific technology (e.g., 65-nm technology) are of equal importance. Extraction of the interpolation parameters involves fitting the local parameters obtained for devices of different geometries with certain fitting expressions. This is a challenging task which requires more extensive measurements and an implementation of design of experiments; it is currently being undertaken by the authors.
Acknowledgments

The authors are grateful to IMEC, Belgium, for providing the 65-nm technology devices used in this study, and to Nikunj Gandhi for assistance in coding the PSP MOSFET model.

References

Ahn, C., Ramakrishna, R., 2003. Elitism-based compact genetic algorithms. IEEE Transactions on Evolutionary Computation 7 (4), 367–385.
Antoun, G., El-Nozahi, M., Fikry, W., Abbas, H., 2003. A hybrid genetic algorithm for MOSFET parameter extraction. In: IEEE Canadian Conference on Electrical and Computer Engineering, vol. 2, pp. 1111–1114.
Bergh, F., Engelbrecht, A., 2004. A cooperative approach to particle swarm optimization. IEEE Transactions on Evolutionary Computation 8 (3), 225–239.
Clerc, M., Kennedy, J., 2002. The particle swarm-explosion, stability, and convergence in a multidimensional complex space. IEEE Transactions on Evolutionary Computation 6, 58–73.
Doganis, K., Scharfetter, D., 1983. General optimization and extraction of IC device model parameters. IEEE Transactions on Electron Devices ED-30 (9), 1219–1229.
Gildenblat, G., et al., 2006a. PSP 102.0 User's Manual.
Gildenblat, G., et al., 2006b. PSP: an advanced surface-potential-based MOSFET model for circuit simulation. IEEE Transactions on Electron Devices 53 (9), 1979–1993.
Goldberg, D.E., 1989. Genetic Algorithms in Search, Optimization, and Machine Learning. Addison-Wesley, Reading, MA.
Gowda, S., Sheu, B., Chang, R., 1994. Effective parameter extraction using multiobjective function for VLSI circuits. Analog Integrated Circuits and Signal Processing 5 (2), 121–133.
Harik, G., Lobo, F., Goldberg, D., 1999. The compact genetic algorithm. IEEE Transactions on Evolutionary Computation 3 (4), 287–297.
Ho, S.L., Yang, S., Ni, G., Wong, H.C., 2006. A particle swarm optimization method with enhanced global search ability for design optimizations of electromagnetic devices. IEEE Transactions on Magnetics 42 (4), 1107–1110.
Janson, S., Middendorf, M., 2005. A hierarchical particle swarm optimizer and its adaptive variant. IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics 35 (6), 1272–1282.
Juang, C., 2004. A hybrid of genetic algorithm and particle swarm optimization for recurrent network design. IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics 34 (2), 997–1006.
Kennedy, J., Eberhart, R., 1995. Particle swarm optimization. In: Proceedings of the IEEE International Conference on Neural Networks, Piscataway, NJ, pp. 1942–1948.
Kennedy, J., Mendes, R., 2002. Population structure and particle swarm performance. In: Proceedings of the Congress on Evolutionary Computation, vol. 2, pp. 1671–1676.
Keser, M., Joardar, K., 2000. Genetic algorithm based MOSFET model parameter extraction. In: Proceedings of the International Conference on Modeling and Simulation of Microsystems, pp. 341–344.
Krink, T., Vestertroem, J., Riget, J., 2002. Particle swarm optimization with spatial extension. In: Proceedings of the IEEE Congress on Evolutionary Computation (CEC 2002), Honolulu, Hawaii.
Li, Y., 2007. An automatic parameter extraction technique for advanced CMOS device modeling using genetic algorithm. Microelectronic Engineering 84 (2), 260–272.
Matsumoto, M., Nishimura, T., 1998.
Mersenne twister: a 623-dimensionally equidistributed uniform pseudo-random number generator. ACM Transactions on Modeling and Computer Simulation 8 (1), 3–30.
Monson, C., Seppi, K., 2006. Adaptive diversity in PSO. In: Proceedings of the Genetic and Evolutionary Computation Conference (GECCO 2006), pp. 59–65.
Murakawa, M., Miura-Mattausch, M., Mimura, S., Higuchi, T., 2005. Genetic algorithm for reliable parameter extraction of complete surface-potential-based models. In: Proceedings of the Second International Workshop on Compact Modeling, vol. 1, pp. 7–12.
Pasupuleti, S., Battiti, R., 2006. The gregarious particle swarm optimizer. In: Proceedings of the Genetic and Evolutionary Computation Conference (GECCO 2006), pp. 67–74.
Potter, M., Jong, K., 1994. A cooperative evolutionary approach to function optimization. In: The Third Parallel Problem Solving from Nature. Springer, Berlin, Germany, pp. 249–257.
Ratnaweera, A., Halgamuge, S., 2004. Self-organizing hierarchical particle swarm optimizer with time-varying acceleration coefficients. IEEE Transactions on Evolutionary Computation 8 (3), 240–255.
Riget, J., Vesterstroem, J., 2002. A diversity-guided particle swarm optimizer, the ARPSO. Technical Report 2002, Department of Computer Science, University of Aarhus.
Sakurai, T., Lin, B., Newton, A., 1992. Fast simulated diffusion: an optimization algorithm for multiminimum problems and its application to MOSFET model parameter extraction. IEEE Transactions on Computer-Aided Design 11 (4), 228–234.
Schutte, J., Groenwold, A., 2005. A study of global optimization using particle swarms. Journal of Global Optimization 31, 93–108.
Shi, Y., Eberhart, R., 1998a. A modified particle swarm optimizer. In: Proceedings of the IEEE Congress on Evolutionary Computation, pp. 69–73.
Shi, Y., Eberhart, R., 1998b. Parameter selection in particle swarm optimization. In: Proceedings of Evolutionary Programming VII, vol. 1447, pp. 591–600.
Ward, D., Doganis, K., 1982.
Optimized extraction of MOS model parameters. IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems CAD-1 (4), 163–168.
Watts, J., Bittner, C., Heaberlin, D., Hoffman, J., 1999. Extraction of compact model parameters for ULSI MOSFETs using a genetic algorithm. In: Proceedings of the International Conference on Modeling and Simulation of Microsystems, pp. 176–179.
Yang, P., Chatterjee, P., 1983. An optimal parameter extraction program for MOSFET models. IEEE Transactions on Electron Devices ED-30 (9), 1214–1219.
Zhao, B., Guo, C., Cao, Y., 2005. An improved particle swarm optimization algorithm for optimal reactive power dispatch. In: Proceedings of the IEEE Power Engineering Society General Meeting, vol. 1, pp. 272–279.