A novel model based on wavelet LS-SVM integrated improved PSO algorithm for forecasting of dissolved gas contents in power transformers


Electric Power Systems Research 155 (2018) 196–205


Hanbo Zheng a,b,1, Yiyi Zhang a,f,∗,1, Jiefeng Liu a,c,∗,1, Hua Wei a, Junhui Zhao d, Ruijin Liao e

a Guangxi Key Laboratory of Power System Optimization and Energy Technology, Guangxi University, Nanning, Guangxi 530004, China
b State Grid Henan Electric Power Research Institute, Zhengzhou, Henan 450052, China
c Shijiazhuang Power Supply Branch of State Grid Electric Power Company, Shijiazhuang 050093, China
d Department of Electrical and Computer Engineering & Computer Science, University of New Haven, West Haven, CT 06516, USA
e State Key Laboratory of Power Transmission Equipment & System Security and New Technology, Chongqing University, Chongqing 400044, China
f National Demonstration Center for Experimental Electrical Engineering Education, Guangxi University, Nanning, Guangxi 530004, China

Article info
Article history: Received 11 July 2017; Received in revised form 7 September 2017; Accepted 10 October 2017.
Keywords: Wavelet technique; Least squares support vector machine; Forecasting; Dissolved gases; Oil-immersed power transformers; Particle swarm optimization

Abstract
Finding out transformer incipient faults and their development trend has always been a central issue for electric power companies. In this paper, a novel approach combining the wavelet technique with the least squares support vector machine (LS-SVM) for forecasting of dissolved gases in oil-immersed power transformers is proposed. A particle swarm optimization (PSO) algorithm with mutation is developed to optimize the parameters of the constructed wavelet LS-SVM regression (W-LSSVR). The existence of admissible wavelet kernels is proven by theoretical analysis. Evaluation of forecasting performance is based upon the mean absolute percentage error (MAPE) and the squared correlation coefficient (r²). On the basis of the proposed approach, a procedure is put forward to serve as an effective tool, and experimental results show that this approach is capable of forecasting the dissolved gas contents accurately. Compared with the back propagation neural network (BPNN), the radial basis function neural network (RBFNN), the generalized regression neural network (GRNN), and SVM regression (SVR) in two practical cases (taking hydrogen as an example), the MAPEs of the proposed approach are significantly better than those of the four methods (5.4238% vs 19.1458%, 11.7361%, 7.7395%, 8.3248%; 2.1567% vs 18.9453%, 10.2451%, 7.8636%, 2.4628%), respectively. © 2017 Elsevier B.V. All rights reserved.

1. Introduction
Oil-immersed power transformers are major components of the power system, and their malfunctions are among the more frequent causes of interruptions in power supply, with serious repercussions on system stability and reliability [1]. In order to improve the operational reliability of power transformers, electric power companies perform online monitoring to obtain the condition of oil-immersed power transformers.

∗ Corresponding authors at: Guangxi Key Laboratory of Power System Optimization and Energy Technology, Guangxi University, Nanning, Guangxi 530004, China. E-mail addresses: [email protected] (H. Zheng), [email protected] (Y. Zhang), [email protected] (J. Liu). 1 These authors contributed equally to this work. https://doi.org/10.1016/j.epsr.2017.10.010 0378-7796/© 2017 Elsevier B.V. All rights reserved.

Dissolved gases in oil result from the decomposition of electrical insulation materials (oil or paper) caused by faults or chemical reactions in power transformers. Therefore, with the advantages of non-destructive monitoring and sensitive detection, dissolved gas analysis (DGA) is a widely used online monitoring method for detecting early incipient faults in oil-filled power transformers [2]. Computational and graphical methods employing ratios and proportions of gases dissolved in oil include the key gas method, the Doernenburg ratio method, the Rogers ratio method, the IEC ratio method, and the Duval triangle method [3]. In the IEC 60599 three-ratio method, ratios of certain gases are used for diagnostic analysis; these techniques were standardized by IEC in 1978 and later revised in 2008 [4]. In the last two decades, artificial intelligence (AI) techniques have been proposed for transformer fault diagnosis, such as fuzzy logic inference systems [5–17], artificial neural network (ANN) methods [18–26], expert systems [27–29], grey clustering analysis [30,31], and rough


set theory methods [32,33], among others. Additional AI techniques have also been used, such as self-organizing polynomial networks [34], the self-organizing-map algorithm [35], data mining approaches [36], extension theory [37], Bayesian networks [38], and kernel-based possibilistic clustering [39]. As this brief review shows, the fault diagnosis approaches mentioned above can only detect faults that are occurring or have already occurred in a transformer; they cannot forecast faults. Because forecasting of the gas content in oil can effectively predict the faults of oil-immersed transformers, data-centric machine-learning techniques have been introduced for the prediction of transformer failures from DGA data [40–47].
In the past, the main drawback of traditional forecasting methods was that they were established on the principle of empirical risk minimization, such as the back propagation neural network (BPNN) [48], the radial basis function neural network (RBFNN) [49] and the generalized regression neural network (GRNN) [50]. As discussed in Refs. [48–50], these artificial-neural-network approaches can approximate arbitrarily complex nonlinear relationships and show good performance in forecasting problems. However, a large amount of historical data is needed for model training to overcome the overfitting problem, which may result in unacceptable performance in practice because of the limited key gas content data available [51–53]. A support vector machine (SVM) relies on the structural risk minimization principle, which considers both the empirical risk and the complexity of the learning machine; it is therefore well suited to small-sample problems, has good generalization ability [54–59], and is an effective approach for forecasting problems [56]. The least squares support vector machine (LS-SVM) was introduced in Ref. [60] as a reformulation of the standard SVM [54,56]; it simplifies the standard SVM to a great extent by applying a linear least squares criterion to the loss function instead of the traditional quadratic programming method. Its simplicity and the inherited advantages of the SVM, such as the structural risk minimization principle and kernel mapping, have promoted the application of LS-SVM in many pattern recognition and regression problems [61–64]. However, the commonly used SVM kernels, such as the Gaussian and polynomial kernels, are not orthonormal bases, whereas the wavelet function is orthonormal in L2(RN) space [65–67]; that is, the wavelet function can approximate arbitrary curves in L2(RN) space. So it is not surprising that the wavelet kernel gives better approximation than the Gaussian kernel, as shown by computer simulations [68–71]. In addition, most past studies do not optimize the kernel parameters of the SVM, which leads to unsatisfactory classification or prediction accuracy in practice.
In order to overcome these limitations, the main contributions of this paper can be summarized as follows: (1) Proving the existence of three admissible wavelet kernels and building the LS-SVM regression algorithm based on wavelet kernels. (2) A mutation operation with a certain probability is applied to the traditional particle swarm optimization (PSO) algorithm [72–76] to optimize the kernel parameters of the LS-SVM, which overcomes the drawback of premature convergence of traditional PSO.
(3) Introducing the wavelet LS-SVM regression (W-LSSVR) to forecast dissolved gases in oil-immersed power transformers. The novelty of this paper is to integrate the advantages of wavelet kernels, LS-SVM, and PSO with mutation to forecast dissolved gases in oil-immersed power transformers. The performance of the proposed W-LSSVR is evaluated by two criteria, namely the mean absolute percentage error (MAPE)


and the squared correlation coefficient (r²). Research results and comparisons are presented to demonstrate the potential of the proposed method, which provides satisfactory forecasting accuracy and valuable information.

2. Methodology

2.1. Establishment of wavelet least squares support vector machine

2.1.1. Conditions for support vector's kernel function
Generally, an SVM kernel is a dot-product kernel in some feature space, K(x, x') = \varphi(x)^T \varphi(x'). According to Hilbert–Schmidt theory, K(x, x') can be any symmetric function satisfying the following Mercer condition [77].

Theorem 1. To guarantee that a symmetric function K(x, x') from L^2(R^N \times R^N) space has an expansion

K(x, x') = \sum_{k=1}^{\infty} a_k \psi_k(x) \psi_k(x')   (1)

with positive coefficients a_k > 0 (i.e., K(x, x') describes an inner product in some feature space), it is necessary and sufficient that the condition

\iint K(x, x')\, g(x)\, g(x')\, dx\, dx' \geq 0   (2)

be valid for all g \neq 0 for which

\int g^2(x)\, dx < \infty   (3)

Translation invariant kernels, i.e., K(x, x') = K(x - x'), are also admissible SVM kernels if they satisfy Mercer's condition. However, it is difficult to decompose a translation invariant kernel into the dot product of two functions and thereby prove it is an SVM kernel via Theorem 1. Consequently, a necessary and sufficient condition for translation invariant kernels is stated as follows [78]:

Theorem 2. A translation invariant kernel K(x, x') = K(x - x') is an admissible SVM kernel if and only if its Fourier transform satisfies

F[K](\omega) = (2\pi)^{-N/2} \int_{R^N} \exp(-j(\omega \cdot x))\, K(x)\, dx \geq 0   (4)
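Beyond the analytical test of Theorem 2, a quick numerical sanity check of Mercer's condition is to confirm that the Gram matrix of a candidate kernel on an arbitrary finite sample has no negative eigenvalues. The sketch below is illustrative only (it is not part of the original paper) and uses the one-dimensional Morlet wavelet kernel defined later in Eq. (9).

```python
import numpy as np

def morlet_kernel_1d(x, x_prime, a=1.0):
    # One-dimensional Morlet wavelet kernel, Eq. (9) with N = 1.
    u = (x - x_prime) / a
    return np.cos(1.75 * u) * np.exp(-u ** 2 / 2.0)

# Empirical Mercer check: the Gram matrix on any finite sample should be
# positive semi-definite, i.e., its smallest eigenvalue should not be
# negative (up to numerical round-off).
rng = np.random.default_rng(0)
pts = rng.uniform(-5.0, 5.0, size=50)
gram = np.array([[morlet_kernel_1d(xi, xj) for xj in pts] for xi in pts])
print("smallest eigenvalue:", np.linalg.eigvalsh(gram).min())
```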

2.1.2. Wavelet kernel functions as support vector kernels
Suppose that a wavelet function \psi(x) satisfies the admissibility condition

\int_0^{\infty} \frac{|\hat{\psi}(\omega)|^2}{\omega}\, d\omega < \infty   (5)

where \hat{\psi}(\omega) is the Fourier transform of \psi(x); then \psi(x) is called the mother wavelet. The wavelet function group can be defined as

\psi_{a,b}(x) = \frac{1}{\sqrt{a}}\, \psi\!\left(\frac{x-b}{a}\right)   (6)

where a, b \in R, a > 0 is a dilation factor and b is a translation factor. A multidimensional wavelet function can be constructed as the product of one-dimensional wavelet functions:

\psi(x) = \prod_{i=1}^{N} \psi(x_i)   (7)


Therefore, the translation invariant wavelet kernel can be constructed as follows:

K(x, x') = \prod_{i=1}^{N} \psi\!\left(\frac{x_i - x_i'}{a}\right)   (8)

Theorem 3. The translation invariant wavelet kernels (namely the Morlet, Marr and DOG wavelet kernels) derived from the mother wavelets of Morlet, Marr and DOG are admissible SVM kernels, which are defined respectively as follows:

K_{Morlet}(x, x') = \prod_{i=1}^{N} \cos\!\left(1.75\,\frac{x_i - x_i'}{a}\right) \exp\!\left(-\frac{(x_i - x_i')^2}{2a^2}\right)   (9)

K_{Marr}(x, x') = \prod_{i=1}^{N} \left(1 - \frac{(x_i - x_i')^2}{a^2}\right) \exp\!\left(-\frac{(x_i - x_i')^2}{2a^2}\right)   (10)

K_{DOG}(x, x') = \prod_{i=1}^{N} \left[\exp\!\left(-\frac{(x_i - x_i')^2}{2a^2}\right) - \frac{1}{2} \exp\!\left(-\frac{(x_i - x_i')^2}{8a^2}\right)\right]   (11)
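As a concrete illustration of Eqs. (9)–(11), the following sketch implements the three product-form wavelet kernels for vector inputs (our own illustration, not code from the paper); the dilation parameter a is shared across input dimensions, as in the definitions above.

```python
import numpy as np

def morlet_kernel(x, x2, a):
    # Morlet wavelet kernel, Eq. (9): product over the input dimensions.
    u = (np.asarray(x, float) - np.asarray(x2, float)) / a
    return np.prod(np.cos(1.75 * u) * np.exp(-u ** 2 / 2.0))

def marr_kernel(x, x2, a):
    # Marr (Mexican hat) wavelet kernel, Eq. (10).
    u = (np.asarray(x, float) - np.asarray(x2, float)) / a
    return np.prod((1.0 - u ** 2) * np.exp(-u ** 2 / 2.0))

def dog_kernel(x, x2, a):
    # DOG (difference of Gaussians) wavelet kernel, Eq. (11).
    u2 = ((np.asarray(x, float) - np.asarray(x2, float)) / a) ** 2
    return np.prod(np.exp(-u2 / 2.0) - 0.5 * np.exp(-u2 / 8.0))
```

A Gram matrix over a data set X of shape (l, n) is then assembled element-wise, e.g. K[i, j] = morlet_kernel(X[i], X[j], a).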

Without loss of generality, we prove that the Morlet wavelet kernel is an admissible SVM kernel. According to Theorem 2, we only need to show that the following Fourier transform is nonnegative:

F[K](\omega) = (2\pi)^{-N/2} \int_{R^N} \exp(-j(\omega \cdot x))\, K(x)\, dx   (12)

where

K(x) = \prod_{i=1}^{N} \cos\!\left(1.75\,\frac{x_i}{a}\right) \exp\!\left(-\frac{x_i^2}{2a^2}\right)   (13)

We first calculate the integral term:

\int_{R^N} \exp(-j(\omega \cdot x))\, K(x)\, dx
= \prod_{i=1}^{N} \int_{-\infty}^{+\infty} \cos\!\left(1.75\,\frac{x_i}{a}\right) \exp\!\left(-\frac{x_i^2}{2a^2}\right) \exp(-j\omega_i x_i)\, dx_i
= \prod_{i=1}^{N} \frac{1}{2} \int_{-\infty}^{+\infty} \left[\exp\!\left(-\frac{x_i^2}{2a^2} + \left(\frac{1.75 j}{a} - j\omega_i\right) x_i\right) + \exp\!\left(-\frac{x_i^2}{2a^2} - \left(\frac{1.75 j}{a} + j\omega_i\right) x_i\right)\right] dx_i
= \prod_{i=1}^{N} \frac{\sqrt{2\pi}\,|a|}{2} \left[\exp\!\left(-\frac{(1.75 - \omega_i a)^2}{2}\right) + \exp\!\left(-\frac{(1.75 + \omega_i a)^2}{2}\right)\right]   (14)

Since Eq. (14) is nonnegative, the Fourier transform is nonnegative, and the proof is completed. Similarly, it can be proven that the Marr and DOG wavelet kernels are also admissible kernels. This completes the proof of Theorem 3.

2.1.3. Wavelet LS-SVM regression
In regression problems, consider first a model in the original space of the following form:

f(x) = \omega^T \varphi(x) + \beta   (15)

where x \in R^n, f(x) \in R, and \varphi(x) denotes a set of nonlinear transformations. Given a training set \{(x_1, y_1), \ldots, (x_l, y_l)\} \subset R^n \times R, where x_i is the input and y_i is the corresponding target value of sample i, the goal is to deduce an estimate f(x) that approximates the desired output y from the training samples and is at the same time as flat as possible. In the original space, LS-SVM regression can be described as the following optimization formulation:

\min J(\omega, e) = \frac{1}{2}\,\omega^T \omega + \frac{1}{2}\, C \sum_{i=1}^{l} e_i^2   (16)

subject to the equality constraints

y_i = \omega^T \varphi(x_i) + \beta + e_i, \quad i = 1, 2, \ldots, l   (17)

Note that \omega may potentially become an infinite-dimensional vector, so this primal optimization problem cannot be solved directly. Therefore, it is transformed into a dual optimization problem through the Lagrange function, and the resulting function can be expressed conveniently in the original feature space as follows:

f(x) = \sum_{i=1}^{l} \alpha_i K(x, x_i) + \beta   (18)

Combining the wavelet kernel functions with LS-SVM regression, the W-LSSVR model can be built. The structure of W-LSSVR is shown in Fig. 1.
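A minimal sketch of how the W-LSSVR can be trained and used for prediction is given below. The paper's implementation is in MATLAB and its solver details are not shown, so this follows the textbook LS-SVM formulation of Ref. [60], in which the Lagrange multipliers α and the bias β of Eq. (18) are obtained from a single linear system; the function and variable names are our own.

```python
import numpy as np

def lssvm_train(X, y, kernel, a, C):
    # Standard LS-SVM dual system (after Suykens et al. [60]):
    # [[0, 1^T], [1, K + I/C]] [beta; alpha] = [0; y]
    l = len(y)
    K = np.array([[kernel(X[i], X[j], a) for j in range(l)] for i in range(l)])
    A = np.zeros((l + 1, l + 1))
    A[0, 1:] = 1.0
    A[1:, 0] = 1.0
    A[1:, 1:] = K + np.eye(l) / C
    rhs = np.concatenate(([0.0], np.asarray(y, float)))
    sol = np.linalg.solve(A, rhs)
    return sol[1:], sol[0]          # alpha, beta

def lssvm_predict(x, X, alpha, beta, kernel, a):
    # Evaluate Eq. (18): f(x) = sum_i alpha_i K(x, x_i) + beta
    return sum(ai * kernel(x, xi, a) for ai, xi in zip(alpha, X)) + beta
```

With kernel set to one of the wavelet kernels above, this pair of functions realizes the regression of Eq. (18).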

2.2. Parameter optimization of wavelet LS-SVM regression based on PSO with mutation

2.2.1. Particle swarm optimization with mutation
In the W-LSSVR model, the parameters have a great influence on performance, so the selection of parameters should be handled carefully. Applying cross validation to select the best parameters among candidate values is a good starting point, and on the basis of this idea further methods have been introduced to improve the results [79–83], including analytical techniques

and heuristic algorithms. The analytical techniques determine the parameters with gradient-based algorithms, while the heuristic approaches determine the parameters with modern evolutionary algorithms such as the genetic algorithm, simulated annealing and the PSO algorithm. In this paper, PSO is used to optimize the W-LSSVR model. The major advantage of PSO is that it uses the physical movements of the individuals in the swarm and has a flexible and well-balanced mechanism for global and local exploration. Another advantage of PSO is its simplicity in coding and consistency in performance.


Fig. 1. Structure diagram of W-LSSVR model.

Fig. 2. Flowchart of PSO for parameter optimization.

Each particle of the swarm is one solution in a d-dimensional space. At iteration t, the best solution found by particle i is p_i^d(t), and p_g^d(t) represents the best solution achieved among all particles in the swarm thus far. To search for the optimal solution, each particle changes its velocity and position according to the following updating equations:

v_i^d(t+1) = w\, v_i^d(t) + c_1 r(t)\left(p_i^d(t) - x_i^d(t)\right) + c_2 r(t)\left(p_g^d(t) - x_i^d(t)\right)
x_i^d(t+1) = x_i^d(t) + v_i^d(t+1)   (19)

c_1 and c_2 are two acceleration constants that adjust the relative velocities toward the best global and local positions, respectively. r(t) is a random variable drawn from a uniform distribution on the open interval (0, 1) to provide a stochastic weighting of the different components participating in the particle velocity definition. The velocity is restricted to the range [-v_{max}, v_{max}], in which v_{max} is a predefined boundary value. The capabilities of local and global exploration are balanced by the inertia weight w, which has a large initial value and decreases gradually. The following equation is used to determine w:

w = w_{max} - \frac{w_{max} - w_{min}}{T}\, t   (20)

where w_{max} is the initial weight, w_{min} is the final weight, t is the current iteration, and T is the maximum number of iterations.
However, the PSO algorithm suffers from premature convergence, just like other swarm intelligence algorithms. To overcome this premature convergence, mutation is an effective strategy. According to our repeated studies, the following simple mutation operation is applied to the PSO in this paper:

x_i^d(t) = \left(x_{max}^d - x_{min}^d\right) r(t) + x_{min}^d   (21)

where r(t) must first satisfy the constraint r(t) > p_r (p_r denotes the mutation probability; a value in the range [0.85, 0.95] is suggested, which ensures a mutation probability of about 10%) before the mutation of Eq. (21) is executed. x_{max}^d and x_{min}^d are the user-determined maximum and minimum values of the parameters to be selected, respectively.

2.2.2. Parameter optimization
When the PSO algorithm is utilized to optimize the parameters, each particle stands for a potential solution comprising the regularization parameter C and the kernel parameter a. Parameter optimality is measured by a fitness function defined in relation

to the considered optimization problem. In the process of training and testing, the objective of W-LSSVR is to improve the generalization performance of the regression model, which is equivalent to minimizing the deviation between the true values and the forecast values of the testing samples. Therefore, the fitness function can be defined as follows:

Fitness = \frac{1}{k} \sum_{i=1}^{k} \frac{1}{m} \sum_{j=1}^{m} \left(f(x_{ij}) - y_{ij}\right)^2   (22)

where k is the number of folds in cross validation, m is the number of samples in each validation subset, y_ij is the true value, and f(x_ij) is the forecast value of a validation sample. The target of our study is to minimize the fitness function, so the particle with the minimal fitness value outperforms the others and should be retained during the optimization process. Accordingly, the optimal parameters can be selected. The process of PSO with mutation for parameter optimization is presented in the following steps:
Step 1: Initialize the size of the swarm, the maximum number of iterations, and the velocity and position of each particle.
Step 2: Evaluate each particle's fitness according to Eq. (22) and set the best position from the particle with the minimal fitness in the swarm.
Step 3: For each candidate particle, train the W-LSSVR with the corresponding parameters on the basis of cross validation.
Step 4: Update the velocity and position of each particle according to Eqs. (19) and (20).
Step 5: If r(t) > p_r, execute the mutation operation according to Eq. (21).
Step 6: Check the stopping criterion. If it is not fulfilled, return to Step 2; otherwise proceed to the next step.
Step 7: Terminate the algorithm and output the optimal parameters.
The flowchart of PSO for parameter optimization is shown in Fig. 2.
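The optimization loop of Section 2.2 can be sketched as follows. This is an illustrative Python rendering rather than the authors' MATLAB code: the acceleration constants, the velocity bound and the way the whole swarm is re-drawn on mutation are our assumptions, and the fitness callback is assumed to return the cross-validation error of Eq. (22) for a candidate (C, a) pair.

```python
import numpy as np

def pso_with_mutation(fitness, bounds, n_particles=20, n_iter=100,
                      c1=2.0, c2=2.0, w_max=0.9, w_min=0.1, p_r=0.9):
    rng = np.random.default_rng(0)
    lo = np.array([b[0] for b in bounds], float)
    hi = np.array([b[1] for b in bounds], float)
    x = rng.uniform(lo, hi, size=(n_particles, len(bounds)))
    v = np.zeros_like(x)
    v_max = 0.2 * (hi - lo)                        # velocity bound (assumed value)
    p_best, p_best_f = x.copy(), np.array([fitness(p) for p in x])
    g_best = p_best[np.argmin(p_best_f)].copy()
    for t in range(n_iter):
        w = w_max - (w_max - w_min) * t / n_iter                   # Eq. (20)
        r = rng.uniform(0.0, 1.0)
        v = w * v + c1 * r * (p_best - x) + c2 * r * (g_best - x)  # Eq. (19)
        v = np.clip(v, -v_max, v_max)
        x = np.clip(x + v, lo, hi)
        if r > p_r:                                # mutation, Eq. (21)
            x = (hi - lo) * rng.uniform(size=x.shape) + lo
        f = np.array([fitness(p) for p in x])
        better = f < p_best_f
        p_best[better], p_best_f[better] = x[better], f[better]
        g_best = p_best[np.argmin(p_best_f)].copy()
    return g_best                                  # optimal (C, a)
```

Here fitness would wrap k-fold cross validation of the W-LSSVR (lssvm_train/lssvm_predict above) for the candidate parameter pair.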


Table 1
Data of key gas contents from oil-immersed power transformers.

Case no.  Date        H2     CH4    C2H2   C2H4   C2H6   Type
1         11/15/2007   4.50  20.50   0.00  10.00   5.90  Training
1         11/22/2007   7.40  23.70   0.00  12.90   7.00  Training
1         11/29/2007  10.20  32.30   0.00  17.50  10.10  Training
1         12/06/2007   7.70  32.70   0.00  16.10   8.30  Training
1         12/13/2007   8.80  37.50   0.00  17.00  10.90  Training
1         12/20/2007  12.60  39.40   0.00  16.80  11.60  Training
1         12/27/2007  15.20  45.60   0.00  20.60  11.80  Training
1         01/03/2008  14.00  41.80   0.00  20.10  12.10  Training
1         01/10/2008  14.90  45.50   0.00  21.80  13.20  Training
1         01/17/2008  16.80  58.00   0.00  26.80  17.50  Training
1         01/24/2008  13.20  47.70   0.00  22.40  13.00  Testing
2         04/14/2005  18.50  62.30   5.50  22.50  70.00  Training
2         04/28/2005  20.90  66.80   6.20  23.50  79.80  Training
2         05/05/2005  22.10  68.80   6.30  23.90  83.20  Training
2         05/12/2005  23.10  71.10   6.60  24.50  87.30  Training
2         05/19/2005  23.30  71.40   6.70  24.60  89.00  Training
2         05/26/2005  24.60  72.70   6.50  24.90  90.20  Training
2         06/02/2005  23.90  73.40   6.70  24.30  92.20  Testing
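For illustration, the Case 1 hydrogen series of Table 1 can be loaded as follows (a sketch; the ten-training/one-testing split follows Section 3.1, and the day offsets simply encode the weekly sampling dates).

```python
import numpy as np

# Case 1 H2 contents from Table 1, sampled weekly from 11/15/2007 to 01/24/2008.
h2 = np.array([4.50, 7.40, 10.20, 7.70, 8.80, 12.60,
               15.20, 14.00, 14.90, 16.80, 13.20])
days = np.arange(len(h2)) * 7.0      # equal 7-day intervals

train_y, test_y = h2[:10], h2[10:]   # first ten sets for training, last for testing
```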

2.3. Procedure of forecasting gas contents based on W-LSSVR and PSO with mutation
The different phases of the proposed W-LSSVR and PSO-with-mutation procedure are described as follows and shown in Fig. 3. All the wavelet techniques and LS-SVM algorithms in this study are coded in MATLAB, which is widely used for multidomain simulation, automatic code generation, and data analytics [84–89].

Fig. 3. Procedure of forecasting approach based on W-LSSVR and PSO with mutation.

Phase 1: Data preprocessing. First, a collection of primary data of the key gas contents (e.g., H2, CH4, C2H2, C2H4 and C2H6) is obtained, and the training and testing sets are generated separately. The initial samples, which are usually obtained from electric power companies, may be collected at irregular intervals; therefore, the primary sampling data should be transformed into equal-interval time series through interpolation (in this paper, Hermite spline interpolation is selected). The initial data, consisting of training data and testing data, are then normalized, so that the generalization ability of the W-LSSVR can be improved.
Phase 2: PSO implementation for parameter optimization. Following the description in Section 2.2, the optimization procedure can be summarized as PSO initialization, fitness evaluation, updating, mutation, and termination checking. During the optimization process, cross validation is applied to PSO. For k-fold cross validation, the training data are randomly permuted and divided into k disjoint sets. In the i-th (i = 1, 2, ..., k) iteration, the i-th set (the validation set) is used to evaluate the performance of the model trained on the other k − 1 sets (the training set). Finally, the k estimates of the performance are combined with equal weight.
Phase 3: Testing and forecasting. With the optimal parameters obtained from the PSO implementation, the W-LSSVR training model is built on the training data, and the outputs on the testing data are forecast. To validate the performance in the training phase, the mean absolute percentage error (MAPE) and the squared correlation coefficient (r²) are taken as evaluation measures; in the testing phase, only MAPE is taken as an evaluation measure.
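Phase 1 can be sketched as follows: irregularly sampled gas records are resampled to an equal-interval series with a piecewise cubic Hermite interpolant and then normalized. scipy's PchipInterpolator is used here as a stand-in for the Hermite spline interpolation mentioned above, and min-max scaling is assumed for the normalization step, since the paper does not specify the exact scheme.

```python
import numpy as np
from scipy.interpolate import PchipInterpolator

def to_equal_interval(days, values, step=7.0):
    # Resample an irregularly sampled gas-content series onto an
    # equal-interval grid using piecewise cubic Hermite interpolation.
    t = np.asarray(days, float)
    grid = np.arange(t[0], t[-1] + 1e-9, step)
    return grid, PchipInterpolator(t, np.asarray(values, float))(grid)

def min_max_normalize(x):
    # Scale a series to [0, 1] (assumed scheme; invert before reporting MAPE).
    x = np.asarray(x, float)
    return (x - x.min()) / (x.max() - x.min())
```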

Let x_1, ..., x_l be the training data and f(x_1), ..., f(x_l) be the forecast values given by W-LSSVR. Assuming that the true values are represented by y_1, ..., y_l, the MAPE and r² can be described as follows:

MAPE = \frac{1}{l} \sum_{i=1}^{l} \left|\frac{f(x_i) - y_i}{y_i}\right|   (23)

r^2 = \frac{\left(l \sum_{i=1}^{l} f(x_i)\, y_i - \sum_{i=1}^{l} f(x_i) \sum_{i=1}^{l} y_i\right)^2}{\left(l \sum_{i=1}^{l} f(x_i)^2 - \left(\sum_{i=1}^{l} f(x_i)\right)^2\right) \left(l \sum_{i=1}^{l} y_i^2 - \left(\sum_{i=1}^{l} y_i\right)^2\right)}   (24)
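Eqs. (23) and (24) translate directly into code; a small sketch is given below (MAPE is reported in percent in the tables, so the value returned here would be multiplied by 100).

```python
import numpy as np

def mape(y_true, y_pred):
    # Mean absolute percentage error, Eq. (23).
    y_true = np.asarray(y_true, float)
    y_pred = np.asarray(y_pred, float)
    return np.mean(np.abs((y_pred - y_true) / y_true))

def r_squared(y_true, y_pred):
    # Squared correlation coefficient, Eq. (24).
    y_true = np.asarray(y_true, float)
    y_pred = np.asarray(y_pred, float)
    l = len(y_true)
    num = (l * np.sum(y_pred * y_true) - np.sum(y_pred) * np.sum(y_true)) ** 2
    den = ((l * np.sum(y_pred ** 2) - np.sum(y_pred) ** 2) *
           (l * np.sum(y_true ** 2) - np.sum(y_true) ** 2))
    return num / den
```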

3. Case studies and results comparison

3.1. Forecasting results based on W-LSSVR and PSO with mutation
To demonstrate the effectiveness of the proposed forecasting model, cases of dissolved gas data from several electric power companies in China were collected in this study. The rating of the tested transformers is 110 kV. The data of key gas contents from the oil-immersed transformers are shown in Table 1. The experiments in this study were implemented with the three kinds of wavelet kernels mentioned above, i.e., the Morlet, Marr and DOG W-LSSVR.
For Case 1, the samples were obtained periodically between November 2007 and January 2008. The first ten sets are used as training data and the last set as the testing set. The experimental data, consisting of the training and testing data, are normalized before applying W-LSSVR. Then, taking the Morlet W-LSSVR as an example, the PSO algorithm with mutation is implemented to find the optimal parameters for each group of key gas contents using 5-fold cross validation. Within the limited number of iterations, in order to cover the search space and carry out the experiment properly, the initial population size of the swarm is chosen as 20, the maximum number of generations is fixed at 100, and the inertia weight is initially set to 0.9 and reduced linearly to 0.1 according to Eq. (20). The convergence process of PSO with Morlet W-LSSVR for CH4 is drawn in Fig. 4. From Fig. 4, it can be observed that the fitness curves decrease rapidly at the beginning of the iterations and afterwards become almost flat, which shows that the PSO algorithm with mutation converges to the optimal


solution quickly. Consequently, the parameters with the minimum fitness value are selected as the most appropriate ones. In the next step, the optimal parameters are applied to train the Morlet W-LSSVR. The MAPE and r² are used to judge the performance of this model, and the testing data are used to examine the accuracy of the final forecasting results. Fig. 5 shows an example of the forecasting results with Morlet W-LSSVR for CH4 in Case 1. Similarly, the forecasting results with the Marr and DOG W-LSSVR are shown in Figs. 6 and 7, respectively. From Figs. 5–7, the curves of the forecast data and those of the actual values almost coincide, which means that all three kinds of W-LSSVR present very good forecasting performance. Moreover, the difference in performance among the Morlet, Marr and DOG W-LSSVR is not significant, which is partly due to the same wavelet family they belong to and the similar characteristics they share.

Fig. 4. Convergence curves of PSO with Morlet W-LSSVR for CH4 in Case 1.

Fig. 5. Forecasting results with Morlet W-LSSVR for CH4 in Case 1.

Fig. 6. Forecasting results with Marr W-LSSVR for CH4 in Case 1.

Fig. 7. Forecasting results with DOG W-LSSVR for CH4 in Case 1.

The optimal parameters and forecasting performance of the Morlet, Marr and DOG W-LSSVR for Case 1 are given in Table 2. As listed in Table 2, the testing MAPEs obtained from the Morlet W-LSSVR are 5.4238%, 2.6832%, 4.4761% and 3.9606%, respectively; for the Marr W-LSSVR the MAPEs are 3.2301%, 3.4170%, 8.6217% and 7.7346%; and for the DOG W-LSSVR they are 5.9128%, 4.2120%, 8.7995% and 8.2363%. From Figs. 5–7 and Table 2, it can be observed that the W-LSSVR based on PSO with mutation has strong learning capability under the circumstances of small samples and simultaneously accomplishes excellent generalization performance.
To further demonstrate the effectiveness and generalizability of the proposed forecasting model, three benchmark data sets from the UCI repository [90], namely temp, atemp and hum, are used to test the forecasting performance of the model. Table 3 shows the evaluation performances (MAPE and r²) of the LS-SVM with different kernel functions on the three UCI benchmark data sets, and the results show that the W-LSSVR is superior to the others. From the above results, we conclude that the W-LSSVR based on PSO with mutation obtains promising results and accomplishes excellent generalization performance. The forecast gas contents can be directly related to the fault characteristics of transformers, which is significant for making decisions regarding transformer fault diagnosis or the arrangement of maintenance schemes.

3.2. Comparisons with BPNN, RBFNN, GRNN and SVR

For comparison, different forecasting models based on BPNN, RBFNN, GRNN and SVR are carried out under the same training and testing conditions. Note that the sampling data in Case 2 form an unequal-interval series, which needs to be preprocessed by cubic Hermite spline interpolation. The first six sets are used as training data and the last set as the testing set, and all experimental data are normalized before training. Owing to limited space, the Morlet W-LSSVR is taken as the example. In the RBFNN and GRNN models, the spread of the RBF has great significance for the successful application of the networks. In this study, cross validation is employed to select the optimal spread among candidate values, which ensures that the networks


Table 2
The optimal parameters and forecasting performance of Morlet, Marr and DOG W-LSSVR for Case 1.

Gas    Kernel   C         a       Training MAPE (%)  Training r2  Testing MAPE (%)
H2     Morlet   188.1110  4.6791  0.1962             0.9999       5.4238
H2     Marr      52.711   4.8565  1.6299             0.9984       3.2301
H2     DOG      182.9936  4.2134  0.7346             0.9974       5.9128
CH4    Morlet    77.9358  4.9200  0.5499             0.9997       2.6832
CH4    Marr      94.4660  4.6558  1.1811             0.9979       3.4170
CH4    DOG       61.2399  4.6697  1.4627             0.9969       4.2120
C2H4   Morlet   155.3220  4.9236  0.3537             0.9998       4.4761
C2H4   Marr     111.5310  4.1009  0.4892             0.9997       8.6217
C2H4   DOG      102.4911  4.4537  0.8834             0.9973       8.7995
C2H6   Morlet   199.5460  4.5344  0.1899             0.9999       3.9606
C2H6   Marr      76.5825  4.1355  0.8392             0.9994       7.7346
C2H6   DOG       33.0015  3.4572  2.1248             0.9964       8.2363

Table 3
The evaluation performances (MAPE and r2) of LS-SVM with different kernel functions on the three UCI benchmark data sets.

UCI     Kernel   C         a       Training MAPE (%)  Training r2  Testing MAPE (%)
temp    Morlet   162.4760  4.4181   0.2020            1.0000        3.0177
temp    Marr      52.3876  4.1556   0.9901            0.9999        4.8346
temp    DOG       57.1631  4.0701   3.0027            0.9982        2.3816
temp    RBF      129.5250  4.0120   0.2298            0.9077        2.9231
temp    Linear   154.5280  4.1533  32.0467            0.1332       24.2454
temp    Poly      10.2160  4.0921  28.2875            0.3481       29.0040
atemp   Morlet   130.8650  4.1536   0.2411            0.9997        9.6757
atemp   Marr     131.2320  4.0028   0.3849            1.0000       13.2655
atemp   DOG      183.8760  4.2144   0.9737            0.9998       11.3800
atemp   RBF       33.7109  4.5631   0.8810            1.0000        9.2939
atemp   Linear    76.4472  4.1235  30.4923            0.1212       28.0346
atemp   Poly     125.2110  4.1828  30.2275            0.1455       27.2541
hum     Morlet    73.8557  4.5189   9.6847            0.7613       12.4432
hum     Marr     124.0100  4.2577  10.3768            0.7175       19.0808
hum     DOG        5.0540  4.0254   7.9826            0.8315       13.5851
hum     RBF      112.9450  4.1812  11.1590            0.8186       27.4803
hum     Linear   196.5540  4.0221  18.8263            0.1091       25.5078
hum     Poly      76.5117  1.8159  16.4494            0.1312       13.8452

provide the best generalization performance. The optimal spread parameters of the radial basis kernel for the RBFNN and GRNN are set to 2 and 0.3, respectively. In the SVR, the RBF is chosen as the kernel function and the traditional PSO algorithm is used to optimize the kernel parameters. Moreover, in order to screen out the best network model for the BPNN, a hidden-layer network with a log-sigmoid transfer function is applied. The BPNN is first trained through 30 trials in order to select the best network; the resulting BPNN consists of one hidden layer of 30 neurons and five input and output nodes, and it is trained with the Levenberg–Marquardt optimization method to reach the predetermined error goal as fast as possible.
The evaluation performances of BPNN, RBFNN, GRNN, SVR and W-LSSVR in MAPE and r² are listed in detail in Table 4. In the two cases, the r² and MAPE of the proposed approach are significantly better than those of the BPNN, RBFNN, GRNN, and SVR methods. Testing MAPE results of the five forecasting approaches for the gases in the two cases are shown in Figs. 8 and 9. As shown in Figs. 8 and 9, the W-LSSVR has a significantly better performance than the other four approaches. From Table 4, the W-LSSVR presents the best learning capability in the training phase, with training errors of less than 1% and r² near 1. In the testing phase, the MAPEs obtained from the W-LSSVR based on PSO with mutation are far smaller (less than 6%) than those from the BPNN, RBFNN, GRNN, and SVR, which indicates that the forecasting

Fig. 8. Testing MAPE results of the five forecasting approaches for gases in Case 1.

precision and generalization performance of the W-LSSVR based on PSO with mutation outperform those of the other methods.


Table 4
The evaluation performances of BPNN (A1), RBFNN (A2), GRNN (A3), SVR (A4) and W-LSSVR (A5) in MAPE and r2.

Case 1
Gas    Training MAPE (%)                          Training r2                               Testing MAPE (%)
       A1       A2      A3      A4      A5        A1      A2      A3      A4      A5        A1       A2       A3       A4      A5
H2     15.3456  9.2311  6.3412  7.2456  0.1962    0.7137  0.9156  0.9002  0.9168  0.9999    19.1458  11.7361   7.7395  8.3248  5.4238
CH4    13.3492  9.7321  3.6351  2.6297  0.5499    0.7485  0.8942  0.9219  0.9680  0.9997    16.8735  13.6988   6.4273  3.1980  2.6832
C2H2   /        /       /       /       /         /       /       /       /       /         /        /        /        /       /
C2H4    9.9985  6.7723  6.1234  2.3216  0.3537    0.8192  0.9245  0.9345  0.9999  0.9998    11.8473   8.9827   7.7361  3.8227  4.4761
C2H6   10.1993  7.3999  4.8035  3.2884  0.1899    0.8307  0.8882  0.9410  0.9485  0.9999    10.8760   7.4368   6.0163  5.1541  3.9606

Case 2
Gas    Training MAPE (%)                          Training r2                               Testing MAPE (%)
       A1       A2      A3      A4      A5        A1      A2      A3      A4      A5        A1       A2       A3       A4      A5
H2     16.9834  8.4340  6.1435  1.1292  0.4872    0.7612  0.9032  0.9321  0.9760  0.9999    18.9453  10.2451   7.8636  2.4628  2.1567
CH4    13.8342  6.9872  5.9232  0.8325  0.3071    0.8023  0.8972  0.9234  0.9815  0.9999    15.9348   7.9834   7.2346  1.9107  1.7643
C2H2   18.8442  7.9812  8.9832  3.4673  0.5194    0.7562  0.8024  0.8623  0.9710  0.9998    19.2355  13.9345  10.9234  6.7795  4.3850
C2H4    8.8712  5.9872  3.9983  0.5231  0.0871    0.8987  0.9412  0.9532  0.9785  0.9999     9.0311   6.9712   4.0091  2.1942  0.6741
C2H6   11.3987  9.8123  7.0833  1.3453  0.5184    0.8012  0.8721  0.9183  0.9626  0.9998    12.4663  10.9887   8.7931  3.6090  2.6521

Fig. 9. Testing MAPE results of the five forecasting approaches for gases in Case 2.

4. Conclusions
A novel model combining the wavelet technique and LS-SVM with an improved PSO algorithm for forecasting of dissolved gases in oil-immersed transformers is proposed in this paper. The results of this work are concluded as follows:
1. The existence of admissible wavelet kernels, including the Morlet, Marr and DOG wavelet kernels, is proved by theoretical analysis. Therefore, the wavelet technique is combined with LS-SVM regression to construct a new forecasting approach in this paper.
2. In contrast with standard SVR, only two parameters need to be selected in W-LSSVR, and the PSO algorithm is employed to obtain the optimal parameters. To overcome the drawback of premature convergence of the traditional PSO, a mutation operation is applied to the PSO process in this study.
3. The forecasting procedure is put forward to serve as an effective tool for forecasting gas contents in oil-filled transformers. Furthermore, compared with BPNN, RBFNN, GRNN and SVR, the W-LSSVR based on PSO with mutation gains excellent learning capability for the actual limited samples and simultaneously achieves better generalization performance and more stable forecasting capability than the others.
The proposed model can conveniently be combined with fault diagnosis methods to offer more useful information for future transformer fault analysis; this will be addressed in a subsequent study.

Acknowledgments
The authors acknowledge the National Basic Research Program of China (973 Program, 2013CB228205), the National High-tech R&D Program of China (863 Program, 2015AA050204), the Natural Science Foundation of Guangxi (2015GXNSFBA139235), the Foundation of Guangxi Science and Technology Department (AE020069), and the Foundation of Guangxi Education Department (T3020097903) in support of this work. The authors also thank the anonymous reviewers and the editor for their valuable comments.

Appendix A.

List of symbols
ψ(x)  Mother wavelet
ψ̂(ω)  The Fourier transform of ψ(x)
K(x, x_i)  Support vector's kernel function
a  Kernel parameter
C  Regularization parameter
φ(x)  Set of nonlinear transformations
t  The current iteration
T  The maximum number of iterations
p_i^d(t)  The best solution of particle i
p_g^d(t)  The best solution among all particles
x_max^d  The user-determined maximum value
x_min^d  The user-determined minimum value
r(t)  Random variable
w  Inertia weight
w_max  Initial weight
w_min  Final weight
p_r  Mutation probability
temp, atemp, hum  Three UCI benchmark data sets

Greek letters
α_i  Lagrange multiplier
β  Bias term

References
[1] Y. Bicen, F. Aras, H. Kirkici, Lifetime estimation and monitoring of power transformer considering annual load factors, IEEE Trans. Dielectr. Electr. Insul. 21 (2014) 1360–1367.
[2] A. Akbari, A. Setayeshmehr, H. Borsi, E. Gockenbach, I. Fofana, Intelligent agent-based system using dissolved gas analysis to detect incipient faults in power transformers, IEEE Electr. Insul. Mag. 26 (2010) 27–40.
[3] N. Bakar, A. Abu-Siada, S. Islam, A review of dissolved gas analysis measurement and interpretation techniques, IEEE Electr. Insul. Mag. 30 (2014) 39–49.
[4] Z.-x. Liu, B. Song, E.-w. Li, Y. Mao, G.-l. Wang, Study of code absence in the IEC three-ratio method of dissolved gas analysis, IEEE Electr. Insul. Mag. 31 (2015) 6–12.
[5] K. Tomsovic, M. Tapper, T. Ingvarsson, A fuzzy information approach to integrating different transformer diagnostic methods, IEEE Trans. Power Deliv. 8 (1993) 1638–1646.
[6] Y.-C. Huang, H.-T. Yang, C.-L. Huang, An evolutionary computation based fuzzy fault diagnosis system for a power transformer, in: Fuzzy Systems Symposium, 1996. Soft Computing in Intelligent Systems and Information Processing. Proceedings of the 1996 Asian, IEEE, 1996, pp. 218–223.
[7] Y.-C. Huang, H.-T. Yang, C.-L. Huang, Developing a new transformer fault diagnosis system through evolutionary fuzzy logic, IEEE Trans. Power Deliv. 12 (1997) 761–767.
[8] H.-T. Yang, C.-C. Liao, Adaptive fuzzy diagnosis system for dissolved gas analysis of power transformers, IEEE Trans. Power Deliv. 14 (1999) 1342–1350.
[9] G. Zhang, S. Ibuka, K. Yasuoka, Application of fuzzy data processing for fault diagnosis of power transformers, in: Proceeding of IEE Conference Publication, High Voltage Engineering Symposium, IET, 1999, pp. 22–27.
[10] G. Zhang, K. Yasuoka, S. Ishii, L. Yang, Z. Yan, Application of fuzzy equivalent matrix for fault diagnosis of oil-immersed insulation, in: Dielectric Liquids, 1999. (ICDL'99) Proceedings of the 1999 IEEE 13th International Conference on, IEEE, 1999, pp. 400–403.
[11] M. Denghua, A New Fuzzy Information Optimization Processing Technique for Monitoring the Transformer, Eighth International Conference on Dielectric Materials, Measurements and Applications (IEE Conference Publications), Heriot Watt Univ, Edinburgh, Scotland, 2000.
[12] Q. Su, A fuzzy logic tool for transformer fault diagnosis, in: Power System Technology, 2000. Proceedings. PowerCon 2000. International Conference on, IEEE, 2000, pp. 265–268.
[13] Q. Su, C. Mi, L.L. Lai, P. Austin, A fuzzy dissolved gas analysis method for the diagnosis of multiple incipient faults in a transformer, IEEE Trans. Power Syst. 15 (2000) 593–598.
[14] R. Hooshmand, M. Banejad, Application of fuzzy logic in fault diagnosis in transformers using dissolved gas based on different standards, World Acad. Sci. Eng. Technol. 17 (2006) 157–161.
[15] W. Flores, E. Mombello, J. Jardini, G. Ratta, A novel algorithm for the diagnostics of power transformers using type-2 fuzzy logic systems, in: Transmission and Distribution Conference and Exposition, 2008. T&D. IEEE/PES, IEEE, 2008, pp. 1–5.
[16] R. Afiqah, I. Musirin, D. Johari, M. Othman, T. Rahman, Z. Othman, Fuzzy logic application in DGA methods to classify fault type in power transformer, International Conference on Electric Power Systems, High Voltages, Electric Machines, International Conference on Remote Sensing—Proceedings (2010) 83–88.
[17] L. Sun, Y. Liu, B. Zhang, Y. Shang, H. Yuan, Z. Ma, An integrated decision-making model for transformer condition assessment using game theory and modified evidence combination extended by D numbers, Energies 9 (2016) 697.
[18] Y. Zhang, X. Ding, Y. Liu, P. Griffin, An artificial neural network approach to transformer fault diagnosis, IEEE Trans. Power Deliv. 11 (1996) 1836–1841.
[19] L. Honglei, X. Dengming, C. Yazhu, Wavelet ANN based transformer fault diagnosis using gas-in-oil analysis, in: Properties and Applications of Dielectric Materials, 2000. Proceedings of the 6th International Conference on, IEEE, 2000, pp. 147–150.
[20] K. Thang, R. Aggarwal, D. Esp, A. McGrail, Statistical and Neural Network Analysis of Dissolved Gases in Power Transformers, Eighth International Conference on Dielectric Materials, Measurements and Applications (IEE Conference Publications), Heriot Watt Univ, Edinburgh, Scotland, 2000.
[21] J. Guardado, J. Naredo, P. Moreno, C. Fuerte, A comparative study of neural network efficiency in power transformers diagnosis using dissolved gas analysis, IEEE Trans. Power Deliv. 16 (2001) 643–647.
[22] Y.-C. Huang, Evolving neural nets for fault diagnosis of power transformers, IEEE Trans. Power Deliv. 18 (2003) 843–848.
[23] D.S. Sarma, G. Kalyani, ANN approach for condition monitoring of power transformers using DGA, in: TENCON 2004. 2004 IEEE Region 10 Conference, IEEE, 2004, pp. 444–447.
[24] E. Mohamed, A. Abdelaziz, A. Mostafa, A neural network-based scheme for fault diagnosis of power transformers, Electr. Power Syst. Res. 75 (2005) 29–39.
[25] X. Hao, S. Cai-Xin, Artificial immune network classification algorithm for fault diagnosis of power transformer, IEEE Trans. Power Deliv. 22 (2007) 930–935.
[26] Y.-j. Sun, S. Zhang, C.-x. Miao, J.-m. Li, Improved BP neural network for transformer fault diagnosis, J. China Univ. Min. Technol. 17 (2007) 138–142.
[27] C.E. Lin, J.-M. Ling, C.-L. Huang, An expert system for transformer fault diagnosis using dissolved gas analysis, IEEE Trans. Power Deliv. 8 (1993) 231–238.
[28] Z. Wang, Y. Liu, P.J. Griffin, Neural net and expert system diagnose transformer faults, IEEE Comput. Appl. Power 13 (2000) 50–55.
[29] M.B. Ahmad, Z. bin Yaacob, Dissolved gas analysis using expert system, in: Research and Development, 2002. SCOReD 2002. Student Conference on, IEEE, 2002, pp. 313–316.
[30] S. Bin, Y. Ping, L. Yunbai, W. Xishan, Study on the fault diagnosis of transformer based on the grey relational analysis, in: Power System Technology, 2002. Proceedings. PowerCon 2002. International Conference on, IEEE, 2002, pp. 2231–2234.
[31] C.-H. Lin, C.-H. Wu, P.-Z. Huang, Grey clustering analysis for incipient fault diagnosis in oil-immersed transformers, Expert Syst. Appl. 36 (2009) 1371–1379.
[32] J. Mo, X. Wang, M. Dong, Z. Yan, Diagnostic model of insulation faults in power equipment based on rough set theory, Proc. Chin. Soc. Electr. Eng. 24 (2004) 162–167.
[33] Y.-C. Huang, H.-C. Sun, K.-Y. Huang, Y.-S. Liao, Fault diagnosis of power transformers using rough set theory, in: Innovative Computing, Information and Control (ICICIC), 2009 Fourth International Conference on, IEEE, 2009, pp. 1422–1426.
[34] H.-T. Yang, Y.-C. Huang, Intelligent decision support for diagnosis of incipient transformer faults using self-organizing polynomial networks, IEEE Trans. Power Syst. 13 (1998) 946–952.
[35] K. Thang, R. Aggarwal, A. McGrail, D. Esp, Application of self-organising map algorithm for analysis and interpretation of dissolved gases in power transformers, in: Power Engineering Society Summer Meeting, 2001, IEEE, 2001, pp. 1881–1886.
[36] Y.-C. Huang, A new data mining approach to dissolved gas analysis of oil-insulated power apparatus, IEEE Trans. Power Deliv. 18 (2003) 1257–1261.
[37] L. Zhang, Z. Li, H. Ma, P. Ju, Power transformer fault diagnosis based on extension theory, in: Electrical Machines and Systems, 2005. ICEMS 2005. Proceedings of the Eighth International Conference on, IEEE, 2005, pp. 1763–1766.
[38] W. Yongqiang, L. Fangcheng, L. Heming, The fault diagnosis method for electrical equipment based on Bayesian network, in: Electrical Machines and Systems, 2005. ICEMS 2005. Proceedings of the Eighth International Conference on, IEEE, 2005, pp. 2259–2261.
[39] H. Xiong, C.-X. Sun, R.-J. Liao, J. Li, L. Du, Study on kernel-based possibilistic clustering and dissolved gas analysis for fault diagnosis of power transformer, Zhongguo Dianji Gongcheng Xuebao (Proceedings of the Chinese Society of Electrical Engineering) (2005) 162–166.
[40] M. Duval, A review of faults detectable by gas-in-oil analysis in transformers, IEEE Electr. Insul. Mag. 18 (2002) 8–17.
[41] D.R. Morais, J.G. Rolim, A hybrid tool for detection of incipient faults in transformers based on the dissolved gas analysis of insulating oil, IEEE Trans. Power Deliv. 21 (2006) 673–680.
[42] Z. Wang, I. Cotton, S. Northcote, Dissolved gas analysis of alternative fluids for power transformers, IEEE Electr. Insul. Mag. 23 (2007) 5–14.
[43] A. Shintemirov, W. Tang, Q. Wu, Power transformer fault classification based on dissolved gas analysis by implementing bootstrap and genetic programming, IEEE Trans. Syst. Man Cybern. C 39 (2009) 69–79.
[44] X. Li, H. Wu, D. Wu, DGA interpretation scheme derived from case study, IEEE Trans. Power Deliv. 26 (2011) 1292–1293.
[45] R. Liao, H. Zheng, S. Grzybowski, L. Yang, Y. Zhang, Y. Liao, An integrated decision-making model for condition assessment of power transformers using fuzzy approach and evidential reasoning, IEEE Trans. Power Deliv. 26 (2011) 1111–1118.
[46] K. Bacha, S. Souahlia, M. Gossa, Power transformer fault diagnosis based on dissolved gas analysis by support vector machine, Electr. Power Syst. Res. 83 (2012) 73–79.
[47] F.C. Sica, F.G. Guimarães, R. de Oliveira Duarte, A.J. Reis, A cognitive system for fault prognosis in power transformers, Electr. Power Syst. Res. 127 (2015) 109–117.
[48] H.S. Hippert, C.E. Pedreira, R.C. Souza, Neural networks for short-term load forecasting: a review and evaluation, IEEE Trans. Power Syst. 16 (2001) 44–55.
[49] F.-J. Chang, Y.-C. Chen, Estuary water-stage forecasting by using radial basis function neural network, J. Hydrol. 270 (2003) 158–166.
[50] M.T. Leung, A.-S. Chen, H. Daouk, Forecasting exchange rates using general regression neural networks, Comput. Oper. Res. 27 (2000) 1093–1110.
[51] S.-W. Fei, Y. Sun, Forecasting dissolved gases content in power transformer oil based on support vector machine with genetic algorithm, Electr. Power Syst. Res. 78 (2008) 507–514.
[52] M.-H. Wang, C.-P. Hung, Novel grey model for the prediction of trend of dissolved gases in oil-filled power apparatus, Electr. Power Syst. Res. 67 (2003) 53–58.
[53] S.-w. Fei, M.-J. Wang, Y.-b. Miao, J. Tu, C.-l. Liu, Particle swarm optimization-based support vector machine for forecasting dissolved gases content in power transformer oil, Energy Convers. Manage. 50 (2009) 1604–1609.
[54] V.N. Vapnik, V. Vapnik, Statistical Learning Theory, Wiley, New York, 1998.
[55] L. Ganyun, C. Haozhong, Z. Haibao, D. Lixin, Fault diagnosis of power transformer based on multi-layer SVM classifier, Electr. Power Syst. Res. 74 (2005) 1–7.
[56] V. Vapnik, The Nature of Statistical Learning Theory, Springer Science & Business Media, 2013.
[57] M. Tasdighi, M. Kezunovic, Preventing transmission distance relays maloperation under unintended bulk DG tripping using SVM-based approach, Electr. Power Syst. Res. 142 (2017) 258–267.
[58] A. Yusuff, A. Jimoh, J. Munda, Fault location in transmission lines based on stationary wavelet transform, determinant function feature and support vector regression, Electr. Power Syst. Res. 110 (2014) 73–83.
[59] X. Zhang, J. Wang, K. Zhang, Short-term electric load forecasting based on singular spectrum analysis and support vector machine optimized by Cuckoo search algorithm, Electr. Power Syst. Res. 146 (2017) 270–285.
[60] J.A. Suykens, T. Van Gestel, J. De Brabanter, Least Squares Support Vector Machines, World Scientific, 2002.
[61] T. Van Gestel, J.A. Suykens, D.-E. Baestaens, A. Lambrechts, G. Lanckriet, B. Vandaele, B. De Moor, J. Vandewalle, Financial time series prediction using least squares support vector machines within the evidence framework, IEEE Trans. Neural Netw. 12 (2001) 809–821.
[62] T. Van Gestel, J.A. Suykens, B. Baesens, S. Viaene, J. Vanthienen, G. Dedene, B. De Moor, J. Vandewalle, Benchmarking least squares support vector machine classifiers, Mach. Learn. 54 (2004) 5–32.
[63] Y. Zhang, Y. Liu, Traffic forecasting using least squares support vector machines, Transportmetrica 5 (2009) 193–213.
[64] A. Zendehboudi, Implementation of GA-LSSVM modelling approach for estimating the performance of solid desiccant wheels, Energy Convers. Manage. 127 (2016) 245–255.
[65] Q. Zhang, A. Benveniste, Wavelet networks, IEEE Trans. Neural Netw. 3 (1992) 889–898.
[66] L. Zhang, W. Zhou, L. Jiao, Wavelet support vector machine, IEEE Trans. Syst. Man Cybern. B 34 (2004) 34–39.
[67] S. Jazebi, B. Vahidi, M. Jannati, A novel application of wavelet based SVM to transient phenomena identification of power transformers, Energy Convers. Manage. 52 (2011) 1354–1363.
[68] A. Widodo, B.-S. Yang, Support vector machine in machine condition monitoring and fault diagnosis, Mech. Syst. Signal Process. 21 (2007) 2560–2574.
[69] A. Widodo, B.-S. Yang, Wavelet support vector machine for induction machine fault diagnosis based on transient current signal, Expert Syst. Appl. 35 (2008) 307–316.
[70] Q. Wu, The forecasting model based on wavelet ν-support vector machine, Expert Syst. Appl. 36 (2009) 7604–7610.
[71] O. Kisi, M. Cimen, A wavelet-support vector machine conjunction model for monthly streamflow forecasting, J. Hydrol. 399 (2011) 132–140.
[72] J. Kennedy, R. Eberhart, Particle swarm optimization, in: Neural Networks, 1995. Proceedings., IEEE International Conference on, IEEE, 1995, pp. 1942–1948.
[73] Y. Shi, Particle swarm optimization: developments, applications and resources, in: Evolutionary Computation, 2001. Proceedings of the 2001 Congress on, IEEE, 2001, pp. 81–86.
[74] J. Sun, B. Feng, W. Xu, Particle swarm optimization with particles having quantum behavior, in: Evolutionary Computation, 2004. CEC2004. Congress on, IEEE, 2004, pp. 325–331.
[75] D. Bratton, J. Kennedy, Defining a standard for particle swarm optimization, in: Swarm Intelligence Symposium, 2007. SIS 2007. IEEE, IEEE, 2007, pp. 120–127.
[76] R. Poli, J. Kennedy, T. Blackwell, Particle swarm optimization, Swarm Intell. 1 (2007) 33–57.
[77] B. Schölkopf, C.J. Burges, A.J. Smola, Advances in Kernel Methods: Support Vector Learning, MIT Press, 1999.
[78] A.J. Smola, B. Schölkopf, K.-R. Müller, The connection between regularization operators and support vector kernels, Neural Netw. 11 (1998) 637–649.
[79] M. Valipour, Evolution of irrigation-equipped areas as share of cultivated areas, Irrig. Drain. Syst. Eng. 2 (2013) e114.
[80] M. Valipour, Study of different climatic conditions to assess the role of solar radiation in reference crop evapotranspiration equations, Arch. Agron. Soil Sci. 61 (2015) 679–694.
[81] M. Valipour, Optimization of neural networks for precipitation analysis in a humid region to detect drought and wet year alarms, Meteorol. Appl. 23 (2016) 91–100.
[82] M. Valipour, S. Mousavi, R. Valipour, E. Rezaei, A new approach for environmental crises and its solutions by computer modeling, The 1st International Conference on Environmental Crises and Its Solutions, Kish Island, Iran (2013).
[83] S.I. Yannopoulos, G. Lyberatos, N. Theodossiou, W. Li, M. Valipour, A. Tamburrino, A.N. Angelakis, Evolution of water lifting devices (pumps) over the centuries worldwide, Water 7 (2015) 5031–5060.
[84] M. Valipour, Number of required observation data for rainfall forecasting according to the climate conditions, Am. J. Sci. Res. 74 (2012) 79–86.
[85] M. Valipour, Use of Surface Water Supply Index to Assessing of Water Resources Management in Colorado and Oregon, 3, US. Advances in Agriculture, Sciences and Engineering Research, 2013, pp. 631–640.
[86] M. Valipour, Increasing irrigation efficiency by management strategies: cutback and surge irrigation, ARPN J. Agric. Biol. Sci. 8 (2013) 35–43.
[87] M. Valipour, Application of new mass transfer formulae for computation of evapotranspiration, J. Appl. Water Eng. Res. 2 (2014) 33–46.
[88] M. Valipour, A.A. Montazar, An evaluation of SWDC and WinSRFR models to optimize of infiltration parameters in furrow irrigation, Am. J. Sci. Res. 69 (2012) 128–142.
[89] M. Valipour, M.A.G. Sefidkouhi, M. Raeini, Selecting the best model to estimate potential evapotranspiration with respect to climate change and magnitudes of extreme events, Agric. Water Manage. 180 (2017) 50–60.
[90] A. Frank, A. Asuncion, UCI Machine Learning Repository [http://archive.ics.uci.edu/ml], University of California, School of Information and Computer Science, Irvine, CA, 2010.