Energy 194 (2020) 116847
Multifractional Brownian motion and quantum-behaved particle swarm optimization for short term power load forecasting: An integrated approach

Wanqing Song a,*, Carlo Cattani b, Chi-Hung Chi c

a School of Electronic and Electrical Engineering, Shanghai University of Engineering Science, Shanghai, 201620, PR China
b Department of Economics, Engineering, Society and Business Organization, Tuscia University, Viterbo, 01100, Italy
c CSIRO, Data61, 15 College Road, Sandy Bay, Hobart, Tasmania, 7005, Australia
* Corresponding author. E-mail addresses: [email protected] (W. Song), [email protected] (C. Cattani), [email protected] (C.-H. Chi).
Article history: Received 19 March 2019; Received in revised form 21 December 2019; Accepted 24 December 2019; Available online 3 January 2020.

Keywords: Fractional Brownian motion; Long-range dependence; Particle swarm optimization; Power load forecasting; Quantum-behaved particle swarm optimization

Abstract

Power load fluctuation is generally agreed to be a non-stationary stochastic process. The Fractional Brownian Motion (FBM) model is proposed to forecast such a non-stationary time series with high accuracy. In this study, the Hurst exponent (H) of the power load data series is computed with Rescaled Range Analysis (R/S), which is used to verify the Long-Range Dependent (LRD) characteristics of non-stationary power load data. For real power loads, however, the self-similarity described by the H exponent holds only over finite intervals; global self-similarity rarely exists, and the H exponent usually takes more than one value. We therefore generalize the constant H to a multifractional H(t). To improve the forecasting accuracy, H(t) is optimized by Quantum-Behaved Particle Swarm Optimization (QPSO). Once the optimal H(t) is obtained, the optimal $\mu$ and $\delta$ parameters of the multifractional Brownian Motion (mFBM) model can be deduced to forecast the next power load data series with higher accuracy.

© 2019 Published by Elsevier Ltd.
1. Introduction

Short-term power load forecasting is a critical process for the operation of power plant systems. Accurate load forecasting can improve the efficiency of power grid scheduling, save energy and reduce economic losses [1-3]. Because of the importance of these aspects, substantial research effort has been devoted to power load forecasting. The existing methods can be divided into two categories: classical statistical algorithms and Artificial Intelligence (AI) based methods [4]. The statistical techniques include Grey Modeling (GM) and the Auto Regressive Integrated Moving Average (ARIMA), and the AI methods include Artificial Neural Networks (ANNs) and Support Vector Regression (SVR). The GM scheme is traditionally used in power load forecasting [5]. However, its accuracy is often limited by many factors and, because of a serious lack of information, the
GM approach only considers the size of the instant forecasting value. The ARIMA model is another effective method for power load forecasting [6], but it requires the power load sequence to be stationary after differencing. Since the early 1990s, AI techniques have been among the most intensively studied forecasting methods. A first approach is to use ANNs; in particular, the short-term load forecasting model based on the error back-propagation (BP) neural network is the most typical forecasting model [2]. However, BP neural networks easily fall into local minima and converge slowly, so such models cannot meet the precision requirements of short-term load forecasting for modern power systems. Another popular AI method, SVR, has shown better prediction results than neural networks, but it requires setting a penalty parameter, which makes the forecasting model more complex [7]. In recent years, Deep Learning methods [8] such as Convolutional Neural Networks (CNN) and Recurrent Neural Networks (RNN) have become a popular choice in forecasting [9-11]. Due to the complexity of Deep Learning methods, building such models is computationally expensive. Among stochastic models, fractal self-similarity is a natural choice, so it has diverse
applications, including not only power load forecasting [12,13], but also image processing [14], network traffic evaluation [15], and the forecasting of slowly developing bearing faults [16]. The FBM is a well-known stochastic model. Its Hurst exponent H can represent the Long-Range Dependent (LRD) characteristics of nonlinear time series data. If the Hurst exponent lies within 0 < H < 0.5, the local process has large fluctuations; for 0.5 < H < 1, the process is persistent. If H = 0.5, the FBM model reduces to the special case of Brownian Motion (BM). Consequently, the forecasting results of the FBM model are more flexible than those of the BM [17-19]. There exist several methods to calculate the Hurst exponent H of the FBM, including the variance method, the absolute value method, rescaled range analysis (R/S) and the wavelet method [20-22]. Song et al. [23] compared these methods and concluded that the R/S method has higher accuracy; therefore, this paper uses the R/S method to estimate the Hurst exponent. However, once the H value of a stochastic sequence is calculated, the correlation of the FBM is also fixed. In other words, FBM can only be applied to stochastic sequences with constant correlation, whereas the correlation of a practical non-stationary stochastic sequence is rarely constant, and the H value obtained by the R/S method can only reflect the correlation over a finite interval. In addition, empirical data show that the Hurst exponent of a non-stationary stochastic sequence exhibits multifractal, time-varying correlation characteristics [24]. For these reasons, the concept of H(t) was introduced by Peltier and Levy-Vehel [25], and Lim and Muniandy et al. [26-28] gave its precise interpretation. Therefore, this paper generalizes FBM to mFBM, and H with a constant correlation is replaced by H(t), which indicates that the correlation changes with time.

To obtain more accurate random sequence prediction, Particle Swarm Optimization (PSO) and Quantum-Behaved Particle Swarm Optimization (QPSO) are introduced. PSO is a population-based swarm intelligence algorithm and has attracted scientific interest from diverse research communities [29-31]. However, a disadvantage of PSO is that each particle moves in the target space with a velocity vector toward its own optimal solution; as a result, PSO can easily be trapped in a local optimal solution [32,33]. Motivated by ideas from quantum mechanics, QPSO was proposed to deal with such problems [34,35]. Its basic idea is to place a quantum delta potential well model on top of the PSO to obtain the global optimum solution. Various versions of QPSO have been proposed and applied in many fields [36-39].

In this study, we propose an integrated approach that combines the mFBM and QPSO to forecast the short-term power load. The computed H(t) value of the sampled power data series is used to deduce the parameters $\mu$ and $\delta$ of the mFBM model for the subsequent power load forecasting. The PSO algorithm, based on the iteration of its velocity and position vectors, is used to find the local optimal values of $H_{i,j}(t)$, and QPSO is used to obtain the global optimal $H_{i,j}(t)$ of the mFBM model, from which the parameters are deduced to make the forecasting more accurate.
On the basis of real-life power load data from the Eastern Slovakian Electricity Corporation (http://neuron-ai.tuke.sk/competition) during workday and holiday periods, we compare the prediction results of four forecasting models: FBM, mFBM + PSO, RNN and mFBM + QPSO. We conclude that the forecasting accuracy of the mFBM + QPSO model is superior to the other three models.

This article is organized as follows. Section 2 explains how the Hurst value of time series data is obtained by the R/S method; the LRD characteristics of the time series are also discussed. Section 3 is devoted to the computation of the parameters $\mu$ and $\delta$ of the mFBM model and to building the mFBM forecasting model. Section 4 explains how the PSO obtains the local and global optimal solutions for the parameter $H_{i,j}(t)$; it also explains the mechanism to deduce the parameters $\mu$ and $\delta$ of the mFBM to predict the power load demand. Section 5 explains how the global optimum solution of the QPSO helps to obtain the parameter H of the mFBM, and how the optimal $\mu$ and $\delta$ parameters of the mFBM can be deduced to forecast the power load demand. In Section 6 we show and compare the power load forecasts of the four forecasting models. Final remarks are given in the Conclusion.
2. Computation of the Hurst exponent H

There exist many methods to calculate the Hurst exponent H of a time series. The time-domain methods process the series directly and estimate the Hurst value by least-squares fitting. Common methods are the aggregate variance method [40], the absolute value method [41], the ACF method [42] and the R/S method [16,20,21]. The R/S method is one of the most commonly used; it also supports the understanding of the LRD characteristics of the given time series. For the time series $\{y_i \mid i = 1, 2, \dots, n\}$, let

$$Y(n) = \sum_{i=1}^{n} y_i, \qquad (1)$$

with the sample variance given by

$$s^2(n) = \frac{1}{n}\sum_{i=1}^{n} y_i^2 - \left(\frac{1}{n}\,Y(n)\right)^2. \qquad (2)$$

The R/S ratio can then be defined as

$$\frac{R}{s}(n) = \frac{1}{s(n)}\left[\max_{1 \le t \le n}\left(Y(t) - \frac{t}{n}\,Y(n)\right) - \min_{1 \le t \le n}\left(Y(t) - \frac{t}{n}\,Y(n)\right)\right]. \qquad (3)$$
The R/S statistic can be viewed as a renormalized range, and its graph is usually represented on logarithmic plots. For example, the simulation result for a 72-h power load is shown in Fig. 1. For the given time series, the Hurst exponent estimated by R/S over the 48-h power load window is H = 0.62 > 0.5, which indicates LRD characteristics.
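To make the estimation procedure concrete, the following Python sketch illustrates Eqs. (1)-(3): it computes the R/S statistic over windows of several lengths and reads H off the slope of a log-log least-squares fit. The window sizes and the synthetic half-hourly series are illustrative assumptions, not data or settings from the study.

```python
import numpy as np

def rs_statistic(y):
    """R/S statistic of one window, following Eqs. (1)-(3)."""
    n = len(y)
    Y = np.cumsum(y)                                 # Eq. (1): partial sums Y(t)
    s = np.sqrt(np.mean(y**2) - (Y[-1] / n)**2)      # Eq. (2): sample std dev
    t = np.arange(1, n + 1)
    dev = Y - (t / n) * Y[-1]                        # Y(t) - (t/n) Y(n)
    return (dev.max() - dev.min()) / s               # Eq. (3): rescaled range

def hurst_rs(y, window_sizes=(8, 16, 32, 64, 128)):
    """Estimate H as the log-log slope of mean R/S versus window size.

    The window sizes are illustrative; they are not specified in the paper.
    """
    y = np.asarray(y, dtype=float)
    log_n, log_rs = [], []
    for n in window_sizes:
        if n > len(y):
            continue
        # Average the R/S statistic over non-overlapping windows of length n.
        rs_vals = [rs_statistic(y[i:i + n]) for i in range(0, len(y) - n + 1, n)]
        log_n.append(np.log(n))
        log_rs.append(np.log(np.mean(rs_vals)))
    slope, _ = np.polyfit(log_n, log_rs, 1)          # least-squares fit; H = slope
    return slope

# Toy usage on a synthetic 72-h half-hourly "load" series (144 points).
rng = np.random.default_rng(0)
load = 500 + np.cumsum(rng.normal(0, 5, 144))
print(f"Estimated Hurst exponent H = {hurst_rs(load):.2f}")
```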
Fig. 1. Simulation results of a 72-h power load.
3. The forecasting model of mFBM

Let the mFBM model be denoted by $B_{H(t)}(t)$. Then $B_{H(t)}(t)$ is governed by (4) [18]:

$$
\begin{cases}
B_{H(t)}(0) = b_0, \\[4pt]
B_{H(t)}(t) - B_{H(t)}(0) = \dfrac{1}{\Gamma(H(t)+0.5)}\left\{ \displaystyle\int_{-\infty}^{0}\left[(t-\tau)^{H(t)-0.5} - (-\tau)^{H(t)-0.5}\right]dB(\tau) + \int_{0}^{t}(t-\tau)^{H(t)-0.5}\,dB(\tau) \right\},
\end{cases} \qquad (4)
$$

where $\Gamma(\cdot)$ is the gamma function. The mFBM can then generate a specific stochastic time series as the forecast of the next power load series. The stochastic differential equation (SDE) [43,44] of the forecasting model is written as
$$dX_t = \mu X_t\,dt + \delta X_t\,dB_{H(t)}(t), \qquad (5)$$
where $\mu$ is the drift parameter and $\delta$ is the diffusion coefficient. The Hurst exponent is the most significant parameter of the mFBM; once it is obtained, the two parameters $\mu$ and $\delta$ of the mFBM can be deduced. The parameters $\mu$ and $\delta$ are calculated by maximum likelihood estimation (MLE) [45,46]. The forecasting model of the mFBM takes the form
$$\Delta X_t = \mu X_t\,\Delta t + \delta X_t\,w(t)\,(\Delta t)^{H(t)}, \qquad (6)$$
where $w(t)$ is an independent, standard normally distributed random variable. Equation (6) can be rewritten as
$$X_{t+1} = X_t + \mu X_t\,\Delta t + \delta X_t\,w(t)\,(\Delta t)^{H(t)}. \qquad (7)$$

Fig. 2. The forecasting flowchart of the mFBM.
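To illustrate how the recursion (7) generates a forecast path, a minimal Python sketch is given below. The function name, the placeholder values of $\mu$, $\delta$ and H, and the constant-H choice are illustrative assumptions; in the paper these parameters come from the MLE and optimization steps summarized in Fig. 2.

```python
import numpy as np

def mfbm_forecast(x0, mu, delta, H_of_t, n_steps, dt=1.0, seed=None):
    """Iterate Eq. (7): X_{t+1} = X_t + mu*X_t*dt + delta*X_t*w(t)*dt**H(t).

    H_of_t may be a constant (the FBM case) or a callable giving H at step t
    (the multifractional case). x0, mu and delta are assumed to be known,
    e.g. obtained beforehand from the fitting step of Fig. 2.
    """
    rng = np.random.default_rng(seed)
    H = H_of_t if callable(H_of_t) else (lambda t: H_of_t)
    x = np.empty(n_steps + 1)
    x[0] = x0
    for t in range(n_steps):
        w = rng.standard_normal()                    # w(t) ~ N(0, 1)
        x[t + 1] = x[t] + mu * x[t] * dt + delta * x[t] * w * dt**H(t)
    return x

# Hypothetical usage: forecast 48 half-hour steps (24 h) ahead from the last
# observed load value, with placeholder parameter values.
path = mfbm_forecast(x0=620.0, mu=1e-3, delta=5e-3,
                     H_of_t=lambda t: 0.62, n_steps=48, dt=0.5, seed=1)
print(path[:5])
```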
The forecasting flowchart of the mFBM is shown in Fig. 2. It is clear from Fig. 2 that once the value of H is calculated, $\mu$ and $\delta$ can also be determined, and (7) can generate the approximated sequences as the forecast power load sequences. In other words, mFBM reduces to FBM when H(t) is replaced by a constant H. For example, the given power load data in Fig. 1 yield H = 0.62, which shows that the power data series has LRD characteristics. Then the parameters $\mu = 0.196$ and $\delta = 9.72 \times 10^{4}$ are obtained and, using the flowchart of Fig. 2, $\hat{X}_t \Rightarrow \hat{X}_{t+1}$ can be deduced by (7). Consequently, the FBM model can generate the stochastic series for the next 24-h power data. Because the Hurst exponent is held constant, the series generated by equation (7) cannot follow the trend of a non-stationary stochastic series; consequently, for longer forecasting time windows the error increases gradually. As shown in Fig. 3, the maximum error is 7.023293%.

4. Particle Swarm Optimization for the Hurst index H(t)

In this section we describe how a PSO can be used to obtain the optimized $\hat{H}$ through velocity and position iteration, followed by the deduction of its associated $\hat{\mu}$ and $\hat{\delta}$. The process for the PSO optimization is given below.
Fig. 3. Comparison between the real power load and the forecasted data by the FBM model.
Step 1: Initialize H(t); let the number of particles be N = 20, each $H_{i,j}(t) = 2\,\mathrm{randint}(1,1,[0,2])$, and the number of iterations t = 20. The 20 particles of H(t) are shown in Table 1. Each particle is used to reach the local optimal solution and the global optimal solution through the iteration of its velocity and position.
Table 1
20 particles of H(t).

2     2     1     2
2     2     4     2
2     1.16  2     2
2     1     2     2
2     4     2     4

Step 2: Let $H_i = (H_{i,1}(t), H_{i,2}(t), \dots, H_{i,j}(t), \dots, H_{i,D}(t))$ and $V_i = (V_{i,1}(t), V_{i,2}(t), \dots, V_{i,j}(t), \dots, V_{i,D}(t))$ be the position and velocity vectors of particle i, respectively. Here, D is the dimension of the solution space, $1 \le i \le N$, $1 \le j \le D$.

Step 3: Evaluate the objective fitness function by using $\hat{\mu}$ and $\hat{\delta}$: $\hat{y}_{s+1} = F(\hat{\mu}, \hat{\delta}, H_{i,j}(t), y_s)$, where $f(H_{i,j}(t)) = \min\left(\mathrm{abs}\left(\mathrm{sum}\left((\hat{y}_{s+1} - y_{s+1})^2\right)\right)\right)$ is the fitting function.

Step 4: Assuming that $P_{i,j}(t)$ is the local optimal position of the particle $H_{i,j}(t)$, $P_{g,j}(t)$ ($1 \le g \le N$) is the global optimal position of the swarm, and $V_{i,j}(t)$ is the instantaneous velocity of the particle $H_{i,j}(t)$, we initialize the local optimal position $P_{i,j}(0)$ of particle $H_{i,j}(t)$ and the global optimal position $P_{g,j}(0)$ of the swarm.

Step 5: During the iteration procedure, we refine the position and velocity vectors as follows:

$$V_{i,j}(t+1) = w\,V_{i,j}(t) + c_1 r_1\left(P_{i,j}(t) - H_{i,j}(t)\right) + c_2 r_2\left(P_{g,j}(t) - H_{i,j}(t)\right), \qquad H_{i,j}(t+1) = H_{i,j}(t) + V_{i,j}(t+1), \qquad (8)$$

where $c_1 = 2$ and $c_2 = 2$ are the acceleration coefficients, $r_1 \sim U(0,1)$ and $r_2 \sim U(0,1)$ are uniform random sequences, and $w = 0.4{:}0.9$ is the inertial weight.

Step 6: The optimal position of each particle is iterated as follows:

$$P_{i,j}(t+1) = \begin{cases} P_{i,j}(t), & f\left(H_{i,j}(t+1)\right) \ge f\left(P_{i,j}(t)\right), \\ H_{i,j}(t+1), & f\left(H_{i,j}(t+1)\right) < f\left(P_{i,j}(t)\right). \end{cases} \qquad (9)$$

Step 7: The global optimal position is defined by $H_{gbest} = \arg\min\left(P_{i,j}(t)\right)$.

Step 8: End of iteration.

Suppose that $H_{gbest} = 1$ in the example sketched in Fig. 3. By using the diagram given in Fig. 2, we get $\hat{\mu} = 11.8$ and $\hat{\delta} = 4.3 \times 10^{16}$. The parameters of the mFBM can thus be optimized by the PSO algorithm, which makes the forecast of the power load for the subsequent time window more accurate. The result is shown in Fig. 4. The maximum error of the combined PSO + mFBM is 4.3%, while the FBM yields 7.023293%. However, this approach still suffers from the drawback of local optima. In the next section, we introduce the quantum-behaved particle swarm optimization algorithm to further improve the accuracy of the Hurst exponent.
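A minimal Python sketch of the updates (8)-(9) is given below. It assumes a generic fitness function that stands in for the fitting function of Step 3; the uniform initialization and the fixed inertia weight of 0.6 (inside the stated 0.4-0.9 range) are simplifying assumptions rather than the exact settings of the paper.

```python
import numpy as np

def pso_optimize(fitness, dim=3, n_particles=20, n_iters=20,
                 c1=2.0, c2=2.0, w=0.6, bounds=(0.0, 2.0), seed=0):
    """Plain PSO over candidate H vectors, following updates (8)-(9).

    `fitness` maps a candidate H vector to a scalar error to minimise; it
    stands in for f(H_{i,j}(t)) of Step 3. N = 20 particles, 20 iterations
    and c1 = c2 = 2 follow the text; the uniform initialization replaces
    the 2*randint(1,1,[0,2]) initialization of Step 1 for simplicity.
    """
    rng = np.random.default_rng(seed)
    H = rng.uniform(bounds[0], bounds[1], (n_particles, dim))   # positions
    V = np.zeros_like(H)                                        # velocities
    P = H.copy()                                                # personal bests
    P_fit = np.array([fitness(h) for h in H])
    g = P[P_fit.argmin()].copy()                                # global best
    for _ in range(n_iters):
        r1 = rng.random((n_particles, dim))
        r2 = rng.random((n_particles, dim))
        V = w * V + c1 * r1 * (P - H) + c2 * r2 * (g - H)       # Eq. (8)
        H = H + V
        fit = np.array([fitness(h) for h in H])
        better = fit < P_fit                                    # Eq. (9)
        P[better], P_fit[better] = H[better], fit[better]
        g = P[P_fit.argmin()].copy()
    return g, P_fit.min()

# Hypothetical fitness: squared one-step forecast error on a toy series,
# using placeholder drift/diffusion values; not the paper's exact objective.
y = np.array([600.0, 615.0, 605.0, 630.0])
fit = lambda h: (y[-1] - y[-2] * (1 + 1e-3 + 5e-3 * 0.5**np.mean(h)))**2
H_best, err = pso_optimize(fit)
```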
5. Quantum-Behaved Particle Swarm Optimization for the Hurst exponent

The combined approach of the PSO and the mFBM for short-term power load forecasting was described in the previous section. We have emphasized that the accuracy of this approach is attractive, but its disadvantage is premature convergence to a local minimum when it is applied to complex problems. In the integrated approach, since each particle of the PSO moves in the search space with a velocity vector to find its local optimum and the global optimum, it may get trapped in its local optimal solution and never reach the global optimal one. To solve this problem, starting from the mechanical point of view that particles have quantum behavior, quantum computation is introduced into the particle swarm optimization algorithm, and the global optimal solution is searched for so as to avoid falling into a local optimal solution; the resulting method is called the QPSO. In addition, only a few parameters need to be adjusted. In order to guarantee the convergence of the PSO algorithm, Clerc and Kennedy [47] proved that each particle must converge to the local attractor $p_{i,j}$, which is defined as

$$p_{i,j}(t) = \varphi\,P_{i,j}(t) + (1 - \varphi)\,P_{g,j}(t), \qquad \varphi \sim U(0,1). \qquad (10)$$

Compared with the convergence mechanism of the PSO, the QPSO defines a one-dimensional delta potential well for each dimension at the point $p_i(t)$, and each particle has a quantum behavior, which can be described by the Schrödinger equation [48]. The distribution function of the particle position is given as

$$F\left(H_{i,j}(t+1)\right) = e^{-2\left|p_{i,j}(t) - H_{i,j}(t+1)\right| / L_{i,j}(t)}, \qquad (11)$$
where $L_{i,j}(t)$ is the standard deviation of the distribution. The position of each particle can be obtained by using the Monte Carlo method [49]:

$$H_{i,j}(t+1) = p_{i,j}(t) \pm \frac{L_{i,j}(t)}{2}\,\ln(1/u), \qquad (12)$$
where $u \sim U(0,1)$. The mean best position $m(t)$ is defined as the average of the local best positions of the swarm:
$$m(t) = \left(m_1(t), m_2(t), \dots, m_D(t)\right) = \left(\frac{1}{N}\sum_{i=1}^{N} P_{i,1}(t),\; \frac{1}{N}\sum_{i=1}^{N} P_{i,2}(t),\; \dots,\; \frac{1}{N}\sum_{i=1}^{N} P_{i,D}(t)\right), \qquad (13)$$
where N is the population size and $P_i$ is the local optimal position of the i-th particle. The values of $L_{i,j}(t)$ and the position $H_{i,j}(t+1)$ are evaluated as follows:
$$L_{i,j}(t) = 2\beta\left|m_j(t) - H_{i,j}(t)\right|, \qquad (14)$$

$$H_{i,j}(t+1) = p_{i,j}(t) \pm \beta\left|m_j(t) - H_{i,j}(t)\right|\ln(1/u), \qquad (15)$$

where $\beta < 1.781$ is the only parameter, and this bound guarantees the convergence of the particle. Continuing with the previous example, we use the previous 48-h power data to predict the next 24-h power load.

Step 1: Initialize H(t); let the number of particles be N = 20, each $H_{i,j}(t) = 2\,\mathrm{randint}(1,1,[0,2])$ with dimension equal to 3, and the number of iterations t = 20. The 20 particles of H(t) are shown in Table 2. Initialize the current position, the individual optimal positions, and the global optimal position of the population. Each particle is then used to find the global optimal solution.

Table 2
20 particles of H(t).

1     3     3     1
3     3     1     2
1     3     2     3
3     3     1     1
1     2     1     1

Step 2: Initialize the local optimal position $P_{i,j}(0)$ for the current position $H_{i,j}(0)$. The mean best position $m_j(t)$ is computed by (13). Select a suitable value of $\beta$ and evaluate the objective function value $f(H_{i,j}(t))$.

Step 3: The local attractor $p_{i,j}(t)$ is computed by (10). If $\mathrm{rand}(0,1) > 0.5$,

$$H_{i,j}(t+1) = p_{i,j}(t) + \beta\left|m_j(t) - H_{i,j}(t)\right|\ln(1/u);$$

else,

$$H_{i,j}(t+1) = p_{i,j}(t) - \beta\left|m_j(t) - H_{i,j}(t)\right|\ln(1/u).$$

Step 4: The optimal position $P_{i,j}(t)$ of each particle $H_{i,j}(t)$ is updated by $P_{g,j} = \arg\min_{P_i}(P_i)$.

Step 5: The global optimal position is given by $H_{gbest} = \arg\min_{P_g}\{P_g\}$.

Step 6: End of iteration.

Fig. 4. H of mFBM optimized by PSO for power forecasting.

Fig. 5. Optimization of H using QPSO for fine-tuning mFBM forecasting.

Table 3
The parameters of different forecasting models in Case 1.

Model          H        $\mu$    $\delta$
FBM            0.6670   0.7762   1.7273e+04
PSO + mFBM     1.1249   2.6892   8.2208e+11
QPSO + mFBM    1.7614   4.4585   4.7075e+06

During the process of updating the particle positions, the fitness value of the current position of each particle must be compared with the fitness value of the global optimal position. The loop iteration is used to find the global optimal solution. Once the iteration limit or the minimum error is reached in the QPSO, the optimal approximation of the H index can be found. This effective H index is then used by the mFBM model for the next data series forecast. Continuing with the example of Fig. 3, $H_{gbest} = 1.747438$ is found first, followed by $\hat{\mu} = 32.9$ and $\hat{\delta} = 2.37 \times 10^{5}$. These can be used to deduce $y_s \rightarrow \hat{y}_{s+1}$ according to (6). In this way, we optimize the parameters of the mFBM by making use of the QPSO algorithm, and then re-establish the mFBM model to forecast the subsequent power load.
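For reference, the following Python sketch mirrors the QPSO updates of Eqs. (10), (13) and (15). The fitness function, the fixed value beta = 0.75 and the uniform initialization are illustrative assumptions; in the paper beta is selected in Step 2 and the fitness is the forecasting error of the mFBM model.

```python
import numpy as np

def qpso_optimize(fitness, dim=3, n_particles=20, n_iters=20,
                  beta=0.75, bounds=(0.0, 2.0), seed=0):
    """Quantum-behaved PSO following Eqs. (10), (13) and (15).

    `fitness` is a placeholder objective to minimise; beta < 1.781 is the
    contraction-expansion coefficient, fixed here at 0.75 as an example.
    """
    rng = np.random.default_rng(seed)
    H = rng.uniform(bounds[0], bounds[1], (n_particles, dim))   # positions
    P = H.copy()                                 # personal best positions
    P_fit = np.array([fitness(h) for h in H])
    g = P[P_fit.argmin()].copy()                 # global best position
    for _ in range(n_iters):
        mbest = P.mean(axis=0)                   # Eq. (13): mean best position
        phi = rng.random((n_particles, dim))
        p_attr = phi * P + (1 - phi) * g         # Eq. (10): local attractors
        u = rng.random((n_particles, dim))
        sign = np.where(rng.random((n_particles, dim)) > 0.5, 1.0, -1.0)
        # Eq. (15): sample the new positions around the local attractors.
        H = p_attr + sign * beta * np.abs(mbest - H) * np.log(1.0 / u)
        fit = np.array([fitness(h) for h in H])
        better = fit < P_fit
        P[better], P_fit[better] = H[better], fit[better]
        g = P[P_fit.argmin()].copy()
    return g, P_fit.min()

# Hypothetical usage with the same toy fitness as the PSO sketch above.
y = np.array([600.0, 615.0, 605.0, 630.0])
fit = lambda h: (y[-1] - y[-2] * (1 + 1e-3 + 5e-3 * 0.5**np.mean(h)))**2
H_best, err = qpso_optimize(fit)
```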
Fig. 6. Power load forecasting on Friday and its error analysis: (a) real and forecasted power loads on Friday; (b) relative error rates; (c) box-plot of the relative error rates with different step lengths; (d) norm-plot of the relative error rates with different step lengths.
Table 4
Relative error rates of different forecasting models in Case 1.

Model          Max        Mean       Median     Standard Deviation
FBM            10.09716   4.19964    4.415478   2.870234
PSO + mFBM     6.123293   3.98491    3.487297   2.008887
RNN            5.670545   2.45424    1.556361   1.480833
QPSO + mFBM    2.892868   0.954681   0.682468   0.786988
Table 5
The parameters of different forecasting models in Case 2.

Model          H        $\mu$     $\delta$
FBM            0.7315   0.2319    3.2672e+03
PSO + mFBM     1.2716   23.1178   1.4587e+13
QPSO + mFBM    1.6015   20.6063   1.48e+05
This is shown in Fig. 5. With this approach, the maximum error rate of the QPSO + mFBM can be reduced to 3.1%.
6. Experimental results

In this section, we validate the effectiveness of the proposed integrated QPSO + mFBM approach. The workflow of the stochastic models for the power load forecasting in this study is as follows.

1) The R/S method is first used to obtain the Hurst exponent of the power data series. The condition 0.5 < H < 1 implies that the data have LRD characteristics.
2) The parameters $\mu$, $\delta$, and H of the mFBM are optimized effectively through the global search of the QPSO. The mFBM can then generate an approximate series as the forecast of the next power load series.
3) mFBM + QPSO should choose the optimum step size for the power load forecasting. In the experiments, we focus on the detailed analysis of the PSO + mFBM and QPSO + mFBM models, and also use the RNN and FBM models for comparison purposes.
4) The relative error rate with respect to the actual load is used as the key performance indicator (a sketch of its computation follows this list).

The power load data from the Eastern Slovakian Electricity Corporation were obtained for our experiment. The daily load curve consists of 48 points, i.e., sampling points are spaced at 30-min intervals. We first considered the influence of the prediction step size, and then forecasted the workday and holiday power loads. Since the power load series is stochastic, the forecasting length affects the forecasting accuracy; furthermore, we attempted to find the optimal sampling range that yields small forecasting errors.
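As referenced in item 4 above, the following Python sketch computes the relative error rate and the summary statistics (max, mean, median, standard deviation) of the kind reported in Tables 4 and 6. The example load values are placeholders, not data from the study.

```python
import numpy as np

def relative_error_stats(actual, forecast):
    """Relative error rate (%) against the actual load, with the summary
    statistics used in Tables 4 and 6 (max, mean, median, sample std)."""
    actual = np.asarray(actual, dtype=float)
    forecast = np.asarray(forecast, dtype=float)
    err = 100.0 * np.abs(forecast - actual) / actual
    return {"max": err.max(), "mean": err.mean(),
            "median": np.median(err), "std": err.std(ddof=1)}

# Hypothetical usage with placeholder half-hourly load values.
actual = [610.0, 625.0, 640.0, 620.0]
forecast = [602.0, 631.0, 628.0, 626.0]
print(relative_error_stats(actual, forecast))
```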
6.1. Case 1: Workday

To gain better insight into the QPSO + mFBM forecasting effect, we chose to forecast the power load data of 20th March (Friday) and used the RNN algorithm as the deep-learning method for comparison. 6th March and 13th March were previous Fridays; they were expected to have similar characteristics because of the weekly cycle and load variation. Furthermore, 18th March and 19th March were close to 20th March, which means that they should also reflect the trend of the power load. The parameters of the different prediction models are shown in Table 3, and the forecasting results are shown in Fig. 6. From these figures, we note that the maximum relative error rate of the FBM is 10.1%, followed by the PSO + mFBM with a 6.1% error. The error rate of the RNN drops to 5.67%. Finally, the QPSO + mFBM attains a 2.89% error rate. This reveals the superiority of the QPSO + mFBM over the other methods. Fig. 6(c) shows the box-plot charts of the error rates of the four prediction models.
Fig. 7. Power forecasting on Sunday and its error analysis: (a) real and forecasted power loads on Sunday; (b) relative error rates; (c) box-plot of relative error rates with different step sizes; (d) norm-plot of relative error rates with different step sizes.
Table 6
Relative error rates of different forecasting models in Case 2.

Model          Max        Mean       Median     Standard Deviation
FBM            13.19865   4.648453   4.616822   2.658825
PSO + mFBM     8.461283   3.997359   3.933228   2.126456
RNN            7.949999   2.664427   2.692467   1.392939
QPSO + mFBM    4.13116    1.233191   1.272298   0.709522
Fig. 6(d) shows the norm-plots of the relative error rates at different step lengths. The maximum error rates and the corresponding error statistics are shown in Table 4.
6.2. Case 2: Holiday
We repeated the analysis of Case 1, but this time the chosen day was 22nd March, a Sunday. Again, data from the previous cycle days, 8th March and 15th March, as well as data from the closest days, 20th March and 21st March, were used to reflect the trend of the power load. Because 20th March was a Friday, i.e., a workday, the forecasting accuracy was lower than that of the workday case. The parameters of the different prediction models are shown in Table 5, and the forecasting results are shown in Fig. 7. We note that the maximum relative error rates of FBM, PSO + mFBM, RNN, and QPSO + mFBM are 13.19%, 8.46%, 7.95%, and 4.13%, respectively. Such errors are expected and are close to those of Case 1. Observations from the box-plot charts (shown in Fig. 7(c)) and the norm-plots (shown in Fig. 7(d)) are also close to those of Case 1. Thus, the QPSO + mFBM method exhibits its robustness. Table 6 shows the statistics of the relative error rates.
7. Conclusion
This paper integrates the QPSO and mFBM models to forecast short-term non-stationary LRD power load series. The Hurst exponent, estimated by the R/S method, characterizes the LRD of the non-stationary power load series. Since the value of H is constant over a finite interval, FBM can only generate a stochastic series with a fixed correlation structure, so FBM is not appropriate for forecasting the trend of a non-stationary power load series. The PSO can easily get trapped in a local optimal solution, whereas the QPSO avoids this by searching for the globally optimal solution; hence the optimization effect of the QPSO has a higher accuracy than that of the PSO. The QPSO optimizes the parameter H(t) by choosing an optimum step size; then the optimal $\mu$ and $\delta$ of the mFBM can be computed effectively from the optimal H. As a result, the forecasting accuracy is improved. The FBM, PSO + mFBM, RNN and QPSO + mFBM models are used to forecast the short-term power load series on workdays and holidays, respectively. The box-plot and norm-plot methods are used to analyze the relative forecasting errors. The forecasting results show that the QPSO + mFBM has a higher accuracy than the other forecasting methods.
Declaration of competing interest

The authors declare that there is no conflict of interest regarding the publication of this article.

References

[1] Kong W, Dong ZY, Hill DJ, Luo F, Xu Y. Short-term residential load forecasting based on resident behaviour learning. IEEE Trans Power Syst 2018;33:1087-8.
[2] Mordjaoui M, Haddad S, Medoued A, Laouafi A. Electric load forecasting by using dynamic neural network. Int J Hydrogen Energy 2017;42:17655-63.
[3] Tucci M, Crisostomi E, Giunta G, Raugi M. A multi-objective method for short-term load forecasting in European countries. IEEE Trans Power Syst 2016;31:3537-47.
[4] Kavousi-Fard A, Samet H, Marzbani F. A new hybrid modified firefly algorithm and support vector regression model for accurate short term load forecasting. Expert Syst Appl 2014;41:6047-56.
[5] Zhao H, Guo S. An optimized grey model for annual power load forecasting. Energy 2016;107:272-86.
[6] Fard AK, Akbari-Zadeh M-R. A hybrid method based on wavelet, ANN and ARIMA model for short-term load forecasting. J Exp Theor Artif Intell 2014;26:167-82.
[7] Yaslan Y, Bican B. Empirical mode decomposition based denoising method with support vector regression for time series prediction: a case study for electricity load forecasting. Measurement 2017;103:52-61.
[8] Shi H, Xu M, Li R. Deep learning for household load forecasting - a novel pooling deep RNN. IEEE Transactions on Smart Grid 2018;9:5271-80.
[9] Tian C, Ma J, Zhang C, Zhan P. A deep neural network model for short-term load forecast based on long short-term memory network and convolutional neural network. Energies 2018;11:3493. https://doi.org/10.3390/en11123493.
[10] Wang ZJ, Zheng LK, Wang JY, Du WH. Research of novel bearing fault diagnosis method based on improved krill herd algorithm and kernel extreme learning machine. Complexity 2019. https://doi.org/10.1155/2019/4031795.
[11] Wang ZJ, Zheng LK, Du WH, Cai WN, Zhou J, Wang J, Han XF, Feng HG. A novel method for intelligent fault diagnosis of bearing based on capsule neural network. Complexity 2019;2019:1-17. https://doi.org/10.1155/2019/6943234.
[12] Kai LJ, Cattani C, Song WQ. Power load prediction based on fractal theory. Advances in Mathematical Physics 2015:6.
[13] Korolko N, Sahinoglu Z, Nikovski D. Modeling and forecasting self-similar power load due to EV fast chargers. IEEE Transactions on Smart Grid 2016;7:1620-9.
[14] Zachevsky I, Zeevi YY. Single-image superresolution of natural stochastic textures based on fractional Brownian motion. IEEE Trans Image Process 2014;23:2096-108.
[15] Rezaul KM, Pakstas A, Gilchrist R, Chen TM. HEAF: a novel estimator for long-range dependent self-similar network traffic, next generation teletraffic and wired/wireless advanced networking. Proceedings 2006;4003:34-45.
[16] Song WQ, Li M, Liang JK. Prediction of bearing fault using fractional Brownian motion and minimum entropy deconvolution. Entropy 2016;18:15.
[17] Delorme M, Wiese KJ. Maximum of a fractional Brownian motion: analytic results from perturbation theory. Phys Rev Lett 2015;115:5.
[18] Delorme M, Wiese KJ. Perturbative expansion for the maximum of fractional Brownian motion. Phys Rev 2016;94:24.
[19] Vamos C, Craciun M, Suciu N. Automatic algorithm to decompose discrete paths of fractional Brownian motion into self-similar intrinsic components. European Physical J B 2015;88:10.
[20] Mielniczuk J, Wojdy P. Estimation of Hurst exponent revisited. Comput Stat Data Anal 2007;51:4510-25.
[21] Barunik J, Kristoufek L. On Hurst exponent estimation under heavy-tailed distributions. Phys A Stat Mech Appl 2010;389:3844-55.
[22] Wang G, Antar G, Devynck P. The Hurst exponent and long-time correlation. Phys Plasmas 2000;7:1181-3.
[23] Song WQ, Li M, Li YY, Cattani C, Chi C-H. Fractional Brownian motion: difference iterative forecasting models. Chaos, Solit Fractals 2019;123. https://doi.org/10.1016/j.chaos.2019.04.021.
[24] Gao YD, Villecco F, Li M, Song WQ. Multi-scale permutation entropy based on improved LMD and HMM for rolling bearing diagnosis. Entropy 2017;19:176. https://doi.org/10.3390/e19040176.
[25] Peltier R, Levy Vehel J. Multifractional Brownian motion: definition and preliminary results. INRIA; 1995.
[26] Lim S, Muniandy SV. On some possible generalizations of fractional Brownian motion. Phys Lett A 2000;266:140-5. https://doi.org/10.1016/S0375-9601(00)00034-7.
[27] Li M. Fractal time series - a tutorial review. Math Probl Eng 2010;2010. https://doi.org/10.1155/2010/157264.
[28] Muniandy SV, Lim S, Murugan R. Inhomogeneous scaling behaviors in Malaysian foreign currency exchange rates. Physica A 2001;301:407-28. https://doi.org/10.1016/S0378-4371(01)00387-9.
[29] Song WQ, Cheng XX, Cattani C. Multi-fractional Brownian motion and quantum-behaved partial swarm optimization for bearing degradation forecasting. Complexity, in press; 2019.
[30] Boubaker S. Identification of nonlinear Hammerstein system using mixed integer-real coded particle swarm optimization: application to the electric daily peak-load forecasting. Nonlinear Dyn 2017;90:797-814.
[31] Soufi Y, Kahla S, Bechouat M. Feedback linearization control based particle swarm optimization for maximum power point tracking of wind turbine equipped by PMSG connected to the grid. Int J Hydrogen Energy 2016;41:20950-5.
[32] Baskar S, Zheng RT, Alphones A, Ngo NQ, Suganthan PN. Particle swarm optimization for the design of low-dispersion fiber Bragg gratings. IEEE Photonics Technol Lett 2005;17:615-7.
[33] Zhang PG, Yang CL, Xu ZH, Cao ZL, Mu QQ, Xuan L. Hybrid particle swarm global optimization algorithm for phase diversity phase retrieval. Opt Express 2016;24:25704-17.
[34] Singh MR, Mahapatra SS. A quantum behaved particle swarm optimization for flexible job shop scheduling. Comput Ind Eng 2016;93:36-44.
[35] Sun T, Xu M-h. A swarm optimization genetic algorithm based on quantum-behaved particle swarm optimization. Computational Intelligence and Neuroscience; 2017.
[36] Li Y, Jiao L, Shang R, Stolkin R. Dynamic-context cooperative quantum-behaved particle swarm optimization based on multilevel thresholding applied to medical image segmentation. Inf Sci 2015;294:408-22.
[37] Xu C, Zhang P, Wang H, Li Y, Lv C. Ultrasonic echo waveshape features extraction based on QPSO-matching pursuit for online wear debris discrimination. Mech Syst Signal Process 2015;60-61:301-15.
[38] Xu M, Zhang L, Du B, Zhang L, Fan Y, Song D. A mutation operator accelerated quantum-behaved particle swarm optimization algorithm for hyperspectral endmember extraction. Remote Sens 2017;9.
[39] Xue T, Li R, Tokgo M, Ri J, Han G. Trajectory planning for autonomous mobile robot using a hybrid improved QPSO algorithm. Soft Computing 2017;21:2421-37.
[40] Taqqu MS, Teverovsky V, Willinger W. Estimators for long-range dependence: an empirical study. Fractals 1995;3:785-98.
[41] Montanari A, Taqqu MS, Teverovsky V. Estimating long-range dependence in the presence of periodicity: an empirical study. Math Comput Model 1999;29:217-28.
[42] Li M, Zhao W, Jia W, Long D, Chi C-H. Modeling autocorrelation functions of self-similar teletraffic in communication networks based on optimal approximation in Hilbert space. Appl Math Model 2003;27:155-68.
[43] Pei B, Xu Y. On the non-Lipschitz stochastic differential equations driven by fractional Brownian motion. Adv Differ Equ 2016. https://doi.org/10.1186/s13662-016-0916-1.
[44] Xu Y, Pei B, Li Y. An averaging principle for stochastic differential delay equations with fractional Brownian motion. Abstract and Applied Analysis; 2014.
[45] Ming L. Modeling autocorrelation functions of long-range dependent teletraffic series based on optimal approximation in Hilbert space - a further study. Appl Math Model 2007;31:625-31.
[46] Longjin L, Ren F-Y, Qiu W-Y. The application of fractional derivatives in stochastic models driven by fractional Brownian motion. Phys A Stat Mech Appl 2010;389:4809-18.
[47] Clerc M, Kennedy J. The particle swarm - explosion, stability, and convergence in a multidimensional complex space. IEEE Trans Evol Comput 2002;6:58-73.
[48] Wang P, Huang C. An energy conservative difference scheme for the nonlinear fractional Schrödinger equations. J Comput Phys 2015;293:238-51.
[49] Gong HF, Chen ZS, Zhu QX, He YL. A Monte Carlo and PSO based virtual sample generation method for enhancing the energy prediction and energy optimization on small data problem: an empirical study of petrochemical industries. Appl Energy 2017;197:405-15.