Load Forecasting based on Grasshopper Optimization and a Multilayer Feed-forward Neural Network Using Regressive Approach

M. Talaat, M.A. Farahat, Noura Mansour, A.Y. Hatata

PII: S0360-5442(20)30194-8
DOI: https://doi.org/10.1016/j.energy.2020.117087
Reference: EGY 117087
To appear in: Energy
Received Date: 18 September 2019
Accepted Date: 02 February 2020

Please cite this article as: M. Talaat, M.A. Farahat, Noura Mansour, A.Y. Hatata, Load Forecasting based on Grasshopper Optimization and a Multilayer Feed-forward Neural Network Using Regressive Approach, Energy (2020), https://doi.org/10.1016/j.energy.2020.117087
Load Forecasting based on Grasshopper Optimization and a Multilayer Feed-forward Neural Network Using Regressive Approach

M. Talaat*1,3, M. A. Farahat1, Noura Mansour1 and A.Y. Hatata2,3

1 Electrical Power and Machines, Faculty of Engineering, Zagazig University, P.O. 44519, Zagazig, Egypt.
2 Electrical Engineering Department, Faculty of Engineering, Mansoura University, Egypt.
3 Electrical Engineering Department, College of Engineering, Shaqra University, Dawadmi, Ar Riyadh, Saudi Arabia.
* [email protected]
Abstract

This paper introduces a proposed model for mid-term to short-term load forecasting (MT-STLF) that can be used to forecast loads at different hours and on different days of each month. The combined MT-STLF model was investigated to aid in power generation and electricity purchase planning. A hybrid model of a multilayer feed-forward neural network (MFFNN) and the grasshopper optimization algorithm (GOA) was introduced to obtain high-accuracy results for load forecasting using the combined MT-STLF model. The MFFNN is prepared by processing the input layer and output layer and finally selecting a suitable number of hidden layers. The main steps in developing the model from the MFFNN include entering the data into the network, training the model and finally implementing the prediction process. The accuracy of the model obtained before using the GOA was lower than that after applying the GOA. Weather factors such as the temperature were used as inputs to the MFFNN during MT-STLF modelling to ensure high accuracy. In the proposed model, the temperature had a clear effect on the forecasted load. Additionally, there was a difference between the maximum and minimum loads in the winter and summer months. A regressive model was introduced to determine the relations between the dependent variable (the load) and the independent variables that affect the load, such as the temperature. The regressive model used in the paper highlights the effect of the temperature on the hourly load. The accuracy of the hybrid model is satisfactory, with a deviation error that varies between -0.06 and 0.06. Moreover, the performance of the proposed forecasting model has been assessed by three indices, the Root Mean Square Error (RMSE), Mean Absolute Error (MAE) and Mean Absolute Percentage Error (MAPE), and then compared with other forecasting models based on other optimization algorithms.

Key words: Mid-term load forecasting; short-term load forecasting; power generation; multilayer feed-forward neural network; grasshopper optimization algorithm; regressive model.
Abbreviations

MTLF     Mid-Term Load Forecasting
STLF     Short-Term Load Forecasting
MT-STLF  Mid-Term to Short-Term Load Forecasting
MFFNN    Multilayer Feed-Forward Neural Network
GOA      Grasshopper Optimization Algorithm
LTLF     Long-Term Load Forecasting
ARIMA    Autoregressive Integrated Moving Average
AR       Auto Regression
MA       Moving Average
ARMA     Autoregressive Moving Average
ANN      Artificial Neural Network
GA       Genetic Algorithm
FI       Fuzzy Inference
SVM      Support Vector Machines
ELMs     Extreme Learning Machines
BNN      Bagging Neural Network
NN       Neural Networks
MLP      Multilayer Perceptron
RBFNN    Radial Basis Function Neural Network
GRNN     Generalized Regression Neural Network
CPNN     Counter-Propagation Neural Network
ENN      Elman Neural Network
WNN      Wavelet Neural Network
WT       Wavelet Transform
EMD      Empirical Mode Decomposition
GWO      Grey Wolf Optimization
ALO      Ant Lion Optimization
SVR      Support Vector Regression
ASO      Ant Swarm Optimization
ACO      Ant Colony Optimization
FFOA     Fruit Fly Optimization Algorithm
ABC      Artificial Bee Colony
RFA      Rainfall Algorithm
CSO      Chaotic Swarm Optimization
CSA      Cuckoo Search Algorithm
EAM      Environmental Adaptation Method
ML       Marquardt-Levenberg
MSE      Mean Square Error
MAE      Mean Absolute Error
RMSE     Root Mean Square Error
MAPE     Mean Absolute Percentage Error
Nomenclature

p        The autoregressive order with functions ϕ and θ
d        The number of differences from the original data with functions ϕ and θ
∇        The difference operator in a stationary system
q        The order of the MA process with functions ϕ and θ
P        The autoregressive order with functions Φ and Θ
D        The number of differences from the original data with functions Φ and Θ
Q        The order of the MA process with functions Φ and Θ
y        The output term
B        The time lag operator
e        Random error term
s        The number of samples
y_j      The output of the jth first hidden neuron
w_ij     The weight between the ith input neuron and the jth first hidden neuron
b_j      The bias of the first hidden layer
x_i      The ith input
z_k      The output from the second hidden layer
w_jk     The weight between the jth first hidden neuron and the kth second hidden neuron
b_k      The bias of the second hidden layer
O_o      The output from the oth output neuron
w_ko     The weight between the kth second hidden neuron and the oth output neuron
N        The number of training patterns
T_d      The target output
Fitness(T_d)  The fitness value of the training result
Δw       The change in the weight
Δb       The change in bias
γ        The learning rate
t        Time
X_i      The position of the ith grasshopper
S_i      The force of social interactions of the ith grasshopper
G_i      The gravity force of the ith grasshopper
A_i      The wind advection of the ith grasshopper
r1, r2, r3  Random numbers in the range [0, 1]
d_ij     The distance between the ith and the jth grasshoppers
d̂_ij     Unit vector from the ith to the jth grasshopper
N        The total number of grasshoppers
s(r)     The strength of the social forces function
l        The attractive length scale
f        The strength of attraction
g        The gravitational constant
ê_g      Unit vector towards the centre of the Earth
u        The drift constant
ê_w      Unit vector in the wind direction
lb_d     The lower bound in the dth dimension
ub_d     The upper bound in the dth dimension
T̂_d      The value of the target in the dth dimension (best solution)
c        A coefficient that decreases in proportion to the number of iterations
c_min    The minimum value of the decreasing coefficient
c_max    The maximum value of the decreasing coefficient
l        The current iteration
l_max    The maximum iteration
R        The regression factor
L_t      The actual value of the load at time t
L̂_t      The forecasted value of the load at time t
n        The total number of data used
1 Introduction

Recently, load forecasting has become a critical task in power system operations. The load refers to the consumption of power from power system plants [1]. The new trend in sustainable energy development and entrepreneurship is to utilize energy in a way that satisfies human needs. This requires knowledge of energy forecasting so that the available energy can be used in a smart way with smart decisions. Such energy forecasting depends on the diagnosis of the data available from the power system grid over short and long periods of time [2]. Load forecasting helps to estimate future loads from recent loads using various techniques in efforts to save energy, reduce costs, perform power management and implement economic dispatch plans. The more accurate the forecasting process is, the more reliable and stable the system is. An accurate forecasting process is necessary because a 1% increase in the forecasted load can considerably increase costs due to the establishment of many power generation units and may cause an increase in the spinning reserve; however, a decrease in the forecast can lead to system failure and damage due to a low spinning reserve and an insufficient number of generating units. Providing an accurate forecast is very important in electricity markets because (a) the load is considered the basic component in determining the electricity price for consumers, (b) forecasts make the overall system more stable and reduce costs, and (c) forecasting reduces the unnecessary consumption of energy. Conversely, an inaccurate forecasting process can result in losses and a poor economic system.

Load forecasting methods can be divided into three types [3, 4]. The first type is long-term load forecasting (LTLF) [5], which is used to predict the load during a year or over more than one year and is very important in economic operations involving the power system. The second type is mid-term load forecasting (MTLF) [6], which is used to predict the load at weekly, monthly, or even annual time scales. This type of method is very important for maintenance operations in a power system. The third type of method is short-term load forecasting (STLF), which is used to predict the load at hourly, daily and weekly time scales. Selecting the appropriate input data affects the performance of the respective method.

STLF is considered one of the most important tasks in the field of power systems, so researchers have introduced many methods to achieve accurate STLF with low error. STLF has great importance in power system operations such as power scheduling, power planning and economic operations [7]. Moreover, this type of forecasting is important for the efficiency and reliability of the power system. Currently, power scheduling and unit commitment [8] are necessary components of STLF. Accurate STLF can aid in making the system more secure and stable and help generating units implement production plans [9, 10].
Many models and techniques are used to accurately predict load-forecasting operations. These models are divided into statistical models and non-statistical models or expert systems. Statistical models are divided into autoregressive integrated moving average (ARIMA), linear regression, multiple linear regression and exponential smoothing models [11]. Statistical methods depend on the previous load and current load and are based on series of mathematical equations. However, in some cases, statistical models do not yield the best accuracy. The time series approach is a statistical method that consistently provides good accuracy [10]. Time series models include the auto regression (AR), moving average (MA), autoregressive moving average (ARMA) and ARIMA models [10]. The advantage of these models is that they yield good accuracy under typical conditions, but they produce poor results on days with abnormal or special events, such as holidays and weekends. ARIMA models depend on historical and current data in addition to external factors that affect the load forecasting process. Exponential smoothing yields better accuracy than ARIMA models, but a high error is often generated in long-term load forecasting. Multiple linear regression can address the relationships among different independent variables. In addition, some time series filtering methods, such as the Kalman filter method, have been used to provide smoothing operations and improve the forecasting accuracy [12, 13]. The Kalman filter provides an optimal dynamic estimator from the available data and can solve nonlinear problems through linearization [14]. The disadvantage of the Kalman filter is that it cannot implement nonlinear models, so extended Kalman filter methods have been introduced [13], such as state space models (SSMs) with Kalman filters. SSMs have been used with ARIMA models and Kalman filters in STLF. The disadvantage of SSMs is their low accuracy.

Expert systems often involve artificial neural network (ANN) [15], genetic algorithm (GA) [16], fuzzy inference (FI) [17] and support vector machine (SVM) [18] methods. The advantage of expert systems is that they can obtain high-precision results for loads that are nonlinear. The major disadvantage of expert systems is overfitting, and this problem can be avoided by using hybrid systems and combinations of expert systems. There are many classes of ANNs, such as extreme learning machines (ELMs) [19], which have a fast learning speed. The disadvantage of ELMs (like other ANN models) is the random initialization of the weight parameters, which may produce error in the results. Another disadvantage is associated with over-training. Bagging neural network (BNN) [20] methods are also based on neural networks (NN) and create different sets of data; then, the NN is trained on each data set, and the results obtained for each data set are averaged. The advantage of a BNN is the reduction in the estimation error. There are other neural network-based techniques, such as [21] multilayer perceptron (MLP), radial basis function neural network (RBFNN), generalized regression neural network (GRNN), counter-propagation neural network (CPNN), and Elman neural network (ENN) [12] methods. GRNN and CPNN methods generate the forecasted output as a vector, and the MLP and RBFNN methods generate the forecasted output as one component [21]. ENNs yield higher performance than BNNs when applied in complex nonlinear models. The wavelet neural network (WNN) is another neural network-based technique [10]. Wavelet transform (WT) decomposes the original electricity load into several components, and each component is predicted by an ELM [10]. WT is applied to filter the high-frequency component, and the remaining component is predicted by optimization techniques. WT may be combined with empirical mode decomposition (EMD). EMD is an adaptive signal-processing method that decomposes the time series into several functions [12]. The advantages of EMD over WT are that it is self-adaptive and there is no need to set the decomposition scale. WNNs are based on WT and ANNs and thus combine the best time-frequency localization properties of wavelet transform and neural network methods [10]. FI is an artificial intelligence technique, and its advantages are that it does not need a mathematical model that relates input to output and does not require extremely precise input data [17]. SVMs are used in STLF and can achieve satisfactory accuracy because they are powerful learning techniques; however, they require a long processing time [18, 22]. Many algorithms, such as ant lion optimization (ALO), have been combined with SVMs to enhance their accuracy. When SVMs are applied in regression models, the resulting method is called support vector regression (SVR) [23]. Grey Wolf Optimization (GWO) [24] has recently been used in the forecasting process to reduce the relevant error and obtain the best solution. Many techniques used for STLF are combined with optimization techniques to enhance their performance and obtain high accuracy, and common optimization methods include ant swarm optimization (ASO) [25], the grasshopper optimization algorithm (GOA) [2, 26], ant colony optimization (ACO) [27], the fruit fly optimization algorithm (FFOA) [28], artificial bee colony (ABC) optimization [29], the cuckoo search algorithm (CSA) [30], the rainfall algorithm (RFA) [31], chaotic swarm optimization (CSO) [32] and the environmental adaptation method (EAM) [1].
These different optimization methods have some drawbacks; specifically, CSO, GA and CSA do not include storage functions, so the information associated with the best particle is not stored while implementing the optimization technique [18]. Various evolutionary algorithms, such as the EAM [33] and FFOA [34], have been proposed to optimize the weights of neural networks in STLF [33]. The EAM generally displays excellent performance, and each particle other than the best particle is treated equally. The disadvantages of the EAM are its low convergence, local optima problems and use of only one particle in the search space. A combination of the previous algorithms was used to eliminate the disadvantages of the single optimization methods [16].

This paper introduces a precise and sufficient model that can be used to forecast loads at the hourly and daily scales over the course of a month. Additionally, in this paper, the GOA is proposed for parameter identification and training the MFFNN for the MT-STLF model. This approach is used to optimize the parameters of the MFFNN and achieve optimal performance. Many factors affect load forecasting. The most influential factor is the weather. To obtain an accurate load forecast using the newly proposed MT-STLF model, weather factors should be used as inputs.
2 Regressive Analysis

Regression analysis involves the relations between some independent or exogenous variables and a dependent variable. The purpose of regression analysis is to find a function that represents the relation between these variables. The disadvantage of this method is that it must be combined with other techniques to obtain an accurate solution with satisfactory error. Statistical methods cannot yield acceptable accuracy in load forecasting because the relation between the load demand and the factors that affect it is generally nonlinear. These models are used for linear analysis and are not suitable for nonlinear loads. Thus, expert systems can be used to overcome the drawbacks of statistical methods. Regression involves the relations between independent variables (weather conditions) and a dependent variable (load condition). The purpose of regression analysis is to find a correlation function between the dependent and independent variables [10]. A regression model reflects the effect of weather factors, such as the humidity, wind speed, cloud cover (load shedding) and temperature, on the hourly load.
The temperature plays a notable role in multiple linear regression because it is the main factor that affects the humidity ratio, wind speed and load shedding. In this model, the temperature is taken as the main influential factor related to weather change. The load is described in terms of different independent weather variables (e.g., the forecasting temperature). Errors may occur in this model due to sudden changes in the load conditions, load shedding or the forecasting temperature. Therefore, regression models require very accurate temperature forecasting, load shedding and load condition data over a long period of time. Thus, a new model that considers the load conditions, load shedding and temperature forecasting over a long period is required. This model investigates the changes in these factors at hourly, weekly, and monthly scales, thereby encompassing STLF and MTLF. In this study, the newly proposed MT-STLF model is presented considering the variations in the temperature, load shedding and load conditions during each hour of each month.

The ARIMA model is a time series technique based on a simple regressive analysis method that can be used for MT-STLF with sufficient accuracy. The ARIMA model consists of auto regression (AR), integration (I), and MA tasks. The AR process uses the $\phi$ function, where $p$ is the autoregressive order $\{\phi_1, \phi_2, \ldots, \phi_p\}$, and $q$ is the order of the MA process with the $\theta$ function $\{\theta_1, \theta_2, \ldots, \theta_q\}$. Additionally, $e(t)$ is a random error term for the AR process with the $\phi$ function and the MA process with the $\theta$ function. So, the output, $y(t)$, considering the AR of order $p$, can be written as:

$y(t) = \phi_1 y(t-1) + \phi_2 y(t-2) + \ldots + \phi_p y(t-p) + e(t)$    (1)

where $y(t)$ represents the output term and $B$ is the time lag operator or backshift operator:

$y(t-1) = B\, y(t), \quad \ldots, \quad y(t-p) = B^p\, y(t)$    (2)

This operation can be written as follows:

$\phi(B)\, y(t) = e(t)$    (3)

where $\phi(B) = 1 - \phi_1 B - \phi_2 B^2 - \ldots - \phi_p B^p$.

Similarly, the output considering the MA of order $q$ can be written as:

$y(t) = e(t) - \theta_1 e(t-1) - \theta_2 e(t-2) - \ldots - \theta_q e(t-q)$    (4)

Considering the backshift operator, this equation can be expressed as:

$y(t) = \theta(B)\, e(t)$    (5)

where $\theta(B) = 1 - \theta_1 B - \theta_2 B^2 - \ldots - \theta_q B^q$.

For a hybrid model using AR of order $p$ and MA of order $q$, the ARMA model of order $(p, q)$ can be written as:

$y(t) = \phi_1 y(t-1) + \ldots + \phi_p y(t-p) + e(t) - \theta_1 e(t-1) - \ldots - \theta_q e(t-q)$    (6)

By using the backshift operator, equation (6) can be expressed in the following form:

$\phi(B)\, y(t) = \theta(B)\, e(t)$    (7)

In addition, the relation between the difference operator $\nabla$, the output term $y(t)$, the number of differences $d$ and the time lag operator $B$ is given as follows:

$\nabla y(t) = y(t) - y(t-1) = (1 - B)\, y(t)$    (8)

$\nabla^d y(t) = (1 - B)^d\, y(t)$    (9)

Thus, the ARIMA model involving the integer variables $(p, d, q)$ [14, 18] can be described as follows:

$\phi(B)\, \nabla^d\, y(t) = \theta(B)\, e(t)$    (10)

The proposed MT-STLF model is a highly variable periodic process due to the massive amount of weather factor and load condition data required at the hourly scale each month. The MT-STLF model considers these data in different seasons using two ARIMA time series processes: a non-seasonal ARIMA process using the integer variables $(p, d, q)$ with functions $\phi$ and $\theta$, and a seasonal ARIMA process using the integer variables $(P, D, Q)$ with functions $\Phi$ and $\Theta$. The new non-seasonal time series represents the non-similar samples in each time period. The proposed MT-STLF model using the ARIMA time series, considering the two integer-order sets $(p, d, q)$ and $(P, D, Q)$ with the different input/output functions $\phi$, $\theta$, $\Phi$ and $\Theta$, is given as follows:

$\phi(B)\, \Phi(B^s)\, \nabla^d\, \nabla_s^D\, y(t) = \theta(B)\, \Theta(B^s)\, e(t)$    (11)

where $s$ represents the number of samples considering 24-hour daily loads, 7-day weekly loads and 4-week monthly loads (e.g., $s = 24$, $s = 7$ or $s = 4$ for each forecasting study). The new terms in the proposed model given by Eqn. (11) can be expressed as follows:

$\Phi(B^s) = 1 - \Phi_1 B^s - \Phi_2 B^{2s} - \ldots - \Phi_P B^{Ps}$
$\Theta(B^s) = 1 - \Theta_1 B^s - \Theta_2 B^{2s} - \ldots - \Theta_Q B^{Qs}$    (12)

and

$\nabla_s^D\, y(t) = (1 - B^s)^D\, y(t)$    (13)
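For illustration only, a model of the seasonal ARIMA form in Eq. (11) can be fitted with standard time series tooling. The following is a minimal sketch, not the authors' implementation (the paper's programme was developed in MATLAB); it uses the SARIMAX class from Python's statsmodels, and the orders (p, d, q) = (1, 1, 1), (P, D, Q) = (1, 1, 1), the seasonal period s = 24 and the synthetic load series are placeholder assumptions.

```python
# Minimal sketch: fitting a seasonal ARIMA of the form in Eq. (11).
# Assumptions: hourly_load is a synthetic hourly load series; the orders and
# the seasonal period s = 24 are illustrative, not the paper's settings.
import numpy as np
from statsmodels.tsa.statespace.sarimax import SARIMAX

rng = np.random.default_rng(0)
hours = np.arange(24 * 30)                        # 30 days of hourly samples
hourly_load = 10 + 3 * np.sin(2 * np.pi * hours / 24) + rng.normal(0, 0.3, hours.size)

model = SARIMAX(hourly_load,
                order=(1, 1, 1),                  # (p, d, q) as in Eq. (10)
                seasonal_order=(1, 1, 1, 24))     # (P, D, Q, s) as in Eq. (11)
result = model.fit(disp=False)

forecast = result.forecast(steps=24)              # next-day hourly forecast
print(forecast[:4])
```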
3 Multilayer Feed-forward Neural Network

ANN methods are artificial intelligence techniques that are widely used in the load forecasting process. An ANN consists of an input layer, an output layer, and hidden layers. An ANN resembles the human brain, which consists of a number of neurons. Data can be used as inputs to the ANN, and the ANN performs data training and testing [1]. Input data in this study include weather factors such as temperature at the daily, hourly, and monthly scales. The type of day (weekend or holiday) is also an ANN input. An ANN can minimize the error for nonlinear inputs and has the ability to obtain the relationship between inputs and the output without complex mathematical equations. ANNs learn easily and then make decisions. The easy implementation of ANNs is a notable advantage compared to other models, and there is flexibility when using ANNs in modelling. The disadvantages of ANNs are that they may generate error in the forecasting process, training may be unstable, and many parameters (weights) need to be determined. Small sample size and low convergence issues are two common drawbacks of ANNs.

A neural network model is constructed by choosing the input data and output data. The number of neurons and number of hidden layers should be carefully selected because they affect the accuracy of training. For the ANN-based forecasting model in this paper, an MFFNN with two hidden layers is used. The input layer has a number of neurons equal to the number of network inputs; the first and second hidden layers have N and M neurons, respectively; and the output layer has one neuron [35, 36]. The transfer function for the hidden neurons is the sigmoid function, and the output transfer function is a linear activation function. The output of the jth hidden neuron is calculated as follows:

$y_j = \frac{1}{1 + e^{-\left(\sum_{i=1}^{n} w_{ij} x_i - b_j\right)}}, \quad j = 1, 2, \ldots, N$    (14)
where $w_{ij}$ is the weight between the ith input neuron and the jth first hidden neuron, $b_j$ is the bias of the first hidden layer, $x_i$ is the ith input and $y_j$ is the output of the first hidden layer. The output from the second hidden layer, $z_k$, is determined by the following equation:

$z_k = \frac{1}{1 + e^{-\left(\sum_{j=1}^{N} w_{jk} y_j - b_k\right)}}, \quad k = 1, 2, \ldots, M$    (15)

where $w_{jk}$ is the weight between the jth first hidden neuron and the kth second hidden neuron and $b_k$ is the bias of the second hidden layer. The output, $O_o$, from the oth output neuron is determined as follows:

$O_o = \sum_{k=1}^{M} w_{ko} z_k, \quad o = 1, 2, \ldots, H$    (16)

where $w_{ko}$ is the weight between the kth second hidden neuron and the oth output neuron. To assess the training algorithm performance, the error is determined by finding the difference between the MFFNN output and the target output. Here, the mean square error (MSE) is computed:

$MSE = \frac{1}{N}\sum_{o=1}^{N} \left(O_o - T_d\right)^2$    (17)

where $N$ is the number of training patterns, $O_o$ is the output obtained from the MFFNN and $T_d$ is the target output. The fitness value of the training result is computed as follows:

$Fitness(T_d) = \min(MSE) = \min\left[\frac{1}{N}\sum_{o=1}^{N} \left(O_o - T_d\right)^2\right]$    (18)

The declining gradient of the back-propagation algorithm is used to minimize the MSE between the target output and the calculated output from the MFFNN. The MSE is used to update the weights and biases from the output layer towards the hidden layers [35]. The updated weights are computed as follows:

$\Delta w_{ij} = \gamma \left(y_j - x_i\right)$    (19)

$\Delta b_j = \gamma \left(y_j - x_i\right)$    (20)

where $\Delta w_{ij}$ is the change in the weight that connects the inputs to the first hidden neurons, $\Delta b_j$ is the change in bias, and $\gamma$ is the learning rate. Then, the updated weights and biases are given as follows:

$w_{new} = w_{old} + \Delta w$    (21)

$b_{new} = b_{old} + \Delta b$    (22)
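As a concrete illustration of Eqs. (14)-(18), the following NumPy sketch evaluates the two-hidden-layer forward pass and the MSE fitness for one set of weights. It is a toy sketch under stated assumptions, not the authors' MATLAB code: the layer sizes, random weights and random data are placeholders.

```python
# Illustrative sketch of the MFFNN forward pass (Eqs. 14-16) and the MSE fitness
# (Eqs. 17-18). Sizes, weights and data are random placeholders.
import numpy as np

rng = np.random.default_rng(1)
n_in, N1, M2 = 8, 20, 10            # 8 inputs, N and M hidden neurons (toy sizes)

w1, b1 = rng.normal(size=(n_in, N1)), rng.normal(size=N1)   # input -> hidden 1
w2, b2 = rng.normal(size=(N1, M2)), rng.normal(size=M2)     # hidden 1 -> hidden 2
w3 = rng.normal(size=(M2, 1))                                # hidden 2 -> output

def forward(x):
    y = 1.0 / (1.0 + np.exp(-(x @ w1 - b1)))   # Eq. (14), sigmoid first hidden layer
    z = 1.0 / (1.0 + np.exp(-(y @ w2 - b2)))   # Eq. (15), sigmoid second hidden layer
    return z @ w3                               # Eq. (16), linear output

X = rng.normal(size=(100, n_in))                # 100 training patterns (placeholder)
T = rng.normal(size=(100, 1))                   # target loads (placeholder)

mse = np.mean((forward(X) - T) ** 2)            # Eq. (17); Eq. (18) minimizes this value
print(mse)
```

In the hybrid model, this MSE is the fitness that the GOA minimizes by adjusting the network weights and biases.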
It is very important to select suitable input variables for MT-STLF based on the MFFNN model to achieve the required accuracy. In this paper, five groups of input variables are used in the MFFNN model: the hour of the load day, the day of the week, the month of the year, the holiday indicator and the temperature. These input variables are explained below (see Fig. 1).

a) Hours of the load day
During the day, electrical loads change due to the behaviours of people and the nature of the loads. Therefore, it is important to input this variable into the proposed MFFNN. The hour numbers in a day range from hour 1 to hour 24.
Fig. 1. Construction of the proposed MFFNN model for MT-STLF (inputs: hours of the load day, days of the week, months of the year, holidays and temperature T(t), T(t-1), T(t-2), T(t-3); first hidden layer with n neurons; second hidden layer with m neurons; output: forecasting load).
b) Days of the week
The days of the week are numbered from 1 to 7; for example, Sunday is denoted as 1, and Saturday as 7.

c) Months of the year
The load consumption during each month varies. The input months for the neural network are represented as January = 1, ..., December = 12.

d) Holidays
Weekends and holidays affect the load forecasting accuracy because human activities change and influence the electricity consumption pattern. A holiday is represented by 0, and a working day is represented by 1.

e) Temperature
The temperature is the main factor that affects load consumption. The temperature is used as a consecutive input to the neural network. Four values of the temperature are used as inputs to the MFFNN at times (t), (t-1), (t-2) and (t-3), as shown in Fig. 1. A brief illustrative sketch of how this input encoding might look is given at the end of this section.

Figure 1 illustrates the final construction of the MFFNN and its inputs, which are the hours of the day, days of the week, months of the year, holidays and temperature. The figure also shows the target output of the neural network, which is the forecasted load. The number of hidden layers is also illustrated in the figure, and the connections between the neurons in one layer and the next are shown as black connection lines.

The suitable and optimum weights, biases, numbers of hidden layers, transfer function and numbers of neurons were selected using the GOA. Therefore, a hybrid algorithm combining the GOA and the MFFNN was established to obtain optimum values of the MFFNN parameters and achieve high accuracy with the proposed model. The optimization process depends on two basic steps (exploration and exploitation). Exploration refers to the search space of the algorithm. Exploitation reflects the ability to obtain the best solution. Mutation and crossover are the two steps in the optimization process that ensure that the best individual survives. Thus, different hybrid systems were investigated to improve the performance of the neural network and reduce the corresponding error. This approach improves the quality and accuracy of the load forecasting process.
A hybrid system combines an MFFNN and one of the expert systems, such as GA or GWO methods. The benefits of hybrid systems are that they have large learning areas, enhance the learning ability of the MFFNN and reduce the error in the results. Thus, the advantages of each model used in the hybrid model are combined.
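To make the input encoding of Fig. 1 concrete, the following sketch assembles one eight-dimensional input vector (hour, day, month, holiday flag and four temperature lags). The helper function name and the example values are illustrative assumptions, not the authors' preprocessing.

```python
# Illustrative sketch of one MFFNN input vector as described for Fig. 1:
# hour (1-24), day of week (1-7), month (1-12), holiday flag (holiday = 0,
# working day = 1) and the temperatures T(t), T(t-1), T(t-2), T(t-3).
import numpy as np

def build_input(hour, day, month, is_holiday, temps_last_4_hours):
    t0, t1, t2, t3 = temps_last_4_hours           # T(t), T(t-1), T(t-2), T(t-3)
    return np.array([hour, day, month, 0 if is_holiday else 1, t0, t1, t2, t3],
                    dtype=float)

x = build_input(hour=14, day=3, month=7, is_holiday=False,
                temps_last_4_hours=(36.0, 35.5, 35.0, 34.2))
print(x)   # one 8-dimensional pattern fed to the (8-...-1) network
```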
4 Grasshopper Optimization Algorithm (GOA)

The grasshopper belongs to the biological infraclass of winged insects. The GOA mimics the swarming behaviour of grasshoppers in nature, and a detailed explanation of the algorithm is given in this section. The grasshopper is a harmful insect that feeds on crops and affects agricultural production. Grasshoppers experience three stages in their life cycle: the egg stage, larval stage, and adult stage. The larval stage is characterized by slow movement and small steps. However, grasshoppers have the ability to move rapidly and abruptly in the adult stage. Moreover, grasshoppers can form swarms in both the larval and adult stages [37]. These swarms are constantly looking for sources of food.

Generally, the search process of nature-inspired optimization algorithms includes two stages: exploration and exploitation. The exploration stage involves abruptly changing position, whereas during exploitation the insects generally move locally. Grasshoppers perform exploration and exploitation as both nymphs and adults. These two natural behavioural processes of grasshoppers were analysed and modelled to construct the "grasshopper optimization algorithm" [2, 37-41]. Thus, the GOA was developed by mimicking the behaviours of grasshoppers. The mathematical model of a grasshopper swarm can be represented as follows:

$X_i = S_i + G_i + A_i$    (23)

where $X_i$, $S_i$, and $G_i$ are the position, force of social interactions and gravity force of the ith grasshopper, respectively, and $A_i$ is the advection of air. In the search space, the search agents are distributed randomly. To obtain random behaviour, Eq. (23) is modified as follows:

$X_i = r_1 S_i + r_2 G_i + r_3 A_i$    (24)

where $r_1$, $r_2$, and $r_3$ are random numbers in the range [0, 1]. The force of the social interactions of the ith grasshopper can be calculated by

$S_i = \sum_{\substack{j=1 \\ j \neq i}}^{N} s\left(d_{ij}\right)\, \hat{d}_{ij}$    (25)

where $d_{ij}$ is the distance between the ith and the jth grasshoppers, $\hat{d}_{ij}$ is a unit vector from the ith to the jth grasshopper, $N$ is the total number of grasshoppers, and $s$ is the strength of the social forces function. Specifically, $s$ can be calculated as follows:

$s(r) = f\, e^{-r/l} - e^{-r}$    (26)

where $l$ and $f$ are the attractive length scale and the strength of attraction, respectively. The recommended values for $f$ and $l$ are 0.5 and 1.5, respectively [2, 36]. The gravity force of the ith grasshopper, $G_i$, is calculated by

$G_i = -g\, \hat{e}_g$    (27)

where $g$ and $\hat{e}_g$ are the gravitational constant and the unit vector towards the centre of the Earth, respectively. Wind advection, $A_i$, can be calculated by

$A_i = u\, \hat{e}_w$    (28)

where $u$ and $\hat{e}_w$ are the drift constant and the unit vector in the wind direction, respectively. By substituting the values of $A_i$, $G_i$, and $S_i$ in Eqn. (23), the following equation can be obtained:

$X_i = \sum_{\substack{j=1 \\ j \neq i}}^{N} s\left(\left|x_j - x_i\right|\right)\, \frac{x_j - x_i}{d_{ij}} - g\, \hat{e}_g + u\, \hat{e}_w$    (29)

The previous mathematical model cannot be used directly to solve optimization problems because a swarm of grasshoppers does not converge to a specified point. To obtain the optimal solution, Eqn. (29) is modified as follows:

$X_i^d = c\left(\sum_{\substack{j=1 \\ j \neq i}}^{N} c\, \frac{ub_d - lb_d}{2}\, s\left(\left|x_j^d - x_i^d\right|\right)\, \frac{x_j - x_i}{d_{ij}}\right) + \hat{T}_d$    (30)

where $lb_d$ and $ub_d$ are the lower and upper bounds in the dth dimension, respectively, $\hat{T}_d$ is the value of the target in the dth dimension (the best solution) and $c$ is a coefficient that decreases in proportion to the number of iterations. The coefficient $c$ can be calculated as follows:

$c = c_{max} - l\, \frac{c_{max} - c_{min}}{l_{max}}$    (31)
where $c_{min}$ and $c_{max}$ are the minimum and maximum values of the decreasing coefficient, respectively, and $l$ and $l_{max}$ are the current iteration and the maximum number of iterations, respectively. A minimal illustrative sketch of these update rules is given below.
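The following Python sketch shows the social-interaction function of Eq. (26), the position update of Eq. (30) and the decreasing coefficient of Eq. (31). It is a toy illustration under stated assumptions, not the paper's implementation: the population size, bounds, dimensionality and iteration count are placeholders, and the best-so-far target is held fixed here rather than re-evaluated with a fitness function each iteration as in the full algorithm.

```python
# Illustrative sketch of the GOA update rules: s(r) of Eq. (26), the position
# update of Eq. (30) and the decreasing coefficient c of Eq. (31).
# Population size, bounds, dimensions and iteration count are placeholder values.
import numpy as np

f, l = 0.5, 1.5                          # strength of attraction, attractive length scale
c_max, c_min = 1.0, 1e-5                 # bounds of the decreasing coefficient
lb, ub = -1.0, 1.0                       # lower/upper search-space bounds per dimension

def s_func(r):
    return f * np.exp(-r / l) - np.exp(-r)            # Eq. (26)

def goa_step(X, target, c):
    """One GOA iteration over positions X (population x dimension)."""
    new_X = np.empty_like(X)
    for i in range(len(X)):
        total = np.zeros(X.shape[1])
        for j in range(len(X)):
            if i == j:
                continue
            dist = np.linalg.norm(X[j] - X[i]) + 1e-12
            unit = (X[j] - X[i]) / dist
            total += c * (ub - lb) / 2.0 * s_func(dist) * unit   # inner term of Eq. (30)
        new_X[i] = np.clip(c * total + target, lb, ub)           # Eq. (30)
    return new_X

rng = np.random.default_rng(2)
X = rng.uniform(lb, ub, size=(10, 4))    # 10 grasshoppers in a 4-D toy search space
target = X[0].copy()                     # stand-in for the best solution found so far
for it in range(1, 51):
    c = c_max - it * (c_max - c_min) / 50                        # Eq. (31)
    X = goa_step(X, target, c)
```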
The structure of the proposed MT-STLF model using the hybrid MFFNN combined with the GOA is presented in Fig. 2 and explained in the following steps.

Fig. 2. Training algorithm of the MFFNN based on the GOA.

First, the swarm, $c_{max}$, $c_{min}$, and the maximum number of iterations are initialized. Then, the GOA starts the optimization by creating a set of random solutions (the number of neurons in the hidden layers, weights, biases and the transfer function), and the fitness of each solution (agent) is calculated using Eq. (18). The variable $c$ and the positions of the search agents are updated, and the best target position obtained so far is updated in each iteration. After that, the distances between the grasshoppers are normalized in each iteration. The best target, with its position and fitness, is finally returned as the best approximation of the global optimum.
5 Load Forecasting Performance Assessment Indices

Several load forecasting performance indices are used in this study to analyse and assess the performance of the proposed forecasting model in comparison with other optimization algorithms, such as the GA [16] and GWO [24]. These indices are the Mean Absolute Error (MAE), Root Mean Square Error (RMSE), and Mean Absolute Percentage Error (MAPE), which are given by:

$RMSE = \sqrt{\frac{1}{n}\sum_{t=1}^{n}\left(L_t - \hat{L}_t\right)^2}$    (32)

$MAE = \frac{1}{n}\sum_{t=1}^{n}\left|L_t - \hat{L}_t\right|$    (33)

$MAPE = \frac{1}{n}\sum_{t=1}^{n}\frac{\left|L_t - \hat{L}_t\right|}{L_t} \times 100\%$    (34)

where $L_t$ and $\hat{L}_t$ are the actual and forecasted values of the load at time $t$, respectively, and $n$ is the total number of data used.
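For reference, the three indices of Eqs. (32)-(34) can be computed as in the following minimal sketch; the array contents are placeholder values, not the paper's data.

```python
# Minimal sketch of the assessment indices of Eqs. (32)-(34).
# actual and forecast are placeholder hourly load series (MW).
import numpy as np

actual = np.array([12.1, 13.4, 15.0, 14.2])      # L_t
forecast = np.array([12.0, 13.6, 14.8, 14.5])    # L-hat_t

err = actual - forecast
rmse = np.sqrt(np.mean(err ** 2))                # Eq. (32), MW
mae = np.mean(np.abs(err))                       # Eq. (33), MW
mape = np.mean(np.abs(err) / actual) * 100       # Eq. (34), %

print(rmse, mae, mape)
```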
6 Practical Applications

To ensure the accuracy of the proposed model, the data from the Youth Power Station in the new city of Salhiya were used. The data were collected by the Electricity Distribution Authority of the Suez Canal Sector in the El-Sharqia Province of Egypt. The model was trained with the available data for the years 2013 through 2016, a period of four years, considering the load augmentation factor and temperature variations. Then, the proposed model was used to predict the year 2017 loads. The obtained data, as shown in the next section, clearly show that the proposed model can accurately forecast loads.
7 Results and Discussion

7.1 Results of the Proposed MFFNN Without GOA Training

Many different MFFNN structures, all with eight inputs and one output but different numbers of neurons in the two hidden layers, were trained and tested. The training and testing data were taken from the actual operating conditions of the Egyptian electricity grid. These networks were trained with the Marquardt-Levenberg (ML) algorithm. The criterion for determining the optimum number of neurons in each hidden layer was based on the training and testing MSE (accuracy). In this study, several tests were performed to determine the optimum number of hidden neurons based on the MSE and the number of training epochs. Moreover, different training functions were examined to assess the convergence. The network yielded satisfactory results, with the minimum MSE for eight inputs, 2500 neurons in the first hidden layer, 1200 neurons in the second hidden layer and one output neuron. The proposed MFFNN structure was thus (8-2500-1200-1). A sigmoid transfer function was used for the hidden layers.
Fig. 3. The best training performance before using the GOA (13582 epochs).
The programme used to implement the algorithm was developed in MATLAB. The training process reduces the MSE of the MFFNN to a final value of less than 0.013934 within 13582 epochs. The MSE training error convergence diagram for the MFFNN using the "trainlm" training function is shown in Fig. 3. Figure 3 illustrates the accuracy, or the error value, of the model after preparing the MFFNN, choosing the suitable processing steps for the input and target data sets, selecting the appropriate number of hidden layers and finally training the MFFNN. Figure 4 shows that the fitting and convergence results of the model were good based on the regression factor (R), where the closer the output is to the actual value, the better the model fit. Notably, acceptable R values are obtained.
Fig. 4. The value of the regression factor R before using the GOA.

7.2 Proposed MFFNN Combined with the GOA Training Results

The proposed GOA is implemented using MATLAB. By applying the GOA to train the proposed MFFNN, as illustrated in Fig. 2, the MSE is minimized to a final value of less than 0.0072164 at epoch 10000. The MSE training error convergence diagram for the proposed MFFNN using the GOA training algorithm is shown in Fig. 5. This MSE is satisfactory for an MFFNN with eight inputs, 1500 neurons in the first hidden layer, 800 neurons in the second hidden layer and one output neuron. The proposed MFFNN structure is (8-1500-800-1). Figure 5 demonstrates that the GOA can reduce the MSE. Moreover, the GOA quickly yields the optimal solution with fewer neurons in the first and second layers. Figure 6 illustrates the value of the regression factor R after using the GOA. Notably, the new value of R is greater than the value before using the GOA, which suggests that the GOA improves the model fitting results. The MFFNN training regression (plot regression) value is R = 0.99937. Figure 7 shows the value of the error between the real model and the model obtained with the MFFNN when all the data sets are used. The deviation error between the actual and forecasted load values using the hybrid MFFNN model and GOA varied between -0.06 and 0.06.
Fig. 5. The best training performance after using the GOA (10000 epochs).

Fig. 6. The value of the regression factor R after using the GOA.
Fig. 7. MFFNN output deviation error after using the GOA.

7.3 MT-STLF Results

Figures 8-11 show the forecasting results for January, March, July and October. These figures highlight the closeness between the actual load and the forecasted load obtained using the proposed hybrid MFFNN and GOA model, indicating that the obtained model is valid and accurate. Additionally, using the MFFNN with the GOA helps reduce the difference between the actual and forecasted loads, leading to a very small deviation between the two.
Fig. 8. Actual and forecasted loads for January.

Fig. 9. Actual and forecasted loads for March.

Fig. 10. Actual and forecasted loads for July.

Fig. 11. Actual and forecasted loads for October.
Figures 12-15 illustrate the relative error between the actual and forecasted loads in January, March, July and October.
Fig. 12. MFFNN relative error after using the GOA (forecasted to actual value of the January load).

Fig. 13. MFFNN relative error after using the GOA (forecasted to actual value of the March load).

Fig. 14. MFFNN relative error after using the GOA (forecasted to actual value of the July load).
Fig. 15. MFFNN relative error after using the GOA (forecasted to actual value of the October load).

These figures show that the error values vary between positive and negative values, which indicates that the forecasted load is sometimes greater than the actual load. The relative error between the actual and forecasted loads can be summarized as follows. In January, the error varies between -0.075 and 0.001; in March, the error varies from -0.001 to 0.001; in July, the error varies between -0.00125 and 0.00125; and in October, the error varies from -0.00175 to 0.0015.

7.4 Assessment of the Proposed GOA Model and Other Algorithms Based on the MFFNN

Table 1 shows a comparison between the proposed model and other MFFNN-based models, namely the MFFNN based on the genetic algorithm [16] (MFFNN-GA) and the MFFNN based on Grey Wolf Optimization [24] (MFFNN-GWO). The performance indices for the conventional MFFNN, which is trained using trial and error, are also included in Table 1. The performance of the proposed load forecasting model is analysed and tested at different time intervals during a year with the three indices RMSE (MW), MAE (MW) and MAPE (%). The results in Table 1 show that the performance of the proposed GOA-based model is better than that of the other models: the performance indices of the proposed load forecasting method are always lower than those of the others. Moreover, the results in Table 1 confirm the better forecasting performance of the proposed method, because the GOA can obtain the optimal number of neurons in the hidden layers with the appropriate weights, biases, and transfer functions and increases the convergence rate.
Table 1. Comparisons of the performance indices for different forecasting models using different optimization methods.

Method      Indices   Jan.      Feb.      Mar.      Apr.      May       June      Jul.      Aug.       Sep.      Oct.      Nov.       Dec.
MFFNN       RMSE      0.527444  0.482611  0.53828   0.442033  0.442216  0.544437  0.477077  0.47203    0.449543  0.550553  0.444663   0.45074
            MAE       0.500078  0.451484  0.51075   0.406004  0.405030  0.504     0.430694  0.435270   0.408681  0.505976  0.404390   0.418807
            MAPE %    4.202935  3.472941  4.06492   3.112196  3.031005  3.63      2.6961    2.795095   2.973742  3.809281  3.258382   3.876845
MFFNN-GA    RMSE      0.343816  0.321327  0.346165  0.275142  0.274282  0.369027  0.337195  0.279444   0.286578  0.378263  0.444663   0.296308
            MAE       0.303967  0.282023  0.306894  0.232553  0.228115  0.3186    0.283579  0.233045   0.236219  0.324197  0.404390   0.253488
            MAPE %    2.594933  2.194567  2.483446  1.804666  1.722423  2.3194    1.792797  1.5165638  1.738144  2.472955  3.258382   2.379583
MFFNN-GWO   RMSE      0.225208  0.211422  0.229046  0.214145  0.212293  0.261739  0.253817  0.219467   0.228355  0.27219   0.218693   0.224186
            MAE       0.183608  0.170583  0.184423  0.170607  0.165786  0.210159  0.197565  0.171267   0.177816  0.216979  0.1716096  0.178859
            MAPE %    1.588325  1.342397  1.517459  1.328622  1.253706  1.535901  1.260316  1.122866   1.312607  1.675347  1.410736   1.687325
MFFNN-GOA   RMSE      0.168011  0.173948  0.173476  0.189771  0.186832  0.214089  0.226267  0.195557   0.205342  0.230312  0.194577   0.194234
            MAE       0.131321  0.133432  0.131321  0.143502  0.142504  0.165102  0.169419  0.131321   0.155384  0.131321  0.1483358  0.150280
            MAPE %    1.162838  1.065948  1.129249  1.128294  1.083815  1.2243    1.0958    0.964477   1.155972  1.377452  1.226286   1.428071
8 Conclusions

The load forecasting process is considered a vital and basic process in the field of power system management, which requires generating a stable and accurate load forecast. In this paper, the development of an accurate, combined MT-STLF model is investigated for power generation and electricity purchase planning and the prediction of future loads to reduce costs and improve economics. Additionally, such a strategy can aid in building an appropriate number of generation units to cover the needed load, determining the load demand and establishing a stable power system. In addition, a hybrid system using an MFFNN and the GOA is employed to ensure a high degree of model accuracy. The GOA is a reliable and robust optimization technique and a newly developed stochastic search technique. The main conclusions of this research are as follows.

- The load in the regressive model used was described in terms of the independent variable (temperature) and the other factors that influence the load.
- The newly proposed approach of using a hybrid model that combines an MFFNN and the GOA is presented and is proven to be reliable and effective, with high accuracy and precision.
- The GOA is proposed for parameter identification and MFFNN training in the MT-STLF model.
- The hybrid MFFNN with GOA model has a large area for learning, thereby improving the performance of the MFFNN and achieving high accuracy.
- In this paper, a new approach using MT-STLF is investigated. This model can be used to forecast loads at different hours and on different days during each month.
- MT-STLF can make the electricity production process more stable and provide a continuous and uninterrupted flow of electricity, thus increasing the reliability of the system.
- The temperature has a clear effect on the forecasted load. Specifically, the maximum and minimum load demands during the summer months are greater than those during the winter months due to the higher temperature during the summer months.
- The forecasting model presented in this paper indicates that the proper selection of input variables and training vectors results in very short training and forecasting times.
- The accuracy of the system before and after using the GOA is as follows. Before using the GOA, the MSE of the system was 0.0139 at 13582 epochs, but after using the GOA, the MSE of the system was improved to 0.0072 at 10000 epochs.
- The deviation error between the actual and forecasted load values using the new approach with the hybrid MFFNN model and GOA varied between -0.06 and 0.06.
- The relative error between the actual and forecasted loads can be summarized as follows. In January, the error varies between -0.075 and 0.001; in March, the error varies from -0.001 to 0.001; in July, the error varies between -0.00125 and 0.00125; and in October, the error varies from -0.00175 to 0.0015.
- The performance of the proposed forecasting model using the MFFNN hybridized with the GOA has been assessed by three indices, RMSE, MAE and MAPE, and then compared with other forecasting models based on other optimization algorithms.
References

[1] P. Singh, P. Dwivedi, and V. Kant, "A hybrid method based on neural network and improved environmental adaptation method using Controlled Gaussian Mutation with real parameter for short-term load forecasting," Energy, 2019.
[2] M. Talaat, A. Y. Hatata, A. S. Alsayyari, and A. Alblawi, "A smart load management system based on the grasshopper optimization algorithm using the under-frequency load shedding approach," Energy, vol. 190, Article 116423, 2020.
[3] K. G. Boroojeni, M. H. Amini, S. Bahrami, S. Iyengar, A. I. Sarwat, and O. Karabasoglu, "A novel multi-time-scale modeling for electric power demand forecasting: From short-term to medium-term horizon," Electric Power Systems Research, vol. 142, pp. 58-73, 2017.
[4] L. Hernández, C. Baladrón, J. M. Aguiar, B. Carro, A. Sánchez-Esguevillas, and J. Lloret, "Artificial neural networks for short-term load forecasting in microgrids environment," Energy, vol. 75, pp. 252-264, 2014.
[5] L. Ekonomou, "Greek long-term energy consumption prediction using artificial neural networks," Energy, vol. 35, pp. 512-517, 2010.
[6] N. Abu-Shikhah and F. Elkarmi, "Medium-term electric load forecasting using singular value decomposition," Energy, vol. 36, pp. 4259-4271, 2011.
[7] S. Muzaffar and A. Afshari, "Short-Term Load Forecasts Using LSTM Networks," Energy Procedia, vol. 158, pp. 2922-2927, 2019.
[8] Y. Yang, J. Che, Y. Li, Y. Zhao, and S. Zhu, "An incremental electric load forecasting model based on support vector regression," Energy, vol. 113, pp. 796-808, 2016.
[9] Z. Guo, K. Zhou, X. Zhang, and S. Yang, "A deep learning model for short-term power load and probability density forecasting," Energy, vol. 160, pp. 1186-1200, 2018.
[10] J. Zhang, Y.-M. Wei, D. Li, Z. Tan, and J. Zhou, "Short term electricity load forecasting using a hybrid model," Energy, vol. 158, pp. 774-781, 2018.
[11] J. W. Taylor, "Short-term load forecasting with exponentially weighted methods," IEEE Transactions on Power Systems, vol. 27, pp. 458-464, 2011.
[12] W.-Q. Li and L. Chang, "A combination model with variable weight optimization for short-term electrical load forecasting," Energy, vol. 164, pp. 575-593, 2018.
[13] H. Takeda, Y. Tamura, and S. Sato, "Using the ensemble Kalman filter for electricity load forecasting and analysis," Energy, vol. 104, pp. 184-198, 2016.
[14] C.-N. Ko and C.-M. Lee, "Short-term load forecasting using SVR (support vector regression)-based radial basis function neural network with dual extended Kalman filter," Energy, vol. 49, pp. 413-422, 2013.
[15] P. Bento, J. Pombo, M. Calado, and S. Mariano, "A bat optimized neural network and wavelet transform approach for short-term price forecasting," Applied Energy, vol. 210, pp. 88-97, 2018.
[16] Y. Hu, J. Li, M. Hong, J. Ren, R. Lin, Y. Liu, et al., "Short term electric load forecasting model and its verification for process industrial enterprises based on hybrid GA-PSO-BPNN algorithm: A case study of papermaking process," Energy, vol. 170, pp. 1215-1227, 2019.
[17] W.-J. Lee and J. Hong, "A hybrid dynamic and fuzzy time series model for mid-term power load forecasting," International Journal of Electrical Power & Energy Systems, vol. 64, pp. 1057-1062, 2015.
[18] M. Barman, N. D. Choudhury, and S. Sutradhar, "A regional hybrid GOA-SVM model based on similar day approach for short-term load forecasting in Assam, India," Energy, vol. 145, pp. 710-720, 2018.
[19] S. Li, L. Goel, and P. Wang, "An ensemble approach for short-term load forecasting by extreme learning machine," Applied Energy, vol. 170, pp. 22-29, 2016.
[20] A. Khwaja, M. Naeem, A. Anpalagan, A. Venetsanopoulos, and B. Venkatesh, "Improved short-term load forecasting using bagged neural networks," Electric Power Systems Research, vol. 125, pp. 109-115, 2015.
[21] G. Dudek, "Neural networks for pattern-based short-term load forecasting: A comparative study," Neurocomputing, vol. 205, pp. 64-74, 2016.
[22] A. Ahmad, M. Hassan, M. Abdullah, H. Rahman, F. Hussin, H. Abdullah, et al., "A review on applications of ANN and SVM for building electrical energy consumption forecasting," Renewable and Sustainable Energy Reviews, vol. 33, pp. 102-109, 2014.
[23] Y. Li, J. Che, and Y. Yang, "Subsampled support vector regression ensemble for short term electric load forecasting," Energy, vol. 164, pp. 160-170, 2018.
[24] Y. Liang, "A combined model for short-term load forecasting based on neural network and grey wolf optimization," IEEE 2nd Advanced Information Technology, Electronic and Automation Control Conference (IAEAC), 25-26 March 2017, Chongqing, China, pp. 1291-1296, 2017.
[25] W.-C. Hong, "Application of chaotic ant swarm optimization in electric load forecasting," Energy Policy, vol. 38, pp. 5830-5839, 2010.
[26] S. Saremi, S. Mirjalili, and A. Lewis, "Grasshopper optimisation algorithm: theory and application," Advances in Engineering Software, vol. 105, pp. 30-47, 2017.
[27] D. Niu, Y. Wang, and D. D. Wu, "Power load forecasting using support vector machine and ant colony optimization," Expert Systems with Applications, vol. 37, pp. 2531-2539, 2010.
[28] G. Cao and L. Wu, "Support vector regression with fruit fly optimization algorithm for seasonal electricity consumption forecasting," Energy, vol. 115, pp. 734-745, 2016.
[29] W.-C. Hong, "Electric load forecasting by seasonal recurrent SVR (support vector regression) with chaotic artificial bee colony algorithm," Energy, vol. 36, pp. 5568-5578, 2011.
[30] X. Zhang, J. Wang, and K. Zhang, "Short-term electric load forecasting based on singular spectrum analysis and support vector machine optimized by Cuckoo search algorithm," Electric Power Systems Research, vol. 146, pp. 270-285, 2017.
[31] S. H. A. Kaboli, J. Selvaraj, and N. Rahim, "Rain-fall optimization algorithm: A population based algorithm for solving constrained optimization problems," Journal of Computational Science, vol. 19, pp. 31-42, 2017.
[32] W.-C. Hong, "Chaotic particle swarm optimization algorithm in a support vector regression electric load forecasting model," Energy Conversion and Management, vol. 50, pp. 105-117, 2009.
[33] P. Singh and P. Dwivedi, "Integration of new evolutionary approach with artificial neural network for solving short term load forecast problem," Applied Energy, vol. 217, pp. 537-549, 2018.
[34] Y. Liang, D. Niu, and W.-C. Hong, "Short term load forecasting based on feature extraction and improved general regression neural network model," Energy, vol. 166, pp. 653-663, 2019.
[35] N. Leema, H. Khanna Nehemiah, and A. Kannan, "Neural network classifier optimization using Differential Evolution with Global Information and Back Propagation algorithm for clinical datasets," Applied Soft Computing, vol. 49, pp. 834-844, 2016.
[36] M. M. Khan, A. Masood Ahmad, G. M. Khan, and J. F. Miller, "Fast learning neural networks using Cartesian genetic programming," Neurocomputing, vol. 121, pp. 274-289, 2013.
[37] S. Saremi, S. Mirjalili, and A. Lewis, "Grasshopper optimization algorithm: Theory and application," Advances in Engineering Software, vol. 105, pp. 30-47, 2017.
[38] S. M. Rogers, T. Matheson, E. Despland, T. Dodgson, M. Burrows, and S. J. Simpson, "Mechanosensory-induced behavioural gregarization in the desert locust Schistocerca gregaria," Journal of Experimental Biology, vol. 206, pp. 3991-4002, 2003.
[39] C. M. Topaz, A. J. Bernoff, S. Logan, and W. Toolson, "A model for rolling swarms of locusts," The European Physical Journal Special Topics, vol. 157, pp. 93-109, 2008.
[40] M. Mafarja, I. Aljarah, A. A. Heidari, A. I. Hammouri, H. Faris, A. M. Al-Zoubi, and S. Mirjalili, "Evolutionary population dynamics and grasshopper optimization approaches for feature selection problems," Knowledge-Based Systems, vol. 145, pp. 25-45, 2018.
[41] S. Z. Mirjalili, S. Mirjalili, S. Saremi, H. Faris, and I. Aljarah, "Grasshopper optimization algorithm for multi-objective optimization problems," Applied Intelligence, vol. 48, no. 4, pp. 805-820, 2018.
No Conflict of Interest
- A proposed model for MT-STLF has been introduced using a hybrid of an MFFNN and the GOA.
- The GOA is proposed for the training of an MFFNN for the MT-STLF model.
- The accuracy is satisfactory, with a deviation error varying between -0.06 and 0.06.
- The performance of the proposed model has been assessed by different indices.