Applied Soft Computing 13 (2013) 3628–3635
A spiking neural network (SNN) forecast engine for short-term electrical load forecasting

Santosh Kulkarni, Sishaj P. Simon, K. Sundareswaran

Department of Electrical and Electronics Engineering, National Institute of Technology, Tiruchirapalli, Tamil Nadu, India
Article info

Article history: Received 4 June 2012; Received in revised form 28 February 2013; Accepted 8 April 2013; Available online 24 April 2013.

Keywords: Spiking neurons; Spike response model (SRM); ANN; Hybrid model.
Abstract

Short-term load forecasting (STLF) is one of the planning strategies adopted in daily power system operation and control. Although many forecasting models have been developed over the years, the uncertainties present in the load profile significantly degrade their performance. These uncertainties arise mainly from the sensitivity of the load demand to varying weather conditions and to the consumption pattern over the month and day of the year. Therefore, the effect of these weather variables on the load consumption pattern is discussed. Based on the literature survey, artificial neural network (ANN) models are found to be an alternative to classical statistical methods in terms of the accuracy of the forecasted results. However, handling bulk volumes of historical data while maintaining forecasting accuracy is still a major challenge. Third generation neural networks, such as spike train models, which are closer to their biological counterparts, have recently emerged as robust models. This paper therefore presents a load forecasting system known as the SNNSTLF (spiking neural network short-term load forecaster). The proposed model has been tested on the database obtained from the Australian Energy Market Operator (AEMO) website for Victoria State. © 2013 Elsevier B.V. All rights reserved.
1. Introduction

Power system operation, planning and the setting up of power generation infrastructure in any nation depend upon the growth in power consumption by its population. Due to the depletion of fossil fuels, the increase in population and the increase in per capita power consumption in the world, forecasting of electrical load has become one of the major areas of research. Load forecasting is generally carried out to assist planners in making strategic decisions with regard to unit commitment, hydro-thermal co-ordination, interchange evaluation, and security assessments. Electrical load forecasting, in general, is classified into three categories: short-term, medium-term and long-term forecasting [1]. The duration of these categories is not explicitly defined in the literature; thus, different authors use different time horizons to define them. Short-term load forecasting, in general, ranges from a few minutes to seven days [2]. Since the next day's power generation must be scheduled every day, short-term load forecasting (STLF) is necessary for the economic scheduling of generation capacity. Medium-term or intermediate load forecasting deals with predictions ranging from a few weeks to several months [3]. Outage scheduling and maintenance of plants and networks come
under these types of forecasts. Long-term forecasting, on the other hand, deals with forecasts longer than a year [3]. It is primarily intended for capacity expansion plans, capital investments and corporate budgeting. These types of forecasts are often complex in nature due to several uncertainties such as political factors, the economic situation and per capita growth. Planning of new networks and extensions to the existing power system, for both the utility and consumers, requires long-term forecasts. Short-term load forecasting is, however, considered a very difficult task. First, the load series is complex and exhibits several levels of seasonality: the load at a given hour depends not only on the load at the previous hour, but also on the loads at several past hours, and even on the loads of past days. Secondly, the consumption pattern depends on the prevailing weather conditions [4]. Also, due to power system deregulation, STLF has become more important: these forecasts are used by energy management systems (EMS) to establish operational plans for power stations and to plan transactions in the energy market. A good example of the importance of load forecasting accuracy is that a 1% increase in forecasting error caused an estimated increase of ten million pounds in operating costs for one electrical utility in the United Kingdom [5]. Thus, accurate STLF reduces the operating costs of electrical utilities in many areas. However, the most important issue in STLF is to extrapolate past load behavior while taking into account the effect of other influencing factors such as weather and the day of the week. In a power system, temperature is usually the most dominant variable in driving the electricity demand.
Fig. 1. Hourly load variations with respect to temperature change – Victoria (January 2004). [Scatter plot of load (MW) versus temperature (Celsius).]
A sample scatter plot showing the variation in the consumption pattern with respect to average temperature is shown in Fig. 1. It is obvious that the relationship between temperature and load demand is non-linear. Previous research works and their results are prone to large errors when the load forecasting model is exposed to large weather forecast errors. Therefore, improving the quality of weather forecasting is an effective way to improve load forecasting accuracy [6–10]. As indicated above, STLF is an important area, and this is reflected in the literature by many traditional and non-conventional techniques such as regression analysis [11–13], time series approaches [14,15], neural networks [16–22], fuzzy logic [23–26] and support vector machines [27,28]. Although various STLF models have been proposed, developing more efficient and accurate models remains a major challenge. Section 1 gives an introduction to short-term load forecasting (STLF) and its importance in power system planning. The proposed work and the justification for applying SNN to STLF are discussed in Section 2. The details of the SNN, the step-by-step training algorithm (the spike propagation algorithm) and the setting of control parameters are given in Section 3. In Section 4, the proposed methodology for the implementation of SNN for STLF is discussed. Section 5 investigates the results obtained during the training and testing of the SNN. The conclusion and the scope for further research are presented in Section 6.
2. Proposed work

The proposed work uses spiking neural networks for STLF. SNNs have evolved due to the significant growth achieved in the fields of neurophysiology and information processing in the brain. It has been discovered that neural processing inside the brain is carried out in the form of spike trains and not real-valued signals. As a result, the artificial neuron that is closest to its biological counterpart is known as the spiking neuron, and a network based on this type of neuron is known as a spiking neural network (SNN). A typical circuit model of a spiking neuron was first proposed in 1952 as the Hodgkin–Huxley model [29,30]. Spiking neurons are generally divided into three types: threshold, compartment and conductance based [31]. The threshold model is further divided into two types: the non-linear leaky integrate-and-fire model and the spike response neuron model (SRNM) [32]. In the proposed work, the SRNM is used in a feed-forward SNN. An understanding of neural coding is necessary for encoding information in terms of spike trains, which is given in [33,34]. The methods of encoding real values include rate encoding, population encoding and temporal encoding [35]. In this work, the precise time of spikes in the temporal encoding scheme has been implemented. This scheme is chosen since temporally encoded neurons have better computational ability in a spatio-temporal context [35]. Temporally encoded SNNs were used for pattern recognition by Bohte in [36]. In [35], the authors successfully tested the performance of a temporally encoded SNN for electricity price time series forecasting using an evolutionary learning technique. In this work, however, an attempt has been made to forecast electrical load using a gradient method. The spike propagation algorithm is similar to the gradient descent method used for the feed-forward backpropagation neural network. The real-time dataset is obtained from the Australian Energy Market Operator (AEMO) website [37]. The data consist of the half-hourly load of each day from January 2004 to December 2009. The data are pre-processed by normalizing the load between 0 and 1. The historical data of hourly temperature values are obtained from [38], and the historical data for daily solar radiation from [39].
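As an illustration of this pre-processing step, the sketch below normalizes a load series to [0, 1] and maps each normalized value to a spike time within a fixed coding interval (time-to-first-spike encoding). The function names and the 10 ms coding interval are illustrative assumptions, not details of the authors' implementation.

```python
import numpy as np

def normalize(series):
    """Scale a load series to the range [0, 1] (min-max normalization)."""
    series = np.asarray(series, dtype=float)
    lo, hi = series.min(), series.max()
    return (series - lo) / (hi - lo)

def to_spike_times(normalized, coding_interval=10.0):
    """Temporal (time-to-first-spike) encoding: larger values fire earlier.

    A value of 1.0 maps to t = 0 ms and a value of 0.0 maps to
    t = coding_interval ms. The 10 ms interval is an assumed setting.
    """
    return coding_interval * (1.0 - np.asarray(normalized, dtype=float))

# Example: half-hourly loads (MW) for a short window
loads = [4800, 5200, 6100, 6600, 6400, 5900]
spike_times = to_spike_times(normalize(loads))
print(np.round(spike_times, 2))  # earlier spikes correspond to higher loads
```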
3. Spiking neural network

The input and the output of a spiking neuron are described by a series of firing times known as a spike train; one firing time is the instant at which the neuron has fired a pulse. The potential of a spiking neuron is modeled by a dynamic variable and acts as a leaky integrator of the incoming spikes, with newer spikes contributing more to the potential than older ones. If this integrated sum of the incoming spikes exceeds a predefined threshold level, the neuron fires a spike. This makes the SNN a dynamic system, in contrast with second generation sigmoid neural networks, which are static, and enables it to perform computation on temporal patterns in a very natural, biological manner. Although its structure resembles that of the conventional backpropagation ANN, the difference lies in having multiple connections between individual neurons of successive layers, as in Fig. 2. Formally, between any two neurons 'e' and 'f', the input neuron 'e' receives a set of spikes with firing times t_e. The output neuron 'f' fires a spike only when its membrane potential exceeds its threshold value (θ). The internal state variable x_f(t) is described by the spike response function ε(t) weighted by
the synaptic efficacy, i.e., the weight between the two neurons, as given in Eq. (1):

x_f(t) = \sum_{k=1}^{m} Y_e^k \, W_{ef}^k   (1)

where m is the number of synaptic terminals between the two neurons e and f, W_{ef}^k is the weight of sub-connection k, and Y_e^k gives the unweighted contribution of all spikes to the synaptic terminal, as given in Eq. (2):

Y_e^k(t) = \varepsilon(t - t_e - d^k)   (2)

where d^k is the delay between the two nodes in the kth synaptic terminal. \varepsilon(t) is the spike response function shaping the post-synaptic potential (PSP) of a neuron, given in Eq. (3):

\varepsilon(t) = \frac{t}{\tau} \, e^{1 - t/\tau}   (3)

where \tau is the membrane potential decay time constant.

Fig. 2. Multiple delayed synaptic terminals between two neurons (e and f). [Each connection carries m sub-connections with weights W_{ef}^1, ..., W_{ef}^m and delays d^1, ..., d^m.]

3.1. Training algorithm for SNN

The SNN is a 3-layer network with a single hidden layer, as shown in Fig. 3. The parameters which affect the performance of the proposed SNN are discussed in Section 3.2. The step-by-step procedure of the spike propagation algorithm [37] is given below.

Fig. 3. Architecture of a multilayer spiking neural network. [Input layer index i = 1, ..., p with input spike times t_i; hidden layer index h = 1, ..., q; output layer index j = 1, ..., n with output spike times t_j.]

Step 1: Prepare an initial dataset. The dataset is normalized between 0 and 1.

Step 2: Set the weights to small random values. Initialize the SNN parameters such as the membrane potential decay time constant (\tau) and the learning rate (\alpha).

Step 3: For the proposed SNN architecture, with input layer (i), hidden layer (h) and output layer (j), do Steps 4–8 'itt' times (itt = number of iterations). The network indices represent the number of neurons, where p = number of neurons in the input layer, q = number of neurons in the hidden layer, n = number of neurons in the output layer and m = number of synaptic terminals between any two neurons of successive layers.

Step 4: For each pattern in the training set, apply the input vector to the network input.

Step 5: Calculate the internal state variable of the hidden layer neurons, given by Eq. (4):

x_h(t) = \sum_{i} \sum_{k=1}^{m} Y_i^k(t) \, W_{ih}^k   (4)

Step 6: Calculate the network output, given by Eq. (5):

x_j(t) = \sum_{h} \sum_{k=1}^{m} Y_h^k(t) \, W_{hj}^k   (5)

Step 7: Calculate the error, the difference between the actual spike times and the desired spike times:

E = 0.5 \sum_{j=1}^{n} (t_j^a - t_j^d)^2   (6)

where t_j^a is the actual output spike time, t_j^d is the desired output spike time and n is the number of neurons in the output layer.

Step 8: Adjust the weights of the network in such a way that the training error is minimized. Then repeat Steps 4–8 for each pair of input and output vectors of the training set until the error for the entire system is acceptably low. The equations for the change in weights between the hidden and output layers and between the input and hidden layers are given below.

The change in weights from the hidden layer (h) to the output layer (j) is given as

\Delta W_{hj}^k = -\alpha \, \delta_j \, Y_{hj}^k(t_j^a)   (7)

where

\delta_j = \frac{t_j^d - t_j^a}{\sum_{k,h} W_{hj}^k \, (\partial Y_{hj}^k(t_j^a)/\partial t_j^a)}   (8)

The change in weights from the input layer (i) to the hidden layer (h) is given as

\Delta W_{ih}^k = -\alpha \, \delta_h \, Y_{ih}^k(t_h^a)   (9)

where

\delta_h = \frac{\sum_{j=1}^{n} \delta_j \sum_{k} W_{hj}^k \, (\partial Y_{hj}^k(t_j^a)/\partial t_h^a)}{\sum_{k,i} W_{ih}^k \, (\partial Y_{ih}^k(t_h^a)/\partial t_h^a)}   (10)
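For concreteness, the following sketch implements a single forward pass of the feed-forward SNN described above: each connection carries m delayed sub-connections, the spike response function of Eq. (3) shapes the PSP, the state variable of Eqs. (4)–(5) is accumulated, and a neuron fires when that state crosses the threshold θ, after which the error of Eq. (6) can be evaluated. All parameter values (threshold, delays, time grid) are illustrative assumptions; the weight updates of Eqs. (7)–(10) are omitted for brevity.

```python
import numpy as np

def epsilon(t, tau=7.0):
    """Spike response function of Eq. (3): (t/tau)*exp(1 - t/tau) for t > 0."""
    t = np.asarray(t, dtype=float)
    return np.where(t > 0.0, (t / tau) * np.exp(1.0 - t / tau), 0.0)

def neuron_state(t, pre_spike_times, weights, delays, tau=7.0):
    """State variable x(t) of Eqs. (4)/(5): weighted, delayed PSPs summed
    over presynaptic neurons and their m synaptic sub-connections."""
    x = 0.0
    for t_pre, w_row, d_row in zip(pre_spike_times, weights, delays):
        # w_row, d_row hold the m sub-connection weights/delays for one input
        x += np.sum(w_row * epsilon(t - t_pre - d_row, tau))
    return x

def first_firing_time(pre_spike_times, weights, delays,
                      theta=1.0, tau=7.0, t_max=30.0, dt=0.1):
    """Earliest time at which x(t) crosses the threshold theta (assumed value)."""
    for t in np.arange(0.0, t_max, dt):
        if neuron_state(t, pre_spike_times, weights, delays, tau) >= theta:
            return t
    return t_max  # convention: no spike within the simulation window

# Toy example: 3 input spike times, m = 4 sub-connections per input
rng = np.random.default_rng(0)
t_in = np.array([0.0, 2.0, 5.0])           # encoded input spike times (ms)
W = rng.uniform(0.1, 1.0, size=(3, 4))     # weights W_ih^k
D = np.tile(np.arange(1, 9, 2), (3, 1))    # delays d^k = 1, 3, 5, 7 ms (steps of 2 ms)
t_out = first_firing_time(t_in, W, D)
t_desired = 12.0
error = 0.5 * (t_out - t_desired) ** 2     # Eq. (6) for a single output neuron
print(t_out, error)
```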
3.2. Setting of control parameters

The proposed SNN has the following adjustable parameters: the number of hidden nodes (h), the weights (W_ih and W_hj), the learning rate (α) and the decay time constant (τ). As with many other forecast methods, the accuracy of the proposed SNN depends on the appropriate adjustment of its parameters. The mean square error (MSE) is considered as the performance index for the adjustment of the control parameters. The range for each parameter is selected as W ∈ [0.01, 3], number of hidden layer neurons h ∈ [12, 30], number of synaptic connections k ∈ [10, 20], decay time constant τ ∈ [4, 8], and learning rate α ∈ [0.0005, 0.01]. Initially, training samples are constructed for a period of 300 days. Keeping the other parameters constant, the weights from the input to the hidden layer and from the hidden to the output layer are varied within the selected neighborhood, and the training process is stopped when the MSE obtained is minimum. With the weights set, and keeping all other parameters constant, the number of hidden layer neurons is varied. The number of neurons in the hidden layer determines the network's learning capability, and its selection is an important issue in network design. The number of hidden layer neurons is therefore varied within the given neighborhood, and trials are carried out until the least MSE is obtained. The number of synaptic connections is another important parameter for obtaining reliable results from the network; the number of connections between two neurons is varied for each trial, and the setting giving the least MSE is selected. For the feed-forward network used in this problem, the delay value in each synaptic connection is increased in steps of 2 ms; it should also be noted that the delay in each terminal may not be the same. The next parameter to be set, in order of precedence, is the decay time constant (τ), which depends on the coding interval (Δt). The coding interval (Δt) is the difference between the minimum and maximum values of the normalized input spikes and is problem dependent. τ is varied for each trial, but the best result is obtained when the coding interval is equal to the decay time constant. The last parameter to be tuned is the learning rate α. For high
values of α, the MSE obtained after SNN training is very large and the training does not converge; therefore, small values are considered. However, it should be noted that minimizing the MSE during the training phase of the SNN may lead to an over-fitting problem, in which the SNN begins to memorize the training samples instead of learning them. When over-fitting occurs in an SNN, the MSE continues to decrease and the training process appears to progress, while in fact the generalization capability of the SNN degrades and it loses its forecasting ability for the unseen forecast samples. To overcome this problem, the generalization performance of the SNN should also be monitored during its training phase. Since the forecast error is not available in the training phase, the validation error is used as an approximation. For SNN learning, the validation samples are a subset of the training period that are not used for the adjustment of the control parameters of the SNN; thus, the validation samples can give a better estimate of the error for the forecast samples. Whenever the validation error increases, the generalization performance of the SNN begins to degrade. This indicates the occurrence of over-fitting, and the training of the SNN by the spike propagation algorithm should be terminated at that iteration. Thus, the iteration with the minimum validation error gives the final result of the SNN training.
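A minimal illustration of this validation-based stopping rule is sketched below. The training loop, the `train_one_iteration`/`predict`/`snapshot`/`restore` interface and the patience value are placeholders assumed for illustration, not the authors' code.

```python
import numpy as np

def mse(pred, target):
    """Mean square error used as the performance index."""
    pred, target = np.asarray(pred, dtype=float), np.asarray(target, dtype=float)
    return float(np.mean((pred - target) ** 2))

def train_with_validation_stop(model, train_set, val_set, max_iter=500):
    """Stop at the iteration with minimum validation error (early stopping).

    `model` is assumed to expose train_one_iteration(), predict(), snapshot()
    and restore(); these names are illustrative placeholders.
    """
    best_val, best_iter, best_state = np.inf, 0, None
    for it in range(max_iter):
        model.train_one_iteration(train_set)            # one spike-propagation pass
        val_err = mse(model.predict(val_set.inputs), val_set.targets)
        if val_err < best_val:
            best_val, best_iter, best_state = val_err, it, model.snapshot()
        elif it - best_iter > 20:                       # assumed patience: stop once the
            break                                       # validation error keeps rising
    model.restore(best_state)                           # keep the best-generalizing weights
    return best_iter, best_val
```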
4. Implementation of SNN for STLF

The design of the SNN model to perform load forecasting includes the determination of the model structure and the selection of the training algorithm and input variables, which significantly influence the network performance. The architecture and the selection of input variables are discussed in Sections 4.1.1 and 4.1.2.

4.1. Proposed SNN architecture for the STLF problem

The implementation of the proposed model is divided into two phases. The block diagram shown in Fig. 4 consists of two SNNs arranged sequentially; E1 and E2 represent the training errors of SNN1 and SNN2, respectively. In Phase 2, the forecasted temperature values are given as input to SNN2 to forecast the load pattern.

Fig. 4. Block diagram of the proposed SNNSTLF engine. [Phase 1: SNN1 is trained on the historical weather database and binary calendar inputs (training input: actual max. and min. temperatures; test input: forecasted max. and min. temperatures). Phase 2: SNN2 is trained on the historical load database and produces the forecasted load pattern.]

4.1.1. Phase 1

In Phase 1, using SNN1, a day-ahead hourly temperature profile is forecasted. The input data consist of the hourly temperatures of the previous 2 days (12 neurons), the hourly humidity values of the previous 2 days (12 neurons), the maximum solar radiation of the previous day (1 neuron) and the corresponding month encoded in binary form. The twelve months of the yearly calendar are encoded in binary form (0001 to 1100); for example, the binary representation 0010 represents the month of February (4 neurons). The target values consist of the corresponding hourly temperatures on the day of forecast (6 neurons). Therefore, SNN1 has 29 input nodes and 6 output nodes. Training is carried out 4 times, with the input data of a 24 h day divided into 4 parts: the first part consists of the hourly weather variables from 12 A.M. to 5 A.M., and the other 3 parts consist of the data for the remaining time horizon. For each of the four input formats, the temperatures for 6 h are forecasted; therefore, all 24 hourly temperatures are obtained. The maximum and minimum temperatures of the forecasted day are recorded and used as test inputs for SNN2. The above scheme of training and testing in Phase 1 is carried out for the city of Melbourne in Victoria, and the same procedure is repeated for the city of Geelong in Victoria. The objective of the forecast engine is to forecast the electrical load for the state of Victoria in Australia; therefore, to obtain an accurate forecast, the weather data for the 2 cities are considered in Phase 1. Finally, the maximum and minimum forecasted temperatures for the 2 cities are recorded, and these act as 4 test inputs for SNN2 in Phase 2. The best combination of parameters for SNN1 is Wih ∈ (1, 2), Whj ∈ (2, 3), h = 15, k = 12, τ = 7 ms and α = 0.001.

Fig. 5. Cross correlation function between temperature and humidity. [Cross correlation coefficient plotted against lags from −150 to 150.]

The cross correlation analysis between temperature and humidity is shown in Fig. 5. It helps in choosing the input variables, with appropriate time lags, that affect the target variable. As seen in the figure, the cross correlation coefficients obtained for a time lag of 1–5 h are strongly negative; therefore, humidity values for 6 h are chosen as inputs to the forecasting model. A similar analysis performed on the maximum temperature and maximum solar radiation is shown in Fig. 6. It is observed that the coefficient
obtained at zero lag is greater than any other coefficient; therefore, the solar radiation of the previous day is taken as an input variable for SNN1. The form of the cross correlation function adopted here follows that of Box, Jenkins, and Reinsel [40].

Fig. 6. Cross correlation function between temperature and solar radiation. [Cross correlation coefficient plotted against lags from −400 to 400.]

4.1.2. Phase 2

In Phase 2, a day-ahead half-hourly load profile is forecasted. For example, if the load pattern of Monday is given as the test input, the load pattern of Tuesday is forecasted. Producing accurate load forecasts requires the best combination of input variables. In this paper, apart from the historical load data, the factors which have a significant impact on the consumption pattern, such as the day of the week, weather variables and the holiday effect, are chosen as inputs to the network. The topology of SNN2 includes the following input variables:

- One-day lagged half-hourly load: (48 neurons)
- One-day lagged temperature values (minimum and maximum): (4 neurons)
- Forecasted temperature values obtained from Phase 1 (minimum and maximum): (4 neurons)
- Maximum and minimum demand in the last 24 h: (2 neurons)
- Average temperature value in the last seven days: (2 neurons)
- Day of the week encoded in binary form (001 to 111); for example, the binary representation 010 represents Tuesday: (3 neurons)
- Month number (0001 to 1100): (4 neurons)
- Holiday effect (0 – holiday; 1 – working day): (1 neuron)

The corresponding target training data consist of the half-hourly load on the day of forecast. Therefore, the SNN2 model has 68 input neurons and 48 output neurons. The following set of control parameters provided the best results during the training of the SNN2 model: Wih ∈ (2, 3), Whj ∈ (1, 2), h = 10, k = 8, τ = 5 ms and α = 0.0006.
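To make the input layout of SNN2 concrete, the sketch below assembles the 68-element input vector from the variables listed above. The function and field names, the normalization assumption and the example values are illustrative, not the authors' implementation.

```python
import numpy as np

def binary_code(value, bits):
    """Encode an integer (e.g., month 1-12 or weekday 1-7) as a fixed-width bit vector."""
    return [int(b) for b in format(value, f"0{bits}b")]

def build_snn2_input(lagged_load_48, lagged_temp_min_max, forecast_temp_min_max,
                     demand_min_max_24h, avg_temp_7d, weekday, month, is_working_day):
    """Assemble the 68-neuron input vector of SNN2 in the order listed in Section 4.1.2."""
    x = []
    x += list(lagged_load_48)            # 48: one-day lagged half-hourly load (normalized)
    x += list(lagged_temp_min_max)       # 4: one-day lagged min/max temperatures
    x += list(forecast_temp_min_max)     # 4: Phase-1 forecasted min/max temperatures
    x += list(demand_min_max_24h)        # 2: max and min demand in the last 24 h
    x += list(avg_temp_7d)               # 2: average temperature over the last seven days
    x += binary_code(weekday, 3)         # 3: day of week, 001 (Monday) to 111 (Sunday)
    x += binary_code(month, 4)           # 4: month number, 0001 (January) to 1100 (December)
    x += [1 if is_working_day else 0]    # 1: holiday effect
    assert len(x) == 68
    return np.asarray(x, dtype=float)

# Example call with dummy (already normalized) values
vec = build_snn2_input(
    lagged_load_48=np.random.rand(48), lagged_temp_min_max=[0.4, 0.8, 0.35, 0.75],
    forecast_temp_min_max=[0.45, 0.82, 0.4, 0.78], demand_min_max_24h=[0.9, 0.3],
    avg_temp_7d=[0.5, 0.55], weekday=2, month=1, is_working_day=True)
```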
5. Results and discussion

5.1. Performance evaluation

The accuracy of the results in this paper is evaluated on the basis of three error indices: the mean absolute percentage error (MAPE), the normalized mean square error (NMSE) and the error variance (EV). MAPE is defined by

\mathrm{MAPE} = \frac{1}{NH} \sum_{i=1}^{NH} \left| \frac{P_i - A_i}{A_i} \right|   (11)

NMSE [41] is defined as

\mathrm{NMSE} = \frac{1}{\sigma^2 \, NH} \sum_{i=1}^{NH} (P_i - A_i)^2   (12)

where

\sigma^2 = \frac{1}{NH - 1} \sum_{i=1}^{NH} (A_i - A_{Ave})^2   (13)

EV [41] is defined as

\mathrm{EV} = \frac{1}{NH} \sum_{i=1}^{NH} \left( \left| \frac{P_i - A_i}{A_i} \right| - \mathrm{MAPE} \right)^2   (14)

where P_i and A_i are the ith predicted and actual values, respectively, A_{Ave} is the mean of the actual values and NH is the total number of predictions.
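The three indices of Eqs. (11)–(14) can be computed directly from the actual and predicted series; a minimal sketch, assuming plain NumPy arrays, is given below (multiply MAPE by 100 to express it as a percentage, as in the tables that follow).

```python
import numpy as np

def error_indices(actual, predicted):
    """Compute MAPE, NMSE and EV as defined in Eqs. (11)-(14)."""
    A = np.asarray(actual, dtype=float)
    P = np.asarray(predicted, dtype=float)
    nh = len(A)

    ape = np.abs((P - A) / A)                         # absolute percentage error per point
    mape = ape.mean()                                 # Eq. (11)

    sigma2 = np.sum((A - A.mean()) ** 2) / (nh - 1)   # Eq. (13)
    nmse = np.sum((P - A) ** 2) / (sigma2 * nh)       # Eq. (12)

    ev = np.mean((ape - mape) ** 2)                   # Eq. (14)
    return mape, nmse, ev

# Example with dummy half-hourly loads (MW)
actual = np.array([6100.0, 6250.0, 6400.0, 6350.0])
forecast = np.array([6000.0, 6300.0, 6450.0, 6200.0])
print(error_indices(actual, forecast))
```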
5.2. Numerical results

The performance of the SNN is compared with the ANN and Hybrid models presented in [42] for short-term load forecasting. The ANN model in [42] is a 3-layer feed-forward network trained using the Levenberg–Marquardt approach; the hyperbolic tangent function is used in both the hidden and output layers since it can produce positive and negative values, which speeds up the training process. The Hybrid method in [42] is an adaptive two-stage network combining a self-organizing map (SOM) and a support vector machine (SVM). The data from the AEMO website [37] from October 2008 to March 2009 are used to test the proposed SNNSTLF model. Table 1 gives the average mean absolute error (MAE) and MAPE obtained for each month from October 2008 to March 2009.

Table 1. Monthly performance comparison (SNN with forecasted temperature as input to SNN2).

| Month | SNN MAE | SNN MAPE | Hybrid [42] MAE | Hybrid [42] MAPE | ANN [42] MAE | ANN [42] MAPE |
|---|---|---|---|---|---|---|
| October 2008 | 123.56 | 2.11 | 121.83 | 2.15 | 134.87 | 2.57 |
| November 2008 | 118.58 | 2.06 | 123.50 | 2.12 | 140.52 | 2.63 |
| December 2008 | 119.25 | 2.09 | 116.34 | 2.17 | 126.39 | 2.49 |
| January 2009 | 129.26 | 2.10 | 126.73 | 2.14 | 168.04 | 2.81 |
| February 2009 | 111.54 | 1.92 | 119.07 | 1.95 | 139.68 | 2.37 |
| March 2009 | 109.56 | 1.90 | 116.49 | 1.94 | 123.21 | 2.29 |
| Average | 118.62 | 2.03 | 120.66 | 2.08 | 138.79 | 2.53 |

Table 2. Average monthly performance comparison (MAE and MAPE).

| Month | MAE (forecasted temperature as input to SNN2) | MAPE (forecasted temperature) | MAE (actual temperature as input to SNN2) | MAPE (actual temperature) |
|---|---|---|---|---|
| December 2008 | 119.25 | 2.09 | 111.86 | 1.98 |
| January 2009 | 129.26 | 2.10 | 113.29 | 2.07 |
| February 2009 | 111.54 | 1.92 | 105.52 | 1.84 |
| March 2009 | 109.56 | 1.90 | 100.78 | 1.79 |
| Average | 113.62 | 2.00 | 104.86 | 1.92 |

Table 3. Average monthly performance comparison (EV and NMSE).

| Month | EV (forecasted temperature as input to SNN2) | NMSE (forecasted temperature) | EV (actual temperature as input to SNN2) | NMSE (actual temperature) |
|---|---|---|---|---|
| December 2008 | 0.172 | 0.0689 | 0.164 | 0.0675 |
| January 2009 | 0.198 | 0.0784 | 0.189 | 0.0725 |
| February 2009 | 0.185 | 0.0526 | 0.174 | 0.0508 |
| March 2009 | 0.182 | 0.0537 | 0.177 | 0.0498 |

From Table 1, it is obvious that the proposed forecast engine performs better than the ANN and Hybrid models [42]: the average MAPE given in the last row of Table 1 is 2% less than that of the Hybrid model and 19.76% less than that of the ANN model in [42]. Table 2 compares the MAPE and MAE obtained when the actual and the forecasted temperature values are fed as inputs to the SNN2 model. It should be noted that actual temperature values cannot be obtained for real-time load forecasting; only forecasted temperatures can be used. However, to investigate the performance of SNN2, the actual temperature values are taken from the historical data and the errors are calculated. From Table 2, it is observed that improvements of 4% and 7.77% in MAPE and MAE, respectively, are achieved if the actual temperatures are used as inputs to the SNN2 model. This analysis shows that if the accuracy of the forecasted temperature is increased, the electrical load forecasts can be enhanced to a certain extent. A comparative analysis using EV and NMSE is tabulated in Table 3. It is observed from the table that the EV and NMSE obtained are lower when the actual historical temperature values are fed as inputs to the SNNSTLF engine; however, the values obtained for the two cases do not differ greatly. The results computed in Tables 2 and 3 show that the performance of the forecast engine for the months of February and March is much better than that obtained for the month of January, which is mainly attributed to the abnormally high peak load demands caused by the high temperatures in January.

Fig. 7. 5 January 2009. [Actual and forecasted half-hourly load (MW) versus time (half hour).]

Fig. 8. 12 January 2009. [Actual and forecasted half-hourly load (MW) versus time (half hour).]

Table 5. Performance evaluation on all weekdays – February 2009.

| Day | MAPE (including weather variables, Phases 1 and 2) | Max. APE (including weather variables) | MAPE (excluding weather variables, Phase 2) | Max. APE (excluding weather variables) |
|---|---|---|---|---|
| Monday | 1.94 | 8.45 | 5.95 | 9.65 |
| Tuesday | 1.96 | 7.26 | 5.98 | 7.52 |
| Wednesday | 1.87 | 7.33 | 5.78 | 9.17 |
| Thursday | 1.82 | 7.33 | 5.78 | 9.17 |
| Friday | 1.90 | 8.18 | 6.07 | 8.28 |
| Saturday | 1.95 | 8.84 | 9.21 | 15.37 |
| Sunday | 2.03 | 8.38 | 9.62 | 19.28 |

To understand the influence of the weather variables on the load pattern, SNN2 is also trained and tested using solely the historical load data. The results evaluated for each weekday for the months of January and February are tabulated in Tables 4 and 5, respectively. The mean absolute percentage error (MAPE) and maximum absolute percentage error (Max. APE) calculated with the incorporation of the weather variables for all the Mondays in January are 74% and 8.94% less than when the weather variables are excluded. The error analysis carried out for the month of February is presented in Table 5. It is found that the average MAPE (incorporating temperature) obtained for all Mondays in February is 6% less
than the average MAPE obtained in January. The MAPE and Max. APE computed for Saturdays and Sundays are higher than those of the other weekdays for both January and February. The MAPE and the maximum absolute percentage error for the other typical weekdays (Tuesday to Friday) are presented in Tables 4 and 5. From the above analysis, it is concluded that improving the accuracy of the temperature forecasts will indirectly enhance the performance of the load forecasting model.
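The per-weekday error summaries reported in Tables 4 and 5 can be reproduced by grouping the half-hourly absolute percentage errors by day of the week; the sketch below assumes the forecasts are available as (date, actual, predicted) records, which is an illustrative data layout rather than the authors' format.

```python
from collections import defaultdict
from datetime import date
import numpy as np

def weekday_error_summary(records):
    """Group absolute percentage errors by weekday and report MAPE and Max. APE.

    `records` is an iterable of (day, actual, predicted) tuples, where `day`
    is a datetime.date; this record layout is an assumed format.
    """
    ape_by_day = defaultdict(list)
    for day, actual, predicted in records:
        ape = abs((predicted - actual) / actual) * 100.0
        ape_by_day[day.strftime("%A")].append(ape)
    return {d: (float(np.mean(v)), float(np.max(v))) for d, v in ape_by_day.items()}

# Example with two dummy half-hourly observations
records = [(date(2009, 2, 2), 6100.0, 6010.0),   # a Monday
           (date(2009, 2, 3), 6320.0, 6400.0)]   # a Tuesday
print(weekday_error_summary(records))            # {'Monday': (MAPE, Max. APE), ...}
```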
Table 4. Performance evaluation on all weekdays – January 2009.

| Day | MAPE (including weather variables, Phases 1 and 2) | Max. APE (including weather variables) | MAPE (excluding weather variables, Phase 2) | Max. APE (excluding weather variables) |
|---|---|---|---|---|
| Monday | 2.08 | 8.15 | 8.03 | 8.95 |
| Tuesday | 2.02 | 7.46 | 8.07 | 8.07 |
| Wednesday | 2.00 | 7.23 | 6.92 | 9.35 |
| Thursday | 2.04 | 8.45 | 7.36 | 8.31 |
| Friday | 2.03 | 8.25 | 6.49 | 9.23 |
| Saturday | 2.21 | 8.78 | 9.81 | 19.37 |
| Sunday | 2.28 | 9.15 | 10.83 | 24.65 |

Fig. 9. 25 February 2009. [Actual and forecasted half-hourly load (MW) versus time (half hour).]

Fig. 10. 23 March 2009. [Actual and forecasted half-hourly load (MW) versus time (half hour).]

Fig. 12. 3 January 2009. [Absolute percentage error (%) versus time (half hour) for the SNN including and excluding weather variables.]
Figs. 7–10 compare the half-hourly one-day-ahead load pattern with the actual load pattern when the forecasted temperatures are considered as inputs to the proposed SNNSTLF. It is observed from these figures that the forecasted curve closely follows the actual load curve. In Figs. 7 and 8, the MAPE calculated is 2.12% and 2.04%, respectively. In Fig. 7, the MAPE obtained for the first 12 h (24 observations) is found to be 1.91%, and it is 2.33% for the remaining time horizon. The actual peak load is 6600 MW, whereas the forecasted peak demand is 6515 MW. The average error obtained (Fig. 8) is 2.03%, 1.94% and 2.03% for the first 8 h (16 observations), the next 8 h and the remaining time horizon, respectively. The actual peak load demand observed in Fig. 8 is 6652 MW, whereas the peak load obtained from the forecasted curve is 6510 MW. Higher deviations from the actual load pattern are observed from 12 P.M. to 6 P.M. The maximum and minimum absolute errors obtained during the forecast period in Fig. 8 are 324.7 MW and 3.9 MW, respectively. The MAPE computed for the half-hourly one-day-ahead forecasts in Figs. 9 and 10 is 2.01% and 1.92%, respectively. The minimum absolute error obtained for the half-hourly one-day-ahead forecasts shown in Figs. 9 and 10 is 0.002 MW, and the maximum absolute percentage error obtained is 4.23% and 4.01%, respectively. In Fig. 11, the forecasted load curves with and without the weather variables are plotted. It is obvious that the forecasted curve including the weather variables follows the actual load pattern more closely than the other curve (without weather variables). Fig. 12 shows the variation in the absolute percentage error for the forecasted load patterns shown in Fig. 11; it indicates that the percentage error in the forecasted curve is significantly reduced when the weather variables are incorporated in SNN2. All the load values on the y-axis of Figs. 7–11 are scaled by a factor of 1000, i.e., a load of 6500 MW is plotted as 6.5. The total average time taken for training the SNNSTLF model is 8 min. All the codes for the SNN are developed in Matlab Version 2007b. The hardware configuration of the computer used is an Intel Core i5 processor with a 2.53 GHz CPU and 4 GB RAM, and the operating system used is Windows 7 Home Premium.
Fig. 11. 3 January 2009. [Actual load, forecasted load including weather variables, and forecasted load excluding weather variables (MW) versus time (half hour).]
6. Conclusion

A new forecast engine based on an SNN for short-term load forecasting has been presented. The results obtained from the proposed forecast engine are compared with existing models available in the literature. The SNNSTLF model involves the SNN handling historical data in an efficient manner with increased training accuracy. It can effectively model the non-linear relationship between the load pattern and several other factors such as temperature, humidity, day of the week and month of the year. This paper also validates the improvement in the performance of third generation neural networks compared with second generation neural networks such as the Levenberg–Marquardt backpropagation neural network and the support vector machine based on a self-organizing map (Hybrid model). The proposed model can be used in the power system operation and planning environment, and it can be extended to other power system forecasting applications such as electricity price forecasting, wind power generation forecasting and solar radiation forecasting.

References

[1] D. Srinivasan, C.S. Chang, A.C. Liew, Survey of hybrid fuzzy neural approaches to electrical load forecasting, in: Proceedings on IEEE International Conference on Systems, Man and Cybernetics, Part 5, Vancouver, BC, 1995, pp. 4004–4008.
[2] N. Amjady, Short-term bus load forecasting of power systems by a new hybrid method, IEEE Transactions on Power Systems 22 (1) (2007) 333–341.
[3] H.M. Al-Hamadi, S.A. Soliman, Long term/mid term electric load forecasting based on short-term correlation and annual growth, Electrical Power Systems Research 74 (3) (2005) 353–361.
[4] H.S. Hippert, C.E. Pedreira, R. Castro, Neural networks for short-term load forecasting: a review and evaluation, IEEE Transactions on Power Systems 16 (1) (2001) 44–55.
[5] K.R. Damitha, G.K. George, C.F. Richard, Economic impact analysis of load forecasting, IEEE Transactions on Power Systems 12 (3) (1997) 1388–1392.
[6] T. Miyake, J. Murata, K. Hirasawa, One-day-through seven-day ahead electrical load forecasting in consideration of uncertainties of weather information, Electrical Engineering in Japan 115 (8) (1995) 22–32.
[7] H. Chen, Y. Du, J. Jiang, Weather sensitive short-term load forecasting using knowledge based ARX models, in: Proc. IEEE Power Eng. Soc. General Meeting, vol. 1, 2005, pp. 190–196.
[8] S. Fan, K. Methaprayoon, W.J. Lee, Short-term multi-region load forecasting based on weather and load diversity analysis, in: Proceedings of 39th NAPS, 2007, pp. 562–567.
[9] S. Fan, L. Chen, W.J. Lee, Short-term load forecasting using comprehensive combination based on multi-meteorological information, in: Proceedings of IEEE Industrial and Commercial Power Systems Technical Conference, 2008, pp. 1–7.
[10] K. Methaprayoon, W. Lee, S. Rasmiddatta, J. Liao, R. Ross, Multi-stage artificial neural network short-term load forecasting engine with front-end weather forecast, in: Proceedings of IEEE Ind. and Comm. Power Systems Technical Conference, 2006, pp. 1–7.
[11] A.D. Papalexopoulos, T.C. Hesterberg, A regression-based approach to short-term system load forecasting, IEEE Transactions PWRS 5 (4) (1990) 1535–1544.
[12] L.F. Amaral, R.C. Souza, M. Stevenson, A smooth transition periodic autoregressive (STPAR) model for short-term load forecasting, International Journal of Forecasting 24 (2008) 603–615.
[13] J.M. Vilar, R. Cao, G. Aneiros, Forecasting next day electricity demand and price using non-parametric functional methods, Electrical Power and Energy Systems 39 (2012) 48–55.
[14] S.S. Pappas, L. Ekonomou, P. Karampelas, D.C. Karamousantas, S.K. Katsikas, G.E. Chatzarakis, P.D. Skafidas, Electricity load demand forecasting of the Hellenic power system using an ARMA model, Electrical Power Systems Research 80 (2010) 256–264.
[15] J.W. Taylor, Triple seasonal methods for short-term load forecasting, European Journal of Operational Research 204 (2010) 139–152.
[16] A.G. Bakirtzis, V. Petridis, S.J. Kiartzis, M.C. Alexiadis, A.H. Maissis, A neural network short term load forecasting model for the Greek power system, IEEE Transactions on Power Systems 11 (2) (1996) 858–863.
[17] P. Bunnoon, K. Chalermyanont, C. Limsakul, Multi substation control central load area forecasting by using HP-filter and double neural networks (DP-NN's), Electrical Power Energy Systems 44 (2013) 561–570.
[18] M. El-Telbany, F. El-Karmi, Short-term forecasting of Jordanian electricity demand using particle swarm optimization, Electrical Power Systems Research 78 (2008) 425–433.
[19] C. Xia, J. Wang, K. McMenemy, Short, medium and long term load forecasting model and virtual load forecaster based on radial basis function neural networks, Electrical Power Energy Systems 32 (2010) 743–750.
[20] Y. Chen, P.B. Luh, C. Guan, Y. Zhao, L.D. Michel, M.A. Coolbeth, P.B. Friedland, J.R. Stephen, Short-term load forecasting: similar day based wavelet neural networks, IEEE Transactions on Power Systems 25 (1) (2010) 322–330.
[21] M. Hanmandlu, B.K. Chauhan, Load forecasting using hybrid models, IEEE Transactions on Power Systems 26 (1) (2011) 20–29.
[22] M. Lopez, S. Valero, C. Senabre, J. Aparicio, A. Gabaldon, Application of SOM neural networks to short-term load forecasting: the Spanish electricity market case study, Electrical Power Systems Research 91 (2012) 18–27.
[23] S. Chenthur Pandian, K. Duraiswamy, C. Christober Asir Rajan, N. Kanagaraj, Fuzzy approach for short term load forecasting, Electrical Power Systems Research 76 (2006) 541–548.
[24] Y. Bodyanskiy, S. Popov, T. Rybalchenko, Multilayer neuro-fuzzy network for short-term electric load forecasting, Lecture Notes in Computer Science 5010 (2008) 339–348.
[25] M. Amina, V.S. Kodogiannis, I. Petronias, D. Tomtis, A hybrid intelligent approach for the prediction of electricity consumption, Electrical Power and Energy Systems 43 (2012) 99–108.
[26] P. Chang, C. Fan, J. Lin, Monthly electricity demand forecasting based on a weighted evolving fuzzy neural network approach, Electrical Power and Energy Systems 33 (2011) 17–27.
[27] J. Nagi, K.S. Yap, F. Nagi, S.K. Tiong, S.K. Ahmed, A computational intelligence for prediction of daily peak load, Applied Soft Computing 11 (2011) 4773–4788.
[28] W.-C. Hong, Y. Dong, W.Y. Zhang, L.-Y. Chen, B.K. Panigrahi, Cyclic electric load forecasting by seasonal SVR with chaotic genetic algorithm, Electrical Power and Energy Systems 11 (2013) 604–613.
[29] A.L. Hodgkin, A.F. Huxley, A quantitative description of membrane current and its application to conduction and excitation in nerve, Journal of Physiology (London) 117 (1952) 500–544.
[30] P. Joshi, W. Maass, Movement generation with circuits of spiking neurons, Neural Computation 17 (8) (2005) 1715–1738.
[31] W. Gerstner, Spiking Neuron Models, Cambridge University Press, Cambridge, United Kingdom, 2002.
[32] W. Gerstner, Time structure of the activity in neural network models, Physical Review E 51 (1995) 738–758.
[33] W. Bialek, F.R.R. Steveninck, D. Warland, Spikes: Exploring the Neural Code, MIT Press, Cambridge, 1997.
[34] W. Bialek, F.R.R. Steveninck, D. Warland, Reading a neural code, Science 252 (1991) 1854–1857.
[35] V. Sharma, D. Srinivasan, A spiking neural network based on temporal encoding for electricity price time series forecasting in deregulated markets, in: International Joint Conference on Neural Networks (IJCNN), July 2010.
[36] S.M. Bohte, Spiking Neural Networks, Doctoral Dissertation, available online, 2002.
[37] http://www.aemo.com.au/en/Electricity/NEM-Data/Price-and-Demand-DataSets/AggregatedPrice-and-Demand-2006-to-2010
[38] www.wunderground.com
[39] http://www.bom.gov.au/climate/data/index.shtml?bookmark=200
[40] G.E.P. Box, G.M. Jenkins, G.C. Reinsel, Time Series Analysis: Forecasting and Control, third ed., Prentice-Hall, Upper Saddle River, NJ, 1994.
[41] N. Amjady, F. Keynia, H. Zareipour, Short-term load forecasts of microgrids by a new bi-level prediction strategy, IEEE Transactions on Smart Grid 1 (3) (2010) 286–294.
[42] S. Fan, R.J. Hyndman, Short-term load forecasting based on a semi-parametric additive model, IEEE Transactions on Power Systems 27 (1) (2012) 134–141.