Electric Power Systems Research 70 (2004) 237–244

Application of fuzzy neural networks and artificial intelligence for load forecasting

Gwo-Ching Liao∗, Ta-Peng Tsao

Department of Electrical Engineering, Fortune Institute of Technology, 125-8 Chyi-Wen Road, Chyi-Shan 842, Kaohsiung County, Taiwan

Received 8 May 2003; received in revised form 10 November 2003; accepted 9 December 2003

∗ Corresponding author. Tel.: +886-7-3835162; fax: +886-7-6618850. E-mail address: [email protected] (G.-C. Liao).

doi:10.1016/j.epsr.2003.12.012

Abstract

An integrated evolving fuzzy neural network and simulated annealing (AIFNN) load forecasting method is presented in this paper. First, fuzzy hyper-rectangular composite neural networks (FHRCNNs) are used for the initial load forecast. Then, evolutionary programming (EP) and simulated annealing (SA) are used to find the optimal values of the FHRCNN parameters (synaptic weights, biases, membership functions, the sensitivity factors in the membership functions and the adjustable synaptic weights). EP has a good capability for locating the global optimum but a poor capability for refining a local optimum, while SA is good only at local search. We therefore combine the two methods to obtain the advantages of both, and so remedy the shortcoming of traditional ANN training, in which the weights and biases are often trapped in a local optimum. Finally, the AIFNN is applied to examine whether the solution quality improves and the load forecasting error is actually reduced. The proposed AIFNN load forecasting scheme was tested using data obtained from a sample study covering 1-year, 1-month and 24-h time periods. The results demonstrate the accuracy of the proposed load forecasting scheme.
© 2004 Elsevier B.V. All rights reserved.

Keywords: Load forecasting; Evolutionary programming; Simulated annealing; Fuzzy neural network

1. Introduction

Short-term load forecasting (STLF) plays an important role in power systems. Accurate short-term load forecasting has a significant influence on the operational efficiency of a power system, affecting unit commitment, annual hydro-thermal maintenance scheduling, hydro-thermal coordination, demand-side management, interchange evaluation, security assessment and other tasks. Improvements in the accuracy of short-term load forecasts can result in significant financial savings for utilities and co-generators. Various forecasting techniques have been proposed in the last few decades. These models include: time series [1,2], multiple linear regression [3], auto-regressive moving average (ARMA) [4,5] and expert systems (ES) [6,7]. The time series model uses the historical load data to extrapolate future loads. It is a non-weather-sensitive approach, and it must assume that the load is a stationary time series with normal distribution characteristics. When the historical


load data does not support this condition, the accuracy of the forecast decreases. If the factors that influence the load are also considered, the model becomes more complicated, and determining the model order depends heavily on expert experience, which makes the approach difficult to apply. Regression models derive linear models for the system load from consumer habits and weather behavior. The main principle behind regression is to use the common relationships between the items included in the model in order to predict the relative change in one item or variable according to changes in another. Applied to short-term load forecasting, weather data such as temperature, humidity, etc., supply the explanatory variables of a multiple linear regression model for the load, and the least-squares method is used to estimate the regression coefficients. The main disadvantage of this method is that when the relationship between the load and the chosen explanatory variables is unclear, a large forecasting error is produced. The ARMA model is not efficient in modeling weekends, holidays, and seasonal changeover periods. The expert system approach is a rule-based method for load forecasting, using


the logic of a power system operator to develop mathematical equations for forecasting, which is often very difficult to do. In recent years, with the development of artificial intelligence, forecasting with artificial neural networks has become practical [8–14]. Back-propagation (BP) on a multi-layer perceptron is the most popular model for the STLF problem. Although the BP model has been successfully applied in many fields and has solved a number of practical problems, its poor convergence behavior and painfully slow convergence speed have been the bottleneck in using it more effectively, and conventional artificial neural network models sometimes suffer from the local-minimum problem. A neural network with a fixed structure trained by a genetic algorithm (GA) for short-term load forecasting was reported in [15]. Neural networks learn well when trained with a GA or EP. Usually, the structure of a neural network is fixed for a learning process; however, a fixed structure may not provide the best performance within a given training period. If the neural network structure is too complicated, the training period will have to be long, and consequently the implementation costs will be high. In this paper, a two-layer fuzzy neural network tuned with EP and SA is proposed to facilitate finding the optimal network structure. The proposed neural network is then used to forecast the daily load. Simulation results are given to illustrate the performance of the proposed neural network.

2. Artificial neural network

2.1. Fuzzy neural network

Here a fuzzy logic system (FLS) is embedded in a neural network. The network first learns from numerical data: by adjusting its synaptic weights it captures the relation between input and output, and the fuzzy rules are then assembled from the trained synaptic weights. The method is illustrated in Fig. 1. The input–output relation represented by the hidden layer of the neural network can be expressed as the fuzzy rule R_j:

IF (x_1 is A_1^j) and (x_2 is A_2^j) and \cdots (x_p is A_p^j), THEN y is B^j    (1)

where x is the p-dimensional input vector, y the output, A_i^j the label of the membership function associated with the input variable x_i in rule j and B^j the label associated with the output variable y in rule j.

Fig. 1. The structure of fuzzy neural network.

An FLS consists of four basic elements: the fuzzifier, the fuzzy rulebase, the inference engine, and the defuzzifier. The fuzzifier maps the crisp inputs into fuzzy sets, which are used as inputs to the inference engine. The fuzzy rulebase is a collection of rules of the form of (1). The inference engine is a decision-making logic which employs the rules of the fuzzy rulebase to produce a fuzzy output. The defuzzifier maps the fuzzy sets produced by the inference engine into crisp numbers. In Fig. 1, w_j is the synaptic weight from the hidden layer to the output layer.

2.2. Fuzzy hyper-rectangular composite neural networks (FHRCNNs) [16]

The structure of the fuzzy hyper-rectangular composite neural network is shown in Fig. 2. The FHRCNN is described by the following equations:

y_k(x^t) = \sum_{j=1}^{J} [w_{jk} m_j(x^t) + \theta_{jk}]    (2)

m_j(x^t) = \exp\{-S_j [net_j(x^t) - net_j]^2\}    (3)

net_j = \sum_{i=1}^{P} (M_{ij} - m_{ij})    (4)

net_j(x^t) = \sum_{i=1}^{P} \max(M_{ij} - m_{ij}, x_i^t - m_{ij}, M_{ij} - x_i^t)    (5)

Fig. 2. The structure of FHRCNNs.

where x^t = (x_1^t, x_2^t, ..., x_p^t)^T is a training input vector and p the dimension of the input; x_i^t, i = 1, 2, ..., p denotes the ith input; M_{ij} and m_{ij}, j = 1, 2, ..., n_h (the adjustable synaptic weights) are the upper-bound and lower-bound values between the ith input and the jth hidden node, respectively; n_h is the number of hidden nodes; m_j the membership function; S_j the sensitivity factor of the membership function; w_{jk}, j = 1, 2, ..., n_h, k = 1, 2, ..., n_out the weight from the jth hidden node to the kth output; n_out the number of outputs of the proposed neural network; θ_{jk} the bias from the jth hidden node to the kth output node; and y_k(x^t), k = 1, 2, ..., n_out the kth output of the proposed neural network.
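
To make Eqs. (2)–(5) concrete, the following minimal NumPy sketch implements one forward pass of an FHRCNN. It is an illustration only: the function name, array shapes and variable names are our own assumptions, not the authors' code.

```python
import numpy as np

def fhrcnn_forward(x, M, m, S, w, theta):
    """Forward pass of an FHRCNN, following Eqs. (2)-(5).

    x     : (P,)       input vector x^t
    M, m  : (P, nh)    upper/lower bounds of each hyper-rectangle (adjustable weights)
    S     : (nh,)      sensitivity factors of the membership functions
    w     : (nh, nout) hidden-to-output synaptic weights
    theta : (nh, nout) biases from hidden nodes to output nodes
    """
    # Eq. (4): reference net value of each hidden node (sum of side lengths)
    net_ref = np.sum(M - m, axis=0)                          # (nh,)
    # Eq. (5): net input; element-wise max of the three candidate terms
    net_x = np.sum(np.maximum.reduce(
        [M - m, x[:, None] - m, M - x[:, None]]), axis=0)    # (nh,)
    # Eq. (3): membership value of each hidden node
    mj = np.exp(-S * (net_x - net_ref) ** 2)                 # (nh,)
    # Eq. (2): each output sums weighted memberships plus the per-node biases
    return mj @ w + theta.sum(axis=0)                        # (nout,)
```

Note that when x^t lies inside the hyper-rectangle [m_ij, M_ij] for every i, the max in Eq. (5) reduces to M_ij − m_ij, so net_j(x^t) = net_j and the membership m_j(x^t) = 1; it decays exponentially as x^t moves outside the rectangle.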

2.3. Parameters tuning of the FHRCNNs

In this section, the proposed neural network is employed to learn an input–output relationship, described as

y_k^d(x^t) = f(C_i^d(x^t)),  t = 1, 2, ..., n_d    (6)

where

C_i^d(x^t) = [C_1^d(x^t), C_2^d(x^t), ..., C_{n_in}^d(x^t)]    (7)

y_k^d(x^t) = [y_1^d(x^t), y_2^d(x^t), ..., y_{n_out}^d(x^t)]    (8)

Here y_k^d(x^t) is the desired output corresponding to the input C_i^d(x^t) of an unknown non-linear function f(·), and n_d is the number of input–output data pairs. The object value is defined through

MAPE = \frac{1}{n_{out}} \sum_{k=1}^{n_{out}} \frac{1}{n_d} \sum_{t=1}^{n_d} \frac{|y_k^d(x^t) - y_k(x^t)|}{y_k^d(x^t)}    (9)

O_v = \frac{1}{1 + e^{MAPE}}    (10)

The objective is to minimize the mean absolute percentage error (MAPE) using EP, with the chromosome set to [w_{jk}, θ_{jk}, S_j, M_{ij}, m_{ij}] for all i, j, k.
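
As a short illustration of Eqs. (9) and (10) and of the chromosome encoding, the sketch below continues the NumPy example above; the helper names are our own assumptions.

```python
import numpy as np

def mape(y_desired, y_pred):
    """Eq. (9): MAPE averaged over all n_d samples and n_out outputs.

    y_desired, y_pred : (n_d, n_out) arrays of desired and network outputs.
    """
    return np.mean(np.abs(y_desired - y_pred) / y_desired)

def object_value(y_desired, y_pred):
    """Eq. (10): O_v lies in (0, 0.5]; maximizing O_v minimizes the MAPE."""
    return 1.0 / (1.0 + np.exp(mape(y_desired, y_pred)))

def pack_chromosome(w, theta, S, M, m):
    """Flatten [w_jk, theta_jk, S_j, M_ij, m_ij] into one EP individual."""
    return np.concatenate([a.ravel() for a in (w, theta, S, M, m)])
```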

3. Evolutionary programming with simulated annealing

3.1. Implementation of the evolutionary programming

The EP implementation consists of the following nine stages:

(1) Initialization. The initial parent trial vectors O_l, l = 1, 2, 3, ..., n, are determined by setting their components:

O_l = [w_{jk}, θ_{jk}, S_j, M_{ij}, m_{ij}] = [X_1, X_2, X_3, ..., X_z]    (11)

where n is the number of individuals in the population, z the total number of weights and biases, O_l the set of all random individuals and [X] the random individuals generated by the distribution function.

(2) Calculation of the objective value. Calculate the objective value for every individual; the objective value gives the fitness:

F_{O_l} = O_v(O_l)    (12)

where F_{O_l} is the objective function value of the set of individuals O_l in a population, and O_v(O_l) the objective function for the set of individuals O_l.

(3) Mutation. Mutation is used to generate n offspring from the parent population by adding a Gaussian random variable to each parent:

O'_l = [X'_1, X'_2, X'_3, ..., X'_z]    (13)

X'_r = X_r + N(0, σ_r^2),  for r = 1, 2, 3, ..., z    (14)

where N(0, σ_r^2) is a Gaussian random variable with mean 0 and variance σ_r^2:

σ_r = β (X_{r,max} - X_{r,min}) \frac{f(O_l)}{f_{min}}    (15)

where f_min is the minimum objective value among the n trial solutions, X_{r,max} and X_{r,min} the maximum and minimum limits of the rth element, and β the mutation scale, 0 < β ≤ 1. The mutation scale should be varied to prevent the search from being trapped in a local minimum. An adaptive mutation scale is obtained by changing β after each mutation: the initial β is 1 and it then decreases in steps β_step of 0.001 to 0.01 down to β_final = 0.005; the β value depends on the number of generations and the complexity of the system.

(4) Calculation of the offspring objective value. Calculate the objective value for the n offspring individuals in the population.

(5) Competition. Each individual O_l in the combined population must compete with other individuals for the chance to be carried into the next generation. A weight value W_{O_l} is assigned to the individual according to the competition as follows:

W_{O_l} = \sum_{g=1}^{b} W_g    (16)

W_g = 1 if rand_1[0, 1] < \frac{f(O_s)}{f(O_s) + f(O_l)}, and W_g = 0 otherwise    (17)

where W_{O_l} is the weight value of individual l, f(O_s) the objective function value of a randomly selected individual s, f(O_l) the objective function value of individual l, and b the competition number:

s = ⌊b · rand_2[0, 1] + 1⌋    (18)

where ⌊X⌋ denotes the greatest integer less than or equal to X. (Stages (3) and (5) are illustrated in a short code sketch following Fig. 3.)

(6) Selection. After the competition, the b trial solutions, parents and offspring, are ranked in descending order of the score obtained in (16).


(7) Advance selection. The SA is used to test the b individuals one by one, and the population is then updated with the new individuals; this performs an advance selection of the solution.

(8) Stop rule. The process of generating new trials and selecting the individuals with the best function values continues until the function value shows no obvious further improvement, or until the given count for the total number of generations is reached.

(9) Output. Output the optimal solutions.

The flowchart for implementing evolutionary programming with simulated annealing is shown in Fig. 3.

Fig. 3. Evolutionary programming implementation with simulated annealing.
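
The mutation and competition stages, Eqs. (13)–(18), can be sketched as follows. This is a hedged illustration under our own assumptions: we treat f as a cost to be minimized (e.g. the MAPE), which reads naturally from Eqs. (15) and (17) and is equivalent to maximizing O_v of Eq. (10); NumPy and the fixed seed are also our choices.

```python
import numpy as np

rng = np.random.default_rng(0)

def mutate(parents, f_vals, x_max, x_min, beta):
    """Eqs. (13)-(15): Gaussian mutation with an adaptive scale.

    parents : (n, z) population, one chromosome per row
    f_vals  : (n,)   cost f(O_l) of each parent (lower is better)
    x_max, x_min : (z,) element-wise limits of the chromosome
    beta    : mutation scale, 0 < beta <= 1, decreased after each generation
    """
    f_min = f_vals.min()                                        # best cost among the n trials
    sigma = beta * (x_max - x_min) * (f_vals / f_min)[:, None]  # Eq. (15): worse parents mutate more
    return parents + rng.normal(0.0, sigma)                     # Eqs. (13)-(14)

def compete(f_vals, b):
    """Eqs. (16)-(18): each individual meets b random opponents and scores a
    win with probability f(O_s)/(f(O_s) + f(O_l)), so low-cost individuals
    tend to score higher; survivors are chosen by ranking the scores.
    Assumes b <= len(f_vals); Eq. (18) draws the opponent among b candidates.
    """
    scores = np.zeros(len(f_vals), dtype=int)
    for l in range(len(f_vals)):
        for _ in range(b):
            s = int(b * rng.random())                           # Eq. (18), 0-based index
            if rng.random() < f_vals[s] / (f_vals[s] + f_vals[l]):
                scores[l] += 1                                  # W_g = 1 for this trial, Eq. (17)
    return scores                                               # W_{O_l}, Eq. (16)
```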

3.2. The simulated annealing (SA) approach

The SA has nine steps, as follows:

• Step 1: Initialization—accept the solutions from EP, choose a starting point x_0, set a high starting temperature T_0 and set the iteration count i to 1.

• Step 2: Evaluate the father generation objective function:

O_v(x_0) = f(x_0)    (19)

where f(x_0) is the objective function for the individual population x_0.

• Step 3: Mutation to generate individual offspring—generate a set of offspring by adding a Gaussian random variable to the parents:

x_n = x_0 + N(0, αT_i)    (20)

where x_0 is the father population, x_n the offspring individual, α a constant, N(0, αT_i) a Gaussian random variable with mean 0 and variance αT_i, and T_i the temperature:

T_i = C^{i-1} × T_0    (21)

where T_0 is the initial temperature, i the iteration number, and C the cooling rate, 0 < C < 1.

• Step 4: Calculate the offspring objective function:

O_v(x_n) = f(x_n)    (22)

• Step 5: Renew the father generation—check whether one of the following conditions is satisfied; when it is, the offspring is selected as the new father generation:

(1) when ΔO = O_v(x_n) − O_v(x_0) < 0    (23)

(2) when ΔO = O_v(x_n) − O_v(x_0) > 0    (24)

use the acceptance probability

h(ΔO, T_i) = \frac{1}{1 + e^{ΔO/T_i}}    (25)

to accept the solution.

• Step 6: Determine the amount of search—at every temperature the number of searches has a set standard. If the search at this temperature conforms to the standard, go to Step 7; otherwise go to Step 3.

• Step 7: Stop rule—check whether the temperature conforms to the stopping rule. If yes, output the solution; if no, go to Step 8.

• Step 8: Reduce the temperature T_i according to the annealing schedule (a simple setting is T_i ← ηT_i, where η is a constant between 0 and 1).

• Step 9: Increment the iteration count, i = i + 1, and go to Step 3.
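
The nine SA steps above can be condensed into a short sketch. The loop structure and all schedule constants below are illustrative assumptions (the paper gives no numerical values), and f is the objective to be minimized:

```python
import numpy as np

rng = np.random.default_rng(1)

def sa_refine(x0, f, T0=1.0, alpha=0.1, eta=0.9, n_temps=50, moves_per_temp=20):
    """Simulated-annealing refinement of one EP solution (Steps 1-9).

    x0 : (z,) starting chromosome taken from EP (Step 1)
    f  : objective function to minimize (Eqs. (19) and (22))
    """
    x, fx = x0.copy(), f(x0)
    T = T0
    for _ in range(n_temps):                    # outer loop: Steps 6-9
        for _ in range(moves_per_temp):         # fixed search effort per temperature
            # Step 3 / Eq. (20): Gaussian move with variance alpha*T
            xn = x + rng.normal(0.0, np.sqrt(alpha * T), size=x.shape)
            dO = f(xn) - fx                     # Eqs. (23)-(24)
            # Step 5: accept improvements outright, otherwise accept with
            # the sigmoid probability of Eq. (25)
            if dO < 0 or rng.random() < 1.0 / (1.0 + np.exp(dO / T)):
                x, fx = xn, fx + dO
        T *= eta                                # Step 8: annealing schedule
    return x, fx
```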

4. Short-term load forecasting by AIFNN

The AIFNN accepts the optimal parameters [w_{jk}, θ_{jk}, S_j, M_{ij}, m_{ij}] provided by the EP-SA procedure of Section 3 and carries out the network calculation. Each neural network has 24 outputs representing the expected hourly load for a day. A diagram of the daily load forecasting system is shown in Fig. 4.


Fig. 4. The actual artificial neural network.

Each neural network has 83 inputs and 24 outputs. Among the 83 input nodes, the first 24 inputs represent the previous 24 h loads. Nodes 25 to 48 represent the loads of the past 25 to 48 h. Nodes 49 to 52 represent the previous day's minimum and maximum temperatures. Nodes 53 to 76 represent the temperature forecast for the next 24 h. Nodes 77 to 83 represent the day of the week. We train the forecasting neural network off-line. Off-line training is a time-consuming process; however, once trained, the system can make forecasts quickly (as a lower number of iterations is needed). In this example, we use historical data from 11 February 1998 to 10 March 2000 for off-line training with 500 iterations. Once trained off-line, the forecasting system operates in an on-line mode, and the parameters of the neural network are updated day by day with 100 iterations. The actual proposed neural network is shown in Fig. 4. Referring to Eqs. (2)–(5), the proposed neural network used for the daily load forecasting is given by

y_k(x^t) = \sum_{j=1}^{J} [w_{jk} m_j(x^t) + θ_{jk}],  J = 9, 20, 32, 46, 60 or 75,  k = 1, 2, ..., 24    (26)
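
As a concrete illustration of the input layout just described, the sketch below assembles the 83-node input vector. The function name is our own, and the assumption that nodes 49–52 hold four temperature extremes at two measuring points is ours as well (the paper states only "minimum temperature and maximum temperature" for those four nodes):

```python
import numpy as np

def build_input_vector(load_prev24, load_prev48, temp_prev_extremes,
                       temp_forecast24, weekday):
    """Assemble the 83 network inputs described in Section 4.

    load_prev24        : (24,) hourly loads of the previous 24 h (nodes 1-24)
    load_prev48        : (24,) hourly loads of hours 25-48 back (nodes 25-48)
    temp_prev_extremes : (4,)  previous day's min/max temperatures (nodes 49-52;
                         assumed here to be min/max at two measuring points)
    temp_forecast24    : (24,) next 24 h temperature forecast (nodes 53-76)
    weekday            : int in 0..6, encoded one-hot on nodes 77-83
    """
    day = np.zeros(7)
    day[weekday] = 1.0
    x = np.concatenate([load_prev24, load_prev48, temp_prev_extremes,
                        temp_forecast24, day])
    assert x.shape == (83,)    # 24 + 24 + 4 + 24 + 7 = 83 inputs
    return x
```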

The number of hidden nodes (n_h) is varied over 9, 20, 32, 46, 60 or 75 in order to test the learning performance. The MAPE is defined as follows:

MAPE = \frac{1}{24} \sum_{k=1}^{24} \frac{1}{n_d} \sum_{t=1}^{n_d} \frac{|y_k^d(x^t) - y_k(x^t)|}{y_k^d(x^t)}    (27)

EP-SA is employed to tune the parameters [w_{jk}, θ_{jk}, S_j, M_{ij}, m_{ij}]. The objective is to maximize the object function (10). The population size used for EP is 30, the chromosomes used for EP-SA are [w_{jk}, θ_{jk}, S_j, M_{ij}, m_{ij}] for all i, j, k, and the number of iterations used to train the proposed neural network is 500.

5. Examples and results

To evaluate the performance of the proposed load forecasting scheme, the trained artificial neural network and evolutionary programming were tested with data obtained from a sample study performed on the Taiwan Power System, predicting the daily energy consumption 24 h ahead. The testing data were from 21 April 2000 to 20 July 2000, and were used to forecast the individual load demands for 21–27 August 2000. In the analysis, the load forecasting error is divided into two parts: workdays (Monday to Friday) and non-workdays (Saturday, Sunday or holidays). The results show that the load forecasting errors for workdays are smaller than those for non-workdays. Figs. 5–9 show the forecast result curves for workdays and non-workdays.

Fig. 5. Forecast results for 21 August (Monday).


Fig. 6. Forecast results for 23 August (Wednesday).

Fig. 9. Forecast results for 27 August (Sunday).

Fig. 7. Forecast results for 25 August (Friday).

Fig. 8. Forecast results for 26 August (Saturday).

In Table 1, the MAPE and maximum MAPE are compared among the artificial neural network (ANN), the genetic algorithm artificial neural network (GA-ANN) and the AIFNN. The ANN approach used a traditional search algorithm to carry out the load forecasting. The GA-ANN approach combined a genetic algorithm with an artificial neural network, using the GA to optimize the weights and biases of the ANN and then the ANN to produce the load forecast. A 1-week period is used to compare the forecast results. From the table, the MAPE and maximum MAPE of the AIFNN are all lower than those of the ANN and GA-ANN. Table 2 compares the MAPE and maximum MAPE on a monthly basis over 1 year. It shows that the average MAPE of the ANN, GA-ANN and AIFNN is 1.895, 1.843 and 1.734, respectively; the average MAPE of the AIFNN is thus a 9.53% improvement over the ANN and a 6.35% improvement over the GA-ANN. Tables 3 and 4 show the average training error from week 1 to week 5 and the average forecasting error from week 6 to week 10, in terms of MAPE, on workdays and non-workdays for the ANN, GA-ANN and AIFNN, respectively. The best training errors are 1.321 and 1.521, obtained by the AIFNN for workdays and non-workdays at n_h = 30. Compared with the ANN and GA-ANN, this is a 29.5 and 13.7% improvement on workdays, and a 14.7 and 10.6% improvement on non-workdays. When n_h is higher or lower (such as n_h = 9 or 75), the average training errors are 1.432 and 1.627, both higher than the error at n_h = 30. We therefore used n_h = 30, because it gives the best result and saves about 21% of the implementation time compared with n_h = 75. On the other hand, the best forecasting errors with the proposed method are 1.317 and 1.501 for workdays and non-workdays, respectively. Compared with the ANN and GA-ANN, this is a 27.7 and 17.9% improvement on workdays, and a 14.1 and 10.7% improvement on non-workdays.

Table 1
The comparison results of average error (MAPE) and maximum error (maximum MAPE) by day of the week for ANN, GA-ANN and AIFNN

Day type       ANN (by BP)        GA-ANN             AIFNN
               MAPE   Max MAPE    MAPE   Max MAPE    MAPE   Max MAPE
Monday         2.12   2.37        1.59   2.23        2.01   2.13
Tuesday        1.31   2.42        1.22   2.22        1.01   2.02
Wednesday      1.35   2.03        1.34   2.12        1.14   1.98
Thursday       1.76   1.86        1.56   1.89        1.22   1.78
Friday         1.38   2.34        1.34   2.16        1.03   2.12
Saturday       1.86   2.86        1.77   2.31        1.64   2.23
Sunday         1.94   2.12        1.82   2.38        1.71   1.94
Total average  1.67   2.29        1.52   2.18        1.39   2.02


Table 2
The comparison results of MAPE and maximum MAPE on a monthly basis over 1 year

Month          ANN (by BP)        GA-ANN             AIFNN
               MAPE   Max MAPE    MAPE   Max MAPE    MAPE   Max MAPE
January        2.18   2.35        2.12   2.31        1.98   2.22
February       2.08   2.37        2.01   2.34        1.92   2.13
March          2.12   2.42        2.14   2.22        2.02   2.45
April          1.95   2.07        1.92   2.02        1.86   2.02
May            1.78   1.97        1.67   2.12        1.54   2.01
June           1.47   1.94        1.56   1.98        1.32   1.98
July           1.49   1.86        1.34   1.87        1.33   1.94
August         1.76   2.34        1.55   2.12        1.47   1.78
September      1.77   2.67        1.59   2.23        1.45   1.84
October        2.06   2.86        2.11   2.32        1.94   2.01
November       1.98   2.12        2.01   2.21        1.92   1.84
December       2.11   2.45        2.14   2.16        2.01   1.94
Total average  1.895  2.282       1.843  2.252       1.734  2.012

Table 3
Learning results of daily load training and forecasting errors for workdays (average errors in % MAPE)

n_h            ANN (by BP)         GA-ANN              AIFNN
               Train   Forecast    Train   Forecast    Train   Forecast
7              1.765   1.774       1.562   1.553       1.432   1.411
15             1.773   1.752       1.587   1.563       1.458   1.468
30             1.712   1.796       1.562   1.587       1.321   1.317
45             1.786   1.682       1.596   1.621       1.445   1.456
60             1.793   1.784       1.632   1.639       1.478   1.487
75             1.864   1.812       1.718   1.742       1.627   1.592
Total average  1.782   1.767       1.599   1.596       1.461   1.455

Table 4
Learning results of daily load training and forecasting errors for non-workdays (average errors in % MAPE)

n_h            ANN (by BP)         GA-ANN              AIFNN
               Train   Forecast    Train   Forecast    Train   Forecast
7              1.863   1.832       1.737   1.721       1.625   1.636
15             1.872   1.712       1.725   1.746       1.676   1.628
30             1.745   1.865       1.683   1.712       1.521   1.501
45             1.798   1.786       1.696   1.702       1.697   1.665
60             1.876   1.897       1.721   1.663       1.702   1.732
75             1.996   1.994       1.795   1.763       1.718   1.706
Total average  1.858   1.847       1.726   1.717       1.656   1.645

Table 5 shows the comparative MAPE results for different NN approaches: ANN, FNN (ANN + fuzzy system), IEANN (ANN + EP), IEFNN (ANN + EP + fuzzy system), IEANN-SA (ANN + EP + SA) and AIFNN (ANN + EP + fuzzy system + SA). From the table, adding only the fuzzy system to the ANN improves the MAPE by about 2.94%, while combining EP with the fuzzy system, or EP with SA, gives improvements of 7.11 and 7.74% over the ANN. The best result is obtained when the ANN is combined with EP, the fuzzy system and SA together, with an improvement ratio of 16.81% over the ANN. These results show the contributions of the fuzzy system, EP and SA to the scheme: the prediction accuracy is improved by applying the method, which shows that the proposed method (AIFNN) is promising for load forecasting in power systems.


Table 5
The comparison results of MAPE for different NN approaches

State                    ANN (%)  FNN (%)  IEANN (%)  IEFNN (%)  IEANN-SA (%)  AIFNN (%)
14                       1.934    1.912    1.843      1.811      1.763         1.712
21                       1.937    1.864    1.826      1.805      1.796         1.734
28                       1.812    1.713    1.701      1.698      1.683         1.501
35                       1.847    1.824    1.793      1.699      1.692         1.505
42                       1.868    1.834    1.812      1.776      1.801         1.626
49                       1.949    1.878    1.864      1.812      1.801         1.637
Average                  1.891    1.837    1.806      1.766      1.755         1.619
Improvement ratio
compared with ANN (%)    0        2.94     4.71       7.11       7.74          16.8

FNN = ANN + fuzzy system; IEANN = ANN + EP; IEFNN = ANN + EP + fuzzy system; IEANN-SA = ANN + EP + SA; AIFNN = ANN + EP + fuzzy system + SA.


6. Conclusions

An AIFNN was presented for short-term daily load forecasting. The proposed neural network was tuned by EP-SA over all the parameters [w_{jk}, θ_{jk}, S_j, M_{ij}, m_{ij}]. The described AIFNN load forecasting scheme was tested with data obtained from a study performed on the Taiwan Power System, and compared with the ANN and GA-ANN. The approach has two main advantages: (1) SA improves the search capability of EP, letting the EP find the true optimal parameters of the FHRCNNs; (2) incorporating a fuzzy system into the neural network, via the fuzzy hyper-rectangular composite neural network, allows the inferred values of [w_{jk}, θ_{jk}, S_j, M_{ij}, m_{ij}] to be close to the real values. The results indicate that a more accurate load curve forecast can be achieved by the AIFNN approach.

References

[1] I. Moghram, S. Rahman, Analysis and evaluation of five load forecasting techniques, IEEE Trans. Power Syst. 4 (4) (1989) 1484–1491.
[2] M.T. Hagan, S.M. Behr, The time series approach to short term load forecasting, IEEE Trans. Power Syst. PWRS-2 (3) (1987) 832–837.
[3] A.D. Papalexopoulos, T.C. Hesterberg, A regression-based approach to short-term system load forecasting, IEEE Trans. Power Syst. 5 (4) (1990) 1535–1547.
[4] S. Vemuri, W.L. Huang, D.L. Nelson, On-line algorithms for forecasting hourly loads of an electrical utility, IEEE Trans. Power Apparatus Syst. PAS-100 (8) (1981) 3775–3784.
[5] W. Christiaanse, Short-term load forecasting using general exponential smoothing, IEEE Trans. Power Apparatus Syst. PAS-90 (1971) 900–910.
[6] S. Rahman, R. Bhatnagar, An expert system based algorithm for short-term load forecast, IEEE Trans. Power Syst. AS-101 (9) (1982).
[7] S. Rahman, Generalized knowledge-based short-term load forecasting technique, IEEE Trans. Power Syst. 8 (2) (1993) 508–514.
[8] C.N. Lu, H.T. Wu, S. Vemuri, Neural network based short term load forecasting, IEEE Trans. Power Syst. 8 (1) (1993) 336–342.
[9] K.Y. Lee, Y.T. Cha, J.H. Park, Short-term load forecasting using an artificial neural network, in: Proceedings of the IEEE PES Winter Power Meeting, No. 91 WM 199-0 PWRS, 1991.
[10] T.M. Peng, N.F. Hubele, G.G. Karady, Advancement in the application of neural networks for short-term load forecasting, IEEE Trans. Power Syst. 7 (1) (1992) 250–257.
[11] A. Khotanzad, R.C. Hwang, A. Abaye, D. Maratukulam, An adaptive modular artificial neural network hourly load forecaster and its implementation at electric utilities, IEEE Trans. Power Syst. 10 (3) (1995) 1716–1722.
[12] H. Yoo, R.L. Pimmel, Short-term load forecasting using a self-supervised adaptive neural network, IEEE Trans. Power Syst. 14 (2) (1999) 779–784.
[13] J. Vermaak, E.C. Botha, Recurrent neural networks for short-term load forecasting, IEEE Trans. Power Syst. 13 (1) (1998) 126–132.
[14] S.J. Kiartzis, A.G. Bakirtzis, V. Petridis, Short-term load forecasting using neural networks, Electr. Power Syst. Res. 34 (1) (1995) 1–6.
[15] T. Maifeld, G. Sheble, Short-term load forecasting by a neural network and a refined genetic algorithm, Electr. Power Syst. Res. 31 (2) (1994) 147–152.
[16] M.C. Su, Identification of singleton fuzzy models via fuzzy hyper-rectangular composite neural networks, in: H. Hellendoorn, D. Driankov (Eds.), Fuzzy Model Identification: Selected Approaches, 1997, pp. 215–250.

Gwo-Ching Liao received his MS-EE from the National Cheng Kung University, Tainan, Taiwan, in 1991. He has worked in the Department of Electrical Engineering, Fortune Institute of Technology, Kaohsiung County, Taiwan. He is currently pursuing his Ph.D. degree at National Sun-Yat-Sen University, Kaohsiung, Taiwan. His interests are power system operations and AI applications in power systems.

Ta-Peng Tsao obtained his Diploma (DIPEE) from the Taipei Institute of Technology in 1971 and his Ph.D. degree from Aberdeen University, UK, in 1979. Between 1973 and 1976 he worked as an Electrical Design Engineer with China Technical Consultants Inc. He was employed in Taiwan for 3 years, then as the Chief Electrical Engineer at Scott Lithgow Limited for 2 years and finally as a Senior Electrical Engineer II at Brown and Root Limited for 2 years. In 1987, he returned to Taiwan as a Visiting Specialist in the Department of Electrical Engineering at National Sun-Yat-Sen University. He is a fellow of the IEE and a Professor at National Sun-Yat-Sen University.