Energy and Buildings 43 (2011) 2893–2899
Forecasting building energy consumption using neural networks and hybrid neuro-fuzzy system: A comparative study
Kangji Li a,b, Hongye Su a,∗, Jian Chu a
a Institute of Cyber-Systems and Control, Zhejiang University, Hangzhou 310027, PR China
b School of Electrical and Information Engineering, Jiangsu University, Zhenjiang 212013, PR China
Article info
Article history: Received 16 April 2011; Received in revised form 6 June 2011; Accepted 9 July 2011
Keywords: Genetic algorithm; ANFIS; Artificial Neural Networks; Hierarchical structure; Building energy prediction
Abstract
As a regular data-driven method, Artificial Neural Networks (ANNs) are popular in building energy prediction. In this paper, an alternative approach, namely a hybrid genetic algorithm-adaptive network-based fuzzy inference system (GA-ANFIS), is presented. In this model, GA optimizes the radii of the subtractive clustering that forms the rule base, and ANFIS adjusts the premise and consequent parameters to optimize the forecasting performance. A hierarchical structure of ANFIS is also suggested to address the possible curse-of-dimensionality problem. The performance of the proposed model is compared with that of an ANN using two different data sets, collected from the Energy Prediction Shootout I contest and from a library building located in Zhejiang University, China. Results show that the hybrid GA-ANFIS model outperforms the ANN in terms of prediction accuracy. The proposed model also has the same scale of modeling time as the ANN if the parameters of the GA procedure are carefully selected. It can be regarded as an alternative method for building energy prediction.
© 2011 Elsevier B.V. All rights reserved.
1. Introduction

A large variety of forecasting methods are already available for short-term energy consumption. Most of them are based on ANNs and their developments [1–3]. In the 1990s, two energy prediction contests held by the American Society of Heating, Refrigerating and Air-Conditioning Engineers (ASHRAE) further encouraged the use of ANNs over other methods in this field [4,5]. Recent studies of ANN-based predictors mainly focus on three aspects: input variable selection, network structure identification and training algorithm development.

In the field of building energy forecasting, choosing the relevant input variables is a very important issue. Bayesian estimation [6], the principle of maximum likelihood [7], statistical tests for nonlinear correlation [4], etc. have been employed to detect relevant input variables. Rivals and Personnaz [8] proposed a systematic approach based on least squares estimation (LSE) and statistical tests, where mathematical and statistical tools are used in cooperation to refine the selection of possibly relevant inputs and neurons. Karatasou [3] applied this statistical analysis in building energy applications and obtained a simple neural network, showing that wind speed and humidity are less significant as inputs.
∗ Corresponding author. Tel.: +86 571 8795 1200; fax: +86 571 8795 2279.
E-mail addresses: [email protected] (K. Li), [email protected] (H. Su).
0378-7788/$ – see front matter © 2011 Elsevier B.V. All rights reserved. doi:10.1016/j.enbuild.2011.07.010
As a well-known representative of training algorithms, the backpropagation (BP) technique is widely used in supervised neural networks. However, as a gradient descent algorithm, the learning process may become trapped in local minima and the parameters cannot be guaranteed to reach a global optimum. Several studies have introduced stochastic algorithms such as genetic algorithms and simulated annealing to address this problem [9,10], but such applications are rare in the building energy prediction field because of their complexity and computation time.

In the past ten years, a number of researchers have emphasized another hybridization technology, neuro-fuzzy systems, which can be found in various applications [11,12]. A fuzzy inference system of this kind was proposed in 1993 [11], combining fuzzy if-then rules with a neural network-like structure. Its learning algorithm is a hybrid of gradient descent (GD) and LSE that minimizes the output error on the training data sets. To improve the performance of the ANFIS model, a number of studies [13–15] have been reported, most of which focus on extracting the optimal parameters of the fuzzy inference system (FIS). Aliyari Shoorehdeli [13] used modified particle swarm optimization (PSO) to train the premise parameters instead of the default GD method, and the results showed comparable performance with fewer training parameters. Admuthe and Zanaganeh [14,15] used clustering techniques to reduce the complexity of the rule base and avoid the curse-of-dimensionality problem. Furthermore, they introduced evolutionary algorithms to optimize the cluster radii, which can reveal properties of the inputs and optimize the
fuzzy rules of ANFIS.

In this paper, a hybrid ANFIS model is developed with GA integrated as a component that helps optimize the rule base parameters. The model is applied to two different forecasting cases of building energy consumption. The focus of this paper is the performance comparison between ANN and GA-ANFIS in forecasting short-term building energy use. Results are compared in terms of prediction accuracy and modeling time.

The paper is organized as follows. Section 2 describes the structures of ANNs and GA-ANFIS; the procedures of subtractive clustering and GA are also briefly presented. Section 3 describes the data sets of energy prediction contest I and of a library building in Zhejiang University. ANN and GA-ANFIS models are simulated and their results compared in Section 4. Section 5 gives the conclusions of the paper.

2. Model description
2.1. ANNs

With some learning algorithm, neural networks minimize an error function so that input vectors are closely mapped to specified target outputs. For a single-hidden-layer network, the function takes the following form:

ŷ(k) = f(x, w) = Ψ( Σ_{j=1}^{h} w_j · φ_j( Σ_{i=1}^{n} w_{ji} x_i + w_{j0} ) + w_0 ),   (1)

where the network outputs are the predicted values of the variable y, expressed by the function f(x, w) of the inputs x, and the "weight vector" w groups the synaptic weights w_{ji} and w_j. The scalar h denotes the number of neurons in the hidden layer and n denotes the number of inputs. Ψ(·) is a linear activation function of the output layer and φ(·) is a non-linear activation function of the hidden layer, often chosen as the logsig function:

f(x) = 1 / (1 + exp(−x)),   (2)

or the tansig function:

f(x) = tanh(x).   (3)
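As a concrete illustration of Eq. (1) (not taken from the paper), the following is a minimal numpy sketch of the forward mapping with a logsig hidden layer and a linear output node; the weights are random placeholders rather than trained values.

```python
import numpy as np

def logsig(x):
    # Eq. (2): logistic activation of the hidden layer.
    return 1.0 / (1.0 + np.exp(-x))

def forward(x, W_hidden, b_hidden, w_out, b_out):
    # Eq. (1): hidden layer of h logsig neurons followed by a linear output node.
    hidden = logsig(W_hidden @ x + b_hidden)   # w_ji * x_i + w_j0
    return float(w_out @ hidden + b_out)       # w_j * phi_j(...) + w_0

rng = np.random.default_rng(0)
n, h = 5, 4                                    # 5 inputs, 4 hidden neurons (as in S1)
x = rng.random(n)                              # one normalized input pattern
y_hat = forward(x, rng.normal(size=(h, n)), rng.normal(size=h),
                rng.normal(size=h), rng.normal())
print(y_hat)
```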
The most commonly used neural network is the feedforward BP network. A simplified procedure for the learning process is summarized as follows:

• Provide training data consisting of patterns of inputs and target outputs.
• Assess the approximation performance of the network output.
• Adapt the connection strengths to produce better approximations of the desired target outputs.
• Continue the process until the first desired stop condition is achieved.

The usual approaches for improving generalization are the early stopping technique and the regularization method. In early stopping, the training data are divided into a training subset and a validation subset; the training process stops, before the training error is minimized, when the validation error begins to rise. In regularization, the performance function is modified by adding a term that consists of network weights and biases, which forces the network response to be smoother and less likely to overfit.

For building energy prediction, choosing the possibly relevant inputs and the network structure is important to guarantee generalization performance. Rivals and Personnaz [8] proposed a systematic approach based on least squares estimation and statistical tests. This method produced a simple neural network in the application of energy prediction contest I [3], which is used in this paper for the performance comparison between the ANN and GA-ANFIS models.
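The sketch below illustrates the BP-style training loop with early stopping described above; the toy data, network size and hyperparameters are invented for demonstration and are not the configuration used in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: one noisy input/output relation, split into training and validation subsets.
X = rng.uniform(-1, 1, (200, 1))
y = np.sin(2 * X) + 0.1 * rng.normal(size=(200, 1))
X_trn, y_trn, X_val, y_val = X[:160], y[:160], X[160:], y[160:]

h = 4                                   # hidden neurons (same size as the S1 network)
W1 = rng.normal(0, 0.5, (1, h)); b1 = np.zeros(h)
W2 = rng.normal(0, 0.5, (h, 1)); b2 = np.zeros(1)

def forward(X):
    H = np.tanh(X @ W1 + b1)            # tansig hidden layer, Eq. (3)
    return H, H @ W2 + b2               # linear output layer

def mse(X, y):
    return float(np.mean((forward(X)[1] - y) ** 2))

best_val, wait, patience, lr = np.inf, 0, 10, 0.05
for epoch in range(2000):
    H, y_hat = forward(X_trn)
    err = y_hat - y_trn                 # back-propagate the output error
    gW2 = H.T @ err / len(X_trn); gb2 = err.mean(0)
    dH = (err @ W2.T) * (1 - H ** 2)
    gW1 = X_trn.T @ dH / len(X_trn); gb1 = dH.mean(0)
    W1 -= lr * gW1; b1 -= lr * gb1; W2 -= lr * gW2; b2 -= lr * gb2

    val = mse(X_val, y_val)
    if val < best_val:
        best_val, wait = val, 0
    else:
        wait += 1
        if wait >= patience:            # early stopping: validation error keeps rising
            break

print(epoch, round(mse(X_trn, y_trn), 4), round(best_val, 4))
```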
2.2. GA-ANFIS

2.2.1. ANFIS structure
As a kind of adaptive network, ANFIS creates a FIS whose membership function parameters are adjusted with learning algorithms. In the five-layer structure of ANFIS [11], the nodes in the first and fourth layers are adaptive, and the others are fixed. The parameters of the adaptive nodes can be changed by one or more learning algorithms to optimize the approximation performance. For example, if the FIS has two inputs and one output, and if the rule base contains only two fuzzy if-then rules of Takagi and Sugeno's type [16],

Rule 1: If (x is A1) and (y is B1) then (f1 = p1 x + q1 y + r1),
Rule 2: If (x is A2) and (y is B2) then (f2 = p2 x + q2 y + r2),

then the two-rule ANFIS architecture can be described as in Fig. 1. The final output of the network can be calculated as follows:

f = ( Σ_{i=1}^{2} ω_i f_i ) / ( Σ_{i=1}^{2} ω_i ) = ( Σ_{i=1}^{2} ω_i (p_i x + q_i y + r_i) ) / ( Σ_{i=1}^{2} ω_i ),   (4)

where {p_i, q_i, r_i} are adaptive and labeled as consequent parameters. ω_i is the so-called firing strength of the rules, which is given by

ω_i = A_i(x) × B_i(y),   i = 1, 2.   (5)

In this work, the membership function A_i(x) (and B_i(y)) is chosen to be Gaussian-shaped, which is smooth, simple and widely used in the literature [12,15]. It is described as

A_i(x) = exp( −((x − b_i) / a_i)^2 ),   (6)
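For illustration only, the sketch below evaluates Eqs. (4)-(6) for the two-rule, two-input Sugeno system described above; all parameter values are hypothetical.

```python
import numpy as np

def gauss_mf(x, a, b):
    # Gaussian membership function, Eq. (6): exp(-((x - b)/a)^2)
    return np.exp(-((x - b) / a) ** 2)

def two_rule_anfis(x, y, premise, consequent):
    """Output of the two-rule Sugeno ANFIS, Eqs. (4)-(5).

    premise: [(aA1, bA1, aB1, bB1), (aA2, bA2, aB2, bB2)]
    consequent: [(p1, q1, r1), (p2, q2, r2)]
    """
    w = []  # firing strengths, Eq. (5)
    f = []  # rule outputs f_i = p_i*x + q_i*y + r_i
    for (aA, bA, aB, bB), (p, q, r) in zip(premise, consequent):
        w.append(gauss_mf(x, aA, bA) * gauss_mf(y, aB, bB))
        f.append(p * x + q * y + r)
    w = np.array(w)
    return float(np.dot(w, f) / w.sum())   # weighted average, Eq. (4)

# Hypothetical parameter values for demonstration only.
premise = [(1.0, 0.2, 1.0, 0.3), (1.5, 0.8, 1.2, 0.7)]
consequent = [(0.5, 0.4, 0.1), (-0.2, 0.9, 0.3)]
print(two_rule_anfis(0.6, 0.4, premise, consequent))
```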
where {a_i, b_i} are adaptive and labeled as premise parameters. In the learning procedure, the adaptive parameters can be updated by the GD method, sequential LSE, or their combination, which is applied in this study. The hybrid learning procedure is composed of two passes, a forward pass and a backward pass, and has proved highly efficient in training the ANFIS [11,14,15]. In the forward pass, the premise parameters are fixed and sequential LSE is used to optimize the consequent parameters. In the backward pass, the consequent parameters are fixed and the premise parameters are updated with GD, which back-propagates the error rate of the output node to the input end.

2.2.2. Rule base optimization
Because no expert knowledge is available to arrange the rule base in building energy applications, the subtractive clustering technique, a fast, one-pass method, is used to estimate the number and centers of the rules [17] instead of the default linear partitioning. Considering a collection of n data points (x1, x2, ..., xn), each of which contains the input and output variables of ANFIS, the potential of x_i to be a cluster center may be estimated as

D_i = Σ_{j=1}^{n} exp( −‖x_i − x_j‖^2 / (r_a / 2)^2 ),   (7)
Fig. 1. The architecture of ANFIS model, two inputs, two rules.
where D_i is the potential of the ith data point. A point has a higher potential value if more neighboring points lie close to it, and the point with the highest potential is selected first as a cluster center. The radius r_a defines a neighborhood; points outside it contribute little to the potential. Assume that x_c1 is selected first and D_c1 is its potential value. The potential value of each data point x_i is then modified as

D_i = D_i − D_c1 exp( −‖x_i − x_c1‖^2 / (r_a / 2)^2 ),   (8)

where D_i on the left-hand side is the reduced potential value. The squash factor η, which is multiplied by the radius value, determines the neighborhood within which the existence of other cluster centers is discouraged. This process is repeated until enough cluster centers are produced.

Furthermore, an evolutionary algorithm, GA, is combined with ANFIS in this paper to adjust the clustering radii. GA has the features of robustness and effectiveness and is well suited for discontinuous and multi-modal functions. The basic evolutionary procedure of GA is presented in many references [18] and described in Fig. 2. When the subtractive clustering parameters are adjusted by GA, the fuzzy if-then rules can capture properties of the building energy system that are probably unavailable by gradient-based methods alone.
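A minimal sketch of the subtractive clustering selection loop of Eqs. (7) and (8) follows; the stopping threshold and squash factor used here are illustrative assumptions, not the settings of the paper.

```python
import numpy as np

def subtractive_clustering(X, ra=0.5, squash=1.25, min_ratio=0.15):
    """Estimate cluster centers from data X (n_samples x n_dims).

    ra: neighborhood radius of Eq. (7); squash: factor applied to ra when
    revising potentials in Eq. (8); min_ratio: stop once the best remaining
    potential drops below this fraction of the first center's potential.
    (All three defaults are illustrative, not the paper's settings.)
    """
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=2)  # pairwise squared distances
    D = np.exp(-d2 / (ra / 2) ** 2).sum(axis=1)              # potentials, Eq. (7)
    centers, first_potential = [], D.max()
    while len(centers) < len(X):
        c = int(D.argmax())
        if centers and D[c] < min_ratio * first_potential:
            break
        centers.append(X[c])
        rb = squash * ra
        D = D - D[c] * np.exp(-d2[c] / (rb / 2) ** 2)        # revise potentials, Eq. (8)
    return np.array(centers)

# Demonstration on synthetic two-cluster data.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.2, 0.05, (30, 2)), rng.normal(0.8, 0.05, (30, 2))])
print(subtractive_clustering(X).round(2))
```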
2.2.3. Hybrid GA-ANFIS model
In this section, a hybrid neuro-fuzzy model is proposed to forecast energy consumption. The subtractive clustering method is used to build the rule base instead of regular linear partitioning of the input space, and the cluster parameters explained in Section 2.2.2 are optimized using GA. The networks are trained in two stages. In Stage I, the stochastic optimization method runs, and a one-pass ANFIS is called in each generation to evaluate the fitness value of each candidate solution. The objective is to minimize the training and cross-validation error of the prediction model; the objective function is specified in Eq. (13). This stage iterates until the stop conditions of the GA are reached. In Stage II, with the optimal rule parameters, the ANFIS model runs again to adjust the parameters in the antecedent and consequent parts. To guarantee the generalization of the model, the number of epochs is suggested to be small. The proposed flow diagram is illustrated in Fig. 3.

There are two points that should be noticed when using this hybrid network. Firstly, GA is a time-consuming procedure, so if the training data set is large, only part of it is suggested for rule base adjustment. Secondly, when there are many input variables (for example, more than 5 inputs), which is a common situation in building energy prediction, a hierarchical structure of ANFIS is suggested to resolve the curse-of-dimensionality problem [19], as shown in Fig. 7. The selection of the inputs for each fuzzy sub-model is based on their order of importance to the forecast. A sketch of the Stage I loop is given below.
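The following compact sketch of the Stage I loop assumes a hypothetical build_and_evaluate(radii) routine that would run subtractive clustering and a one-pass ANFIS and return the objective of Eq. (13); here a dummy fitness stands in, and the GA operators are simplified placeholders rather than the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(1)

def build_and_evaluate(radii):
    # Placeholder fitness. In the paper, this step would run subtractive
    # clustering with the candidate radii, train a one-pass ANFIS, and return
    # (CV_trn + CV_ofit) / 2 as in Eq. (13). An arbitrary quadratic stands in here.
    return float(((radii - 0.5) ** 2).mean())

def ga_optimize_radii(n_vars=5, pop_size=5, generations=5,
                      crossover_prob=0.8, n_elites=2, bounds=(0.05, 1.0)):
    lo, hi = bounds
    pop = rng.uniform(lo, hi, (pop_size, n_vars))
    for _ in range(generations):
        fitness = np.array([build_and_evaluate(ind) for ind in pop])
        elites = pop[fitness.argsort()[:n_elites]].copy()   # keep the best individuals
        children = []
        while len(children) < pop_size - n_elites:
            # tournament selection of two parents
            i = rng.integers(pop_size, size=2); j = rng.integers(pop_size, size=2)
            p1, p2 = pop[i[fitness[i].argmin()]], pop[j[fitness[j].argmin()]]
            child = p1.copy()
            if rng.random() < crossover_prob:                # arithmetic crossover
                a = rng.random()
                child = a * p1 + (1 - a) * p2
            child += rng.normal(0, 0.05, n_vars)             # Gaussian mutation
            children.append(np.clip(child, lo, hi))
        pop = np.vstack([elites] + children)
    fitness = np.array([build_and_evaluate(ind) for ind in pop])
    return pop[fitness.argmin()]

print(ga_optimize_radii().round(2))
```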
Fig. 2. The flow diagram of a GA optimization model.

Fig. 3. The flow diagram of hybrid GA-ANFIS.
Fig. 4. Outside of the library building.

Fig. 6. ANN structure with 4 neurons in single hidden layer, S1.
3. Data sets: input variables and data pre-processing
We use two different data sets, collected from two different buildings, called data sets A and B, respectively. Both data sets represent real-world data and their general properties are described as follows.

The data set A is the benchmark PROBEN 1 and comes from the Great Building Energy Predictor Shootout I, organized by ASHRAE [20]. It consists of the following inputs: temperature, solar radiation, humidity ratio and wind speed. The contest required the prediction of the energy use (electricity, hot and cold water) of a large building, without any other details (such as type of use and occupancy). For the input variables, data were available at hourly intervals for the period from September 1989 to February 1990, whereas energy consumption data were available only for September to December 1989.

The data set B derives from a library building located in Zhejiang University, Hangzhou, China. The library has ten floors above ground with a gross floor area of 25,542 m2 (Fig. 4). There are about 1100 seats in the library and most of the building occupancy occurs between 8:30 and 22:00. The data set only consists of daily temperature and occupancy. The daily temperature is obtained from the local meteorological station. The opening schedule of each reading room of the library, described in Fig. 5, is roughly taken as the library's hourly occupancy variable.

To ensure that no single factor dominates the others, all inputs and outputs are normalized to the interval (0, 1) by a linear scaling function:

X_i = (x_i − x_min) / (x_max − x_min),   i = 1, 2, ..., m,   (9)

where m is the number of data points collected for a given input variable.

4. Modeling and results

4.1. Data set A
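The normalization of Eq. (9) can be written in a few lines; the sample temperatures below are hypothetical.

```python
import numpy as np

def minmax_scale(column):
    # Linear scaling of one input/output variable to (0, 1), Eq. (9).
    x = np.asarray(column, dtype=float)
    return (x - x.min()) / (x.max() - x.min())

# Example: hypothetical hourly temperatures in degrees Celsius.
print(minmax_scale([12.0, 18.5, 25.0, 9.5]).round(3))
```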
4.1.1. ANN results
We use a feedforward neural network with a single hidden layer of tansig neurons to predict hourly energy consumption. The number of hidden neurons and the relevant inputs are selected using the methodology based on least squares estimation and statistical tests [8]. Results of the tests show that 5 independent inputs and 4 hidden neurons are relevant to this application [3]. The network is described in Fig. 6 and the input variables are shown below:

S1: x(t) = (T(t), S(t), s, sh, ch),   (10)

where T(t) is the temperature, S(t) the solar flux, s the session flag, and sh and ch are the sine and cosine of the hour of the day. The data set A includes a total of 4208 time steps, where data [1,1296] are available for training and [2927,4208] for testing. The task is to predict the whole building electric power consumption (WBE) y(t) from the known measurements of x(t). To evaluate the obtained results, the coefficient of variation (CV) is used, which has been applied in the ASHRAE contests [20]. It is given as

CV = sqrt( Σ_{i=1}^{N} (y_pred,i − y_data,i)^2 / N ) / ȳ_data,   (11)

where y_pred,i and y_data,i are the predicted and measured values, ȳ_data is the mean of the measured data, and N is the number of data points.

After a dozen prediction runs with the above ANN model, the best five results are shown in Table 1, together with the average and best values. In order to examine the effect of the input variables when short past values of energy consumption are introduced to the network, we consider a second input set S2:

S2: x(t) = (y(t − 1), y(t − 2), T(t), s, ch).   (12)
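A minimal sketch of the CV metric of Eq. (11); the load values in the example are invented.

```python
import numpy as np

def coefficient_of_variation(y_pred, y_data):
    # CV of Eq. (11): RMSE of the prediction divided by the mean measured load.
    y_pred, y_data = np.asarray(y_pred, float), np.asarray(y_data, float)
    rmse = np.sqrt(np.mean((y_pred - y_data) ** 2))
    return rmse / y_data.mean()

# Hypothetical loads for illustration.
print(round(coefficient_of_variation([510, 540, 580], [500, 550, 575]), 4))
```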
Fig. 5. The calendar of the library in one week.

Table 1
Accuracy of ANN model using data set A, S1.

No.       Time (s)    Train (CV)    Test (CV)
1         4           0.0898        0.1060
2         5           0.0890        0.1062
3         3           0.0881        0.1054
4         4           0.0893        0.1030
5         5           0.0902        0.1055
Average   4.2         0.0893        0.1052
Best      4           0.0893        0.1030
Table 2
Accuracy of ANN model using data set A, S2.

No.       Time (s)    Train (CV)    Test (CV)
1         7           0.0241        0.0323
2         5           0.0261        0.0358
3         7           0.0241        0.0329
4         7           0.0245        0.0365
5         5           0.0248        0.0329
Average   6.2         0.0247        0.0341
Best      7           0.0241        0.0323

The results are described in Table 2, which indicates that short past consumption values can dramatically increase the accuracy of the prediction.

4.1.2. GA-HANFIS results
To compare the prediction performance with the ANN, the same training and test data are used in the GA-ANFIS model. Because there are five input variables in S1 and S2, we apply a hierarchical structure of ANFIS (HANFIS) to overcome the curse-of-dimensionality problem, as shown in Fig. 7.

Fig. 7. The hierarchical structure of ANFIS, S1.

The training procedure of the hierarchical ANFIS model includes two stages. The first stage is the rule base optimization stage. To speed up the GA procedure and guarantee the generalization performance, a randomly chosen part of the training data (130 hourly data in this application) is used to adjust the rule bases by GA. The objective of the GA in this work is to minimize the CV of the HANFIS prediction, defined as

CV_tot|min = (CV_trn + CV_ofit) / 2,   (13)

where CV_trn is the CV of the training data (80% of the 130 hourly data) and CV_ofit is the CV of the avoid-over-fitting data (20% of the 130 hourly data). To increase the generalization performance, the number of generations and the population size are both set to 5. The main parameters used in the hybrid GA-HANFIS model are given in Table 3.

Table 3
Main parameters used in GA-HANFIS model.

Main parameters                                      GA-HANFIS
Maximum number of generations in GA                  5
Population size                                      5
Crossover probability in GA                          0.8
Number of elites passed to the next generation       2
Number of epochs in ANFIS                            1
Training method in ANFIS                             LSE and GD

In the second stage, the entire training data are used to train the HANFIS after the optimized rule base parameters are obtained. In this period, 20% of the training data (259/1296) are randomly chosen to avoid over-fitting and the remaining 80% are directly used for training. Results for data sets S1 and S2 are summarized in Tables 4 and 5.

Table 4
Accuracy of GA-HANFIS model using data set A, S1.

No.       Time (s)    Train (CV)    Test (CV)
1         8.9         0.0968        0.0991
2         9.2         0.0975        0.0978
3         6.7         0.0963        0.0997
4         8.1         0.100         0.0961
5         7.3         0.0980        0.0988
Average   8.0         0.0977        0.0983
Best      8.1         0.100         0.0961

Table 5
Accuracy of GA-HANFIS model using data set A, S2.

No.       Time (s)    Train (CV)    Test (CV)
1         7.3         0.0220        0.0286
2         8.8         0.0230        0.0266
3         8.6         0.0217        0.0285
4         9.1         0.0216        0.0294
5         8.0         0.0229        0.0260
Average   8.4         0.0222        0.0278
Best      8           0.0229        0.0260

4.2. Comparison of two models
As can be seen, the GA-HANFIS models in general achieved smaller CVs than the ANN on the same data sets. When GA is used to adjust the subtractive clustering parameters, the fuzzy if-then rules of ANFIS can capture properties of the building energy system that are probably unavailable by gradient-based methods alone. An accuracy comparison is summarized in Table 6, and a graphical comparison between the predictions and the real loads is given in Fig. 8.

Table 6
Accuracy of two models using data set A.

CV       ANN (S1)    GA-HANFIS (S1)    ANN (S2)    GA-HANFIS (S2)    [6]
Train    0.0893      0.100             0.0241      0.0229            –
Test     0.1030      0.0961            0.0323      0.0260            0.103

The obtained models can also be compared in terms of computation by measuring the total modeling time. From Table 7, it can be seen that GA-HANFIS has the same scale of modeling time as the ANN. There are two reasons. Firstly, only 10% of the training data are randomly chosen in the GA procedure. Secondly, the number of generations and the population size are both kept small. It must be pointed out that the structure of GA-HANFIS is more complex than that of the ANN, because GA is integrated as a part of it.

Table 7
Time consuming of two models using data set A.

Models        S1 (s)    S2 (s)
ANN           4.2       6.2
GA-HANFIS     8.0       8.4

4.3. Data set B
To examine the performance of the hybrid GA-ANFIS model with different qualitative variables, the data set B is collected from a real building. It has fewer environmental variables and only rough information about the building energy system, which is more common in prediction tasks.
Fig. 8. Predicted building electricity loads using ANN and GA-HANFIS, S2.

Fig. 10. Predicted building electricity loads using ANN and GA-HANFIS, S4.

To highlight different characteristics of the input selection, two input sets are considered. The first input set includes only three independent variables: the daily high temperature (dry-bulb), the hourly occupancy and the hour of the day. To assess the importance of time-lagged loads to the prediction, short past values of the hourly load are added to the models, forming the second input set. The two input sets are described as follows:

S3: x(t) = (T(t), ch, oc),   (14)

S4: x(t) = (y(t − 1), y(t − 2), T(t), ch, oc),   (15)

where S3 only contains independent variables, T(t) is the daily temperature, ch is the cosine of the hour of the day, and oc represents the hourly occupancy of the library; S4 adds short past values of the energy consumption. The data from 8 October to 14 November 2009 (900 hours in total) are collected to build the models. They are divided into two sets: the training set and the test set. The training set consists of four weeks (732 hourly data) in this period; the other week's data (168 hourly data) form the test set.

Fig. 9. Final membership functions of GA-HANFIS, S4.
Table 8
Accuracy and time consuming of two models using data set B.

Performance    ANN (S3)    GA-ANFIS (S3)    ANN (S4)    GA-HANFIS (S4)
CV             0.0520      0.0447           0.0301      0.0266
Time (s)       1           1.5              2           8.4
It must be noticed that different building data present different characteristics, so a unique, generally applicable set of parameters is indefensible. For S3, a GA-ANFIS model is selected with 30% of the training data used in the GA procedure. For S4, a hierarchical ANFIS structure is used and 50% of the training data are randomly chosen for the GA optimization. The other parameters of GA-ANFIS remain the same as those described in Table 3. Fig. 9 describes the final optimal membership shapes (S4) with the optimal clustering parameters identified as

(R_y(t−1), R_y(t−2), R_ch, R_oc, R_T(t)) = (0.36, 0.26, 0.10, 0.79, 0.10).   (16)

Results are summarized in Table 8, and a graphical comparison between the predictions and the real loads is given in Fig. 10. From Table 8, we confirm that with a rough data set, which is a common situation in prediction tasks, GA-ANFIS achieves better accuracy than ANN. The second-scale training time of the proposed model is also acceptable for hourly building energy prediction, which does not impose strict real-time demands.

5. Conclusions

In the area of building energy forecasting, ANN is a regular data-driven method that has been adopted in most of the past literature. In this paper, the hybrid GA-ANFIS model is presented, in which GA optimizes the fuzzy if-then rule base by finding the best parameters of the subtractive clusters, and ANFIS adjusts the premise and consequent parameters to match the training data. In the application of the Great Building Energy Predictor Shootout I, a hierarchical structure of GA-ANFIS is presented to overcome the curse-of-dimensionality problem. The calculated results indicate better performance compared with ANN in terms of forecasting accuracy. In this application, the GA-ANFIS method also obtains modeling time comparable with that of ANN, which is guaranteed by two considerations: firstly, only part of the training data are randomly chosen in the GA procedure; secondly, the number of generations and the population size are kept small. With GA integrated as a part, the neuro-fuzzy system has a more complex structure than a regular ANN. A library's hourly energy prediction is presented at the end. It has fewer environmental variables and rough information, which is a common situation in prediction tasks. The results confirm the accuracy and time-consumption performance of the proposed GA-ANFIS model with different parameter configurations.

Acknowledgements

This work is supported by the National Creative Research Groups Science Foundation of China (NCRGSFC: 60721062), the National
Natural Science Foundation of P.R. China (NSFC: 60736021) and the National High Technology Research and Development Program of China (863 Program 2008AA042902). The authors would like to acknowledge the staff of Zhejiang Supcon Software Co., Ltd. for providing the energy consumption data and weather data used in this paper.

References

[1] J. Yang, H. Rivard, R. Zmeureanu, On-line building energy prediction using adaptive artificial neural networks, Energy and Buildings 37 (2005) 1250–1259.
[2] A.H. Neto, F.A.S. Fiorelli, Comparison between detailed model simulation and artificial neural network for forecasting building energy consumption, Energy and Buildings 40 (2008) 2169–2176.
[3] S. Karatasou, M. Santamouris, V. Geros, Modeling and predicting building's energy use with artificial neural networks: methods and results, Energy and Buildings 38 (2006) 949–958.
[4] M.B. Ohlsson, T.S. Rognvaldsson, C.O. Peterson, B.P. Soderberg, H. Pi, Predicting system loads with artificial neural networks – methods and results from 'the great energy predictor shootout', ASHRAE Transactions 100 (1994) 1063–1074.
[5] J.S. Haberl, S. Thamilseran, Great energy predictor shootout II: measuring retrofit savings – overview and discussion of results, ASHRAE Transactions 102 (1996) 419–435.
[6] D.J. MacKay, Bayesian nonlinear modeling for the prediction competition, ASHRAE Transactions 100 (1994) 1053–1062.
[7] R.H. Dodier, G.P. Henze, Statistical analysis of neural networks as applied to building energy prediction, International Solar Energy Conference (1996) 495–505.
[8] I. Rivals, L. Personnaz, Neural-network construction and selection in nonlinear modeling, IEEE Transactions on Neural Networks 14 (2003) 804–819.
[9] D.F. Cook, C.T. Ragsdale, R.L. Major, Combining a neural network with a genetic algorithm for process parameter optimization, Engineering Applications of Artificial Intelligence 13 (2000) 391–396.
[10] L. Zhang, Y.F. Bai, Genetic algorithm-trained radial basis function neural networks for modelling photovoltaic panels, Engineering Applications of Artificial Intelligence 18 (2005) 833–844.
[11] J.-S.R. Jang, ANFIS: adaptive-network-based fuzzy inference system, IEEE Transactions on Systems, Man and Cybernetics 23 (1993) 665–685.
[12] S. Jassar, Z. Liao, L. Zhao, Adaptive neuro-fuzzy based inferential sensor model for estimating the average air temperature in space heating systems, Building and Environment 44 (2009) 1609–1616.
[13] M. Aliyari Shoorehdeli, M. Teshnehlab, A. Sedigh, Novel hybrid learning algorithms for tuning ANFIS parameters using adaptive weighted PSO, in: Fuzzy Systems Conference, FUZZ-IEEE 2007, 2007, pp. 1–6.
[14] L. Admuthe, S. Apte, Computational model using ANFIS and GA: application for textile spinning process, in: Computer Science and Information Technology, ICCSIT 2009, 2009, pp. 110–114.
[15] M. Zanaganeh, S.J. Mousavi, A.F. Etemad Shahidi, A hybrid genetic algorithm-adaptive network-based fuzzy inference system in prediction of wave parameters, Engineering Applications of Artificial Intelligence 22 (2009) 1194–1202.
[16] T. Takagi, M. Sugeno, Derivation of fuzzy control rules from human operator's control actions, IFAC Proceedings Series (1984) 55–60.
[17] T. Miyazaki, M. Hagiwara, Fuzzy inference based subjective clustering method, in: Systems, Man and Cybernetics, 1995, Intelligent Systems for the 21st Century, IEEE International Conference, vol. 3, 1995, pp. 2886–2891.
[18] H.-G. Beyer, Evolutionary algorithms in noisy environments: theoretical issues and guidelines for practice, Computer Methods in Applied Mechanics and Engineering 186 (2000) 239–267.
[19] M. Brown, K. Bossley, D. Mills, C. Harris, High dimensional neurofuzzy systems: overcoming the curse of dimensionality, IEEE International Conference on Fuzzy Systems 4 (1995) 2139–2146.
[20] J.F. Kreider, J.H. Haberl, Predicting hourly building energy use: the great energy predictor shootout – overview and discussion of results, ASHRAE Transactions 100 (1994) 1104–1118.