MOGA Design of Temperature and Relative Humidity Models for Predictive Thermal Comfort

António E. Ruano*, Pedro M. Ferreira**, Hugo Mendes*

* Centre for Intelligent Systems, University of Algarve, 8005-139 Faro, Portugal (e-mail: [email protected], [email protected])
** Algarve Science and Technology Park, Campus de Gambelas, Pavilhão A5, 8005-139 Faro, Portugal (e-mail: [email protected])
Abstract: The use of artificial neural networks in applications related to energy management in buildings has been increasing significantly in recent years. One of these applications is predictive HVAC control, which aims to maintain thermal comfort while simultaneously minimizing the energy spent, within a specified prediction horizon. Thermal comfort depends on several variables; among them, inside temperature and relative humidity are key factors. In this paper the design of predictive neural network models for these two climate variables is discussed. The design approach uses a Multi-Objective Genetic Algorithm (MOGA) to determine the structure of the network, together with an efficient derivative-based estimation algorithm. Simulations with real weather and climate data show that excellent predictive models can be obtained with this methodology.

Keywords: Temperature prediction, Relative humidity prediction, Neural network models, MOGA.

1. INTRODUCTION

In EU countries, primary energy consumption in buildings represents about 40% of total energy consumption and, depending on the country, half of this energy is spent on indoor climate conditioning. From a technological point of view, it is estimated that the use of efficient building energy management systems can save up to 20% of the energy consumption of the building sector, i.e., 8% of the overall Community energy consumption (Dexter, et al. (1996)).

The main goal of HVAC systems is to supply thermal comfort to the occupants of a building. The most used thermal index is the Predicted Mean Vote (PMV), developed by Fanger (1972), which is itself a function of 2 variables related to the occupants (activity and clothing levels) and 4 climate variables (air temperature, relative humidity, air velocity, and mean radiant temperature). If we want to optimise this index in a predictive fashion, forecasts of the dependent variables must be available.
This paper is focused on developing forecasts of two of those variables, namely air temperature and relative humidity, with neural networks (NNs). The majority of approaches found in the literature deal with temperature in buildings (see, for instance, Thomas and Soleimani-Mohseni (2007) and Ruano, et al. (2006)); indoor relative humidity is not so common. Sigumonrong, et al. (2001) addressed the two problems, but their focus was not on prediction. Our group has developed several climate models for greenhouses (Ferreira, et al. (2008)). Lu and Viljanen (2009) compared the performance of NNs and Genetic Algorithms for inside temperature and relative humidity prediction in buildings.

The next section summarizes the model identification methodology employed. Section 3 describes the data used. Results are presented in Section 4. Conclusions and future work end this paper.

2. MODEL IDENTIFICATION PROCEDURE

The identification procedure uses for parameter estimation a Levenberg-Marquardt (LM) algorithm minimising a modified training criterion, and the model structure is selected using a Multi-Objective Genetic Algorithm (MOGA). For details about this methodology see, for instance, Ruano, et al. (2005); for details about its application see, for instance, Teixeira, et al. (2006). The procedure is briefly summarised in the following subsections.

2.1 Model Training

In this work, we consider Radial Basis Function (RBF) models of the form:
y(x, w, C, σ) = w0 + Σ_{i=1..n} wi φi(x, ci, σi),   (1)

where x is the input vector, ci and σi are the centre vector and the spread of the ith neuron, and φi is the Gaussian function:

φi(x, ci, σi) = exp( −‖x − ci‖²₂ / (2σi²) ).   (2)

For a specified number of neurons, n, and for a determined set of inputs, X, off-line training of an RBF NN corresponds to determining the values of w, C and σ such that (3) is minimized:

Ω(X, w, C, σ) = ‖t − y(X, w, C, σ)‖²₂ / 2,   (3)

where t is the target vector and, in contrast with (1) and (2), (3) is now applied to a set of patterns. As the model output is a linear combination of the basis functions, (3) can be given as:

Ω(X, w, C, σ) = ‖t − φ(X, C, σ) w‖²₂ / 2,   (4)

where

φ(X, C, σ) = [ 1  φ1(X, c1, σ1)  …  φn(X, cn, σn) ].   (5)

By computing the global optimum of the linear parameters (w) w.r.t. the nonlinear parameters C and σ,

ŵ(X, C, σ) = φ⁺(X, C, σ) t,   (6)

where "⁺" denotes a pseudo-inverse operation, and replacing (6) in (4), the training criterion becomes dependent only on the nonlinear parameters:

Ψ(X, C, σ) = ‖t − φ(X, C, σ) φ⁺(X, C, σ) t‖²₂ / 2.   (7)

Equation (7) is minimized using a Levenberg-Marquardt algorithm. The training procedure terminates when the evaluation of (7) over a different set, the test set, ceases to decrease (a technique known as early-stopping).

(The authors acknowledge the support of the Portuguese National Science Foundation (FCT PTDC/ENR/73345/2006) and the European Commission (grant PERG-GA-2008-239451).)

2.2 MOGA optimization

The training procedure described above only estimates the model parameters. The design of a NAR or NARX predictive model is only complete once the number of neurons and the input terms have been selected. This constitutes a combinatorial optimisation problem, which typically has a multi-criteria character. To address this part of the model design, the MOGA (Fonseca and Fleming (1998)) is used to evolve a preferable set of models, in which the number of neurons and the selection of input terms (delayed inputs and plant outputs) optimise a number of pre-specified goals and objectives.

Once the MOGA execution is stopped, the preferable set of models is evaluated on a third data set, the validation data set, in order to avoid any bias towards the testing data set that may have occurred during the MOGA optimisation. The final selection of one model is then carried out on the basis of the objective values obtained by the MOGA and the objective values obtained over the validation data set.

2.3 Model Evaluation

Model evaluation is usually done on the basis of complexity and performance measures. Regarding the first, in this work only the 2-norm of the NN output linear combiner, ‖w‖₂, was considered. Regarding model performance, three objectives are specified, as detailed in Table 1, where RMS stands for Root Mean Square.

Table 1. Objectives employed in MOGA

Symbol   Meaning                       Objective set up as
ε_tr     RMS of the training error     restriction
ε_t      RMS of the test error         minimization
ε_p      Max. RMS of the pred. error   minimization
‖w‖₂     2-norm of w                   restriction

The third objective in Table 1 is computed on the basis of the long-term model prediction error, taken from the multi-step model simulation over the prediction horizon ph. If the testing data set Xt has p data points, and for each point the model is used to make predictions up to ph steps ahead, then an error matrix is constructed:

E = [ e(1,1)       e(1,2)       …   e(1,ph)
      e(2,1)       e(2,2)       …   e(2,ph)
      ⋮            ⋮                ⋮
      e(p−ph,1)    e(p−ph,2)    …   e(p−ph,ph) ],   (8)

where e(k,i) is the model error taken from the simulation at instant k of Xt, at step i in the prediction horizon. If R is the RMS function computed over the columns of E, then

ε_p = max{ R(E) }.   (9)
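The hybrid training scheme of equations (1)–(7) can be sketched in NumPy as follows. This is an illustrative implementation only (the function and variable names are ours, not from the paper): it builds the basis-function matrix of (5), obtains the optimal linear weights of (6) with a pseudo-inverse, and evaluates the reduced criterion of (7).

```python
import numpy as np

def rbf_design_matrix(X, centres, sigmas):
    """Basis-function matrix phi of eq. (5): a column of ones (bias)
    followed by one Gaussian column per neuron, eq. (2) pattern-wise."""
    d2 = ((X[:, None, :] - centres[None, :, :]) ** 2).sum(axis=2)
    G = np.exp(-d2 / (2.0 * sigmas ** 2))
    return np.hstack([np.ones((X.shape[0], 1)), G])

def optimal_linear_weights(X, t, centres, sigmas):
    """w_hat of eq. (6): pseudo-inverse solution for the linear layer."""
    phi = rbf_design_matrix(X, centres, sigmas)
    return np.linalg.pinv(phi) @ t

def training_criterion(X, t, centres, sigmas):
    """Psi of eq. (7): residual after the optimal linear weights,
    a function of the nonlinear parameters (centres, sigmas) only."""
    phi = rbf_design_matrix(X, centres, sigmas)
    r = t - phi @ (np.linalg.pinv(phi) @ t)
    return 0.5 * float(r @ r)
```

In the methodology of the paper, Ψ is then minimised over C and σ with a Levenberg-Marquardt algorithm under early-stopping; the sketch above only evaluates the criterion for given nonlinear parameters.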
3. DATA

Weather and climate data for the Estoi Secondary School were available for the whole year of 2004, sampled at a 5-minute rate. In terms of weather data, we picked external temperature, relative humidity and solar radiation. A room was chosen which had 2 windows and 2 doors. Magnetic sensors had been applied to the doors and the windows, enabling detection of whether they were open or closed. The room had 2 temperature sensors and 1 relative humidity sensor.
Data was chosen covering the period between 26 January and 18 February. During this period, no window was opened. We took the internal temperature values of the two sensors and averaged them. We codified the state of the doors (DS) as: 0 – both closed; 0.5 – one open; 1 – both open.

This data was divided into training (first 3500 samples: 26 January – 8 February), test (next 1500 samples: 8–13 February) and validation (last 1500 samples: 13–18 February) sets. The following figures show this data; each figure presents the training, test and validation segments against the sample index.

Fig. 1. External Temperature

Fig. 2. External Relative Humidity

Fig. 3. Solar Radiation

Fig. 4. Internal Temperature

Fig. 5. Internal Relative Humidity

Fig. 6. Doors State
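The doors-state coding and the chronological split described above can be sketched as follows. This is a minimal illustration (names are ours, and the magnetic-sensor readings are assumed to arrive as boolean open/closed flags):

```python
import numpy as np

def doors_state(door1_open, door2_open):
    """Codify the doors state (DS) as in the text:
    0 - both closed; 0.5 - one open; 1 - both open."""
    return (int(door1_open) + int(door2_open)) / 2.0

def split_series(series, n_train=3500, n_test=1500, n_val=1500):
    """Chronological split into training, test and validation segments
    (3500 / 1500 / 1500 samples at a 5-minute rate, as in the text)."""
    series = np.asarray(series)
    assert len(series) >= n_train + n_test + n_val
    return (series[:n_train],
            series[n_train:n_train + n_test],
            series[n_train + n_test:n_train + n_test + n_val])
```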
4. RESULTS

With this data, models for the inside temperature and relative humidity were developed. For each model, two experiments were conducted. In all experiments, MOGA evolved a population of 100 candidates over 100 iterations, and convergence was typically obtained after 60 iterations.

4.1 Internal Temperature
The following input terms were considered for the internal temperature model: 288 internal temperature lags (1 day); 36 external temperature lags (3 hours); 36 solar radiation lags (3 hours); 24 doors state lags (2 hours). MOGA has been parameterised as detailed in Table 2.

Table 2. MOGA Parameterisation for Temperature

Parameter               Value
ε_tr                    < 2º
ε_t                     minimise
ε_p                     minimise
‖w‖₂                    < 5
ph                      48 (4 hours)
Number of neurons       [2..20]
Number of input terms   < 30
Available input terms   384
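The pool of 384 candidate input terms (288 + 36 + 36 + 24 lagged regressors) can be assembled as sketched below; `lag_matrix` is our own illustrative helper, not code from the paper:

```python
import numpy as np

def lag_matrix(signal, n_lags):
    """Row k holds the signal delayed by 1..n_lags samples, i.e. the
    candidate autoregressive terms for predicting sample n_lags + k."""
    signal = np.asarray(signal)
    rows = len(signal) - n_lags
    return np.column_stack([signal[n_lags - j: n_lags - j + rows]
                            for j in range(1, n_lags + 1)])
```

For the temperature model, the candidate pool would concatenate 288 internal-temperature lags, 36 external-temperature lags, 36 solar-radiation lags and 24 doors-state lags (trimmed to a common number of rows); MOGA then selects fewer than 30 of these 384 columns.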
Two different MOGA executions have been performed. The next figure shows some results for the first experiment. In the 3 dispersion graphs, the blue dots represent the non-dominated solutions; the preferential set is shown in red. The top-left graph shows the RMSE of the 1-step-ahead prediction in the test set (y-axis) versus the RMSE of the 1-step-ahead prediction in the training set (x-axis). The top-right graph illustrates the RMSE of the 48-steps-ahead prediction in the test set versus the 1-step-ahead in the training set. The RMSE of the 48-steps-ahead prediction in the test set versus the 1-step-ahead in the test set is shown in the bottom-left graph, while the RMSE of the test set of the preferable solutions, versus the prediction horizon considered, is illustrated in the bottom-right graph.
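The non-dominated set shown as blue dots in the dispersion graphs can be extracted with a simple Pareto filter. The code below is an illustration of that concept only (it assumes all objectives are minimised, and is not the MOGA implementation used in the paper):

```python
import numpy as np

def non_dominated(objectives):
    """Indices of the non-dominated points of a minimisation problem:
    a point is dominated if some other point is <= in every objective
    and strictly < in at least one."""
    F = np.asarray(objectives, dtype=float)
    keep = []
    for i, fi in enumerate(F):
        dominated = any(np.all(fj <= fi) and np.any(fj < fi)
                        for j, fj in enumerate(F) if j != i)
        if not dominated:
            keep.append(i)
    return keep
```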
Fig. 7. Results for temperature models – first experiment

The preferable set had 18 solutions. Within these models, 11 used 7 neurons, 7 models had 5 neurons and 1 model employed 8 neurons.

Figure 8 shows the results obtained for the second experiment. The explanation of the graphs is as described before. In this experiment the preferable set had 10 models, where 9 had 6 neurons and 1 employed 8 neurons.

Fig. 8. Results for temperature models – second experiment

Table 3 shows some statistics of the maxima of the RMSEs, for all preferable solutions, along the prediction horizon. The first three lines show the values obtained for the first experiment and the last three the ones for the second experiment.

Table 3. Prediction errors

                    minimum   mean   maximum
ε_p,tr (exp. 1)     1.05      2.62   11.10
ε_p,t  (exp. 1)     0.85      2.5    14.25
ε_p,v  (exp. 1)     0.98      3.19   16.47
ε_p,tr (exp. 2)     1.09      2.02   4.04
ε_p,t  (exp. 2)     0.84      1.81   5.55
ε_p,v  (exp. 2)     0.91      2.04   6.32

In order to select one model, out of the solutions in the preferable set of both experiments, the natural logarithms of the maximum RMSEs along the prediction horizon were determined for the training, test and validation sets, and are presented in the next figure. It is clear that solution 10, of experiment 2, achieves the best compromise between training, test and validation sets.

Fig. 9. Temperature models (preferable set)

The next three figures show the simulations obtained for the chosen model, together with the measured data. In all figures, the top-left graph shows the results for 1-step-ahead prediction (5 min), the top-right for 18-steps-ahead (90 min), the bottom-left for 36-steps-ahead (3 hours) and the bottom-right for 48-steps-ahead (4 hours). Fig. 10 illustrates the results for the training set, fig. 11 for the test set, and fig. 12 for the validation set. It should also be noted that, in order to save computation time, the predictions are not computed for all samples of the corresponding set, but in steps of 12 samples (i.e., on an hourly basis). Because different steps-ahead need different lags, the starting sample index is not the same throughout the different simulations.
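The long-term evaluation of equations (8) and (9) (the error matrix over the prediction horizon and its worst column-wise RMS) can be sketched as follows. `simulate` stands for the model's multi-step simulation and is an assumption of this illustration; `simulate(k, i)` is taken to return the i-steps-ahead prediction made from instant k.

```python
import numpy as np

def prediction_error_matrix(simulate, targets, ph):
    """Error matrix E of eq. (8): entry (k, i-1) is the model error
    at instant k of the test set, i steps ahead."""
    p = len(targets)
    E = np.empty((p - ph, ph))
    for k in range(p - ph):
        for i in range(1, ph + 1):
            E[k, i - 1] = targets[k + i] - simulate(k, i)
    return E

def max_horizon_rmse(E):
    """eps_p of eq. (9): the maximum, over the horizon, of the RMS
    computed over the columns of E."""
    return float(np.sqrt((E ** 2).mean(axis=0)).max())
```

As a quick sanity check, a persistence predictor (always returning the last known sample) on a linear ramp gives an error of exactly i at step i, so eps_p equals the horizon length.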
Fig. 10. Temperature predictions – training set (RMSE: 0.19 at 1-step-ahead, 0.74 at 18-steps-ahead, 0.99 at 36-steps-ahead, 1.09 at 48-steps-ahead)

Fig. 11. Temperature predictions – test set (RMSE: 0.48 at 1-step-ahead, 0.79 at 18-steps-ahead, 0.74 at 36-steps-ahead, 0.77 at 48-steps-ahead)

Fig. 12. Temperature predictions – validation set (RMSE: 0.26 at 1-step-ahead, 0.70 at 18-steps-ahead, 0.85 at 36-steps-ahead, 0.89 at 48-steps-ahead)

As can be seen, this particular solution achieves excellent results, even 4 hours in advance in the validation set.

4.2 Internal Relative Humidity

The following input terms were considered for the internal relative humidity model: 288 internal relative humidity lags (1 day); 36 external relative humidity lags (3 hours); 36 internal temperature lags (3 hours); 36 solar radiation lags (3 hours); 24 doors state lags (2 hours). MOGA has been parameterised as in Table 2, with the differences that ε_tr < 5% and the number of available terms is now 420.

In the same way as for the temperature models, 2 experiments have been conducted for the relative humidity. The next two figures show the dispersion graphs, and the evolution of the RMSE, obtained for the first experiment. The explanation of these graphs is the same as for figure 7. This experiment resulted in 6 solutions in the preferable set, all with 6 neurons.

Fig. 13. Results for relative humidity models – first experiment

The results for the second experiment are depicted in fig. 14. Again, 6 solutions were obtained for the preferable set, all with 5 neurons.

Fig. 14. Results for relative humidity models – second experiment

Table 4 shows some statistics of the maximum RMSEs, for all preferable solutions, along the prediction horizon. Figure 15 shows the maximum RMSEs obtained by the preferable set solutions in the training, test and validation sets. Solution 5 of the first experiment achieves the best performance in validation, while obtaining the 3rd best value in training and a better-than-average value for the test set. Results obtained with this solution are presented in figs. 16–18.

Table 4. Prediction errors

                    minimum   mean   maximum
ε_p,tr (exp. 1)     0.74      0.77   0.82
ε_p,t  (exp. 1)     0.93      1.04   1.11
ε_p,v  (exp. 1)     1.37      1.60   1.85
ε_p,tr (exp. 2)     0.82      0.87   1.00
ε_p,t  (exp. 2)     0.97      1.24   1.72
ε_p,v  (exp. 2)     1.57      1.66   1.79

Fig. 15. Relative humidity models (preferable set)

Fig. 16. Relative humidity predictions – training set (RMSE: 0.34 at 1-step-ahead, 0.61 at 18-steps-ahead, 0.67 at 36-steps-ahead, 0.71 at 48-steps-ahead)

Fig. 17. Relative humidity predictions – test set (RMSE: 0.41 at 1-step-ahead, 0.80 at 18-steps-ahead, 1.01 at 36-steps-ahead, 1.09 at 48-steps-ahead)

Fig. 18. Relative humidity predictions – validation set (RMSE: 0.40 at 1-step-ahead, 1.03 at 18-steps-ahead, 1.27 at 36-steps-ahead, 1.37 at 48-steps-ahead)

5. CONCLUSIONS

This paper presents work in progress. To be used in control, actuator signals must be incorporated in the models. To account for geographical, building and seasonal changes, the models must be adapted on-line. In order to build predictive thermal comfort models, models of air velocity and mean radiant temperature must also be built. Finally, if the energy spent is taken into consideration for predictive control, a model of energy consumption must also be built.
REFERENCES

Dexter, A. L., Phill D. and C, E. (1996). Buildings: fact or fiction? HVAC & R Research, (2), 105.
Fanger, P. O. (1972). Thermal comfort: analysis and applications in environmental engineering. McGraw-Hill, New York.
Ferreira, P. M. and Ruano, A. E. (2008). Application of Computational Intelligence Methods to Greenhouse Environmental Modelling. In Int. Joint Conference on Neural Networks, Hong Kong, 3582–3589.
Fonseca, C. M. and Fleming, P. J. (1998). Multiobjective optimization and multiple constraint handling with evolutionary algorithms. Part I: A unified formulation. IEEE Transactions on Systems, Man and Cybernetics – Part A: Systems and Humans, (28), 26–37.
Lu, T. and Viljanen, M. (2009). Prediction of indoor temperature and relative humidity using neural network models: model comparison. Neural Computing & Applications, (18), 345–357.
Ruano, A. E., Crispim, E. M., Conceicao, E. Z. E. and Lucio, M. M. J. R. (2006). Prediction of building's temperature using neural networks models. Energy and Buildings, (38), 682–694.
Ruano, A. E., Ferreira, P. M. and Fonseca, C. M. (2005). An Overview of Nonlinear Identification and Control with Neural Networks. In A. E. Ruano (ed.), Intelligent Control Systems using Computational Intelligence Techniques, 37–87. Institution of Electrical Engineers.
Sigumonrong, A. P., Bong, T. Y., Fok, S. C. and Y.W., W. (2001). Self-learning neurocontroller for maintaining indoor relative humidity. In International Joint Conference on Neural Networks, Washington DC, USA, 1297–1301.
Teixeira, C. A., Ruano, A. E., Ruano, M. G., Pereira, W. C. A. and Negreira, C. (2006). Non-invasive temperature prediction of in vitro therapeutic ultrasound signals using neural networks. Medical & Biological Engineering & Computing, (44), 111–116.
Thomas, B. and Soleimani-Mohseni, M. (2007). Artificial neural network models for indoor temperature prediction: investigations in two buildings. Neural Computing & Applications, (16), 81–89.