
Linearity versus non-linearity in forecasting Nile River flows

Maha Tawfik
Strategic Research Unit, National Water Research Center, NWRC Administration Building, El-Kanater, P.O. Box 5, Qalyoubia 13621, Egypt

Received 14 January 2002; accepted 6 March 2003

Advances in Engineering Software 34 (2003) 515–524

Abstract

Egypt is almost totally dependent on the Nile River for satisfying about 95% of its water requirements. The Aswan High Dam (AHD), located at the most upstream point of the river within Egypt, controls Egypt's share of water. Once a release decision is made, there is no chance of retrieving or recovering the released water. Therefore, long- and short-term forecasts of Nile flows at Aswan have been recognized to be of great importance in allowing better management and operation of the reservoir. Several autoregressive (AR) models of uni- or multi-site flows upstream of Aswan have been developed to forecast monthly reservoir inflows for some lead time. Most of these models failed to forecast, with satisfactory accuracy, the peak flows of July, August, and September because of the high variability of flows during these months. Some hydrologists attributed this inaccuracy to the linearity assumption embedded in AR models. Artificial neural networks (ANNs) are being tested as a forecast tool to account for the non-linearity. Several neural networks built with the Neuralyste software have been investigated against updated AR models. The results indicate that the inclusion of non-linearity in the ANN forecasts might in some cases lead to improved forecast accuracy. © 2003 Elsevier Science Ltd. All rights reserved.

Keywords: Artificial neural networks; Flow forecasts; Autoregressive models

1. Introduction

Water is a fragile and vulnerable resource that is essential for human life and human activities. Management of water resources through the construction of large dams and the creation of large reservoirs has necessitated the search for optimal reservoir operation policies that satisfy downstream demands and preserve the upstream environment. Prediction of reservoir inflows is of great significance to water resources planners and reservoir operators. Good forecasts lead to better and more refined operation of the reservoir and, consequently, in many cases to improved hydropower generation. Forecasting can be either long-term or short-term. Long-term forecasting is crucial mainly for water resources planning; however, there exists no reliable technique for long-term flow forecasting. On the other hand, short-term forecasting for a few weeks or even months ahead is a tractable problem with some degree of accuracy. Short-term forecasting is highly beneficial for daily or weekly reservoir operation and management.

Monitoring and analysis of streamflow records are essential for proper river basin management and development projects. In addition, the issue of reservoir inflow forecasting is of special significance to a country like Egypt, which depends mainly on the Nile River to satisfy about 95% of its water requirements. The Aswan High Dam (AHD) is the major regulation facility on the Nile River in Egypt, controlling the country's share of the Nile water. Because of the large size of the basin and the absence of regular telemetry measurements, it is important to make the best use of the available streamflow information to predict the inflows into the AHD reservoir and to perform better management of its operation. Decision makers and water planners have been concerned with the problem of defining optimal operating policies for the AHD reservoir. National development plans also depend greatly on precise assessment of future water resources availability. Therefore, forecasting of AHD reservoir inflows has been the focus of many research studies.

2. Previous studies on Nile flow forecasting

The Nile Basin covers an area of about 2,900,000 km² and the river itself extends for about 6700 km.



The river receives its water from two sources: the Equatorial Lakes Plateau and the Ethiopian Plateau. The rainfall patterns on both plateaus exhibit large variations, resulting in diverse streamflow hydrographs at the different gauging stations on the various tributaries of the Nile River. There have been several attempts to develop an appropriate and accurate forecast model for the AHD reservoir inflows. Monthly flow forecasts obtained from these models were accurate enough to be considered in the analysis of the operation of the AHD reservoir for most months of the year, except for the flood season (July, August, and September). These efforts started as early as 1980, when the Water Master Plan Project, in cooperation with Cairo University and MIT, developed a multi-variate stepwise linear regression model [3,8]. Later, a recursive least squares estimator was developed by the same group of researchers. That model had the ability to update the flow estimates once new observations became available, without repeating the entire forecasting process from the beginning. Stedinger et al. [13,14] indicated that the causality matrix used to identify the variables included in the autoregressive (AR) equations of the previous models was incorrect, and they therefore developed an updated version of an AR forecast model. In 1993, Tawfik [16] observed some inconsistency in the results obtained by the models of the two groups of researchers. She also employed a longer time series of the streamflow records at all the gauging stations considered, and used the MINITAB® statistical software to develop a new AR forecast model. Tawfik compared the results obtained from her model with the results from the two previous models and found them comparable with respect to error estimates and regression coefficients in all months except July, August, and September. The errors and coefficients from Tawfik's AR model were better than those obtained by the other models for the three months under consideration; nevertheless, they were not accurate enough for deriving the reservoir operating policies.

3. Artificial neural networks

It has been recognized that the forecast problem for July, August, and September may not be of a linear nature but rather involves some non-linear characteristics. Thus, AR models are possibly not the most appropriate models for such problems. It is the intention of this paper to investigate the potential of using artificial neural networks (ANNs) as a new approach to obtain better forecasts for these months. ANNs offer an efficient and cost-effective alternative to statistical models, and they can handle both linear and non-linear pattern recognition problems. ANNs have been widely adopted by water resources planners and managers to predict and forecast water resources variables.

Maier and Dandy [7] presented a comprehensive review of the application of neural networks to the prediction and forecasting of water resources variables.

ANNs are basically multi-dimensional regression paradigms based on input-output relationships. They do not require prior knowledge about the physical system or the laws governing the phenomenon; instead, they capture the characteristics of the system through a learning process, so that a tuned network serves as an efficient tool for simulation and forecasting studies. ANNs are made up of a large number of interconnected elements (or nodes) called neurons, which are capable of receiving, processing, and sending signals (information). These neurons are arranged in several layers in specific network structures, whose selection is dictated by the type of problem under consideration. A typical network consists of a first (input) layer, a last (output) layer, and a number of in-between (hidden) layers. Each neuron in each layer is connected to the neurons in the subsequent layer through weighted interconnections. The net input to each neuron, offset by a threshold value (bias), is converted to an activated value through an activation function to generate the output of the neuron [15]. For example, the output of the jth neuron, $Q_j$, is described by the following equation:

$$Q_j = AF(\mathrm{Net}_j + B_j) \qquad (1)$$

where $\mathrm{Net}_j$ is the net input to the neuron, $B_j$ is the bias, and $AF$ is the activation function that transforms the net input value plus the bias into the desired output value. Several types of activation functions can be utilized, such as linear, hyperbolic, step, etc. The net input can be expressed as:

$$\mathrm{Net}_j = \sum_i X_i \, W_{ij} \qquad (2)$$

where $X_i$ are the inputs to the $j$th neuron from the $i$ neurons in the previous layer, and $W_{ij}$ are the weights associated with these inputs. The connection weights $W$ are optimized through a learning process until the network can approximate the function that transfers the inputs to the desired output. The process of optimizing the connection weights is known as training or learning. Training of an ANN is very similar to the calibration of any model. Typically, networks are trained using the standard error back-propagation algorithm, which provides a computationally efficient method for changing the weights in a feed-forward network. Networks trained with this algorithm have been applied successfully to solve some difficult and diverse problems. However, one major limitation of this training algorithm is that the network architecture (number of layers and number of neurons in each layer) has to be fixed in advance. Determination of the appropriate network architecture is one of the most important, and also one of the most challenging, tasks in the model building process.
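To make Eqs. (1) and (2) concrete, the short sketch below computes the output of a single neuron from the activations of the previous layer. It is a minimal Python/NumPy illustration under the sigmoid activation adopted later in the paper, not the Neuralyste implementation, and all variable names and values are illustrative.

```python
import numpy as np

def sigmoid(x):
    # One possible activation function AF; the sigmoid is the one used later in the paper.
    return 1.0 / (1.0 + np.exp(-x))

def neuron_output(inputs, weights, bias, activation=sigmoid):
    """Eqs. (1)-(2): Net_j = sum_i X_i * W_ij, then Q_j = AF(Net_j + B_j)."""
    net_j = np.dot(inputs, weights)      # weighted sum of previous-layer outputs
    return activation(net_j + bias)      # activated value offset by the bias

# Example: a neuron receiving three inputs from the previous layer (illustrative numbers).
x = np.array([0.2, 0.8, 0.5])            # X_i: outputs of the previous layer
w = np.array([0.4, -0.3, 0.7])           # W_ij: connection weights into neuron j
print(neuron_output(x, w, bias=0.1))
```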


The weight updating is carried out according to the criterion of minimizing the sum of the squared errors between the desired output and the network-simulated output. The following equation expresses the weight-updating function:

$$\mathrm{Weight}_{\mathrm{new}} = \mathrm{Weight}_{\mathrm{old}} + (1 - M) \times LR \times X + M \left( \mathrm{Weight}_{\mathrm{old}} - \mathrm{Weight}_{\mathrm{old}-1} \right) \qquad (3)$$

where $X$ is the input vector to the neural network, $LR$ is the learning constant, $M$ is the momentum constant, $\mathrm{Weight}_{\mathrm{new}}$ is the new updated weight, $\mathrm{Weight}_{\mathrm{old}}$ is the old weight, and $\mathrm{Weight}_{\mathrm{old}-1}$ is the previous old weight. The constants in this equation are known as the network training parameters and are set by the user. The learning rate is directly proportional to the size of the steps taken in weight space; it determines the magnitude of the correction term applied to adjust each neuron's weights during training, supplying either a smaller or a greater adjustment. Optimal learning rates are typically determined by trial and error; nevertheless, the learning rate has to be positive and must lie between 0 and 1. In this application, the learning rate is set to 1. During the learning process, the network may become trapped in a local minimum. This problem is addressed by using a momentum term, which helps the network slide through these local minima and allows for overall convergence. It is added by making each weight change equal to some fraction of the last weight change in addition to the gradient term. The momentum is usually chosen to be 0.9, which is the value selected for the current application. Once the training phase (optimization of the weights) has been completed, the performance of the trained network has to be validated or verified on another independent data set, using specific criteria. It is vital that the validation data have not been used as part of the training process in any capacity. Poor validation can be attributed to the network architecture, a lack of data or inadequate data pre-processing, the normalization of the training/validation data, or even over-training.
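As a rough illustration of the weight-update rule of Eq. (3), with the training parameters used in this application (learning rate LR = 1, momentum M = 0.9), the fragment below applies one update step. The correction term X is implemented exactly as the equation is written, so this is only a sketch of the update rule itself, not of the full back-propagation algorithm, and the numerical values are placeholders.

```python
import numpy as np

LR = 1.0   # learning rate chosen in this application
M  = 0.9   # momentum constant chosen in this application

def update_weights(w_old, w_older, x):
    """Eq. (3): w_new = w_old + (1 - M) * LR * x + M * (w_old - w_older).

    w_old   -- current weights
    w_older -- weights from the previous iteration (for the momentum term)
    x       -- correction term, as written in the paper's notation
    """
    return w_old + (1.0 - M) * LR * x + M * (w_old - w_older)

# One illustrative step: the momentum term reuses the last weight change.
w_prev = np.array([0.10, -0.20, 0.05])
w_curr = np.array([0.12, -0.18, 0.04])
correction = np.array([0.01, 0.02, -0.01])
print(update_weights(w_curr, w_prev, correction))
```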


3.1. Neural networks and statistical approaches in time series forecasting

ANN modeling approaches are perceived to overcome some of the difficulties associated with traditional statistical approaches. The rules that govern traditional statistical models are seldom considered in the ANN model building process, and there is a tendency among users to throw a problem blindly at a neural network in the hope that it will formulate an acceptable solution [4]. Nevertheless, recent studies have indicated that the consideration of statistical principles in the ANN model building process might improve model performance [11,12]. Consequently, it is vital to adopt a systematic approach in the development of ANN models, taking into account factors such as data pre-processing, determination of adequate model inputs, selection of a suitable network architecture, estimation of the required parameters, optimization, and model validation (Maier and Dandy [7]). Moreover, the statistical models that can be expressed in a neural network form include a number of regression, discriminant, density estimation, and graphical interaction models. Several previous authors have discussed the issue of flow forecasting, including rainfall-runoff, streamflow, and reservoir inflow forecasting. It has also been suggested that certain ANN models are equivalent to time series models of the ARMA type [5]. Connor et al. [2] have shown that feed-forward neural networks are a special case of non-linear autoregressive (NAR) models and that recurrent neural networks are a special case of non-linear autoregressive moving average (NARMA) models. Furthermore, Chon and Cohen [1] demonstrated the equivalence of ARMA and NARMA models with feed-forward ANNs utilizing polynomial transfer functions. Karunanithi et al. [6] evaluated the use of the ANN approach in predicting flows at some gauging stations of the Huron River, Michigan, and compared their results with those obtained from a power model; they concluded that the neural networks were better at estimating high flows than the power models. In all the cases discussed above, the prediction accuracy was the main performance criterion and was measured using a variety of metrics.

3.2. Nile flow forecasting

This section describes the development of several neural networks to forecast the Nile flows at Aswan in July, August, and September. The results are then compared to those obtained using a traditional statistical approach (an AR model). The comparison is carried out using several error criteria in order to evaluate the benefits gained by using the neural networks in place of the AR model. Fig. 1 shows a map of the main tributaries of the Nile River. Four gauging stations were selected to represent the main tributaries of the river: Malakal representing the White Nile, Khartoum representing the Blue Nile, Atbara representing the Atbara River, and Aswan for the main Nile. The monthly natural flows at these four stations were extracted for the period from 1912 to 1989. The annual flow time series are exhibited in Fig. 2, while Fig. 3 displays the average monthly hydrographs of the four stations and demonstrates the large variability in monthly flows among the stations. The basic statistics for the monthly flows at these stations are listed in Table 1. The data of the six previous months at each station were employed to estimate the flow at Aswan for the current month; thus, the flows of the four stations from January through June are used to evaluate the flow at Aswan in July, and so on.
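The input-output arrangement just described (the flows of the four stations for the six months preceding the forecast month, i.e. 24 predictors, against the Aswan flow of the forecast month) can be sketched as follows. This is a minimal illustration only: the DataFrame layout, the column-naming convention, and the helper name are assumptions, not part of the original study.

```python
import numpy as np
import pandas as pd

STATIONS = ["Malakal", "Khartoum", "Atbara", "Aswan"]

def build_samples(monthly, target_month):
    """Build the 24-input / 1-output samples for one forecast month.

    monthly: DataFrame indexed by year, with one column per (station, month)
             pair, e.g. "Malakal_1" ... "Aswan_12", holding natural flows in bcm.
    target_month: month to forecast at Aswan (7 = July, 8 = August, 9 = September).
    """
    lag_months = range(target_month - 6, target_month)            # the six preceding months
    x_cols = [f"{s}_{m}" for s in STATIONS for m in lag_months]   # 4 stations x 6 months = 24 inputs
    X = monthly[x_cols].to_numpy()
    y = monthly[f"Aswan_{target_month}"].to_numpy()               # flow to be forecast
    return X, y

# Example split following the paper's first approach (initial 75% for training):
# X, y = build_samples(monthly, target_month=7)
# n_train = int(0.75 * len(y))
# X_train, y_train, X_test, y_test = X[:n_train], y[:n_train], X[n_train:], y[n_train:]
```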


Fig. 1. Map of the Nile basin.

3.3. Neural network structure

The Neuralyste software [10] was used to develop the neural networks. The software offers great potential by working directly with data already available in spreadsheets. Each network consists of multiple inputs and a single output. The back-propagation network is the most prevalent and generalized neural network currently in use, and it is the one provided by the software. Six ANN structures were developed with Neuralyste, each consisting of an input layer, an output layer, and only one hidden layer.
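As a structural stand-in for these six single-hidden-layer networks (24-L-1, with hidden-layer sizes of 6, 12, 18, 24, 36, and 48 neurons, as listed in Tables 2-7), the sketch below builds and evaluates such a network. It is a generic NumPy illustration under the sigmoid activation described in the following paragraphs, not the Neuralyste implementation; the weight-initialization limits are illustrative assumptions.

```python
import numpy as np

HIDDEN_SIZES = [6, 12, 18, 24, 36, 48]          # the six structures: 24-L-1

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def init_network(n_hidden, n_inputs=24, n_outputs=1, w_min=-0.3, w_max=0.3, seed=0):
    """Random weights within user-set lower/upper limits (the limits here are illustrative)."""
    rng = np.random.default_rng(seed)
    w_hidden = rng.uniform(w_min, w_max, size=(n_inputs, n_hidden))
    w_output = rng.uniform(w_min, w_max, size=(n_hidden, n_outputs))
    return w_hidden, w_output

def forward(x, w_hidden, w_output):
    """Forward pass through one hidden layer (biases omitted for brevity)."""
    return sigmoid(sigmoid(x @ w_hidden) @ w_output)

# Build all six candidate structures and evaluate one of them on a dummy input.
networks = {f"24-{L}-1": init_network(L) for L in HIDDEN_SIZES}
print(forward(np.zeros(24), *networks["24-6-1"]))
```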

Fig. 2. Annual natural flows of the four selected stations on the Nile during 1912–1989.


Fig. 3. Average monthly hydrographs of the four selected stations on the Nile River.

The input layer of all networks contained 24 neurons representing the flows at the four stations for the previous six months, while the output layer contained only one neuron representing the flow at Aswan for the forecasted month, as shown in Fig. 4. The number of neurons in the hidden layer (L) varied from 6 to 48.

Fig. 4. Proposed ANN structure consisting of an input layer, an output layer, and one hidden layer.

Table 1. Basic statistics of monthly flows at the four selected stations in billion cubic meters (bcm)

Month | Station | Mean (bcm) | Std. dev. (bcm) | Max. (bcm) | Min. (bcm)
July | Malakal | 2.538 | 0.291 | 3.500 | 2.110
July | Khartoum | 5.547 | 1.749 | 10.674 | 1.034
July | Atbara | 1.614 | 0.768 | 5.161 | 0.189
July | Aswan | 6.688 | 1.663 | 10.477 | 3.610
August | Malakal | 2.939 | 0.382 | 4.156 | 2.335
August | Khartoum | 15.549 | 2.800 | 23.189 | 8.054
August | Atbara | 5.672 | 1.732 | 13.191 | 2.935
August | Aswan | 17.556 | 3.626 | 27.396 | 10.946
September | Malakal | 3.148 | 0.494 | 5.210 | 2.467
September | Khartoum | 14.256 | 3.378 | 25.230 | 5.072
September | Atbara | 3.806 | 1.399 | 7.410 | 1.201
September | Aswan | 20.138 | 4.069 | 28.495 | 10.059

The Neuralyste software generates the connection weights randomly, within specified upper and lower limits set by the user, to connect the neurons in each layer with the neurons in the subsequent layer. The weights are then updated during the training process until satisfactory results are reached. The software provides an error function that is minimized during training to reach the optimal values of the connection weights. The mean square error (MSE) function is the one used in the current application, although other error functions can be used as well. The advantages of the MSE include its ease of calculation, the penalties it assigns to large errors, and the ease of calculating its partial derivatives with respect to the weights. The software offers several types of transfer functions, including sigmoid, linear, hyperbolic, step, and Gaussian. The most commonly used transfer (activation) function is the sigmoid, which is the one used in the current application; it has the following form:

$$f(x) = \left(1 + e^{-x}\right)^{-1} \qquad (4)$$

The learning rate was chosen to be 1, as mentioned previously, while the momentum rate was set at 0.9. It is common practice to split the available data into two subsets: a training set and a validation (prediction) set. Thus, the natural flow data of the stations were divided into two sets. Flood and Kartam [4] and Minns and Hall [9] reported that ANNs are typically unable to extrapolate beyond the range of the data used for training, resulting in poor forecasts/predictions when the validation data contain values outside the range of those used for training. Therefore, two approaches were followed in forecasting the Nile flows. In the first approach, the initial 75% of the data records (58 records) were used for training and the last 25% (20 records) for prediction. However, because the last 20 records contained extremely low flow values, and it was therefore expected that the forecasting of this period would be very poor, it was decided, as a second approach, to select the last 75% of the records for training, while the initial 25% would be used for prediction. In both cases, the resulting model should be able to predict future flows unconditionally.

The software was executed, and the four error criteria were used to evaluate the prediction capability. The root MSE for both the training and the validation sets is defined as:

$$\mathrm{RMSE} = \sqrt{\frac{\sum_{i=1}^{n} \left(Q_{i,\mathrm{act}} - Q_{i,\mathrm{forecast}}\right)^{2}}{n-1}} \qquad (5)$$

where RMSE is the root MSE, $Q_{i,\mathrm{act}}$ is the $i$th year actual Aswan flow, $Q_{i,\mathrm{forecast}}$ is the $i$th year forecast of the Aswan flow, and $n$ is the number of years of either training or validation. The maximum absolute error for the entire data set is calculated as:

$$\mathrm{Max.\ Abs.\ Error} = \max_{i} \left| Q_{i,\mathrm{act}} - Q_{i,\mathrm{forecast}} \right| \qquad (6)$$

The average error is also determined for the entire data range as follows:

$$\mathrm{AVG}_{\mathrm{Error}} = \frac{\sum_{i=1}^{n} \left(Q_{i,\mathrm{act}} - Q_{i,\mathrm{forecast}}\right)}{n} \qquad (7)$$

The errors were calculated for each of the neural network structures under both approaches. The MINITAB® statistical software package was also used to obtain updated AR models for each month, using the stepwise regression option with 24 inputs producing one output. The same two approaches for splitting the data into testing (calibration) and prediction (validation) sets were followed, using the same data. The following forecast equations for the three months were obtained.

For the case of using the initial 75% of the data for training (calibration):

$$Q_{\mathrm{As,July}} = 3.545 - 1.46\,Q_{\mathrm{As,May}} + 2.35\,Q_{\mathrm{As,June}} \qquad \text{(8a)}$$

$$Q_{\mathrm{As,August}} = 10.145 - 5.1\,Q_{\mathrm{Kh,March}} + 4.6\,Q_{\mathrm{Kh,April}} - 1.63\,Q_{\mathrm{As,June}} + 1.76\,Q_{\mathrm{As,July}} \qquad \text{(8b)}$$

$$Q_{\mathrm{As,Sept}} = 4.851 + 0.872\,Q_{\mathrm{As,August}} \qquad \text{(8c)}$$

For the case of using the last 75% of the data for calibration:

$$Q_{\mathrm{As,July}} = 5.389 + 1.18\,Q_{\mathrm{Kh,June}} \qquad \text{(9a)}$$

$$Q_{\mathrm{As,August}} = 8.743 + 1.32\,Q_{\mathrm{As,July}} \qquad \text{(9b)}$$

$$Q_{\mathrm{As,Sept}} = 7.354 + 0.804\,Q_{\mathrm{As,August}} - 3.67\,Q_{\mathrm{Kh,April}} + 69\,Q_{\mathrm{At,April}} - 9.9\,Q_{\mathrm{At,March}} \qquad \text{(9c)}$$

where $Q_{\mathrm{As}}$, $Q_{\mathrm{Kh}}$, and $Q_{\mathrm{At}}$ are the monthly natural flows at Aswan, Khartoum, and Atbara, respectively.

It can be seen that the forecast equations for each month, obtained with the two approaches for splitting the calibration and validation data sets, contain different sets of variables. With the first approach, the forecasted Aswan flow of July depends on the Aswan flows of May and June, while with the second approach the July flow is calculated using only the Khartoum flow of June. For the AR models, the same error estimates were calculated for both the calibration and the validation sets.

The RMSEs, the maximum absolute errors, and the average errors for all cases are exhibited in Tables 2–7. The tables show that, for all neural network structures and for the three months, the RMSEs for the training sets were significantly smaller than the RMSEs for the prediction sets. Also, the RMSEs for training the neural networks were smaller than the RMSEs for the calibration sets of the AR models. However, the RMSEs for the prediction sets of the neural networks were in most cases larger than the corresponding values for the validation of the AR models. Table 2 presents the errors in the ANN forecasts of July when the first approach is employed; it also shows the errors of the AR model.
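For concreteness, the sketch below applies the AR forecast equation for July under the second split, Eq. (9a), and evaluates it with the three error criteria of Eqs. (5)-(7). The equation coefficients are taken directly from the text; the flow arrays are placeholders for illustration only, not observed data.

```python
import numpy as np

def forecast_july_eq9a(q_khartoum_june):
    """Eq. (9a): Q_As,July = 5.389 + 1.18 * Q_Kh,June (flows in bcm)."""
    return 5.389 + 1.18 * np.asarray(q_khartoum_june)

def error_criteria(q_actual, q_forecast):
    """Eqs. (5)-(7): RMSE (with n - 1 in the denominator), maximum absolute error, average error."""
    q_actual, q_forecast = np.asarray(q_actual), np.asarray(q_forecast)
    residuals = q_actual - q_forecast
    n = len(residuals)
    rmse = np.sqrt(np.sum(residuals ** 2) / (n - 1))
    max_abs_error = np.max(np.abs(residuals))
    avg_error = np.sum(residuals) / n
    return rmse, max_abs_error, avg_error

# Placeholder flows (bcm) for a few years, purely for illustration.
kh_june = [3.1, 2.8, 3.5, 3.0]
as_july_actual = [9.2, 8.5, 9.8, 9.0]
print(error_criteria(as_july_actual, forecast_july_eq9a(kh_june)))
```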

Table 2. Results of predicting July flows employing several ANNs and an AR model, using the initial 75% of the data for training

Proposed ANN | ANN structure | RMSE training | RMSE prediction | Max. absolute error | Average error
ANN1 | 24–6–1 | 0.302 | 2.662 | 6.350 | 0.675
ANN2 | 24–12–1 | 0.291 | 2.987 | 5.917 | 0.771
ANN3 | 24–18–1 | 0.244 | 3.240 | 7.022 | 0.779
ANN4 | 24–24–1 | 0.353 | 3.590 | 6.762 | 0.951
ANN5 | 24–36–1 | 0.478 | 3.728 | 6.957 | 1.051
ANN6 | 24–48–1 | 0.494 | 3.361 | 6.882 | 1.003
AR | Autoregressive | 1.117 | 1.192 | 3.80 | 0.855


Table 3. Results of predicting July flows employing several ANNs and an AR model, using the last 75% of the data for training

Proposed ANN | ANN structure | RMSE training | RMSE prediction | Max. absolute error | Average error
ANN1 | 24–6–1 | 0.159 | 1.930 | 5.034 | 0.444
ANN2 | 24–12–1 | 0.117 | 1.961 | 3.843 | 0.495
ANN3 | 24–18–1 | 0.178 | 2.238 | 3.765 | 0.564
ANN4 | 24–24–1 | 0.144 | 2.062 | 3.918 | 0.497
ANN5 | 24–36–1 | 0.171 | 2.388 | 4.316 | 0.609
ANN6 | 24–48–1 | 0.299 | 2.114 | 4.033 | 0.624
AR | Autoregressive | 1.812 | 1.883 | 4.808 | 1.512

Table 4. Results of predicting August flows employing several ANNs and an AR model, using the initial 75% of the data for training

Proposed ANN | ANN structure | RMSE training | RMSE prediction | Max. absolute error | Average error
ANN1 | 24–6–1 | 0.721 | 6.652 | 16.165 | 1.744
ANN2 | 24–12–1 | 0.496 | 6.907 | 15.416 | 1.781
ANN3 | 24–18–1 | 0.803 | 5.499 | 14.689 | 1.610
ANN4 | 24–24–1 | 0.583 | 5.922 | 15.190 | 1.548
ANN5 | 24–36–1 | 0.699 | 6.269 | 15.119 | 1.742
ANN6 | 24–48–1 | 0.862 | 5.722 | 17.017 | 1.562
AR | Autoregressive | 1.900 | 3.213 | 7.559 | 1.722

It is clear from the table that the errors for training are significantly smaller than the errors for prediction for the ANNs, while the calibration and validation errors of the AR model are of comparable magnitude. Also, the training errors of the ANNs are much smaller even than the calibration error of the AR model. The maximum absolute error is much smaller for the AR model than for all the ANN cases. Nevertheless, the average errors of the ANNs are comparable to the average error obtained from the AR model. When the last 75% of the data was used for training, different results were obtained, as displayed in Table 3. The table exhibits a reduction in the RMSE for both the training and the validation sets for all ANN cases, and it shows that three of the four error criteria (the RMSE for training, the maximum absolute error, and the average error) for the neural network containing 12 neurons in the hidden layer are smaller than the values obtained from the AR model.

Table 5. Results of predicting August flows employing several ANNs and an AR model, using the last 75% of the data for training

Proposed ANN | ANN structure | RMSE training | RMSE prediction | Max. absolute error | Average error
ANN1 | 24–6–1 | 0.623 | 3.532 | 8.315 | 0.972
ANN2 | 24–12–1 | 0.638 | 4.392 | 9.683 | 1.187
ANN3 | 24–18–1 | 0.625 | 4.328 | 10.778 | 1.121
ANN4 | 24–24–1 | 0.631 | 4.118 | 9.886 | 1.052
ANN5 | 24–36–1 | 0.658 | 4.232 | 8.936 | 1.152
ANN6 | 24–48–1 | 0.625 | 4.378 | 9.623 | 1.162
AR | Autoregressive | 2.453 | 2.065 | 7.058 | 1.834

Table 6. Results of predicting September flows employing several ANNs and an AR model, using the initial 75% of the data for training

Proposed ANN | ANN structure | RMSE training | RMSE prediction | Max. absolute error | Average error
ANN1 | 24–6–1 | 0.688 | 3.608 | 10.450 | 1.40
ANN2 | 24–12–1 | 1.204 | 5.132 | 10.827 | 1.821
ANN3 | 24–18–1 | 0.814 | 5.174 | 10.584 | 1.525
ANN4 | 24–24–1 | 0.859 | 5.577 | 8.524 | 1.646
ANN5 | 24–36–1 | 0.958 | 5.234 | 7.707 | 1.713
ANN6 | 24–48–1 | 1.207 | 5.542 | 10.033 | 1.896
AR | Autoregressive | 1.814 | 2.905 | 6.782 | 1.625


Table 7. Results of predicting September flows employing several ANNs and an AR model, using the last 75% of the data for training

Proposed ANN | ANN structure | RMSE training | RMSE prediction | Max. absolute error | Average error
ANN1 | 24–6–1 | 0.547 | 2.451 | 7.034 | 0.737
ANN2 | 24–12–1 | 0.557 | 2.768 | 8.825 | 0.750
ANN3 | 24–18–1 | 0.578 | 2.288 | 7.308 | 0.710
ANN4 | 24–24–1 | 0.623 | 2.720 | 9.757 | 0.727
ANN5 | 24–36–1 | 0.504 | 2.827 | 8.940 | 0.713
ANN6 | 24–48–1 | 0.934 | 2.900 | 7.947 | 1.122
AR | Autoregressive | 1.764 | 2.323 | 7.516 | 1.513

Fig. 5 displays the actual Aswan streamflows in July compared with the predicted flows using the AR model and the neural network that gave the best overall error estimates among the six proposed networks, for the entire period 1912–1989. The second approach, using the last 75% of the data for training of the ANN (calibration of the AR model) and the rest for testing (validation), was adopted. The figure shows that the AR predictions do not change significantly, as they are mainly governed by the constant term of the regression equation while the flows of Khartoum in June are very small (Eq. (9a)). The figure also shows that the ANN predictions are almost identical to the actual flows during the training period, with minor differences, while there are noticeable differences during the testing period.

Tables 4 and 5 display the results obtained for the August forecasts under both approaches, using the ANNs and the AR models. The tables show that the RMSEs for prediction using the first approach are significantly high, while those of the second approach are much lower; the training errors in both cases are of almost the same magnitude. Nevertheless, two of the error criteria (the RMSE for training and the average error) of the ANN containing six neurons in the hidden layer are better than the corresponding values of the AR model in the case of the second approach, while the RMSE of prediction and the maximum absolute error of the same ANN are worse than the values obtained from the AR model. In such a case, a multi-criteria analysis would assist in determining the most appropriate model. Fig. 6 exhibits the predicted flows of August from the AR model and the ANN with a structure of 24–6–1 neurons, compared to the actual flows for the period 1912–1989; in this case also, the second approach for data splitting was adopted. The figure shows that the AR model followed the same pattern as the actual flows; nevertheless, it underestimated the high flows and overestimated the low ones. The selected ANN provided better predictions of the flows during the training period, but it also failed in predicting the flow extremes of the years 1913, 1915, 1916, 1925, and 1927.

Tables 6 and 7 present the results of the ANNs and AR models for the September forecasts under both approaches. The tables show significant reductions in errors when the second approach is followed. Table 6 shows that the results of the AR model are better than those of all the neural networks. In Table 7, all the error criteria of the neural network containing 18 neurons in the hidden layer are smaller than the corresponding ones of the AR model, resulting in better forecasts.

Fig. 5. Predicted July streamflows into the AHD reservoir, using the AR model and the ANN model of 24–12–1 neurons compared with the actual inflows, using the last 75% of the data for training.


Fig. 6. Predicted August streamflows into the AHD reservoir, using the AR model and the ANN model of 24–6–1 neurons compared with the actual inflows, using the last 75% of the data for training.

Fig. 7 shows the actual and predicted September flows using both the AR model and the neural network of structure 24–18–1. Again, the second approach for data splitting was adopted. The figure exhibits better predictions for the two models throughout the time horizon from 1912 to 1989.

4. Conclusion and recommendation

The motivation for this study is to find an approach that helps in predicting the Nile River inflows into the AHD reservoir for the months of July, August, and September. Most statistical techniques applied to this case have failed to provide satisfactory flow forecasts for these months, and none has provided results acceptable for implementation in the reservoir operation analysis. This work is also intended to demonstrate the usefulness of neural networks as efficient alternatives to mathematical models in the context of numerical simulations. The data for training the networks were obtained for representative locations along the Nile Basin.

Fig. 7. Predicted September streamflows into the AHD reservoir, using the AR model and the ANN model of 24–18–1 neurons compared with the actual inflows, using the last 75% of the data for training.


Several ANNs have been used for forecasting the Nile River flows at Aswan, based on the available flow data at selected stations along the Nile for a specific period of time. The forecasts were compared to those obtained using autoregression analysis as a traditional statistical approach. The accuracy of the predictions was examined using the RMSE for both the training (calibration) and prediction (validation) data sets, the maximum absolute error, and the average error. Not only the ANNs but also the AR models were affected by the approach used in splitting the data set, and the accuracy of the ANNs was significantly affected by the split. It has been verified that the forecasting of extreme values not present in the training data set is very poor, and that the forecasts improved when extreme values were included in the training set. A few of the ANN structures in some cases proved to be better than the AR models with respect to some of the chosen error criteria. Therefore, it is very encouraging to proceed with further analysis. Based on the results obtained so far, it is not established that the ANNs are worse than the AR models in predicting the AHD reservoir inflows during the flood season; they may even prove better if more ANN structures are analyzed and investigated.

References

[1] Chon KH, Cohen RJ. Linear and nonlinear ARMA model parameter estimation using an artificial neural network. IEEE Trans Biomed Engng 1997;44(3):168–74.
[2] Connor JT, Martin RD, Atlas LE. Recurrent neural networks and robust time series prediction. IEEE Trans Neural Networks 1994;5(2):240–54.
[3] Curry K, Bras RL. Multi-variate seasonal time series forecast with application to adaptive control. Technical Report 253, Ralph M. Parsons Laboratory, Department of Civil Engineering, Cambridge, MA: MIT; 1980.
[4] Flood I, Kartam N. Neural networks in civil engineering: 1. Principles and understanding. J Comput Civil Engng 1994;8(2):131–48.
[5] Hill T, Marquez L, O'Connor M, Remus W. Artificial neural network models for forecasting and decision-making. Int J Forecast 1994;10(1):5–15.
[6] Karunanithi N, Grenney WJ, Whitley D, Bovee K. Neural networks for river flow prediction. ASCE J Comput Civil Engng 1994;8(2):201–20.
[7] Maier H, Dandy GC. Neural networks for the prediction and forecasting of water resources variables: a review of modeling issues and applications. Environ Model Software 1999;15(1):101–24.
[8] Master Water Plan Project. Multi-lead forecasting of the River Nile streamflows. Technical Report 21 (UNDP/EGY/81/031), Ministry of Irrigation, Cairo, Egypt; 1983.
[9] Minns AW, Hall MJ. Artificial neural networks as rainfall-runoff models. Hydrol Sci J 1996;41(3):399–417.
[10] Neuralyste User's Guide. Cheshire Engineering Corporation; October 1994.
[11] Ripley BD. Neural networks and related methods for classification. J R Stat Soc B 1994;56(3):409–56.
[12] Sarle WS. Neural networks and statistical models. Proceedings of the 19th Annual SAS Users Group International Conference; 1994. pp. 1538–50.
[13] Stedinger JR. Comment on 'Real-time adaptive closed loop control of reservoirs with the High Aswan Dam as a case study' by L.R. Bras, R. Buchanan, and K.C. Curry. Water Resour Res 1984;20(11):1763–4.
[14] Stedinger JR, Sule BF, Loucks DP. Stochastic dynamic programming models for reservoir operation optimization. Water Resour Res 1984;20(11):1499–505.
[15] Tawfik M, Ibrahim A, Fahmy H. Hysteresis sensitive neural network for modeling rating curve. ASCE J Comput Civil Engng 1997;11(3):206–11.
[16] Tawfik MM. Optimal real-time forecasting and control of reservoir operations. PhD Dissertation. Colorado State University, Fort Collins, CO; 1993.