Receding horizon control for water resources management


Applied Mathematics and Computation 204 (2008) 621–631


Andrea Castelletti¹, Francesca Pianosi, Rodolfo Soncini-Sessa*

Dipartimento di Elettronica e Informazione, Politecnico di Milano, Milan, Italy

* Corresponding author. E-mail addresses: [email protected] (A. Castelletti), [email protected] (F. Pianosi), [email protected] (R. Soncini-Sessa).
¹ Work completed while on leave at the Centre for Water Research, University of Western Australia (ref. 2181).

doi:10.1016/j.amc.2008.05.044


Keywords: Water resources management; Stochastic optimal control; Adaptive control

Abstract

Integrated water resources management (IWRM) is recognized worldwide as the reference paradigm to meet society's long-term needs for water resources while maintaining essential ecological services and economic benefits. In previous publications [A. Castelletti, R. Soncini-Sessa, A procedural approach to strengthening integration and participation in water resource planning, Environmental Modelling & Software 21(10) (2006) 1455–1470; A. Castelletti, F. Pianosi, R. Soncini-Sessa, Integration, participation and optimal control in water resources planning and management, Applied Mathematics and Computation (2007), doi:10.1016/j.amc.2007.09.069], the authors have already insisted on the need for a procedural approach to make the IWRM paradigm truly operational; they have emphasized the role played by dynamic optimization in rationalizing and facilitating the selection by the decision maker of a best compromise planning alternative. When planning alternatives also include management policies, as in the case of the water reservoir networks considered in this paper, the best compromise off-line policy resulting from the planning exercise has to be actually implemented in the daily management of the system. Here, again, dynamic optimization may play a central role, as it can be adopted on-line to improve the performance of the off-line policy by exploiting any new useful information available in real-time (e.g., inflow predictions, a power station being temporarily out of service, etc.). In this paper, this approach is explored through a real-world case study of a simple reservoir system. The off-line management policy computed in a previous planning process is refined on-line with a receding horizon control scheme combined with an inflow predictor. The results yield indications that the approach can provide significant advantages to cope with extreme events, particularly those occurring in unusual periods of the year.

© 2008 Elsevier Inc. All rights reserved.

1. Introduction

Balancing human water needs and ecological services is essential to ensure sustainable social welfare, economic prosperity and ecosystem health. In this connection, integrated water resources management (IWRM, see GWP [7]) is unanimously recognized as the key instrument to replace the traditional, fragmented, sector-by-sector approach to water resource management that has led to poor services and unsustainable resource use. In the last two years, however, some criticisms have been raised [9]: it is argued that the broad concept needs further elaboration, e.g., that the strong interaction of land-use planning [6] with the water cycle is neglected, and that many practical questions about putting the paradigm into practice remain to be clarified [8]. Concerning this second argument, the authors have already insisted ([2,4] and references therein) on the need for a procedural approach, namely the participatory and integrated planning (PIP) procedure described in [11], to support the technical implementation of IWRM.


They have also emphasized the central role played by dynamic optimization in rationalizing the decision-making process underlying IWRM, especially when planning alternatives comprise management decisions (e.g., the daily release from each reservoir in a reservoir network) and these are transformed into planning decisions by means of an off-line control policy. In those circumstances, when the planning exercise is over and a best compromise alternative has been selected by the decision maker, the relevant off-line management policy has to be actually implemented in the daily operation of the system. Since it is computed by solving an optimization problem, this policy is Pareto-efficient, and thus the best policy that can be obtained with the information available at the time of planning. We cannot exclude [1], however, that, as new information (e.g., inflow predictions, news about a power station being temporarily out of service, etc.) becomes available in real-time, the short-term performance of this policy could be improved, precisely by exploiting such information, so as to adapt it better to the situation the system is currently facing. In this way, the complex adaptive nature [5] of water systems that was somehow ignored in the off-line design can be appropriately taken into account on-line.

In this paper, this potential is explored for multipurpose water reservoir networks, which are a particularly complex class of water systems to deal with, owing to the existence of multiple conflicting interests, the randomness of the outflows from the catchments feeding each reservoir and, finally, the strong non-linearity of the reservoirs' release functions and of the objectives. In the approach proposed, the best compromise off-line management policy resulting from the planning exercise is improved on-line by solving a receding horizon control problem in which the a priori probability distribution functions of the catchments' outflows used to compute the off-line policy are replaced by those generated by a dynamic inflow predictor. The approach is applied to a real-world case study of a multipurpose regulated lake fed by a natural catchment. The best compromise off-line policy selected at the end of a previous planning process [12] is refined on-line using outflow predictions over different lead times provided by an outflow predictor (e.g., a persistent, a perfect, or a real predictor).

Finally, it is worth noting that the proposed technique can also be seen as a useful way of overcoming the well-known "curse of dimensionality" that affects stochastic dynamic programming, the optimization method usually adopted to synthesize off-line policies for water reservoir networks. To reduce the number of state variables, and thus mitigate the computational burden, the system model used in the off-line problem is often simplified (more precisely, reduced) by eliminating the model of the uncontrolled parts of the system (i.e., the natural catchments) and considering their outputs (i.e., the outflows) among the system's disturbances. The dynamics of the outflows is then accounted for by solving the problem on-line and updating their probability distribution functions with a dynamic outflow predictor fed with real-time information.
This information may include not only the current state of the natural uncontrolled catchments, namely the state ignored in the off-line problem formulation, but also any other variable useful for predicting future outflows, such as precipitation or snow-cover measures.

The paper is structured as follows. In Section 2 the general model of a water reservoir network is introduced. Section 3 presents the off-line and on-line control problem formulations. Section 4 is entirely devoted to the case study and its results. Finally, in Section 5 we draw some conclusions on the work.

2. Model of the system

We consider the general case of a water system composed of N reservoirs that are fed by M uncontrolled catchments and serve L water users, such as power plants or irrigation districts.

2.1. Reservoirs

The model of the jth water reservoir is based on the usual mass balance equation

$s^j_{t+1} = s^j_t + q^j_{t+1} - r^j_{t+1}$,  (1a)

where $s^j_t$ is the storage of the jth reservoir at time t, $q^j_{t+1}$ is the inflow volume in the time interval $[t, t+1)$ and $r^j_{t+1}$ is the release in the same interval.² Other terms, such as direct precipitation on the reservoir, infiltration and evaporation, have been neglected, but they can easily be added to the mass balance when necessary. The inflow $q^j_{t+1}$ is the outflow of a drainage network fed by the releases $r^i_{t+1}$ ($i = 1, 2, \ldots$, $i \neq j$) of the upstream reservoirs (if any) and by the outflows $a^k_{t+1}$ ($k = 1, 2, \ldots$) from the natural uncontrolled catchments. The release $r^j_{t+1}$ is a function of the control variable $u^j_t$ (the release decision made at time t for reservoir j), of the storage $s^j_t$ and of the inflow $q^j_{t+1}$:

$r^j_{t+1} = R^j_t(s^j_t, u^j_t, q^j_{t+1})$.  (1b)

The function $R^j_t(\cdot)$ is called the release function; it is a non-linear, periodic function by means of which any potential deviation of the actual release $r^j_{t+1}$ from the release decision $u^j_t$ (e.g., when the available water is not sufficient to realize the decision, or when spill takes place) can be described appropriately (see [11] for a detailed description).

² The time subscript of each variable denotes the time instant at which it assumes a deterministic value.
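To make the reservoir component concrete, the following minimal Python sketch implements the mass balance (1a) together with a hypothetical release function: the clipping rule (release limited by the available water and by a maximum outflow capacity, with a forced spill above full storage) is an illustrative assumption, not the actual release function of [11], and all numerical values are made up.

```python
def release(s_t, u_t, q_next, s_max=500e6, r_max=60e6):
    """Hypothetical release function R_t(s_t, u_t, q_{t+1}) in the spirit of Eq. (1b).

    The release decision u_t is honoured when feasible; otherwise it is clipped
    to the water actually available or raised to the forced spill that keeps the
    storage below the (made-up) capacity s_max. All volumes are in m^3.
    """
    available = s_t + q_next                    # water that could leave in [t, t+1)
    forced_spill = max(0.0, available - s_max)  # storage cannot exceed capacity
    feasible = min(u_t, available, r_max)       # limited by availability and outlet capacity
    return max(forced_spill, feasible)


def mass_balance(s_t, u_t, q_next):
    """One step of the storage dynamics s_{t+1} = s_t + q_{t+1} - r_{t+1}, Eq. (1a)."""
    r_next = release(s_t, u_t, q_next)
    return s_t + q_next - r_next, r_next


# one simulated day with illustrative numbers
s_next, r_next = mass_balance(s_t=200e6, u_t=30e6, q_next=50e6)
```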


2.2. Uncontrolled catchments

Simple statistical models are usually adopted to describe the outflows from natural uncontrolled catchments. For instance, the outflow $a^k_{t+1}$ from the kth uncontrolled catchment can be assumed to be a cyclostationary, lognormal, stochastic process with periodic mean $\mu^k_t$ and standard deviation $\sigma^k_t$, and its dynamics can be described as

$a^k_{t+1} = \exp(y^k_{t+1}\,\sigma^k_t + \mu^k_t)$,  (2a)

$A^k(z^{-1})\, y^k_{t+1} = \varepsilon^k_{t+1}$,  (2b)

where $A^k(z^{-1})$ is a polynomial in the backward shift operator $z^{-1}$ and $\varepsilon^k_{t+1}$ is a zero-mean Gaussian white noise with constant variance.
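As an illustration of Eqs. (2a)–(2b), the sketch below simulates the outflow of a single catchment as a cyclostationary lognormal autoregressive process. The AR sign convention, the periodic statistics and the coefficients are assumptions made for the example only.

```python
import numpy as np

def simulate_outflow(mu, sigma, ar_coeff, horizon, rng=None):
    """Simulate one uncontrolled catchment according to Eqs. (2a)-(2b).

    mu, sigma : length-T arrays with the periodic mean and standard deviation
                of the log-outflow (hypothetical values);
    ar_coeff  : coefficients (a_1, ..., a_p) of A^k(z^-1), used here with the
                assumed convention y_{t+1} = a_1*y_t + ... + a_p*y_{t-p+1} + eps_{t+1};
    horizon   : number of time steps to simulate.
    """
    rng = rng or np.random.default_rng(0)
    T = len(mu)
    y_hist = [0.0] * len(ar_coeff)        # initial standardized log-outflows
    outflows = []
    for t in range(horizon):
        eps = rng.normal()                # eps^k_{t+1}: zero-mean, unit-variance white noise
        y_next = sum(a * y_hist[-1 - i] for i, a in enumerate(ar_coeff)) + eps
        y_hist.append(y_next)
        k = t % T                         # periodic (cyclostationary) statistics, Eq. (2a)
        outflows.append(float(np.exp(y_next * sigma[k] + mu[k])))
    return outflows

# hypothetical yearly cycle (T = 365) and an AR(2) model of the log-outflow
mu = 3.0 + 0.5 * np.sin(2 * np.pi * np.arange(365) / 365)
sigma = 0.3 * np.ones(365)
sample = simulate_outflow(mu, sigma, ar_coeff=[0.7, 0.1], horizon=30)
```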

2.3. Water users

The presence of the L water users can be formalized by defining, for each of them, a step-cost function $g^i_t(\cdot)$ associated with the system's transition from t to t+1, i.e.,

$g^i_{t+1} = g^i_t(x_t, u_t, \varepsilon_{t+1})$,  (3)

with $i = 1, \ldots, L$, the arguments of $g^i_t(\cdot)$ being defined in the next section.

2.4. Global model

The global model of the water system is obtained by suitably aggregating the models of the reservoirs, catchments and water users. The result is a discrete-time, periodic, non-linear, stochastic system of the form

$x_{t+1} = f_t(x_t, u_t, \varepsilon_{t+1})$,  (4)

where $x_t \in \mathbb{R}^{n_x}$, $u_t \in \mathbb{R}^{n_u}$ and $\varepsilon_t \in \mathbb{R}^{n_\varepsilon}$ are the state, control and disturbance vectors. The state is composed of the state variables of the N reservoirs, i.e., their storages, and the state variables of the M catchments

$x_t = [s^1_t, \ldots, s^N_t,\; y^1_t, \ldots, y^1_{t-p_1}, \ldots,\; y^M_t, \ldots, y^M_{t-p_M}]^T$,  (5)

where $p_k$ is the order of the polynomial $A^k(z^{-1})$ in Eq. (2b). The control vector is composed of the N release decisions for the N reservoirs

$u_t = [u^1_t, \ldots, u^N_t]^T$.

The disturbance vector is composed of the M random disturbances that appear in the models of the uncontrolled catchments and of any other random variable that could be used to describe random terms in the reservoir mass balance equation (e.g., evaporation, infiltration, etc.) or in the models of the water users. For example, if the uncontrolled catchments are described with models of the form (2b) and no other disturbance affects the water system, the disturbance vector is given by

$\varepsilon_{t+1} = [\varepsilon^1_{t+1}, \ldots, \varepsilon^M_{t+1}]^T$.

The disturbance vector $\varepsilon_{t+1}$ is described in terms of a pdf $\phi_t(\cdot)$, which at each time t may be a function of the state and control at the same time:

$\varepsilon_{t+1} \sim \phi_t(\cdot \mid x_t, u_t)$.  (6)
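As a small illustration of how the global model is assembled, the snippet below stacks the reservoir storages and the recent standardized log-outflows of each catchment into a state vector of the form (5); all values and dimensions are placeholders.

```python
import numpy as np

def build_state(storages, catchment_y):
    """Assemble the global state vector x_t of Eq. (5).

    storages    : the N reservoir storages [s^1_t, ..., s^N_t];
    catchment_y : a list of M arrays, the k-th containing the recent standardized
                  log-outflows y^k_t, y^k_{t-1}, ... required by the AR model of
                  catchment k.
    """
    parts = [np.asarray(storages, dtype=float)]
    parts += [np.asarray(y, dtype=float) for y in catchment_y]
    return np.concatenate(parts)

# N = 1 reservoir and M = 1 catchment with an AR(2) log-outflow model (placeholder values)
x_t = build_state(storages=[200e6], catchment_y=[[0.4, -0.1, 0.2]])
u_t = np.array([30e6])   # N release decisions, one per reservoir
```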

3. The control problem

For each of the L water users, an objective function $J^i$ (with $i = 1, \ldots, L$) can be defined to express the cost that the user pays over a given time horizon. Each cost function is naturally defined by aggregation of the step-cost functions (3). The time horizon can be of either finite or infinite length; let us first consider the finite horizon case and return to the infinite horizon case later on. Assuming that the length h of the horizon is a finite positive number, the objective function can be defined as

$J^i = \mathbb{E}_{\varepsilon_1,\ldots,\varepsilon_h}\left[\sum_{t=0}^{h-1} g^i_t(x_t, u_t, \varepsilon_{t+1}) + g^i_h(x_h)\right]$,  (7)

where $g^i_h(\cdot)$ is a penalty function over the final state. At each time t the release decision is given by the control law

$u_t = m_t(x_t)$.  (8)
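The expectation in (7) can be approximated by simulation. The hedged sketch below estimates the cost of a given control law (8) by Monte Carlo; the transition function, step costs, terminal penalty and disturbance sampler are placeholder callables to be supplied by the user.

```python
import numpy as np

def estimate_cost(policy, step_cost, final_cost, f, sample_noise, x0, h,
                  n_runs=1000, rng=None):
    """Monte Carlo estimate of the finite-horizon objective J^i of Eq. (7).

    policy(t, x)          : control law m_t of Eq. (8);
    step_cost(t, x, u, e) : step cost g^i_t(x_t, u_t, eps_{t+1});
    final_cost(x)         : penalty g^i_h over the final state;
    f(t, x, u, e)         : state transition of Eq. (4);
    sample_noise(t, rng)  : one draw of eps_{t+1}.
    """
    rng = rng or np.random.default_rng(0)
    totals = []
    for _ in range(n_runs):
        x, total = x0, 0.0
        for t in range(h):
            u = policy(t, x)
            e = sample_noise(t, rng)
            total += step_cost(t, x, u, e)
            x = f(t, x, u, e)
        totals.append(total + final_cost(x))
    return float(np.mean(totals))   # sample mean approximates the expectation in (7)
```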

The aim of the off-line control problem (OffCP) is to define the sequence of control laws $m_t(\cdot)$ over the horizon $[0, h-1]$, i.e., the release policy


$p = [m_0(\cdot), \ldots, m_{h-1}(\cdot)]$  (9)

that minimizes the objective functions $J^i$. The multi-objective OffCP is thus formulated as

$\min_{p}\, [J^1, J^2, \ldots, J^L]$  (10)

subject to the constraints (4), (6) and (8) for $t = 0, \ldots, h-1$, to (9) and to a given $x_0$. Since the objectives are conflicting, the solution is not a unique optimal policy but the set P of Pareto-efficient policies. In general, only a number of elements of P are actually determined, by solving several single-objective OffCPs. The 'aggregate' objective function J of each single-objective OffCP is obtained by linear combination of the L objective functions (7) with a given vector of weights $[\lambda_1, \ldots, \lambda_L]$. The constraint set, instead, is the same for all the single-objective OffCPs, i.e., that of the original multi-objective problem. Each single-objective OffCP can be solved through Stochastic Dynamic Programming (SDP), i.e., by recursively solving, backward in time, the Bellman equation

$H_t(x_t) = \min_{u_t} \mathbb{E}_{\varepsilon_{t+1}}\left[g_t(x_t, u_t, \varepsilon_{t+1}) + H_{t+1}(x_{t+1})\right]$,  (11)

where $H_t(\cdot)$ is the optimal cost-to-go function for the aggregate objective $J = \sum_{i=1}^{L} \lambda_i J^i$, $g_t(x_t, u_t, \varepsilon_{t+1}) = \sum_{i=1}^{L} \lambda_i g^i_t(x_t, u_t, \varepsilon_{t+1})$ and $H_h(x_h) = g_h(x_h)$ for any $x_h$. Once the optimal cost-to-go functions have been derived for all time instants $t = h-1, \ldots, 0$, the optimal control law at any time t is obtained as

$m_t(x_t) = \arg\min_{u_t} \mathbb{E}_{\varepsilon_{t+1}}\left[g_t(x_t, u_t, \varepsilon_{t+1}) + H_{t+1}(x_{t+1})\right]$.  (12)
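In practice, (11) and (12) are solved on discretized grids, as the next paragraph explains. The sketch below is a minimal, hedged implementation of that backward recursion for an already aggregated step cost; the nearest-neighbour interpolation and the function signatures are illustrative assumptions, not the authors' software.

```python
import numpy as np

def solve_sdp(states, controls, disturbances, probs, f, g, g_final, h):
    """Backward recursion for the Bellman equation (11) on discretized grids.

    states, controls, disturbances : 1-D numpy grids; probs holds the
    probabilities of the discretized disturbance values; f(t, x, u, e) is the
    transition (4); g(t, x, u, e) the aggregated step cost; g_final(x) the
    terminal penalty. Returns the cost-to-go tables H[t] and control laws m[t].
    """
    n = len(states)
    H = [np.zeros(n) for _ in range(h + 1)]
    m = [np.zeros(n) for _ in range(h)]
    H[h] = np.array([g_final(x) for x in states])
    for t in range(h - 1, -1, -1):                  # backward in time
        for ix, x in enumerate(states):
            best_cost, best_u = np.inf, controls[0]
            for u in controls:
                # expectation over the discretized disturbance, Eq. (11)
                cost = 0.0
                for e, p in zip(disturbances, probs):
                    x_next = f(t, x, u, e)
                    ix_next = int(np.argmin(np.abs(states - x_next)))  # nearest grid point
                    cost += p * (g(t, x, u, e) + H[t + 1][ix_next])
                if cost < best_cost:
                    best_cost, best_u = cost, u
            H[t][ix] = best_cost
            m[t][ix] = best_u                        # Eq. (12): the arg-min defines m_t(x)
    return H, m
```

The arrays m[t] correspond to the look-up-table form of the control law mentioned in the text.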

If the Bellman equation cannot be solved analytically, as is almost always the case except for the linear quadratic Gaussian setting, an approximate solution is derived through the discretization of the state, control and disturbance sets and the approximate evaluation of the optimal cost-to-go functions in a finite number of points. The resulting control law is given in the form of a look-up table in which each (discretized) state value $x_t$ is associated with the (discretized) optimal control value $u_t$.

In the infinite horizon case, which is usually the most suitable for water reservoir management, since the lifetime of the system under study is assumed to be infinite, the ith objective function ($i = 1, \ldots, L$) can be defined as

$J^i = \lim_{h\to\infty} \mathbb{E}_{\varepsilon_1,\ldots,\varepsilon_h}\left[\frac{1}{h}\sum_{t=0}^{h-1} g^i_t(x_t, u_t, \varepsilon_{t+1})\right]$,  (13)

thus expressing the expected cost per time unit for the ith sector. The solution strategy is the same as in the finite horizon case: the multi-objective OffCP is replaced by a set of single-objective OffCPs, and each of these is solved by means of SDP. To avoid divergence, which would occur if Eq. (11) were recursively applied over an infinite horizon, the solution algorithm foresees replacing $H_t(x_t)$ with the difference between $H_t(x_t)$ and the cost-to-go $H_t(\bar{x}_t)$ of a reference state $\bar{x}_t$ (the successive approximation algorithm [13,14]). Asymptotic convergence of the algorithm is guaranteed under suitable conditions [1]. Among these conditions is that the system be periodic of period T, i.e., that the state transition function (4), the disturbance pdf (6) and all the step-costs (3) be periodic functions. Then the optimal cost-to-go functions $H_t(\cdot)$ are also periodic of period T. It follows that the control laws are periodic functions, and thus the policy sought can be defined as the finite sequence of T control laws

$p = [m_0(\cdot), \ldots, m_{T-1}(\cdot)]$.

3.1. On-line approach

The idea of the on-line approach is as follows: the models of the uncontrolled catchments are eliminated and their outflows $a^k_{t+1}$, $k = 1, \ldots, M$, are included among the disturbances of the reduced water system model. This is possible because these subsystems are not influenced by the control $u_t$. By doing so, the number of components in the state vector (5) is reduced, since the components $y^k_{t-i}$ no longer appear. We denote the reduced state vector by $\tilde{x}_t$. At each time t, an on-line optimal control problem (OnCP) over a finite horizon $[t, t+h]$ is formulated and solved. For each time s in the finite horizon $[t, t+h]$, the pdf $\phi_s(\cdot)$ of the disturbance is provided by a dynamic predictor that uses all the information $I_t$ available at time t. This includes all the information that is significant for the prediction of the catchment outflows, e.g., precipitation and temperature measures. Once the on-line problem has been solved, only the first control, for the interval $[t, t+1)$, is actually applied, while at time $t+1$ a new problem is formulated over the horizon $[t+1, t+1+h]$ with pdfs for the disturbances based on $I_{t+1}$ (receding horizon principle). Note that the on-line updating of the outflow pdfs can be based on a model more sophisticated than model (2a). In most cases, in fact, the description of the uncontrolled catchment provided by model (2a) is a rough approximation, but it cannot be improved owing to the need to limit the state dimension in the off-line solution with SDP. The multi-objective OnCP can be formulated as a stochastic closed-loop control problem, usually known as a partial open-loop feedback control (POLFC) problem


$\min_{p}\, [J^1, J^2, \ldots, J^L]$  (14a)

with

$J^i = \mathbb{E}_{\varepsilon_{t+1},\ldots,\varepsilon_{t+h}}\left[\sum_{s=t}^{t+h-1} g^i_s(\tilde{x}_s, u_s, \varepsilon_{s+1}) + \tilde{g}^i_{t+h}(\tilde{x}_{t+h})\right]$,  (14b)

$\tilde{g}^i_{t+h}(\cdot)$ being a suitable penalty function over the final state, subject to

$\tilde{x}_{s+1} = \tilde{f}_s(\tilde{x}_s, u_s, \varepsilon_{s+1})$, $s = t, \ldots, t+h-1$,  (14c)

$\varepsilon_{s+1} \sim \phi_s(\cdot \mid I_t)$, $s = t, \ldots, t+h-1$,  (14d)

$\tilde{x}_t$ given,  (14e)

$u_s = m_s(\tilde{x}_s)$, $s = t, \ldots, t+h-1$,  (14f)

$p = [u_t, m_{t+1}(\cdot), \ldots, m_{t+h-1}(\cdot)]$.  (14g)
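The receding-horizon mechanics of problem (14) can be summarized by the loop below: at every step the predictor updates the inflow pdfs from the current information, a finite-horizon problem is solved, and only the first control is applied. The solver, the predictor and the scalar-storage reduced dynamics are hypothetical stand-ins; a terminal penalty (e.g., the off-line cost-to-go discussed in Section 3.2 below) is assumed to be handled inside the solver.

```python
def receding_horizon(x0, n_steps, h, solve_oncp, predictor, observe_inflow):
    """Sketch of the POLFC / receding-horizon scheme of problem (14).

    solve_oncp(t, x, pdfs, h) : returns a sequence of h controls minimizing the
        finite-horizon objective (14b); any terminal penalty is assumed to be
        handled inside this placeholder solver;
    predictor(t, info)        : returns the inflow pdfs for s = t+1, ..., t+h
        given the real-time information I_t;
    observe_inflow(t)         : the inflow actually occurring in [t, t+1).
    The storage update below is a toy reduced model, not the authors' code.
    """
    x, info, applied = x0, {}, []
    for t in range(n_steps):
        pdfs = predictor(t, info)            # update the disturbance pdfs with I_t, Eq. (14d)
        u_seq = solve_oncp(t, x, pdfs, h)    # solve the OnCP over [t, t+h]
        u_t = u_seq[0]                       # receding-horizon principle:
        applied.append(u_t)                  # only the first control is applied
        q = observe_inflow(t)                # the new measurement becomes part of I_{t+1}
        info = {"last_inflow": q}
        x = x + q - min(u_t, x + q)          # toy storage balance for the reduced state
    return applied
```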

The solution approach for the multi-objective OnCP is the same as detailed for the off-line case, namely the computation of several Pareto-efficient policies (over a finite horizon) through the resolution of a number of single-objective OnCPs. The final choice of the control to be implemented, among the efficient controls for the interval $[t, t+1)$, requires on-line negotiations among the interested parties.

3.2. Choice of the penalty function

The choice of the penalty functions $\tilde{g}^i_{t+h}(\cdot)$ that appear in the objectives (14b) is crucial, since they significantly influence the performance of the closed-loop system. However, the choice is rarely immediate. One possibility [3,10] is to proceed as follows. First, one solves an off-line infinite-horizon problem with the reduced model and a trivial predictor, i.e., with the a priori pdf as the description of the disturbance. This implies solving a number of single-objective OffCPs and then choosing, among the efficient policies so obtained, the best compromise one. The latter is the solution of a particular OffCP, uniquely defined by a given weight vector $[\lambda_1, \ldots, \lambda_L]$. If this problem has been solved by means of SDP, the optimal cost-to-go functions $H_k(\cdot)$ are also available for $k = 0, \ldots, T-1$. Since these functions express the future cost associated with each state value at each time instant, they can be used as penalty functions over the final state of the finite-horizon on-line problem. More precisely, the multi-objective OnCP (14a) can be replaced by the following single-objective OnCP

$\min_{p}\; \mathbb{E}_{\varepsilon_{t+1},\ldots,\varepsilon_{t+h}}\left[\sum_{s=t}^{t+h-1} \sum_{i=1}^{L} \lambda_i\, g^i_s(\tilde{x}_s, u_s, \varepsilon_{s+1}) + H_{t+h}(\tilde{x}_{t+h})\right]$  (15)

subject to (14c)–(14g). The limit of this approach is that, since the solution of the OffCP requires using SDP, it can be followed only if the state of the reduced model is not too large. Since the state reduction can involve only components that are not affected by the control (typically, the natural catchments), only networks composed of a few reservoirs can be managed with this approach, as in the case study discussed next.

4. Application to the daily management of Lake Maggiore

Lake Maggiore is a regulated lake located south of the Alps, between Italy and Switzerland. It is the most important water system of the sub-alpine area on account of its multiple and conflicting socio-economic uses (irrigation, hydropower generation, navigation, flooding reduction, etc.). At the end of 1999 an EU-Interreg project was funded with the purpose of exploring whether any planning alternative (i.e., a combination of structural and normative interventions and a release policy) exists that the parties can agree on to resolve, or at least mitigate, the existing conflict (see [11] for a detailed description of the project and its outcomes). Although the project actually ended with the identification of a set of reasonable alternatives, i.e., the alternatives gathering a large consensus among the parties, here we assume that the Italian and Swiss governments have agreed on the choice of the best compromise alternative from that set. More precisely, we assume that they have chosen alternative A34, which foresees the excavation of the lake outlet (with an increase of 600 m³/s in the outflow capacity), the modification of the regulation range (with its upper extreme set to 1.5 m all through the year) and, finally, the use of an efficient release policy $p^{A34}$, designed off-line for the new system configuration. In the policy design, (a) the objective of the OffCP was defined as a convex linear combination of flooding reduction around the lake and satisfaction of the downstream irrigation users; (b) it was assumed that the inflow $\varepsilon_{t+1}$ is generated by a purely random process (white noise) with an a priori given, periodic probability distribution. The OffCP was solved by means of SDP. Therefore, the T optimal cost-to-go functions $H^{A34}_k(\cdot)$, $k = 0, \ldots, T-1$, corresponding to A34 have been computed. At each time t, the optimal control according to policy $p^{A34}$ can thus be computed as

$u^{A34}_t(x_t) = \arg\min_{u_t} \mathbb{E}_{\varepsilon_{t+1}}\left[g_t(x_t, u_t, \varepsilon_{t+1}) + H^{A34}_{t+1}(x_{t+1})\right]$,  (16)

where $H^{A34}_{t+1}(\cdot)$ is the optimal cost-to-go function for time $t+1$.


Since none of the structural interventions prescribed by A34 has yet been realized, the behaviour of the system under this alternative can be studied only through simulation. A deterministic simulation of the system was run, based on the historical time series of the inflow, thus allowing for a comparison with the historical regulation (alternative zero). In the following, this comparison is developed with reference to a significant flood event.

Fig. 1. The flood event of autumn 2000. (a) Lake inflow, historical release and releases generated by A34 and by POLFC schemes fed by persistent predictors with different prediction horizons (h = 1, 2, 4). (b) Historical level and levels that would have been obtained with A34 and with a POLFC scheme fed by the above-mentioned predictors.


4.1. Performance of the off-line policy

In autumn 2000, three successive flood waves occurred (dot-dashed line in Fig. 1a) which, under the historical regulation, produced two flooding events in correspondence with the second and third inflow peaks (dotted line in Fig. 1b). The level and release trajectories that would have been obtained with A34 are reported in Fig. 1a and b (continuous black line). Notice how the second flooding event (B) would have been completely avoided and the third (C) significantly reduced.

Fig. 2. Same as Fig. 1 but with a perfect predictor.


The improvement is due to the effect of both the excavation and a more efficient regulation. The former allows for a larger release for the same lake level, and thereby a faster level decrease. Consider, for example, what occurs on 22nd September (point D in the figure): the historical level coincides with the one produced by A34; however, the historical release is 153 m³/s (the maximum releasable at that lake level), while the release with A34 is 709 m³/s. It is this higher release that allows A34 to significantly reduce the lake level in the following days. As far as the regulation is concerned, it should be noted that policy $p^{A34}$ tries to maintain the reservoir level around zero for as long as possible in the month of September (indeed, this is the level value before the flood event), because this is the level with the minimum expected costs in this season. This explains why, after 22nd September, the maximum possible volumes are released by keeping the dam gates completely open, although the reservoir level is within the regulation range. With the historical regulation, instead, after 22nd September the release is kept constant in spite of the increase in the lake level. Finally, let us observe what happens at the end of the flood event. In the month of October policy $p^{A34}$ allows for a gradual increase in the lake level because in this period the flood probability decreases (due to the rise of the thermal zero elevation) and consequently the interest in storing water for the spring irrigation season prevails. That is the reason why both the historical regulation and policy $p^{A34}$ tend to fill the lake in this period. However, the full capacity is reached on 5th November with the historical regulation, while it is reached on 9th November with $p^{A34}$, which is more cautious (both events are off the figure).

4.2. Performance of the on-line policy

Since in the hydrometeorological sub-Alpine regime of the Lake Maggiore area the autumn is characterized by heavy rain, and therefore by floods, the performance of policy $p^{A34}$ might be improved by using the on-line POLFC scheme combined with an inflow predictor fed by precipitation measures. This means that, at every time t, a finite-horizon problem of the form (15) has to be solved. The optimal cost-to-go functions $H^{A34}_k$, obtained by solving the OffCP with SDP, can be used as the penalty associated with the final state of the receding horizon. As for the inflow pdfs over the receding horizon, they can be updated at each time t according to the statistics provided by the inflow predictor. In the following, we briefly analyze the behaviour of the system under the POLFC scheme as different inflow predictors are used and as the length h of the receding horizon varies.

Fig. 3. An enlargement of the first two flooding events reported in Fig. 2b.


4.2.1. Preliminary results

The simplest h-step-ahead predictor is the persistent predictor: it assumes that the expected value $\hat{\varepsilon}_{t+s|t}$ of the inflow in the interval $[t+s-1, t+s)$, conditional on the information available at time t, equals the inflow value $\varepsilon_t$ recorded in the last time interval $[t-1, t)$, i.e.,

$\hat{\varepsilon}_{t+s|t} = \varepsilon_t$, $s = 1, \ldots, h$.  (17)

In other words, it is assumed that the inflow value does not change through time, so the last inflow recorded at instant t is the only information needed to predict the next h inflows. We shall also assume that the conditional variance of $\varepsilon_{t+s}$ is zero.

Fig. 4. The flood event of summer 1987. (a) Inflow, historical release and releases generated by A34 and by a POLFC scheme fed by a real predictor with h = 1. (b) Historical level and levels that would have been obtained with A34 and with a POLFC scheme fed by a real predictor with h = 1.


The level trajectories obtained with the on-line policy when the persistent predictor is used, for different prediction horizons (h = 1, 2, 4), are reported in Fig. 1b (lines in different tones of grey). For h = 1, the trajectory practically overlaps that of A34, while for higher values of h the peak levels (A) and (B) are higher than those produced by A34: the longer the prediction horizon h, the higher the level. It is thus evident that it is dangerous to use the POLFC scheme when the predictor is not a good one, and it is therefore worth looking for a better predictor. Intuitively, the best h-step-ahead inflow predictor is the perfect predictor: for every instant $t+s$ between $t+1$ and $t+h$, it supplies the value $\varepsilon_{t+s}$ that will actually occur, i.e.,

$\hat{\varepsilon}_{t+s|t} = \varepsilon_{t+s}$, $s = 1, \ldots, h$.  (18)
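Both predictors of Eqs. (17) and (18) can be written in a few lines; in the hedged sketch below, the perfect predictor simply reads the future values of a recorded inflow series, which is why it is only realizable in hindsight (e.g., in simulation over a historical series).

```python
def persistent_predictor(eps_last, h):
    """Persistent predictor, Eq. (17): the last recorded inflow is the expected
    value for each of the next h intervals (conditional variance assumed zero)."""
    return [eps_last] * h


def perfect_predictor(inflow_series, t, h):
    """Perfect predictor, Eq. (18): returns the inflows eps_{t+1}, ..., eps_{t+h}
    that will actually occur, read from a recorded (historical) series."""
    return [inflow_series[t + s] for s in range(1, h + 1)]
```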

Clearly, such a predictor is not realizable in practice. However, the 1-step-ahead perfect predictor is the best predictor one can imagine adopting in a POLFC scheme, and therefore we may expect that with an h-step-ahead perfect predictor the performance of the on-line policy is not too far from the upper bound of the performance reachable with any other h-step-ahead predictor;³ thus, it allows one to estimate the maximum performance obtainable with an on-line policy. The trajectories produced with the perfect predictor over different prediction horizons (h = 1, 2, 4) are reported in Fig. 2. There is now an improvement compared with A34; the longer the horizon, the more marked it is. Observing in particular the first event (A) (see Fig. 3), one notes that the anticipation of the reservoir's spilling in order to buffer the first inflow peak corresponds precisely to the prediction horizon used: for example, with the 4-step-ahead predictor (lighter grey line) the policy begins to release more than the other policies on 15th September (4 days before the increase of the inflow), and this allows it to reduce the peak level on the 21st by 0.40 m compared with A34. With all the predictors, the improvement is marked on the first peak, more contained on the second (0.06 m with a 4-step-ahead predictor and 0.03 m with a 1-step-ahead one) and zero on the third. This is easily explained by observing that from 22nd September the lake is in free regime with all the policies (also with the POLFC, which is driven by the penalty that it inherits from A34) and, as a consequence of the asymptotic stability of the system, all the trajectories tend to overlap as time goes on. Of course, they do not overlap the historical trajectory, because the historical system was not excavated and thus the behaviours of the two systems in free regime are different. Finally, let us observe what happens at the end of the flood event. Remember that both A34 and the historical regulation tend to fill the reservoir: the historical regulation reaches the full capacity on 5th November and A34 on 9th November. The on-line policies, which inherit the same tendency through the penalty, anticipate the filling because the further they see ahead, the sooner they are able to establish that there is no more risk of floods.

4.2.2. POLFC scheme and inflow prediction based on precipitation measures

As we have already emphasized, a perfect predictor is not realizable in practice: the performance obtained with it is therefore only useful to give an idea of the upper bound of the performance that one can expect from the use of an on-line policy. With a real predictor, the improvement with respect to policy A34 will plausibly be less marked. And in fact, with the best predictor that we were able to create (1-step-ahead⁴), we gained only 0.02 m on peak B. From this analysis we can conclude that the improvement of the off-line policy A34 by an on-line policy is, all in all, modest. The reason lies in the fact that the period considered (autumn) is the usual flood season and thus the off-line policy already takes due account of this. The advantage of the POLFC becomes more significant when an unexpected event occurs. In 1987, for example, a flood surprisingly took place in the month of July (a unique case in the hydrological series). Fig. 4 shows that with a 1-step-ahead real predictor the peak reduction compared with A34 is 0.07 m, against 0.08 m with the 1-step-ahead perfect predictor.
The reduction may appear very modest, but notice that a reduction of 0.01 m corresponds to a reduction of 1 ha in the flooded area!

5. Conclusions and remarks

An on-line approach to policy design for water reservoir networks has been presented in this paper. It relies on the use of real-time information to adapt and improve SDP-based policies designed off-line with a priori information. The potential of the approach has been explored through a real-world case study. The results indicate that the improvement over the off-line policy is, all in all, modest when floods occur in the usual flood season (i.e., when they are a priori expected to occur), while it becomes significant with unexpected events. Our research reveals that an increase in the prediction horizon produces an increase in performance, but the increase is less than proportional to the lengthening of the prediction horizon. On the other hand, the identification of a good inflow predictor entails costs, and these increase as the prediction horizon increases. In the choice of the predictor it is thus necessary to find a trade-off between such costs and the resulting improvement in performance. In this connection, the upper bound estimated with the perfect predictor provides an important piece of information.

³ Notice that a POLFC with an h-step-ahead perfect predictor may not always produce the best performance, because it assumes that the inflow probability distribution from time $t+h+1$ onwards coincides with the a priori one (remember that it is driven by the penalty). As a consequence, when at time $t+h+1$ the actual inflow is greater than the one expected a priori, an h-step-ahead predictor that at time $t+h$ predicts an inflow greater than the one provided by the perfect predictor can produce a better performance.

⁴ Notice that the concentration time of the catchment is less than 24 h.


Acknowledgements

The authors would like to thank Daniele de Rigo, Luca Tepsich and Enrico Weber for the essential contribution to the development of the software used to perform the calculations in the paper, which is now part of the DSS TwoLe (see ). This manuscript was completed while the first author was on leave at the Centre for Water Research, University of Western Australia, so he is grateful to Professor Jorg Imberger, the Director, for the invitation and for the financial support provided through a Gledden Visiting Senior Fellowship.

References

[1] D.P. Bertsekas, Dynamic Programming and Stochastic Control, Academic Press, New York, 1976.
[2] A. Castelletti, F. Pianosi, R. Soncini-Sessa, Integration, participation and optimal control in water resources planning and management, Applied Mathematics and Computation (2007), doi:10.1016/j.amc.2007.09.069.
[3] A. Castelletti, F. Pianosi, R. Soncini-Sessa, Water reservoir control under economic, social and environmental constraints, Automatica 44 (2008) 1595–1607, doi:10.1016/j.automatica.2008.03.003.
[4] A. Castelletti, R. Soncini-Sessa, A procedural approach to strengthening integration and participation in water resource planning, Environmental Modelling & Software 21 (10) (2006) 1455–1470.
[5] J. Casti, Reality Rules I&II. Picturing the World in Mathematics: The Fundamentals, the Frontier, Wiley, Chichester, 1997.
[6] M. Falkenmark, J. Rockstrom, The new blue and green water paradigm: breaking new ground for water resources planning and management, Journal of Water Resources Planning and Management 132 (3) (2006) 129–132.
[7] GWP – Global Water Partnership, Integrated water resources management, TAC Background Paper 4, GWP Secretariat, Stokholm, S, 2000.
[8] B.A. Lankford, D. Merrey, J. Cour, N. Hepworth, From integrated to expedient: an adaptive framework for river basin management in developing countries, Research Report, International Water Management Institute, vol. 10, 2005.
[9] D.J. Merrey, P. Drechsel, F.W.T. Penning de Vries, H. Sally, Integrating 'livelihoods' into integrated water resources management: taking the integration paradigm to its logical next step for developing countries, Regional Environmental Change 5 (2005) 197–204.
[10] A. Nardini, C. Piccardi, R. Soncini-Sessa, A decomposition approach to suboptimal control of discrete-time systems, Optimal Control Applications and Methods 15 (1) (1994) 1–12.
[11] R. Soncini-Sessa, A. Castelletti, E. Weber, Integrated and Participatory Water Resources Management. Theory, Elsevier, Amsterdam, The Netherlands, 2007.
[12] R. Soncini-Sessa, F. Cellina, F. Pianosi, E. Weber, Integrated and Participatory Water Resources Management. Practice, Elsevier, Amsterdam, The Netherlands, 2007.
[13] Y.S. Su, R.A. Deininger, Generalization of White's method of successive approximations, Operations Research 20 (2) (1972) 318–326.
[14] D.J. White, Dynamic programming, Markov chains, and the method of successive approximations, Journal of Mathematical Analysis and Applications 6 (1963) 373–376.