Dynamic optimization with state variable path constraints

Computers Chem. Engng Vol. 22, No. 9, pp. 1241–1256, 1998. Copyright © 1998 Elsevier Science Ltd. All rights reserved. Printed in Great Britain. PII: S0098-1354(98)00012-X. 0098-1354/98 $19.00+0.00

William F. Feehery and Paul I. Barton*
Department of Chemical Engineering, Massachusetts Institute of Technology, Cambridge, MA, USA
(Received 17 December 1997)

Abstract

A new method for solving dynamic optimization problems that contain path constraints on the state variables is described. We establish the equivalence between the inequality path-constrained dynamic optimization problem and a hybrid discrete/continuous dynamic optimization problem that contains switching phenomena. The control parameterization method for solving dynamic optimization problems, which transforms the dynamic optimization problem into a finite-dimensional nonlinear program (NLP), is combined with an algorithm for constrained dynamic simulation so that any admissible combination of the control parameters produces an initial value problem that is feasible with respect to the path constraints. We show that the dynamic model, which is in general described by a system of differential-algebraic equations (DAEs), can become high-index during the state-constrained portions of the trajectory. During these constrained portions of the trajectory, a subset of the control variables is allowed to be determined by the solution of the high-index DAE. The algorithm proceeds by detecting activation and deactivation of the constraints during the solution of the initial value problem, and solving the resulting high-index DAEs and their related sensitivity systems using the method of dummy derivatives. This method is applicable to a large class of dynamic optimization problems. © 1998 Elsevier Science Ltd. All rights reserved

Keywords: path constraints; state variable; differential-algebraic equations

1. Introduction

Constrained dynamic optimization problems, also called constrained optimal control problems, are of interest in many areas of engineering. Some examples are determination of optimal operating policies for chemical plants subject to safety, operational, and environmental constraints, determination of the control policy for an industrial mechanical robot, and solution of the minimum time-to-climb problem for an aircraft that is required to stay within a specified flight envelope.

* Corresponding author. E-mail: [email protected].
¹ This research was supported by the United States Department of Energy under grant DE-FG02-94ER14447.

1.1. Problem statement

The constrained dynamic optimization problem of interest in this paper is to optimize a cost functional

min_{u(t), t_f} Φ = φ(x(t_f), y(t_f), t_f) + ∫_{t_0}^{t_f} L(ẋ, x, y, u, t) dt   (1)

subject to

f(ẋ, x, y, u, t) = 0,   (2)

g(x, y) ≤ 0,   (3)

k_p(ẋ(t_p), x(t_p), y(t_p), u(t_p), t_p) ≤ 0   ∀p ∈ {0, …, n_p},   (4)

x = x(t) ∈ R^{m_x},  u = u(t) ∈ R^{m_u},  y = y(t) ∈ R^{m_y},  t ∈ T ≡ [t_0, t_f],

f: R^{m_x} × R^{m_x} × R^{m_y} × R^{m_u} × R → R^{n_f},
g: R^{m_x} × R^{m_x} × R^{m_y} × R^{m_u} × R → R^{n_g},
k_p: R^{m_x} × R^{m_x} × R^{m_y} × R^{m_u} × R → R^{n_{k_p}},
n_f = m_x + m_y.

In this formulation, x are differential state variables and y are algebraic state variables. Equation (2) defines a general system of differential-algebraic equations (DAE), equation (3) a set of path inequality constraints on the state variables, and equation (4)


a set of point constraints on the state variables at times t_p (including the initial and final times). For clarity of exposition, we assume that the DAE (2) has a differential index (defined in Section 2) of at most one. The methods described in this paper are easily extended to a more general formulation of this dynamic optimization problem that includes equality path constraints, which are treated as additional equations in the DAE (possibly raising the index) (Feehery et al., 1995), time-invariant parameters in the DAE, path constraints that are also functions of the control variables, and constraints on the control variables.

1.2. Previous work

There is an extensive literature on numerical methods for the solution of state-constrained dynamic optimization problems, which fall into three classes. The dynamic programming method was described in Bellman (1957) and the approach was extended to include constraints on the state and control variables in Luus (1993). Indirect methods focus on obtaining a solution to the classical necessary conditions for optimality, which take the form of a two-point boundary value problem. There are many examples of such methods (for example, Bryson and Ho, 1975; Denham and Bryson, 1964; Dixon and Bartholomew-Biggs, 1981; Dreyfus, 1962; Kelley, 1964; Miele et al., 1982; Pesch, 1989). Finally, direct methods transform the infinite-dimensional dynamic optimization problem into a finite-dimensional mathematical programming problem. There are two general strategies within the framework of the direct method. In the sequential, or control parameterization, method the control variables are discretized over finite elements using polynomials (Brusch and Schapelle, 1973; Petzold et al., 1996; Vassiliadis et al., 1994; Vassiliadis, 1993). The coefficients of the polynomials and the size of the finite elements then become decision variables in a master nonlinear program (NLP).
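As an illustration of the sequential strategy, the sketch below parameterizes a scalar control as piecewise constants and evaluates the cost functional by solving the IVP. Everything here is a toy assumption: the problem (minimize ∫(x² + u²) dt with ẋ = u), the forward-Euler integrator, and the use of SciPy's SLSQP as the master NLP solver are illustrative choices, not the authors' implementation.

```python
import numpy as np
from scipy.optimize import minimize

# Toy problem: min integral of (x^2 + u^2) dt, xdot = u, x(0) = 1, t in [0, 1].
# Control parameterization: u is piecewise constant on N finite elements.
N, M = 10, 50          # control elements, Euler steps per element
x0, t_f = 1.0, 1.0
h = t_f / (N * M)

def objective(p):
    """Evaluate the cost functional by solving the IVP (forward Euler)."""
    x, cost = x0, 0.0
    for u in p:                     # one NLP parameter per finite element
        for _ in range(M):
            cost += (x * x + u * u) * h
            x += h * u
    return cost

# The master NLP: decision variables are only the control parameters.
res = minimize(objective, np.zeros(N), method="SLSQP")
assert res.success
assert res.fun < objective(np.zeros(N))   # better than doing nothing
```

Note that the NLP has only N decision variables regardless of the state dimension, which is the size advantage of the sequential strategy discussed above.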
Function evaluation is carried out by solution of an initial value problem (IVP), and gradients may be evaluated by solving either the adjoint equations (see, for example, Sargent and Sullivan, 1977) or the sensitivity equations (see, for example, Vassiliadis, 1993). Sensitivity equations are preferred because of the efficiency with which they may now be calculated for large DAEs (Feehery et al., 1997; Maly and Petzold, 1996; Rosen and Luus, 1991). In the simultaneous strategy both the controls and the state variables are discretized using polynomials on finite elements, and the coefficients and element sizes become decision variables in a much larger NLP (for example, see Bartholomew-Biggs, 1995; Gill et al., 1994; Kraft, 1994; Tanartkit and Biegler, 1995; Yen and Nagurka, 1992). Unlike control parameterization, the simultaneous method does not require the solution of IVPs at every iteration of the NLP. The advantage of the simultaneous strategy is the relative ease with which state variable path con-

straints may be handled by including them directly in the optimization problem as a set of point constraints. However, the theoretical properties of the discretization employed in the simultaneous strategy are invalid for nonlinear DAEs which have index > 2 (at least locally) (Logsdon and Biegler, 1989), which can easily occur with inequality path-constrained problems. Furthermore, the simultaneous strategy causes the NLP to grow explosively with the size of the DAE and the time horizon of interest, and solution of such large NLPs requires special techniques and careful attention to determining an initial guess for the optimization parameters that leads to convergence of the NLP algorithm. On the other hand, the NLPs that result from control parameterization are much smaller because the master NLP includes only the parameters that discretize the control variables; the size of the master NLP is therefore a function of the discretization of the controls, and not of the size of the DAE. Furthermore, control parameterization can take advantage of the highly refined heuristics available in state-of-the-art DAE integrators. The main disadvantage of control parameterization to date has been the difficulties that arise when path constraints on state variables are included in the problem, because the state variables are not directly included in the NLP.

In recent years, growing attention has been paid to dynamic systems which exhibit coupled discrete and continuous behavior, known as hybrid systems. Within this framework, discrete and continuous subsystems interact at isolated points in time known as events. The impact of an event on a continuous subsystem may take the form of switches or jumps (Park and Barton, 1997). Switches refer to discrete changes in the functional form of the DAEs and/or control variable trajectories, and, loosely speaking, jumps refer to discontinuous changes in the differential state trajectories.
This paper establishes the equivalence between an inequality path-constrained dynamic optimization problem and a hybrid dynamic optimization problem that contains switching phenomena. Within this context, inequality path-constrained problems can be formulated as a restricted class of hybrid dynamic optimization problems. This insight is used to develop a control parameterization algorithm in which path constraints are handled directly by the IVP solver. As a consequence, we are able to handle efficiently a broad class of state-constrained dynamic optimization problems within a control parameterization framework, and, more generally, develop a numerical strategy for a class of hybrid discrete/continuous dynamic optimization problems. Section 2 establishes the equivalence between the inequality path-constrained dynamic optimization problem and a hybrid dynamic optimization problem containing switching phenomena. In particular, it is shown that this yields a problem in which the differential index may fluctuate along the solution trajectory. Section 3 presents a review of methods for handling state variable path constraints within control

parameterization, and presents an overview of the algorithm that we propose. The following several sections describe details of this algorithm. Our implementation of the algorithm is described in Section 7. A numerical example, the constrained brachistochrone problem, is presented in Section 8. Conclusions are presented in Section 9.

2. Path constraints, high-index DAEs, and the hybrid dynamic optimization problem

In order to illustrate the hybrid discrete/continuous formulation for path-constrained dynamic optimization, consider the set of feasible trajectories between two points in state space, subject to inequality constraints on the path between these points. Two such trajectories are illustrated for a two-dimensional state space in Fig. 1. Each feasible trajectory connecting the two points is composed of a series of constrained and unconstrained segments, where a constrained segment tracks one or more active path constraints. Trajectories are characterized by a sequence of constraint activations and deactivations at boundaries between constrained and unconstrained segments. A trajectory may be discontinuous at such points (often referred to as corners in the optimal control literature (Bryson and Ho, 1975)). When an unconstrained trajectory intersects one of the path constraints, feasibility may be enforced by augmenting the DAE (2) describing the unconstrained dynamics with the set of active constraints:

g_j(x, y) = 0,   (5)

where g_j is the set of all locally active constraints in equation (3). Each equation in g_j takes up one degree of freedom in the augmented DAE, and therefore for every g_i a control is released and permitted to be


implicitly determined by the active inequality. However, an IVP in which the degrees of freedom are fluctuating along the solution trajectory is problematic from the point of view of analysis and solution. In order to avoid these problems with fluctuating degrees of freedom, we reformulate the path-constrained problem in a higher-dimensional space by relabeling the controls u in equations (2)–(4) as u_c, and introducing a new set of variables u_s ∈ R^{m_u}. For example, consider a problem with a single control and a single path constraint. In unconstrained portions of the trajectory (i.e., when g(x, y) < 0), the dynamics of the system are described by

f(ẋ, x, y, u_s, t) = 0,   (6)
u_s = u_c,   (7)
u_c = u_c(t),   (8)

and the constrained portions (i.e., when g(x, y) = 0) as

f(ẋ, x, y, u_s, t) = 0,   (9)
g(x, y) = 0,   (10)
u_c = u_c(t).   (11)

Note that equation (7) has been replaced with equation (10), but the degrees of freedom for the overall dynamic system remain unchanged. Thus, during unconstrained portions of the trajectory the control that actually influences the state (u_s) is equivalent to the forcing function for the system (u_c), whereas in constrained portions of the trajectory u_s is determined implicitly by the active path constraint (10) and is unrelated to u_c. In a control parameterization context, u_c would be prescribed by the control parameters over the entire time horizon, but during constrained portions of the trajectory it would not influence the solution trajectory. Equations (6)–(11) correspond to a hybrid dynamic system that experiences autonomous switching in response to state (or implicit) events (Park and Barton, 1997). This behavior is illustrated in Fig. 2 using the finite automaton representation introduced in Barton and Pantelides (1994). Further, as we show below, the differential index of the constrained problem will in general differ from that of the unconstrained problem.

Fig. 1. Feasible trajectories in constrained state space.

Given these preliminaries, it is now evident that a path-constrained dynamic optimization with a single control and a single inequality is equivalent to the following hybrid discrete/continuous dynamic optimization problem:

min_{u_c(t), t_f} Φ = φ(x(t_f), y(t_f), t_f) + ∫_{t_0}^{t_f} L(ẋ, x, y, u_s, t) dt   (12)

subject to

f(ẋ, x, y, u_s, t) = 0,   (13)

u_s = u_c   ∀t ∈ T : g(x, y) < 0,
g(x, y) = 0   ∀t ∈ T : g(x, y) = 0.   (14)
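A minimal sketch of the switching structure in (6)–(14), under toy assumptions that are not from the paper: a single state with ẋ = u_s, forcing function u_c(t) = cos t, and path constraint g(x) = x − 0.5 ≤ 0. On the constrained segment, g = 0 forces u_s = 0, and the constraint is assumed to deactivate when u_c turns negative (points back into the interior); SciPy's event detection stands in for the state-event location discussed later in the paper.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Toy instance of (6)-(11): xdot = u_s, path constraint g(x) = x - 0.5 <= 0,
# forcing function u_c(t) = cos(t).  In the constrained mode, g = 0 implies
# u_s = 0; deactivation is assumed to occur once u_c(t) becomes negative.
u_c = np.cos

def unconstrained(t, x):        # mode with u_s = u_c  (eqs (6)-(8))
    return [u_c(t)]

hit = lambda t, x: x[0] - 0.5   # state event: g(x) = 0
hit.terminal, hit.direction = True, 1

# Integrate until the constraint activates (at t = pi/6, where sin t = 0.5).
seg1 = solve_ivp(unconstrained, (0.0, 3.0), [0.0], events=hit, max_step=0.01)
t_act = seg1.t_events[0][0]

# Constrained segment: x stays on g = 0 while u_c(t) >= 0, i.e. until t = pi/2.
t_deact = np.pi / 2
# Unconstrained again from (t_deact, x = 0.5).
seg2 = solve_ivp(unconstrained, (t_deact, 3.0), [0.5], max_step=0.01)

assert abs(seg1.y[0, -1] - 0.5) < 1e-5   # activation occurs on g = 0
assert seg2.y[0, -1] < 0.5               # trajectory re-enters the interior
```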

The extension to multiple controls and inequalities is evident, provided each inequality is matched with a unique control. Note that this hybrid dynamic optimization problem exhibits the following properties:

• Autonomous switching defined by implicit events.
• The number and order of implicit events at the optimum is unknown a priori.
• In general, the index of the system fluctuates at the events.

Each of these properties has its dual in the classical analysis of the original path-constrained formulation (Bryson and Ho, 1975). In order to show that the dynamic optimization problem may become high index whenever there are active inequalities, we must first present some theorems and definitions. The following definition was presented in Brenan et al. (1996):

Definition 2.1 (Differential index of a DAE). The minimum number of times that all or part of a DAE must be differentiated with respect to time in order to determine ẋ(t) and ẏ(t) as continuous functions of x(t), y(t), u(t) and t is the differential index i of a DAE.

By this definition, ordinary differential equations (ODEs) have i = 0. The term high-index DAE is commonly used to refer to systems with i ≥ 2. Standard numerical integration codes experience difficulty when applied to systems with i ≥ 2, and only limited classes of high-index problems may be solved directly at present.

Definition 2.2 (Structurally singular DAEs). A DAE is structurally singular if the Jacobian of the DAE with respect to the highest-order time derivatives (e.g. ẋ and y in equation (2)) is structurally singular.

In Pantelides (1988), Pantelides' algorithm is presented for obtaining a consistent initial condition for structurally singular, but consistent, DAEs. The algorithm starts with the structurally singular DAE and constructs a series of underlying lower-index DAEs by replacing structurally singular equation subsets with the corresponding differentiated subset at each iteration. The algorithm terminates when the Jacobian of the current underlying DAE is not structurally singular. Consistent initial conditions are defined by augmenting the original DAE with all the differentiated equations identified by the algorithm.

Definition 2.3 (Structural index of a DAE). The structural index i_s of a DAE is the maximum number of times that any subset of equations in the DAE is differentiated using Pantelides' algorithm.

Fig. 2. Autonomous switching of constrained dynamic simulation.

Theorem 2.1. The index and the structural index of a DAE are related by i ≥ i_s.

Proof: Consider the modification to Pantelides' algorithm described in Feehery and Barton (1996), in which the variables a and equations

a = ẋ   (15)

are appended to the underlying DAE as necessary at each step of the algorithm so that time derivatives of order greater than one do not appear in the underlying DAE obtained through the next step. Since the new variable a is uniquely determined by equation (15), the augmented system has the same index as the original DAE. Define ẑ_{i_s} as the set of highest-order time derivatives in the final underlying DAE obtained with this modified Pantelides' algorithm, f^{(i_s)}. Define Ĵ_{i_s} as the Jacobian of f^{(i_s)} with respect to ẑ_{i_s}. There are four cases:

i = i_s: If Ĵ_{i_s} is nonsingular and ẑ_{i_s} does not contain any algebraic variables, then f^{(i_s)} has index 0 and, according to the implicit function theorem, all time derivatives are uniquely determined given the state variables.

i = i_s + 1: If Ĵ_{i_s} is nonsingular and ẑ_{i_s} does contain some algebraic variables, then all time derivatives are not uniquely determined by this final underlying DAE. Since by assumption

Ĵ_{i_s} = [ ∂f^{(i_s)}/∂ẋ   ∂f^{(i_s)}/∂y ]   (16)

is nonsingular, where ẋ and y are, respectively, the time derivatives and algebraic variables in ẑ_{i_s}, one further differentiation of f^{(i_s)} produces

f^{(i_s+1)} = (∂f^{(i_s)}/∂ẋ) ȧ + (∂f^{(i_s)}/∂x) ẋ + (∂f^{(i_s)}/∂y) ẏ + (∂f^{(i_s)}/∂û) û̇ + ∂f^{(i_s)}/∂t = 0,   (17)

ȧ = ẍ,   (18)

where û = {u, du/dt, …, d^{(i_s)}u/dt^{(i_s)}}. Since Ĵ_{i_s} is nonsingular, the time derivatives ȧ and ẏ are uniquely determined by f^{(i_s+1)}.
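Structural (non)singularity in the sense of Definition 2.2 is a purely combinatorial property: the Jacobian sparsity pattern must admit a perfect matching between equations and highest-order derivatives. The sketch below checks this with a naive augmenting-path matching; it is not Pantelides' algorithm itself, and the function names are invented.

```python
def structurally_nonsingular(pattern):
    """pattern[i] = set of variable indices appearing in equation i's row of
    the Jacobian w.r.t. the highest-order time derivatives.  The Jacobian is
    structurally nonsingular iff a perfect equation-variable matching exists."""
    match = {}                      # variable index -> equation index

    def augment(eq, seen):
        # Try to assign equation eq a variable, reassigning others if needed.
        for v in pattern[eq]:
            if v in seen:
                continue
            seen.add(v)
            if v not in match or augment(match[v], seen):
                match[v] = eq
                return True
        return False

    return all(augment(eq, set()) for eq in range(len(pattern)))

# Pendulum in Cartesian coordinates: highest-order unknowns (vx', vy', lambda).
# The two dynamic equations involve them; the length-constraint row involves
# none of them, so the system is structurally singular (high index).
pendulum = [{0, 2}, {1, 2}, set()]
assert not structurally_nonsingular(pendulum)
assert structurally_nonsingular([{0}, {0, 1}, {1, 2}])
```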

i > i_s: If Ĵ_{i_s} is singular, then by the properties of Pantelides' algorithm it must be numerically but not structurally singular. Since the general DAE is nonlinear, no information about whether ẑ_{i_s} is uniquely determined or not is conveyed by numerical singularity of Ĵ_{i_s}. It is possible that additional differentiations must be performed to uniquely determine ẑ_{i_s}, and therefore the differential index may be larger than the structural index.

i < i_s: Since Pantelides' algorithm terminates with the first underlying DAE that possesses a structurally nonsingular Ĵ_{i_s}, and structural nonsingularity of Ĵ_{i_s} is a necessary condition for f^{(i_s)} to uniquely define the highest-order time derivatives ẑ_{i_s}, it is not possible for any previous f^{(i_s − j)}, j = 1, …, i_s, to uniquely determine ẑ_{i_s − j}. Therefore, the structural index is a lower bound on the differential index. □

Definition 2.4 (Structurally well-behaved DAEs). A structurally well-behaved DAE is one for which Ĵ_{i_s} is nonsingular for the entire time horizon of interest.

Note that for structurally well-behaved DAEs, i = i_s or i = i_s + 1. Although the structural index is a global property of DAEs, the Jacobian can be numerically singular during some or all of the time domain, and thus the differential index is only a local property. The algorithms described in this paper apply to structurally well-behaved DAEs. We limit our focus to this class of DAEs because of the practical difficulties in detecting numerical singularity of matrices, and the theoretical difficulties of using numerical singularity to determine the true differential index of the DAE. Furthermore, almost all problems of practical interest are structurally well-behaved. For the remainder of this paper, the DAEs are assumed to be structurally well-behaved.

In general, the strongest statement that can be made is that appending equation (5) to an index-one DAE may result in a high-index DAE. There are cases in which the augmented DAE also has i = 1. For example, consider the DAE and inequality:

ẋ + y + u = 0,   (19)
y − u − x = 0,   (20)
2x + y − 5 ≥ 0,   (21)

where x and y are state variables and u is a control variable; this system is not high-index when the inequality (21) is active. However, there are some important classes of DAE for which appending state path constraints to the DAE in the manner described above will result in high-index DAEs.
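The claim about (19)–(21) can be checked numerically: with the inequality active and u released as an algebraic state, the Jacobian of the augmented system with respect to the leading unknowns (ẋ, y, u) is nonsingular, so these quantities are determined pointwise and the system is not high-index. A sketch:

```python
import numpy as np

# Augmented DAE (19)-(21) with the inequality active:
#   xdot + y + u = 0,   y - u - x = 0,   2x + y - 5 = 0.
# Leading unknowns: (xdot, y, u); x enters only through its derivative,
# and u has become a state determined by the active constraint.
J = np.array([
    [1.0, 1.0,  1.0],   # d/d(xdot), d/dy, d/du of eq (19)
    [0.0, 1.0, -1.0],   # eq (20)
    [0.0, 1.0,  0.0],   # eq (21)
])
# Nonsingular Jacobian: xdot, y, u are all determined without further
# differentiation, consistent with the text's claim of a non-high-index system.
assert abs(np.linalg.det(J)) > 1e-12
```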


Theorem 2.2. Appending n_g ≤ n_u state constraints of the form

g(x) = 0,   (22)

where there is some s ⊆ x, s ∈ R^{n_g}, such that (∂g/∂s) is nonsingular, to an explicit ODE

ẋ − φ(x, u, t) = 0   (23)

yields a DAE with i ≥ 2, assuming the DAE is solvable.

Proof: Appending the path constraints requires n_g controls to become algebraic state variables, denoted by y ⊆ u in the augmented DAE. Define s̄ = x∖s and ū = u∖y. A suitable partitioning of equation (23) yields

s̄̇ − φ₁(s, s̄, y, ū, t) = 0,   (24)
ṡ − φ₂(s, s̄, y, ū, t) = 0.   (25)

By assumption, s is uniquely determined by equation (22), so it is not available to determine ẋ and ẏ. Differentiate (24), (25) and (22) with respect to time to yield

s̄̈ − (∂φ₁/∂s) ṡ − (∂φ₁/∂s̄) s̄̇ − (∂φ₁/∂y) ẏ − (∂φ₁/∂ū) ū̇ − ∂φ₁/∂t = 0,   (26)
s̈ − (∂φ₂/∂s) ṡ − (∂φ₂/∂s̄) s̄̇ − (∂φ₂/∂y) ẏ − (∂φ₂/∂ū) ū̇ − ∂φ₂/∂t = 0,   (27)
(∂g/∂s) ṡ + (∂g/∂s̄) s̄̇ = 0.   (28)

s̄̇ is uniquely determined by equation (24), then ṡ by equation (28), but equation (25) is unavailable to determine ẏ. Differentiating equation (28) with respect to time yields

ṡᵀ (∂²g_j/∂s²) ṡ + 2 s̄̇ᵀ (∂²g_j/∂s̄∂s) ṡ + (∂g_j/∂s) s̈ + s̄̇ᵀ (∂²g_j/∂s̄²) s̄̇ + (∂g_j/∂s̄) s̄̈ = 0   (29)

for all j = 1, …, n_g. Nonsingularity of the matrix

[ I        0        ∂φ₁/∂y ]
[ 0        I        ∂φ₂/∂y ]   (30)
[ ∂g/∂s̄   ∂g/∂s    0      ]

is a sufficient condition for equations (26), (27) and (29) to determine s̄̈, s̈, and ẏ uniquely. Since equation (22) has been differentiated twice, according to Definition 2.1 the index is at least two, and since there is no guarantee that equation (30) has full rank, the index may be greater than two. □

Theorem 2.3. Appending n_g ≤ n_u state constraints of the form

g(x, u) = 0   (31)


to an explicit ODE of the form (23) yields a DAE with i ≥ 1, assuming the DAE is solvable.

Proof: Since equation (31) does not contain any time derivatives, the augmented system is structurally rank-deficient with respect to the time derivatives and hence must be at least index-1. □
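Theorem 2.2 can be illustrated on a double integrator, a hypothetical example not taken from the paper: appending g(x) = x₁ − 1 = 0 and releasing u as an algebraic state produces a Jacobian with a structurally zero row, and two differentiations of g are needed to reach u.

```python
import numpy as np

# Theorem 2.2 on a double integrator: x1' = x2, x2' = u, with the appended
# constraint g(x) = x1 - 1 = 0 and u released as an algebraic state.
# Jacobian of the augmented system w.r.t. the leading unknowns (x1', x2', u):
J = np.array([
    [1.0, 0.0,  0.0],   # x1' - x2 = 0
    [0.0, 1.0, -1.0],   # x2' - u  = 0
    [0.0, 0.0,  0.0],   # x1  - 1  = 0  (no leading unknowns: zero row)
])
# Structural rank deficiency: the constraint row cannot help determine the
# leading unknowns, so at least one differentiation of g is required.
assert np.linalg.matrix_rank(J) < 3

# Differentiating g twice recovers the hidden constraints:
#   dg/dt = x1' = x2,  d^2g/dt^2 = x2' = u,
# so u is only reached after two differentiations: the index is at least two,
# consistent with Theorem 2.2.
```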

A consequence of Theorems 2.2 and 2.3 is that optimal control of ODE systems subject to path inequality constraints requires either implicit or explicit treatment of DAEs. For other cases, the index may stay the same or increase when the DAE is augmented with state path constraints. In problems where the index increases, as the theorems indicate, the new index may indeed rise by more than one. An example of this is the DAE:

ẋ₁ = u₀ − 5x₁,   (32)
ẋ₂ = 5x₁ − 3x₂,   (33)
ẋ₃ = 3x₂ − 4x₃,   (34)
x₃ ≤ 10,   (35)

where u₀ is an input to the unconstrained problem and a state variable in the constrained problem. This problem is index-4 when the inequality is active, and index-0 otherwise.

The case where (∂g/∂x) is rank deficient yields a useful insight for diagnosing poorly posed problems. If (∂g/∂x) is rank deficient at a constraint activation, it implies that the currently active constraints are at least locally either redundant or inconsistent. We can exclude the inconsistent case because inconsistent inequalities cannot be satisfied simultaneously. Hence, only the redundant case is possible. Therefore, if (∂g/∂x) becomes rank deficient, we can ignore a subset of the active constraints.

The approach that we follow for solution of high-index DAEs is the method of dummy derivatives (Feehery and Barton, 1996; Mattsson and Söderlind, 1993). Briefly, in this approach a DAE with i ≥ 2 is augmented with all the differentiated equations required to derive the underlying index-1 system. The resulting overdetermined system is then made fully determined by selecting a subset of the time derivatives to be removed from the time discretization of the backward-difference formula (BDF) integrator. The resulting system is an index-1 DAE which has the same solution set as the original high-index DAE, eliminating any problems with constraint drift. The time derivatives that are removed from the discretization are termed dummy derivatives, and are treated as algebraic variables. Often there is more than one equivalent index-1 DAE for a given high-index DAE. Furthermore, it may become necessary to switch among members of this family of equivalent index-1 systems along the solution trajectory to avoid local numerical problems. This process of switching among equivalent index-1 systems is called dummy pivoting.

In Feehery and Barton (1996) we describe a practical and efficient implementation of the method of dummy derivatives in the ABACUSS process modeling environment. In particular, we address the issue of automated dummy pivoting.

3. Constrained dynamic optimization

The first part of this section presents a review of methods that have been used to handle state variable path constraints and an analysis of the advantages and disadvantages of each method. An overview is then presented of our fluctuating-index feasible-path method for dynamic optimization with path constraints.

3.1. Review of methods for handling inequality path constraints in control parameterization

There have been several methods proposed for handling state variable path constraints within the control parameterization framework. Vassiliadis (1993) also presents a review of these methods. There are three classes of methods: slack variables, penalty functions, and interior-point constraints.

Although the slack variable approach (Jacobson and Lele, 1969; Jacobson et al., 1971; Valentine, 1937) (often known as Valentine's method) was not developed for use with control parameterization, its extension is straightforward. The slack variable approach transforms the inequality path constraint into an equality path constraint by introducing a slack variable a(t):

g(x, y) − ½ a²(t) = 0.   (36)

The constraint (36) is differentiated k times with respect to t, until a control u_j may be eliminated from the original DAE using algebraic manipulations and substitutions with the set of equations created by differentiating (36). A transformed dynamic optimization problem may then be posed, where the control variable is d^(k)a/dt^(k) and the k − 1 other derivatives of a(t) are treated as state variables.

There are several drawbacks with the slack variable method. It can be shown (Jacobson and Lele, 1969; Valentine, 1937) that stationarity of the slack-variable underlying index-1 problem is only necessary for stationarity of the original problem, which admits the possibility of spurious numerical solutions. Another problem is that the transformed dynamic optimization problem contains a singular arc when a = 0, which adversely affects the convergence rate of the control parameterization NLP. Also, all inequality constraints require additional computational effort, even those that are never active during the solution trajectory. Finally, there does not appear to be any way to extend the slack variable method to problems that contain more inequality path constraints than controls.

Another method that has been used to enforce inequality path constraints is the use of a penalty

function (for example, Bryson and Ho, 1975; Xing and Wang, 1989). This may be done by augmenting the objective function (1) as

Φ′ = Φ + K ∫_{t_0}^{t_f} rᵀ(t) r(t) dt,   (37)

where K ∈ R₊ is a large positive number and r(t) ∈ R^{n_g} is a measure of the constraint violation, e.g.:

r_i(t) = max(g_i(t), 0).   (38)

This approach can cause numerical difficulties because it requires K → ∞ to satisfy the constraint exactly. Also, the use of the max operator creates implicit discontinuities, and therefore this type of problem is inherently a discrete/continuous dynamic optimization. Although not recognized by most previous authors, the sensitivity variables in such problems require special attention after an implicit discontinuity (see Section 6 and Galán et al., 1998). Alternatively, the penalty function could be used to form a set of end-point constraints Ψ(t) ∈ R^{n_g} (for example, Sargent and Sullivan, 1977):

Ψ_i(t_f) = ∫_{t_0}^{t_f} r_i(t) dt = 0.   (39)
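The K → ∞ difficulty noted for the penalty term (37)–(38) can be seen in a scalar analogue (an invented example, not from the paper): the minimizer of a quadratically penalized objective violates the constraint by O(1/K) for every finite K.

```python
# Scalar analogue of the penalty approach (37)-(38): minimize (u - 1)^2
# subject to u - 0.5 <= 0, enforced via the penalty K * max(u - 0.5, 0)^2.
# The penalized minimizer has a closed form; its constraint violation decays
# like O(1/K) and vanishes only in the limit K -> infinity.
def penalized_minimizer(K):
    # For u > 0.5: 2(u - 1) + 2K(u - 0.5) = 0  ->  u = (1 + 0.5 K) / (1 + K)
    return (1.0 + 0.5 * K) / (1.0 + K)

for K in (1e2, 1e4, 1e6):
    violation = penalized_minimizer(K) - 0.5   # equals 0.5 / (1 + K)
    assert violation > 0.0                     # constraint still violated...
    assert violation < 1.0 / K                 # ...but shrinking as K grows
```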


This approach also causes numerical difficulties because the gradients of the end-point constraints are zero at the optimum, which reduces the rate of convergence near the solution.

Yet another method proposed to deal with inequality path constraints is to enforce them via interior-point constraints (for example, Vassiliadis et al., 1994; Xing and Wang, 1989):

ρ(g(t_i)) ≤ ε_i,   i = 1, …, n_pc,   (40)

where ρ(·): R^{n_g} → R is a function that provides a measure of the violation of the constraints at a given t_i, ε_i ∈ R₊ is a small positive number, and n_pc is the number of point constraints. The problem with this method is that n_pc → ∞ is necessary to guarantee that the state path constraint is not violated during any portion of the optimal trajectory, and the size of the control parameterization NLP increases with n_pc.

With the exception of Valentine's method, all of the methods described here allow path constraints to become violated during intermediate iterations of the NLP. Such methods can cause problems if the reason for including the path constraint is that the DAE is not valid or experiences numerical problems outside the feasible region.

3.2. Fluctuating-index feasible-path method for dynamic optimization with path constraints

We propose here a method for solving path-constrained problems within the control parameterization framework. This method enforces state variable path constraints through direct solution of the high-index DAEs during the state-constrained portions of the solution trajectory. This eliminates the problems that can occur with infeasible-path optimization methods due to invalid models or numerical problems that occur outside the feasible state-space region. Furthermore, our method produces smaller NLPs than those required by infeasible-path methods because the state constraints are handled directly by the DAE solver.

The heart of our method may be more accurately described as a constrained dynamic simulation algorithm, since it enforces state path constraints during the simulation subproblem that is required at each master iteration of the NLP. Constrained dynamic simulation requires detecting activation and deactivation of inequalities, determining which controls cease to be degrees of freedom in the constrained portions of the trajectory, and numerical integration of the resulting fluctuating-index DAE. The constrained dynamic simulation problem is

f(ẋ, x, y, u_s, t) = 0,   (41)

u_c = u_c(t),   (42)

a_i (u_{s,i} − u_{c,i}) + Σ_{j=1}^{n_g} b_{ij} g_j(x, y) = 0   ∀i = 1, …, n_u, ∀t ∈ T,   (43)

a_i + Σ_{j=1}^{n_g} b_{ij} = 1   ∀i = 1, …, n_u, ∀t ∈ T,   (44)

Σ_{i=1}^{n_u} b_{ij} ≤ 1   ∀j = 1, …, n_g, ∀t ∈ T,   (45)

a_i = a_i(t) ∈ {0, 1},   b_{ij} = b_{ij}(t) ∈ {0, 1}.   (46)

This formulation shows how the equations in the active DAE change upon activation and deactivation of path constraints. The variables a and b are determined by the solution of the constrained dynamic simulation problem. This formulation does not give the complete relationship between a and b, which is in general constrained by solvability considerations and may be nonunique. In this paper we address the class of problems for which a(t) and b(t) are uniquely determined by the feasible path trajectory. We assume also that all path constraints are inactive at the initial condition, and therefore that a(t_0) = 1 and b(t_0) = 0.

We assume that a standard BDF integration method is used to solve the DAE. The algorithm that we perform on each step of the integration is:

(1) Determine whether any of the inequality constraints activated or deactivated in the last time step. If not, go to Step (5).
(2) Detect the earliest constraint activation or deactivation event that occurred:
    (2.1) For every active g_j(x, y), set one b_{ij} = 1 and a_i = 0. Set all other b_{ij} = 0 and all other a_i = 1.
    (2.2) Reset the simulation clock to the time of the event.

W.F. FEEHERY and P.I. BARTON

1248

(3) Apply the method of dummy derivatives to derive an equivalent index-1 formulation for the current problem.
(4) Consistently initialize the equivalent index-1 formulation.
(5) Take one integration step with the current index-1 formulation of the problem, performing dummy pivoting as necessary. If not at the end of the simulation, go to Step (1).

From a practical standpoint, several aspects of the above algorithm need to be described in detail: detection of activation and deactivation of constraints, transfer of initial conditions between constrained and unconstrained portions of the trajectory, and the matching between a and b. These details are discussed in the following sections.

4. Constraint activation and deactivation conditions
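Before developing the precise conditions, the mechanics can be previewed on a toy scalar problem. The sketch below is ours, not from the paper: explicit Euler stands in for the BDF integrator, the "index reduction" for dx/dt = u with the active constraint x <= x_max is simply u = 0, event location is crudely approximated by clamping onto the boundary rather than rewinding to the exact crossing as in Step (2.2), and the deactivation test is the scalar analog of the sensitivity-based condition developed below.

```python
def simulate_constrained(u_spec, x0, x_max, t_end, dt=1e-3):
    """Toy analog of Steps (1)-(5) for dx/dt = u with the path
    constraint x <= x_max.  While the constraint is active, the control
    is determined by the index-reduced constraint dx/dt = 0 (i.e. u = 0);
    it deactivates when the specified profile points back into the
    feasible region (u_spec < 0).  All names here are illustrative.
    """
    t, x = 0.0, x0
    active = False                        # plays the role of some b_ij = 1
    traj = [(t, x)]
    while t < t_end:
        if active and u_spec(t) < 0.0:    # deactivation test
            active = False
        u = 0.0 if active else u_spec(t)  # control pinned by the constraint
        x += dt * u                       # Step (5): one integration step
        t += dt
        if not active and x >= x_max:     # Steps (1)-(2): activation event
            x = x_max                     # land on the constraint boundary
            active = True
        traj.append((t, x))
    return traj

# Ramp the level up, hold it at the bound, then release it:
traj = simulate_constrained(lambda t: 1.0 if t < 0.7 else -1.0,
                            x0=0.0, x_max=0.5, t_end=1.0)
assert max(x for _, x in traj) <= 0.5    # never violates the path constraint
```

In the full algorithm the activation time is located to within a tolerance by the state event method of Park and Barton (1996), and the index-1 system on the constrained arc is derived by the method of dummy derivatives rather than written by hand.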

This section describes the methods for performing Step (2) in the constrained dynamic simulation algorithm.

4.1. Constraint activation

An inequality constraint activates at the earliest point in time at which the previously inactive constraint becomes satisfied with equality. These points in time are often termed state events, because they are defined implicitly by the evolution of the system state and are not known a priori. The state events marking constraint activation may be identified algorithmically by introducing a new algebraic variable z_j for each currently inactive constraint:

z_j = g_j(x, y).   (47)

These discontinuity functions are appended to the current equivalent index-1 formulation of the problem, and constraint activations are detected by advancing the simulation speculatively until a zero crossing occurs in z_j(t). This corresponds to a special case of the state events handled by the algorithm described in Park and Barton (1996), which handles equations of the form (47) efficiently and guarantees that all state events are processed in strict time order.

4.2. Constraint deactivation

Detecting deactivation of inequality constraints during a simulation is somewhat more complicated than detecting activation, because any u_i^c that has a corresponding a_i = 0 does not influence the current state variable trajectories. During solution of the equivalent index-1 system using the method of dummy derivatives, the corrector iteration ensures that any active constraints remain satisfied within the relevant numerical tolerance, and therefore an additional condition must be considered to detect deactivation of the inequality.

We consider an inequality state constraint to have deactivated when switching some a_i from 0 to 1 would cause the state trajectories to move into the feasible region. We detect this situation by introducing a new set of equalities:

a_i (u_i^s - u_i^c) + \sum_{j=1}^{n_g} b_{ij} (g_j(x, y) - \epsilon_j) = 0,   i = 1, ..., n_u.   (48)

A new set of parameters \{\epsilon_j\} has been introduced such that when b_{ij} = 1 and \epsilon_j = 0 the corresponding constraint g_j is active, and when \epsilon_j \le 0 the corresponding constraint has been "tightened" and the feasible state space decreased. Associated with each new equality (48) is the sensitivity system:

(\partial f/\partial x)(\partial x/\partial \epsilon_j) + (\partial f/\partial y)(\partial y/\partial \epsilon_j) + (\partial f/\partial u^s)(\partial u^s/\partial \epsilon_j) + (\partial f/\partial \dot{x})(\partial \dot{x}/\partial \epsilon_j) = 0,   j = 1, ..., n_g,   (49)

a_i (\partial u_i^s/\partial \epsilon_j) + \sum_{k=1}^{n_g} b_{ik} [(\partial g_k/\partial x)^T (\partial x/\partial \epsilon_j) + (\partial g_k/\partial y)^T (\partial y/\partial \epsilon_j)] = \sum_{k=1}^{n_g} b_{ik} \delta_{jk},   j = 1, ..., n_g,  i = 1, ..., n_u,   (50)

where \delta_{jk} is the Kronecker delta function. Note that although a large number of sensitivity equations are defined by equations (49) and (50), all sensitivities with respect to \epsilon_j are identically zero if g_j(x, y) is inactive. Provided that all b = 0, all sensitivities with respect to \epsilon may be initialized to zero. Transfer conditions for these sensitivities when constraints become active are the same as the general sensitivity transfer conditions given later in this paper.

Deactivation of the inequality is detected by solving this sensitivity system to find \partial u_i^s/\partial \epsilon_j and detecting the simulation times that are implicitly defined by zero crossings of the discontinuity functions:

q_i^a = u_i^c - u_i^s,   (51)

q_{ij}^b = \partial u_i^s/\partial \epsilon_j   \forall (i, j): b_{ij} = 1.   (52)

These algebraic equations are appended to the DAE and solved as detailed in Park and Barton (1997). At the state event times, the deactivation condition

((\partial u_i^s/\partial \epsilon_j < 0) \wedge (u_i^c > u_i^s)) \vee ((\partial u_i^s/\partial \epsilon_j > 0) \wedge (u_i^c < u_i^s))   (53)

is evaluated for all (i, j) where b_{ij} = 1. If this condition evaluates to true, the constraint g_j(x, y) is considered to have deactivated.

A negative value of \epsilon_j corresponds to tightening the constraint, and thus moving the solution trajectory into the feasible region. If the sensitivity \partial u_i^s/\partial \epsilon_j is negative, then u_i^c must increase for the solution trajectory to move away from the inequality and into the feasible region. Therefore, if u_i^c becomes greater than u_i^s at some point in the time domain, the specified control will deactivate the inequality. The reasoning for the case in which \partial u_i^s/\partial \epsilon_j > 0 follows the same logic.

4.3. Matching controls and inequalities

In general, active path constraints must be capable of defining the trajectory for some of the u^s. In other words, each active constraint g_j(x, y) must be matched to a single u_i^s such that a_i = 0 and b_{ij} = 1. The problem is to determine which a_i = 0 and b_{ij} = 1 when g_j(x, y) is active. A necessary condition is that u_i^s must be input-connectable with one of the state variables in the constraint g_j(x, y) to which it is matched, and that each g_j(x, y) must be matched to a unique u_i^s. It is possible that there may be multiple valid matchings in a given problem, but we restrict our attention in this paper to problems where a matching exists and is unique.

5. Transfer conditions for state variables

This section describes a method for handling Step (4) of the constrained dynamic simulation algorithm. When a constraint activates, it is necessary to specify a set of equations sufficient to define a consistent initial condition for the augmented DAE after the constraint activation:

J(\dot{x}^-, x^-, y^-, \dot{x}^+, x^+, y^+) = 0,   (54)

where the "-" denotes values before the constraint activation and the "+" denotes values after activation. In some cases, these equations may describe a loss upon activation of the constraint, e.g., frictional losses due to an object interacting with a surface. However, the usual method for determining this transfer function is to specify some subset of the state variables x and y to remain continuous.

In discrete/continuous dynamic simulation it is usually assumed that all differential state variables are continuous across a discontinuity in the input functions (Barton and Pantelides, 1994). This assumption has been proven valid for ODEs and for most physically meaningful index-1 DAEs (in fact, all DAEs with i_s = 0) (Brüll and Pallaske, 1992), but several authors (Brüll and Pallaske, 1992; Gopal and Biegler, to appear; Majer et al., 1995) have noted that the assumption does not hold for some index-1 and higher-index DAEs.

Continuity conditions for arbitrary-index DAEs are stated in Gopal and Biegler (to appear), assuming that the index is the same on both sides of the discontinuity. The assumption is that state variables are continuous across a discontinuity in the controls provided that the corresponding underlying ODE does not depend on any derivatives of the controls that are


causing the discontinuity. A successive linear programming (SLP) algorithm was proposed to find the dependencies of the variable derivatives on the derivatives of the controls that are causing the discontinuity. These continuity conditions are valid as stated only for DAEs of constant index, and need to be extended to handle the fluctuating-index DAEs that can occur in constrained dynamic simulation.

Consider an optimal trajectory for the original problem (1)-(4), which in general will be composed of a sequence of unconstrained and constrained segments. Corresponding to this trajectory is an optimal control profile u_opt(t), which can in general jump at the boundaries between unconstrained and constrained segments (i.e., corners). Hence, if we solve an IVP in equation (2) given u_opt(t), applying the standard continuity conditions above for equation (2) whenever u_opt(t) jumps, we can reproduce the unconstrained and constrained segments of the optimal trajectory without augmenting equation (2) with active path constraints and locally raising the index. This analysis indicates that at all constraint activations/deactivations, the optimal trajectory must satisfy continuity conditions for the unconstrained dynamic system (2) in response to jumps in the controls.

Consider a constraint activation/deactivation at time t* during an intermediate IVP subproblem in our algorithm. At t*+ the method of dummy derivatives is used to derive an equivalent index-1 system for the subsequent segment of the trajectory. For structurally well-behaved DAEs, this analysis will also explicitly indicate the degrees of freedom available to define the initial condition for the new segment, which will equal the number of differential state variables remaining after substitution of the dummy derivatives. In general, the number of degrees of freedom available will be less than or equal to the number of continuity conditions required for equation (2). For example, if equation (2) has i_s = 0, then m_x conditions on the continuity of x must be satisfied, whereas the number of differential state variables in the augmented equivalent index-1 system derived by the method of dummy derivatives will be equal to or less than m_x. Hence, in general, only the subset of the continuity conditions corresponding to the differential state variables remaining in the equivalent index-1 DAE describing the new segment can be satisfied by our constrained dynamic simulation algorithm.

Trajectories generated by our constrained dynamic simulation algorithm will therefore potentially not satisfy all the continuity conditions required for an optimal trajectory. However, this complication can be handled by allowing the continuity conditions to be violated by the IVP subproblems at intermediate iterations, and adding the continuity conditions at constraint activation/deactivation through point constraints in the master NLP. Thus, the master NLP will adjust the control profiles such that, at the optimum, the extra continuity conditions that cannot be enforced by the constrained dynamic simulation algorithm are satisfied by the NLP. Hence, while our method is always feasible with respect to path constraints, it is not necessarily feasible with respect to continuity conditions at intermediate iterations.

Fig. 3. Tank and valve.

As a simple physical example, consider a constrained dynamic optimization in which the liquid level in the tank shown in Fig. 3 may not exceed a given threshold. The valve stem position controlling the flow into the tank is a differential state in the unconstrained DAE and exhibits a first-order response to the control signal. When the inequality on the liquid level activates during a given constrained dynamic simulation subproblem, the resulting equivalent index-1 DAE indicates that the flow into the tank must equal the flow out to maintain the level at its threshold, and hence the stem position required to achieve this is also prescribed algebraically. Since for an arbitrary control signal profile there is no guarantee that the stem position immediately before the constraint activation will equal that prescribed at the beginning of the constrained segment of the trajectory, it must be allowed to jump, which is clearly non-physical and forbidden by the continuity conditions for the unconstrained DAE. In fact, in this case the equivalent index-1 DAE has no degrees of freedom to define the initial condition, but continuity of the liquid level will always be redundant with the liquid level inequality, so only the stem position can potentially jump. Although the constrained dynamic simulation subproblems allow such jumps, the point constraints in the master NLP will force the optimal control profile to be one in which the stem position does not jump. If the unconstrained DAE (2) is already high-index (e.g., if it includes equality path constraints), then the continuity conditions stated in Gopal and Biegler (to appear) must be enforced by point constraints in the master NLP.

6. Transfer conditions for sensitivity variables

Efficient control parameterization requires the gradients of the objective function and NLP constraints with respect to the optimization parameters. There are three basic methods for obtaining these gradients: finite differences, adjoint equations, and sensitivity variables. Of these, sensitivity variables are the preferred method because of their accuracy and the high computational efficiency that may now be achieved when solving the combined DAE and sensitivity system (Feehery et al., 1997; Maly and Petzold, 1996; Rosen and Luus, 1991). Associated with the DAE (2) is the sensitivity system obtained through differentiation with respect to the optimization parameters p:

(\partial f/\partial \dot{x})(\partial \dot{x}/\partial p) + (\partial f/\partial x)(\partial x/\partial p) + (\partial f/\partial y)(\partial y/\partial p) = -(\partial f/\partial u)(\partial u(p, t)/\partial p) - \partial f/\partial p,   (55)

where \partial x/\partial p and \partial y/\partial p are the sensitivity state variables. Assuming that equation (2) has associated with it a consistent set of initial conditions

c(\dot{x}, x, y, u(p, t_0), p, t_0) = 0,   (56)

the initial conditions of the sensitivity DAE are

(\partial c/\partial \dot{x})|_{t_0} (\partial \dot{x}/\partial p) + (\partial c/\partial x)|_{t_0} (\partial x/\partial p) + (\partial c/\partial y)|_{t_0} (\partial y/\partial p) = -(\partial c/\partial u)(\partial u(p, t_0)/\partial p) - \partial c/\partial p.   (57)
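For a scalar ODE, the sensitivity system (55) can be written down and integrated by hand. The sketch below is ours, not from the paper; explicit Euler is used in place of the staggered BDF schemes cited above, and the result can be compared against the analytic sensitivity -t x0 e^{-pt}:

```python
def integrate_with_sensitivity(p, x0, t_end, n=20000):
    """Forward sensitivity integration for the scalar ODE
        dx/dt = f(x, p) = -p*x,  x(0) = x0.
    Differentiating the ODE with respect to p, as in eq. (55), gives
        ds/dt = (df/dx)*s + df/dp = -p*s - x,  s(0) = dx0/dp = 0,
    where s = dx/dp.  Explicit Euler on the combined system for brevity.
    """
    dt = t_end / n
    x, s = x0, 0.0
    for _ in range(n):
        x, s = x + dt * (-p * x), s + dt * (-p * s - x)
    return x, s

# Compare with the analytic solution x = x0*exp(-p*t), dx/dp = -t*x0*exp(-p*t):
x, s = integrate_with_sensitivity(p=0.8, x0=2.0, t_end=1.0)
```

The same pattern, with the sensitivity residuals assembled from the DAE Jacobians, is what the combined corrector methods of Feehery et al. (1997) and Maly and Petzold (1996) exploit.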

During integration of the constrained dynamic simulation problem, the time domain T \equiv [t_0, t_f] is divided into N_S segments [t_{k-1}, t_k), where k = 1, ..., N_S and t_k \in T. The number of segments is fixed for any one constrained dynamic simulation, but may vary from one iteration of the master NLP to the next. The switching times t_k of the segments may be either the explicitly known end times of the finite elements on which the control variables are approximated, or times that are implicitly defined by the activation/deactivation of inequalities (3). The transfer conditions for the sensitivities are different for each class of switching times. During path-constrained segments, the sensitivity system of the corresponding equivalent index-1 DAE is solved in place of equation (55).

6.1. Transfer conditions at the end of finite elements

The results in this section apply to the \{\tau_i\} \subset \{t_k\}, i = 1, ..., N_{FE}, that are the end times of the N_{FE} finite elements used to discretize the control variables. The end times of the finite elements are in general optimization parameters in the master NLP. The total derivative of the transfer condition (54) is

dJ = (\partial J/\partial \dot{x}^-) d\dot{x}^- + (\partial J/\partial x^-) dx^- + (\partial J/\partial y^-) dy^- + (\partial J/\partial \dot{x}^+) d\dot{x}^+ + (\partial J/\partial x^+) dx^+ + (\partial J/\partial y^+) dy^+ = 0,   (58)

where the superscripts "-" and "+" denote the value in the limit as t \to \tau_i^- and t \to \tau_i^+, respectively. The dependent variables x and y are functions of the independent variables, i.e., the parameters p_i, i = 1, ..., n_p, and t. Therefore,

d\dot{x}^- = (\partial \dot{x}^-/\partial p) dp + (\partial \dot{x}^-/\partial t) dt_k,   (59)

dx^- = (\partial x^-/\partial p) dp + (\partial x^-/\partial t) dt_k,   (60)

dy^- = (\partial y^-/\partial p) dp + (\partial y^-/\partial t) dt_k,   (61)

d\dot{x}^+ = (\partial \dot{x}^+/\partial p) dp + (\partial \dot{x}^+/\partial t) dt_k,   (62)

dx^+ = (\partial x^+/\partial p) dp + (\partial x^+/\partial t) dt_k,   (63)

dy^+ = (\partial y^+/\partial p) dp + (\partial y^+/\partial t) dt_k,   (64)

dt_k = (\partial t_k/\partial p) dp.   (65)

Using the chain rule with (54) and (59)-(65), the sensitivity transfer conditions at t_k are

(\partial J/\partial \dot{x}^-)(\partial \dot{x}^-/\partial \pi) + (\partial J/\partial x^-)(\partial x^-/\partial \pi) + (\partial J/\partial y^-)(\partial y^-/\partial \pi) + (\partial J/\partial \dot{x}^+)(\partial \dot{x}^+/\partial \pi) + (\partial J/\partial x^+)(\partial x^+/\partial \pi) + (\partial J/\partial y^+)(\partial y^+/\partial \pi) = 0   \forall \pi \in p \setminus \{\tau_i : \tau_i = t_k\},   (66)

since \partial t_k/\partial p = 0 for all the parameters that are not the end time of the current finite element, and

(\partial J/\partial \dot{x}^-)(\partial \dot{x}^-/\partial \tau_i + \ddot{x}^-) + (\partial J/\partial x^-)(\partial x^-/\partial \tau_i + \dot{x}^-) + (\partial J/\partial y^-)(\partial y^-/\partial \tau_i + \dot{y}^-) + (\partial J/\partial \dot{x}^+)(\partial \dot{x}^+/\partial \tau_i + \ddot{x}^+) + (\partial J/\partial x^+)(\partial x^+/\partial \tau_i + \dot{x}^+) + (\partial J/\partial y^+)(\partial y^+/\partial \tau_i + \dot{y}^+) = 0   \forall \tau_i = t_k,   (67)

since \partial x/\partial t = \dot{x}, \partial y/\partial t = \dot{y}, \partial \dot{x}/\partial t = \ddot{x}, and \partial t_k/\partial \tau_i = 1.

In the common case where the continuity conditions (54) for the DAE have the form

x^+ = x^-,   (68)

that is, the differential variables are assumed continuous, conditions (66) and (67) take the form

\partial x^+/\partial \pi = \partial x^-/\partial \pi   \forall \pi \in p \setminus \{\tau_i : \tau_i = t_k\},   (69)

and

\partial x^+/\partial \tau_i = \partial x^-/\partial \tau_i + \dot{x}^- - \dot{x}^+   \forall \tau_i = t_k.   (70)

This is the same result as that reported in Rosen and Luus (1991), provided that \partial x^-/\partial \tau_i = 0, which occurs when \tau_i does not appear in equations (2), (56), or the control parameterization u(p, t).

6.2. Transfer conditions at implicit discontinuities

The transfer conditions derived in the previous section are valid when the position of the end of the time segment is explicitly known a priori. However, there is also the case in which the position of the end of the time segment is an implicit function of the state, and thus of the optimization parameters. In the state inequality constrained simulation algorithm, the time at which the solution trajectory switches from unconstrained to constrained segments is implicitly defined by the inequality path constraints (3). The results in this section are valid for the set of subdomain end times \{t_l^*\} \subset \{t_k\} that are implicitly defined by the activation of an inequality (3). Following the reasoning for the case where the switching time is explicitly known, we differentiate equation (3):

(\partial g/\partial x^-) dx + (\partial g/\partial y^-) dy = 0.   (71)

Using the chain rule and the analogs of equations (60) and (61), we obtain the transfer conditions

(\partial J/\partial \dot{x}^-)(\partial \dot{x}^-/\partial p + \ddot{x}^- \partial t_l^*/\partial p) + (\partial J/\partial x^-)(\partial x^-/\partial p + \dot{x}^- \partial t_l^*/\partial p) + (\partial J/\partial y^-)(\partial y^-/\partial p + \dot{y}^- \partial t_l^*/\partial p) + (\partial J/\partial \dot{x}^+)(\partial \dot{x}^+/\partial p + \ddot{x}^+ \partial t_l^*/\partial p) + (\partial J/\partial x^+)(\partial x^+/\partial p + \dot{x}^+ \partial t_l^*/\partial p) + (\partial J/\partial y^+)(\partial y^+/\partial p + \dot{y}^+ \partial t_l^*/\partial p) = 0,   (72)

(\partial g/\partial x^-)(\partial x^-/\partial p + \dot{x}^- \partial t_l^*/\partial p) + (\partial g/\partial y^-)(\partial y^-/\partial p + \dot{y}^- \partial t_l^*/\partial p) = 0.   (73)

Equations (72) and (73) define a linear system that must be solved for \partial \dot{x}^+/\partial p, \partial x^+/\partial p, \partial y^+/\partial p, and \partial t_l^*/\partial p. The latter vector is the (algebraic) sensitivity of the inequality activation time with respect to the parameters, and its value is required only at the constraint activation. The implication of (72) and (73) is that the sensitivity state variable trajectories may jump at an implicit discontinuity even when their respective state variables are continuous across the discontinuity. It should be noted that the results of this section are also easily extended to allow sensitivity analysis of hybrid discrete/continuous systems with state events (Galán et al., 1998).


6.3. Transfer conditions at inequality deactivation

The implicit discontinuities described above arise in constrained dynamic simulation when a constraint activates. The deactivation conditions present a special case that is described here. The results in this section are valid for the set of subdomain end times \{t_l^s\} \subset \{t_k\} that are implicitly defined when some a_i switches from 0 to 1.

In order for the deactivation condition (53) to be true, one of equations (51) or (52) must hold at time t_l^s. In the case where u_i^c = u_i^s at t_l^s, the transfer conditions for the sensitivities are equation (72) and

\partial u_i^{c-}/\partial p + \dot{u}_i^{c-} (\partial t_l^s/\partial p) = \partial u_i^{s-}/\partial p + \dot{u}_i^{s-} (\partial t_l^s/\partial p)   (74)

for all a_i = 0, which are derived in the same manner as those in Section 6.2. In the case where \partial u_i^s/\partial \epsilon_j = 0 at t_l^s, the transfer conditions for the sensitivities are equation (72) and

\partial^2 u_i^{s-}/\partial \epsilon_j \partial p + (\partial \dot{u}_i^{s-}/\partial \epsilon_j)(\partial t_l^s/\partial p) = 0,   (75)

which are also derived in the same manner as those in Section 6.2, taking into account the fact that the sensitivities with respect to \epsilon_j are zero after the constraint deactivates. However, this condition has the added complication of requiring second-order sensitivity information, namely \partial^2 u_i^s/\partial \epsilon_j \partial p. Obtaining this second-order sensitivity information requires the solution of the second-order analog of equations (49) and (50). It is possible to extend the algorithm described in Feehery et al. (1997) to handle these second-order sensitivities efficiently.

The results of this section indicate that there are two possible behaviors for the control variable upon deactivation of an inequality: the state-determined and parameterized trajectories of the control may cross, or else the sensitivity \partial u_i^s/\partial \epsilon_j may cross zero, in which case the control will jump. The latter case is more costly to handle in a numerical algorithm because of the use of second-order sensitivities.

7. Implementation of the algorithm

We have implemented state variable path-constrained optimization within the ABACUSS* large-scale equation-oriented modeling system. The input language in ABACUSS has been extended to permit the representation of large-scale path-constrained dynamic optimization problems in a compact manner. An example of the input language is given in Fig. 4. Variable types are declared in the DECLARE block.

The equations describing the DAE are given in the MODEL block. The OPTIMIZATION block is used to define a dynamic optimization using the MODEL. Note that it is possible to move from a simulation to an optimization in a seamless manner. The sections in the OPTIMIZATION block are:

PARAMETER. Defines parameters used in the dynamic optimization that are not defined in the model.

UNIT. Defines the MODELs to use in the current dynamic optimization.

VARIABLE. Defines variables used in the dynamic optimization that are not defined in the models.

OBJECTIVE. Defines the objective function to be MINIMIZEd or MAXIMIZEd in terms of variables defined in the UNIT or VARIABLE section.

SET. Sets values for any parameters defined in the UNIT or PARAMETER section.

INEQUALITY. Defines any path inequalities in terms of variables defined in the UNIT or VARIABLE section.

CONTROL. Indicates that the listed variables are control variables. The first value in the triple notation defines the initial guess; the second and third define the lower and upper bounds for the control, respectively.

TIME_INVARIANT. Indicates that the listed variables are time-invariant optimization parameters. The first number in the triple notation defines the initial guess; the second and third define the lower and upper bounds for the variable, respectively.

INITIAL. Gives additional equations that define the initial condition for the problem, possibly as a function of the optimization parameters.

FINAL. Defines point constraints that must be obeyed at the final time.

SCHEDULE. Indicates the time domain of the dynamic optimization problem.

8. Example

In this section we present an example to show the use of the fluctuating-index feasible-path method for dynamic optimization. The constrained brachistochrone problem was presented both in Bryson and Ho (1975) and, in a slightly different form, in Kraft (1994).
The problem is to find the shape of a wire on which a bead will slide between two points under the force of gravity in minimum time. The formulation of the constrained brachistochrone problem that we consider is that of Kraft (1994):

\min_{\theta(t), t_f} t_f   (76)

subject to

\dot{x} = v \cos\theta,   (77)

\dot{y} = v \sin\theta,   (78)

\dot{v} = g \sin\theta,   (79)

y \ge a x - b,   (80)

x(t_f) = x_f,   (81)

[x(t_0), y(t_0), v(t_0)] = [0, 0, 0],   (82)

where t_f is the final time, g is the gravitational constant, and a and b are given constants. Equation (80) defines an inequality constraint on the state variables x and y, equation (81) defines a final-point constraint, and equation (82) gives the initial conditions.

This problem was solved using the feasible-path control parameterization method described in this paper. The discretization chosen for the control \theta(t) was a single linear element. There were therefore three parameters in the NLP (two for the control variable and t_f). The parameters g, a, b, and x_f were set to -1.0, -0.4, 0.3 and 1.1, respectively. The integrator tolerances RTOL and ATOL were both set to 10^{-7} and the optimization tolerance was 10^{-5}. The optimal objective function value was 1.86475.

* ABACUSS (Advanced Batch And Continuous Unsteady-State Simulator) process modeling software, a derivative work of gPROMS software. © 1992 by Imperial College of Science, Technology, and Medicine.

Fig. 4. Example of ABACUSS input file for constrained dynamic optimization.

Table 1. Solution statistics for constrained brachistochrone

Initial guess                        QP    LS    CPU (s)
\theta(t) = -1.558 + 2.645t           2     3    0.38
\theta(t) = -1.6 + 2.2857t            3     3    0.47
\theta(t) = 0.0                       3     5    0.52
\theta(t) = -1.6                      5     5    0.67
\theta(t) = -1.0                      6     6    0.75

The ABACUSS input file is shown in Fig. 4, and Table 1 gives solution statistics from different initial guesses for the control variable. In this table, "QP" refers to the number of


Fig. 5. State variable trajectory for constrained brachistochrone problem.

master iterations of the NLP solver, "LS" refers to the number of line searches performed, and "CPU" gives the total CPU time for the solution on an HP C160 workstation. Note that these statistics compare favorably with those reported for other similarly sized problems (Vassiliadis, 1993). Figures 5 and 6 show the state and control variable solutions of the constrained brachistochrone problem obtained using the constrained dynamic optimization algorithm. The "kink" in the control variable profile is due to the control becoming determined by the high-index DAE during the constrained portions of the trajectory.
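The dynamics (77)-(79) can be integrated directly for any guessed control profile. The sketch below is ours (not ABACUSS output); it uses the paper's first initial guess \theta(t) = -1.558 + 2.645t and its parameter g = -1.0, and the final time of 1.0 is illustrative rather than the reported optimum. A useful correctness check is the energy relation v^2 = 2 g y, obtained by multiplying (79) by v and substituting (78), which the integrated trajectory must satisfy:

```python
import math

def simulate(theta, t_end, g=-1.0, n=20000):
    """Fixed-step RK4 integration of the brachistochrone dynamics
    (77)-(79): dx/dt = v cos(theta), dy/dt = v sin(theta),
    dv/dt = g sin(theta), from the initial condition (82)."""
    def f(t, s):
        x, y, v = s
        th = theta(t)
        return (v * math.cos(th), v * math.sin(th), g * math.sin(th))

    dt = t_end / n
    t, s = 0.0, (0.0, 0.0, 0.0)
    for _ in range(n):
        k1 = f(t, s)
        k2 = f(t + dt / 2, tuple(si + dt / 2 * ki for si, ki in zip(s, k1)))
        k3 = f(t + dt / 2, tuple(si + dt / 2 * ki for si, ki in zip(s, k2)))
        k4 = f(t + dt, tuple(si + dt * ki for si, ki in zip(s, k3)))
        s = tuple(si + dt / 6 * (a + 2 * b + 2 * c + d)
                  for si, a, b, c, d in zip(s, k1, k2, k3, k4))
        t += dt
    return s

x, y, v = simulate(lambda t: -1.558 + 2.645 * t, t_end=1.0)
# Energy relation implied by (78)-(79) with v(0) = y(0) = 0: v**2 == 2*g*y.
```

Index reduction also explains the flat segment (the "kink") in Fig. 6: while (80) is active, y = a x - b, and differentiating once gives v sin(theta) = a v cos(theta), i.e. tan(theta) = a, so the control is pinned at theta = arctan(-0.4), roughly -0.381, over the constrained arc.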

9. Conclusion

The fluctuating-index constrained dynamic optimization algorithm presented in this paper is a new method for solving dynamic optimization problems with state variable path constraints. The method allows the IVP solver to enforce the constraints on the state variables, resulting in smaller master NLPs and guaranteeing feasibility with respect to the path constraints throughout the solution trajectory.

There are several issues associated with the constrained dynamic simulation algorithm that warrant further investigation. In particular, it is unclear how to handle problems where, at an intermediate iteration, the number of active constraints is greater than the number of controls that are input-connected to the states in the active constraints. It is also possible that there is more than one valid matching between the elements of a and b, and we are investigating methods for determining the optimal matching set, should one exist. From a practical standpoint, use of this algorithm may require solution of second-order sensitivity equations on some problems. While solution of these equations may be performed with the same efficient algorithm as that used in Feehery et al. (1997) for first-order sensitivities, additional derivative information is required that may be expensive to compute. However, the second-order sensitivity information is not required if u^s is not permitted to be discontinuous upon deactivation of the inequality; in this case, deactivation can only occur when u^c = u^s. Use of this method to solve more complex constrained dynamic optimization problems can often


Fig. 6. Control variable trajectory for constrained brachistochrone problem.

result in numerical difficulties if the sequence of constraint activations and deactivations changes from one iteration of the NLP solver to the next. These difficulties arise from discontinuities in the objective and/or constraint functions, caused by nonexistence of the sensitivity functions when the sequence of constraint events changes (Galán et al., 1998). The method does not experience these difficulties if the initial guess for the optimization parameters is close enough to the optimum that the constraint activation and deactivation sequence remains the same for all iterations of the NLP solver. We are currently investigating modifications that extend the method to handle poor initial guesses for such problems.

It should be noted that the method reported in this paper and the slack variable method are the only reported methods that guarantee that the state variables remain feasible with respect to the path constraints during each IVP subproblem of the NLP, and that the solution satisfies, within the truncation error tolerance, all path constraints over the entire time interval.

References

Bartholomew-Biggs, M.C. A penalty method for point and path state constraints in trajectory optimization. Opt. Control Appl. Meth. 16, 291-297 (1995).
Barton, P.I. and Pantelides, C.C. Modeling of combined discrete/continuous processes. AIChE J. 40, 966-979 (1994).
Bellman, R. Dynamic programming. Princeton University Press, Princeton, NJ (1957).
Brenan, K.E., Campbell, S.L. and Petzold, L.R. Numerical solution of initial value problems in differential-algebraic equations. SIAM, Philadelphia, PA (1996).
Brüll, L. and Pallaske, R. On consistent initialization of differential-algebraic equations with discontinuities. In Proc. 4th European Conference on Mathematics in Industry (pp. 213-217). Teubner, Stuttgart (1992).
Brusch, R.G. and Schapelle, R.H. Solution of highly constrained optimal control problems using nonlinear programming. AIAA J. 11, 135-136 (1973).
Bryson, A.E. and Ho, Y. Applied optimal control. Hemisphere, New York (1975).
Denham, W.F. and Bryson, A.E. Optimal programming problems with inequality constraints. II: solution by steepest ascent. AIAA J. 2(1), 25-34 (1964).


Dixon, L.C.W. and Bartholomew-Biggs, M.C. Adjoint-control transformations for solving practical optimal control problems. Opt. Control Appl. Meth. 2, 365-381 (1981).
Dreyfus, S.E. Variational problems with state variable inequality constraints. J. Math. Anal. Appl. 4, 297-308 (1962).
Feehery, W.F., Banga, J.R. and Barton, P.I. A novel approach to dynamic optimization of ODE and DAE systems as high-index problems. AIChE Annual Meeting, Miami (1995).
Feehery, W.F. and Barton, P.I. A differentiation-based approach to dynamic simulation and optimization with high-index differential-algebraic equations. In M. Berz, Ch. Bischof, G. Corliss and A. Griewank (Eds), Computational differentiation: techniques, applications, and tools (pp. 239-252). SIAM, Philadelphia, PA (1996).
Feehery, W.F., Tolsma, J.E. and Barton, P.I. Efficient sensitivity analysis of large-scale differential-algebraic equations. Appl. Numer. Math. 25, 41-54 (1997).
Galán, S., Feehery, W.F. and Barton, P.I. Parametric sensitivity functions for hybrid discrete/continuous systems. Appl. Numer. Math., submitted (1998).
Gill, P.E., Murray, W. and Saunders, M.A. Large-scale SQP methods and their application in trajectory optimization. In R. Bulirsch and D. Kraft (Eds), Computational optimal control (pp. 29-42). Birkhäuser Verlag, Basel (1994).
Gopal, V. and Biegler, L.T. A successive linear programming approach for initialization and reinitialization after discontinuities of differential-algebraic equations. SIAM J. Sci. Comput., to appear.
Jacobson, D.H. and Lele, M.M. A transformation technique for optimal control problems with a state variable inequality constraint. IEEE Trans. Automat. Control 5, 457-464 (1969).
Jacobson, D.H., Lele, M.M. and Speyer, J.L. New necessary conditions of optimality for control problems with state variable inequality constraints. J. Math. Anal. Appl. 35, 255-284 (1971).
Kelley, H.J. A trajectory optimization technique based upon the theory of the second variation. Prog. Astronaut. Aeronaut. 14, 559-582 (1964).
Kraft, D. On converting optimal control problems into nonlinear programming problems. Comput. Math. Prog. 15, 261-280 (1985).
Kraft, D. Algorithm 733: TOMP - Fortran modules for optimal control calculations. ACM Trans. Math. Software 20(3), 263-281 (1994).
Logsdon, J.S. and Biegler, L.T. Accurate solution of differential-algebraic optimization problems. Ind. Engng Chem. Res. 28, 1628-1639 (1989).
Luus, R. Piecewise linear continuous optimal control by iterative dynamic programming. Ind. Engng Chem. Res. 32(5), 859-865 (1993).
Majer, C., Marquardt, W. and Gilles, E.D. Reinitialization of DAEs after discontinuities. Comput. Chem. Engng 19, S507-S512 (1995).
Maly, T. and Petzold, L.R. Numerical methods and software for sensitivity analysis of differential-algebraic systems. Appl. Numer. Math. 20, 57-79 (1996).
Mattsson, S.E. and Söderlind, G. Index reduction in differential-algebraic equations using dummy derivatives. SIAM J. Sci. Statist. Comput. 14, 677-692 (1993).
Miele, A., Mohanty, B.P., Venkataraman, P. and Kuo, Y.M. Numerical solution of minimax problems of optimal control, Part 2. JOTA 38(1), 111-135 (1982).
Pantelides, C.C. The consistent initialization of differential-algebraic systems. SIAM J. Sci. Statist. Comput. 9, 213-231 (1988).
Park, T. and Barton, P.I. State event location in differential-algebraic models. ACM Trans. Modeling Comput. Simulation 6, 137-165 (1996).
Park, T. and Barton, P.I. Analysis and control of combined discrete/continuous systems: progress and challenges in the chemical processing industries. AIChE Symp. Ser. 93(316), 102-114 (1997).
Pesch, H.J. Real-time computation of feedback controls for constrained optimal control problems. Part 2: a correction method based on multiple shooting. Opt. Control Appl. Meth. 10, 147-171 (1989).
Petzold, L., Rosen, J.B., Gill, P.E., Jay, L.O. and Park, K. Numerical optimal control of parabolic PDEs using DASOPT. Proc. IMA Workshop on Large-Scale Optimization (1996).
Rosen, O. and Luus, R. Evaluation of gradients for piecewise optimal control. Comput. Chem. Engng 15(4), 273-281 (1991).
Sargent, R.W.H. and Sullivan, G.R. The development of an efficient optimal control package. In Proc. 8th IFIP Conf. Optimization Tech., Pt. 2 (1977).
Tanartkit, P. and Biegler, L.T. Stable decomposition for dynamic optimization. Ind. Engng Chem. Res. 34, 1253-1266 (1995).
Valentine, F.A. The problem of Lagrange with differential inequalities as added side conditions. In Contributions to the Calculus of Variations (pp. 407-448). Chicago University Press, Chicago (1937).
Vassiliadis, V.S. Computational solution of dynamic optimization problems with general differential-algebraic constraints. Ph.D. thesis, University of London, London, UK (1993).
Vassiliadis, V.S., Sargent, R.W.H. and Pantelides, C.C. Solution of a class of multistage dynamic optimization problems. 2. Problems with path constraints. Ind. Engng Chem. Res. 33, 2123-2133 (1994).
Xing, A. and Wang, C. Applications of the exterior penalty method in constrained optimal control problems. Opt. Control Appl. Meth. 10, 333-345 (1989).
Yen, V. and Nagurka, M. Optimal control of linearly constrained linear systems via state parameterization. Opt. Control Appl. Meth. 13, 155-167 (1992).