Solution of nonlinear boundary value problems coupled to a system of algebraic equations using quasilinearization


Nonlinear Analysis, Theory, Methods & Applications, Vol. 11, No. 6, pp. 691-696, 1987. Printed in Great Britain. 0362-546X/87 $3.00 + .00, Pergamon Journals Ltd.


SOLUTION OF NONLINEAR BOUNDARY VALUE PROBLEMS COUPLED TO A SYSTEM OF ALGEBRAIC EQUATIONS USING QUASILINEARIZATION

ANTHONY WEXLER
Department of Physiology and Biophysics, University of Southern California, Los Angeles, California 90033, U.S.A.

(Received 15 November 1985; received for publication 2 June 1986)

Key words and phrases: Quasilinearization, boundary value problems.

INTRODUCTION

The method presented here enables a nonlinear two-point boundary value problem to be solved when it is coupled to an associated system of nonlinear algebraic equations. The method is based on quasilinearization as presented by Bellman and Kalaba [1]. It has application to, among others, optimal control problems, least squares problems, parameter identification problems, and our application, which is renal physiology problems.

Quasilinearization is a Newton method for solving nonlinear boundary value problems. As with most Newton methods, quasilinearization is iterative but attains quadratic convergence. The method presented here retains the essential features of quasilinearization but allows the nonlinear boundary value problem to be coupled to a nonlinear system of finite relations that I refer to as the algebraic system of equations.

Both quasilinearization and the method presented below require the evaluation of the Jacobian of the system; for an $n$th order system, $n^2$ partial derivatives must be evaluated at each integration step. In the past, the lack of a method for easily evaluating large numbers of partial derivatives has limited the use of quasilinearization to relatively small systems of equations. The work presented in [4] enables partial derivatives to be easily evaluated, and it is currently used with the method of this paper and quasilinearization to solve large systems of differential equations.

This paper is divided into two main parts. The first part is a review of the quasilinearization method. The second part is a derivation of quasilinearization with the associated algebraic system included. An algorithm for solving these problems is also presented and some test results are reported.

QUASILINEARIZATION

Quasilinearization, as presented by Bellman and Kalaba [1], is a method for solving nonlinear boundary value problems. These problems consist of systems of nonlinear ordinary differential equations where the boundary conditions are specified at more than one value of the independent variable. Furthermore, the boundary conditions may be nonlinear functions of the values of the dependent variables at more than one value of the independent variable.

Supported by NIH Grant AM33729.


The system of differential equations is represented as $N$ ordinary first-order differential equations of the form shown in equation (1). The boundary conditions are given by $N$ functions of the solutions of these equations as shown in equation (2):

$$\frac{d\bar{x}}{dt} = \bar{f}(\bar{x}) \qquad (1)$$

where $\bar{x} = \bar{x}(t) = [x_1(t), x_2(t), \ldots, x_N(t)]$,

$$\bar{b}(\bar{x}(t_1), \ldots, \bar{x}(t_m)) = \bar{0} \text{ (zero vector)} \qquad (2)$$

where $\bar{b} = [b_1, b_2, \ldots, b_i, \ldots, b_N]$, $b_i = b_i(x_1(t_1), \ldots, x_N(t_1), \ldots, x_1(t_m), \ldots, x_N(t_m))$, $t_b \le t \le t_e$, and $t_b$ and $t_e$ are the beginning and end of the range of the independent variable $t$.

To solve this system via quasilinearization, the differential equations are linearized and successive approximations to the solution are made. Equation (1) is linearized by expanding its right-hand side in a Taylor series about the previous estimate of the solution, $\bar{x}_{j-1}$, while retaining the first-order terms:

$$\frac{d\bar{x}_j}{dt} = \bar{f}(\bar{x}_{j-1}) + J_f(\bar{x}_{j-1})(\bar{x}_j - \bar{x}_{j-1}). \qquad (3)$$

Equation (3) is a system of linear ordinary differential equations in $\bar{x}_j$ (the $j$th approximation of $\bar{x}$) in terms of $\bar{x}_{j-1}$ (the $(j-1)$th approximation to $\bar{x}$) and $J_f(\bar{x}_{j-1})$ (the Jacobian of the functions $\bar{f}$ evaluated at $\bar{x}_{j-1}$). To obtain an approximation of $\bar{x}$, we start with an initial guess $\bar{x}_0$ and calculate $\bar{x}_1$. Subsequent approximations are obtained in a similar fashion by calculating $\bar{x}_j$ using $\bar{x}_{j-1}$. As $\bar{x}_j$ converges to $\bar{x}$, $\bar{x}_j$ approaches $\bar{x}_{j-1}$, causing the second term on the right-hand side of equation (3) to vanish.

Since equation (3) is linear, $\bar{x}_j$ is comprised of a particular solution $\bar{p}_j$ added to a linear combination of the $N$ complementary solutions $\bar{e}_k$. The particular solution starts with initial conditions $\bar{x}_{j-1}(t_b)$ and satisfies equation (4), although any suitable initial conditions can be chosen:

$$\frac{d\bar{p}_j}{dt} = \bar{f}(\bar{x}_{j-1}) + J_f(\bar{x}_{j-1})(\bar{p}_j - \bar{x}_{j-1}) \qquad (4)$$

where $\bar{p}_j(t_b) = \bar{x}_{j-1}(t_b)$.
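As a concrete illustration, the quasilinearization sweep of equation (3) can be sketched in Python on a toy scalar problem of my own (not one from the paper): $x' = -x^2$, $x(0) = 1$, whose exact solution is $x(t) = 1/(1+t)$. Each sweep integrates the linearized equation with $\bar{f}$ and its Jacobian frozen at the previous iterate; since this toy is an initial value problem, the boundary-condition machinery of equations (4)-(8) is not exercised here.

```python
# Minimal quasilinearization sketch (my own toy example, not from the paper):
# x' = -x^2, x(0) = 1 on [0, 1]; exact solution x(t) = 1/(1 + t).
# Each sweep integrates equation (3):
#   x_j' = f(x_{j-1}) + f'(x_{j-1}) (x_j - x_{j-1})

def f(x):  return -x * x       # nonlinear right-hand side f(x) = -x^2
def fp(x): return -2.0 * x     # its derivative (the 1x1 Jacobian J_f)

def sweep(x_prev, h):
    """Integrate the linearized equation (3) for x_j by explicit Euler."""
    x = [x_prev[0]]                        # keep the initial condition x(0) = 1
    for i in range(len(x_prev) - 1):
        rhs = f(x_prev[i]) + fp(x_prev[i]) * (x[i] - x_prev[i])
        x.append(x[i] + h * rhs)
    return x

h, n = 1e-3, 1000
x = [1.0] * (n + 1)                        # crude initial guess x_0(t) = 1
for j in range(6):                         # a handful of Newton sweeps
    x_new = sweep(x, h)
    diff = max(abs(a - b) for a, b in zip(x_new, x))
    x = x_new
print(abs(x[-1] - 0.5))                    # only Euler discretization error left
```

After a few sweeps the change between successive iterates shrinks at the quadratic rate characteristic of Newton methods; the remaining error at $t = 1$ is the Euler discretization error, not the linearization error.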

Orthogonal initial conditions are now chosen for the $N$ complementary solutions $\bar{e}_k$, which must satisfy the linear homogeneous ordinary differential equation given in equation (5). To solve the boundary conditions, it is necessary but not sufficient that the initial conditions for the $N$ $\bar{e}_k$ solutions be linearly independent. Usually it is easier numerically to satisfy the boundary conditions if the initial conditions are also orthogonal.

$$\frac{d\bar{e}_k}{dt} = J_f(\bar{x}_{j-1})\,\bar{e}_k, \qquad k = 1, 2, \ldots, N \qquad (5)$$

with the orthogonal initial conditions:

$$\bar{e}_k(t_b) = [0, 0, \ldots, 0, 1, 0, \ldots, 0]$$

where the initial condition vectors $\bar{e}_k(t_b)$ have a 1 in the $k$th element and zeros elsewhere.


This system of $N(N + 1)$ initial-value ordinary differential equations (one particular system of $N$ equations plus $N$ complementary systems of $N$ equations) is solved using conventional Runge-Kutta or Adams-Moulton integration methods. Since the complementary solutions were started with orthogonal initial conditions, each of the solutions should be linearly independent. (If the problem becomes ill-conditioned, Gram-Schmidt orthonormalization may be performed on the solutions periodically during the integration.) A linear combination of these complementary solutions is now chosen so that when they are added to the particular solution to form the next $\bar{x}_j$, the boundary condition functions $\bar{b}$ are satisfied. This linear combination is given by a vector $\bar{u}$ with elements $u_k$, as shown in equation (6):

$$\bar{x}_j(t) = \bar{p}_j(t) + \sum_{k=1}^{N} u_k \bar{e}_k(t). \qquad (6)$$
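The superposition just described can be sketched on a small linear system (my own example, not one from the paper): $x_1' = x_2$, $x_2' = x_1$ with boundary conditions $x_1(0) = 0$ and $x_1(1) = \sinh 1$, whose solution is $x_1(t) = \sinh t$. A particular solution and two complementary solutions with unit initial conditions are integrated forward, and the coefficients $u_k$ are then chosen to satisfy the boundary conditions; here the boundary functions happen to be linear, so no Newton-Raphson iteration is needed.

```python
# Superposition sketch for a linear system, as used inside each
# quasilinearization sweep (my own toy example, not from the paper):
# x1' = x2, x2' = x1 with x1(0) = 0, x1(1) = sinh(1); exact x1(t) = sinh(t).
import math

def rk4(x, h):
    """One RK4 step of the linear system x' = A x with A = [[0,1],[1,0]]."""
    def f(v): return [v[1], v[0]]
    k1 = f(x)
    k2 = f([x[i] + 0.5 * h * k1[i] for i in range(2)])
    k3 = f([x[i] + 0.5 * h * k2[i] for i in range(2)])
    k4 = f([x[i] + h * k3[i] for i in range(2)])
    return [x[i] + h / 6 * (k1[i] + 2 * k2[i] + 2 * k3[i] + k4[i]) for i in range(2)]

def integrate(x0, n=1000, h=1e-3):
    x = x0
    for _ in range(n):
        x = rk4(x, h)
    return x

p  = integrate([0.0, 0.0])     # particular solution (zero: the ODE is homogeneous)
e1 = integrate([1.0, 0.0])     # complementary solution, 1st unit initial condition
e2 = integrate([0.0, 1.0])     # complementary solution, 2nd unit initial condition

# Boundary conditions: x1(0) = 0 forces u1 = 0; x1(1) = sinh(1) fixes u2.
u1 = 0.0
u2 = (math.sinh(1.0) - p[0] - u1 * e1[0]) / e2[0]
print(round(u2, 6))            # 1.0 up to integration error
```

With $u_1 = 0$ and $u_2 = 1$, the combination $\bar{p} + u_1\bar{e}_1 + u_2\bar{e}_2$ reproduces $[\sinh t, \cosh t]$, which satisfies both boundary conditions.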

The boundary functions $\bar{b}$ may be nonlinear. To solve these functions, and thereby calculate the vector $\bar{u}$ and consequently the next solution $\bar{x}_j$, the Newton-Raphson successive approximation scheme is used. Combining equations (2) and (6), we see that $\bar{b}$ is a function of the vector $\bar{u}$ and the functions $\bar{p}_j$ and $\bar{e}_k$:

$$\bar{b}\left(\bar{p}_j(t_1) + \sum_{k=1}^{N} u_k \bar{e}_k(t_1), \; \ldots, \; \bar{p}_j(t_m) + \sum_{k=1}^{N} u_k \bar{e}_k(t_m)\right) = \bar{0}. \qquad (7)$$

After integrating the initial conditions forward from $t_b$ to $t_e$, the functions $\bar{p}_j$ and $\bar{e}_k$ are known at all times $t_b \le t \le t_e$, and the $\bar{b}$ are solely functions of the unknown vector $\bar{u}$. We now make successive approximations to the vector $\bar{u}$ by expanding the $\bar{b}$ functions of equation (2) in a Taylor series and retaining only the first-order terms, as in equation (8):

$$\bar{b}(\bar{u}_{j-1}) + J_b(\bar{u}_{j-1})(\bar{u}_j - \bar{u}_{j-1}) = \bar{0}. \qquad (8)$$

The square matrix $J_b$ is the Jacobian of the $\bar{b}$ functions with respect to the $\bar{u}$ vector. Each term in this Jacobian is given by equations (9.1)-(9.3):

$$J_b(i, k) = \frac{\partial b_i}{\partial u_k} \qquad (9.1)$$

$$= \sum_{h=1}^{N} \frac{\partial b_i}{\partial x_h(t_1)} \frac{\partial x_h(t_1)}{\partial u_k} + \cdots + \sum_{h=1}^{N} \frac{\partial b_i}{\partial x_h(t_m)} \frac{\partial x_h(t_m)}{\partial u_k} \qquad (9.2)$$

$$= \sum_{h=1}^{N} \frac{\partial b_i}{\partial x_h(t_1)}\, e_{k,h}(t_1) + \cdots + \sum_{h=1}^{N} \frac{\partial b_i}{\partial x_h(t_m)}\, e_{k,h}(t_m) \qquad (9.3)$$

where $e_{k,h}$ is the $h$th element of the complementary solution $\bar{e}_k$.

To obtain the $\bar{u}$ vector, an initial approximation $\bar{u}_0$ is made and equation (8) is solved by Gauss-Jordan elimination for $(\bar{u}_1 - \bar{u}_0)$. Adding this to $\bar{u}_0$ gives the next approximation $\bar{u}_1$. This process is carried on until $\bar{u}_j$ converges to $\bar{u}$.
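The Newton-Raphson step of equation (8), with the linear system solved by Gauss-Jordan elimination, can be sketched as follows on a hypothetical pair of boundary functions (my own 2-by-2 example; the function names are illustrative, not from the paper):

```python
# Newton-Raphson on a nonlinear boundary system b(u) = 0, with the linear
# step of equation (8) solved by Gauss-Jordan elimination (toy example):
#   b1 = u1^2 + u2 - 3,  b2 = u1 + u2^2 - 5   (one root is u = (1, 2))

def b(u):
    return [u[0] ** 2 + u[1] - 3.0, u[0] + u[1] ** 2 - 5.0]

def jac(u):
    return [[2 * u[0], 1.0], [1.0, 2 * u[1]]]   # J_b, the Jacobian of b

def gauss_jordan_solve(A, rhs):
    """Solve A d = rhs by Gauss-Jordan elimination with partial pivoting."""
    n = len(A)
    M = [row[:] + [rhs[i]] for i, row in enumerate(A)]   # augmented matrix
    for c in range(n):
        piv = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[piv] = M[piv], M[c]                      # pivot row swap
        M[c] = [v / M[c][c] for v in M[c]]               # normalize pivot row
        for r in range(n):
            if r != c:                                   # eliminate column c
                M[r] = [vr - M[r][c] * vc for vr, vc in zip(M[r], M[c])]
    return [M[r][n] for r in range(n)]

u = [2.0, 1.0]                      # initial approximation u_0
for _ in range(10):                 # equation (8): J_b (u_j - u_{j-1}) = -b
    d = gauss_jordan_solve(jac(u), [-v for v in b(u)])
    u = [u[i] + d[i] for i in range(2)]
print([round(v, 6) for v in u])
```

The quadratic convergence of the Newton iteration means a handful of steps reaches machine precision from a reasonable starting guess.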

QUASILINEARIZATION WITH AN ASSOCIATED ALGEBRAIC SYSTEM

The theory

As opposed to equation (1), two-point boundary value problems may appear in the form given in equations (10.1) and (10.2). In (10.1) there are $N$ ordinary first-order nonlinear differential equations, and in (10.2) there are $M$ nonlinear algebraic equations. The $\bar{x}$ are the $N$ dependent variables that appear on the left-hand side of equation (10.1), the $\bar{y}$ are the remaining $M$ dependent variables, and $t$ is the independent variable:

$$\frac{d\bar{x}}{dt} = \bar{f}(\bar{x}, \bar{y}) \qquad (10.1)$$

$$\bar{0} = \bar{g}(\bar{x}, \bar{y}) \qquad (10.2)$$

where $\bar{x} = \bar{x}(t) = [x_1(t), x_2(t), \ldots, x_N(t)]$ and $\bar{y} = \bar{y}(t) = [y_1(t), y_2(t), \ldots, y_M(t)]$. The boundary conditions given in equation (2) apply unchanged to equations (10.1) and (10.2). As above, we linearize the $i$th equation of equation (10.1) by expanding the right-hand side in a Taylor series and retaining the first-order terms to get equation (11).

$$\frac{dx_{i,j}}{dt} = f_i(\bar{x}_{j-1}, \bar{y}_{j-1}) + \sum_{l=1}^{N} \frac{df_i}{dx_l}(x_{l,j} - x_{l,j-1}). \qquad (11)$$

Since $\bar{f}$ is a function of $\bar{x}$ and $\bar{y}$, the total derivative $df_i/dx_l$ must be expanded into partial derivatives. Now if we consider that equation (10.2) defines the $\bar{y}$ as functions of $\bar{x}$, the total derivative in equation (11) can be expanded to obtain equation (12):

$$\frac{dx_{i,j}}{dt} = f_i(\bar{x}_{j-1}, \bar{y}_{j-1}) + \sum_{l=1}^{N} \left[\left(\frac{\partial f_i}{\partial x_l} + \sum_{h=1}^{M} \frac{\partial f_i}{\partial y_h} \frac{\partial y_h}{\partial x_l}\right)(x_{l,j} - x_{l,j-1})\right]. \qquad (12)$$

Equation (12) can be rewritten in vector form as equation (13):

$$\frac{d\bar{x}_j}{dt} = \bar{f}(\bar{x}_{j-1}, \bar{y}_{j-1}) + J_f(\bar{x}_{j-1}, \bar{y}_{j-1})(\bar{x}_j - \bar{x}_{j-1}) \qquad (13)$$

where each element of the Jacobian is

$$J_f(i, l) = \frac{\partial f_i}{\partial x_l} + \sum_{h=1}^{M} \frac{\partial f_i}{\partial y_h} \frac{\partial y_h}{\partial x_l}$$

or in matrix form

$$J_f = f_x + y_x f_y.$$

Equation (13) has the same form as equation (3). The difference comes in the calculation of $J_f$, which we will now pursue.

The expression above for the Jacobian of $\bar{f}$ involves partial derivatives of $\bar{f}$ with respect to $\bar{x}$ and $\bar{y}$, and partials of $\bar{y}$ with respect to $\bar{x}$. The functional form of $\bar{f}$ is given in equation (10.1), but the functional form of $\bar{y}$ is not given explicitly, and in general we will not be able to invert (10.2) to obtain $\bar{y}$ as a function of $\bar{x}$. Fortunately, we do not need $\bar{y}$ as a function of $\bar{x}$, but only the partial derivatives of $\bar{y}$ with respect to $\bar{x}$. To obtain these partial derivatives, we first take the derivative of both sides of equation (10.2):

$$\frac{\partial g_h}{\partial x_l} + \sum_{k=1}^{M} \frac{\partial g_h}{\partial y_k} \frac{\partial y_k}{\partial x_l} = 0, \qquad h = 1, \ldots, M; \quad l = 1, \ldots, N. \qquad (14)$$

Equation (14) can be written in matrix notation as equation (15):

$$g_x + y_x g_y = 0 \qquad (15)$$


where

$$g_{x\,h,l} = \frac{\partial g_h}{\partial x_l}, \qquad g_{y\,h,k} = \frac{\partial g_h}{\partial y_k}, \qquad y_{x\,k,l} = \frac{\partial y_k}{\partial x_l}.$$

Solving equation (15) for $y_x$ yields:

$$y_x = -g_x g_y^{-1}. \qquad (16)$$
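Equation (16) can be checked numerically on a toy pair of constraints (my own example, not from the paper). Writing (16) as the linear solve $g_y \cdot (\partial\bar{y}/\partial x_l) = -g_x$, column by column, avoids forming the inverse explicitly:

```python
# Numerical check of equation (16) on a toy pair of constraints (my own
# example, not from the paper):
#   g1 = y1 + y2 - x = 0,  g2 = y1 - y2 - x^2 = 0
# so y1 = (x + x^2)/2 and y2 = (x - x^2)/2, and hence
#   dy1/dx = (1 + 2x)/2,  dy2/dx = (1 - 2x)/2.

def solve2(A, rhs):
    """Solve a 2x2 linear system by Cramer's rule."""
    det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
    return [(rhs[0] * A[1][1] - A[0][1] * rhs[1]) / det,
            (A[0][0] * rhs[1] - rhs[0] * A[1][0]) / det]

x = 0.7
gy = [[1.0, 1.0], [1.0, -1.0]]      # dg_h/dy_k (invertible, as (16) requires)
gx = [-1.0, -2.0 * x]               # dg_h/dx for the single x variable
yx = solve2(gy, [-v for v in gx])   # equation (16), written as a linear solve
print(yx)                           # [(1 + 2x)/2, (1 - 2x)/2] = [1.2, -0.2]
```

The computed column matches the derivatives of the explicit solution, confirming that the implicit differentiation delivers $\partial y_k/\partial x_l$ without ever inverting (10.2) for $\bar{y}$.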

Equation (16) gives us the partials of $\bar{y}$ with respect to $\bar{x}$ that are needed to obtain the Jacobian in equation (13), under the condition that the matrix $g_y$ be invertible for all values of the independent variable $t$.

The algorithm

Using the theory above, a FORTRAN program was written to solve systems that can be expressed in the form of equations (10.1) and (10.2). The following steps describe this algorithm:

Step 1. Initialize an array to the first guess of the solution for $\bar{x}$ and $\bar{y}$.

Step 2. Using a nonlinear equation solver, calculate a new value for the $\bar{y}$ part of the solution. This was done with the Newton-Raphson method using the Jacobian $g_y$, which is also required in step 3a.

Step 3. Integrate the complementary and particular solutions from $t = t_b$ through to $t = t_e$. This was done using an Adams-Moulton integrator with a Runge-Kutta start. To do the integration, the Jacobian of $\bar{f}$ must be obtained:

Step 3a. Obtain all the derivatives of $\bar{g}$ with respect to $\bar{x}$ and $\bar{y}$.

Step 3b. Invert $g_y$, negate it, and multiply by $g_x$ to obtain $y_x$ as in equation (16).

Step 3c. Obtain all the derivatives of $\bar{f}$ with respect to $\bar{x}$ and $\bar{y}$.

Step 3d. Multiply $y_x$ by $f_y$ and add to $f_x$ to finish the calculation of the Jacobian of $\bar{f}$.

Step 4. Calculate the vector $\bar{u}$ in equations (6) and (7) to determine what linear combination of the complementary solutions (when added to the particular solution) satisfies the boundary conditions. Once again, the Newton-Raphson method was used, but this time with the derivatives of the boundary equations $\bar{b}$.

Step 5. Use this vector to calculate the next estimate of $\bar{x}$.

Step 6. Return to step 2 and repeat these steps until $\bar{x}$ converges.
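Steps 2 and 3a-3d can be traced on the smallest possible coupled system (my own example, not from the paper): one differential equation $x' = f(x, y) = -x + y$ and one algebraic equation $g(x, y) = y - x^2 = 0$. Eliminating $y$ by hand gives $x' = -x + x^2$, so the assembled Jacobian should equal $-1 + 2x$:

```python
# Sketch of steps 2 and 3a-3d for a 1x1 coupled system (my own toy example,
# not from the paper): x' = f(x, y) = -x + y with g(x, y) = y - x^2 = 0.
# Eliminating y gives x' = -x + x^2, whose Jacobian is -1 + 2x.

def f_x(x, y): return -1.0        # step 3c: df/dx
def f_y(x, y): return 1.0         #          df/dy
def g_x(x, y): return -2.0 * x    # step 3a: dg/dx
def g_y(x, y): return 1.0         #          dg/dy

x = 0.3
y = x * x                          # step 2: solve g(x, y) = 0 for y
yx = -g_x(x, y) / g_y(x, y)        # step 3b: equation (16), 1x1 case
Jf = f_x(x, y) + yx * f_y(x, y)    # step 3d: J_f = f_x + y_x f_y, eq. (13)
print(Jf)                          # -1 + 2*0.3 = -0.4
```

The assembled $J_f$ agrees with the Jacobian of the reduced equation, which is what lets the integrator in step 3 proceed without ever solving (10.2) symbolically.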

CONCLUSION

This method was tested on two problems. The first was a system of two differential equations and one algebraic equation taken from Kagiwada et al. [2]. The method was then employed to solve a system of 24 differential equations and two algebraic equations from Moore and Marsh [3].


The ready calculation of first-order derivatives is key to the success of this method. In the second test example above, there were 24 differential equations in the form of equation (10.1), two algebraic equations in the form of equation (10.2), and 24 boundary condition equations in the form of equation (2). The derivatives of all these functions with respect to the 26 variables (24 $x$'s and 2 $y$'s) were easily obtained using the algorithm presented in [4]. This test example was run on a VAX 11/750 running VMS. Each iteration of the model (each execution of steps 2-5) took about 20 min of processor time when integrated over 1000 steps.

The method presented above provides an algorithm for readily solving systems of differential equations coupled to systems of algebraic equations. The method has been tested and shown to use a moderate amount of computer time. Most important, this method eliminates the need to solve the algebraic system in equation (10.2) for $\bar{y}$ in order to solve the system of differential equations in equation (10.1).

Acknowledgement--I would like to thank Robert E. Kalaba for his advice and guidance.

REFERENCES

1. BELLMAN R. E. & KALABA R. E., Quasilinearization and Nonlinear Boundary-Value Problems, American Elsevier, New York (1965).
2. KAGIWADA H., KALABA R., RASAKHOO N. & SPRINGARN K., Numerical Derivatives and Nonlinear Analysis, Plenum, New York (1985).
3. MOORE L. C. & MARSH D. J., How descending limb of Henle's loop permeability affects hypertonic urine formation, Am. J. Physiol. 239, F57-F71 (1980).
4. WEXLER A. S., Automatic evaluation of derivatives, Appl. Math. Comput. (in press).