Solving Complex-Valued Differential Systems

H. A. Watts, M. R. Scott, and M. E. Lord
Mathematics Division 2646, Sandia National Laboratories, Albuquerque, New Mexico 87185

ABSTRACT

Linear two-point boundary-value problems defined in complex space arise frequently in physical studies. We examine initial-value procedures based on superposition principles applied to the associated real system of equations. We show how to take advantage of the fact that the underlying problem is complex, thereby cutting the usual number of necessary integrations of the homogeneous system by one-half. We also discuss several aspects of incorporating these ideas into a computer code utilizing an orthonormalization procedure.

1. INTRODUCTION

Over the past few years we have been developing software for solving two-point boundary-value problems defined by real-valued functions. One code, called SUPORT [1, 2], has been widely disseminated, and we have found that many of the users of the code are actually solving problems defined in complex space. Such problems arise frequently in physical studies; for example, the Orr-Sommerfeld equation which arises in hydrodynamic stability analysis [3], the drift wave equations in Tokamaks [4], and the study of inviscid instability of the hyperbolic-tangent velocity profile [5]. Two alternatives are available for solving such problems. Complex arithmetic could be used throughout the solution technique, or the problem could be converted to a real system with the dimension doubling. This latter approach appears to be a rather common method of dealing with the complex problem. One reason is the availability of good numerical software which is


usually provided in real arithmetic, in particular for the initial-value problem [6]. We do not necessarily advocate that this be a standard practice for solving complex problems in general. However, because it does occur frequently, we think it is worthwhile to point out a technique to recover the increase in cost due to doubling the size of the system. That is, we shall exploit the special structure of the associated real system and find that the effective cost of solving the real system can be reduced, making it comparable to solving the problem directly using complex arithmetic.

In this paper, we shall restrict our attention to linear differential systems, noting that nonlinear equations may be attacked by the quasilinearization approach of solving a sequence of linear problems [7]. Furthermore, we shall be interested in initial-value procedures which are based on superposition, and we explore a simple idea which appears to have been overlooked in computational work on these types of problems. Thus we show how to take advantage of the fact that the underlying problem is complex, while integrating the associated system of real equations. The essence of the matter is that it is necessary to integrate only one-half of the usual number of homogeneous equations, thus resulting in a substantial reduction of both storage and computing time (approaching a savings of about 50 percent). In addition, we discuss several computational aspects of implementing these ideas in a computer code utilizing an orthonormalization procedure (such as SUPORT) [1, 2, 8]. Lastly, the Orr-Sommerfeld equation is used as an example to illustrate the efficacy of the new algorithm.

2. THE BOUNDARY-VALUE PROBLEM

Consider the complex linear two-point boundary-value problem

    w'(x) = T(x)w(x) + s(x),
    Cw(a) = γ,                                              (1)
    Dw(b) = δ,

where w and s are vector functions with n components, T is an n × n matrix function, C is an (n − k) × n matrix of rank n − k, γ is a vector with n − k components, D is a k × n matrix of rank k, and δ is a vector with k components. The vectors and matrices are, in general, complex valued, while the independent variable x is real valued. In the remainder of the paper, a particular ordering of the components for the associated real system will be assumed. Thus the derived system of real equations is taken to be of the form

    y'(x) = F(x)y(x) + g(x),
    Ay(a) = α,                                              (2)
    By(b) = β,


where y and g are the 2n-vectors (using a semicolon to denote vertical stacking of blocks)

    y = (w_R; w_I),   g = (s_R; s_I),

and

    α = (γ_R; γ_I),   β = (δ_R; δ_I)

are vectors with 2(n − k) and 2k components, respectively. The subscripts R and I will refer to the real and imaginary parts of the complex variables. Then

    F = [T_R, −T_I; T_I, T_R],   A = [C_R, −C_I; C_I, C_R],   B = [D_R, −D_I; D_I, D_R].   (3)

One can obviously derive other associated real systems, for example by alternating the real and imaginary components in defining the solution vector y. However, the above structure seems to simplify the presentation of ideas.
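To make the block structure in (3) concrete, the following self-contained sketch builds F from a complex matrix T and checks that F applied to (w_R; w_I) stacks the real and imaginary parts of Tw. The helper names (`real_embedding`, `matvec`) and the sample data are illustrative, not taken from the authors' code.

```python
# Sketch of the real embedding (3): a complex system w' = T w + s becomes
# y' = F y + g with y = (w_R; w_I).  All names and data are illustrative.

def real_embedding(T):
    """Build F = [T_R, -T_I; T_I, T_R] from a complex n x n matrix T."""
    n = len(T)
    F = [[0.0] * (2 * n) for _ in range(2 * n)]
    for i in range(n):
        for j in range(n):
            tr, ti = T[i][j].real, T[i][j].imag
            F[i][j], F[i][j + n] = tr, -ti          # top block row: [T_R, -T_I]
            F[i + n][j], F[i + n][j + n] = ti, tr   # bottom block row: [T_I, T_R]
    return F

def matvec(M, x):
    return [sum(M[i][j] * x[j] for j in range(len(x))) for i in range(len(M))]

# Check: F applied to (Re w, Im w) stacks Re(Tw) over Im(Tw).
T = [[1 + 2j, 0.5 - 1j], [-1j, 3 + 0j]]
w = [0.3 - 0.7j, 1.1 + 0.4j]
Tw = [sum(T[i][j] * w[j] for j in range(2)) for i in range(2)]
y = [z.real for z in w] + [z.imag for z in w]
Fy = matvec(real_embedding(T), y)
assert all(abs(Fy[i] - Tw[i].real) < 1e-12 for i in range(2))
assert all(abs(Fy[i + 2] - Tw[i].imag) < 1e-12 for i in range(2))
```

The check exercises exactly the identity behind (3): Tw = (T_R w_R − T_I w_I) + i(T_I w_R + T_R w_I).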

3. THE METHOD OF SUPERPOSITION AND ORTHONORMALIZATION

In this section, we present a brief description of the solution technique which uses superposition and orthonormalization principles. The initial point is assumed to be a, so the integration proceeds from a to b. Because we are solving linear differential systems, solutions of (2) can be expressed in the form

    y(x) = v(x) + U(x)c,                                    (4)

where v(x) is a particular solution which satisfies

    v'(x) = F(x)v(x) + g(x),   Av(a) = α,                   (5)

and the columns of the 2n × 2k matrix U(x) are homogeneous solutions which satisfy

    U'(x) = F(x)U(x),   AU(a) = 0.                          (6)

This is sometimes referred to as the reduced superposition algorithm, it having taken advantage of the separated boundary conditions so that only 2k (rather than 2n) homogeneous solutions are required. Thus a solution of (2) is obtained consisting of 2k linearly independent solutions of the system (6) together with a particular solution of the system (5), yielding 2k + 1 independent solution vectors which are sufficient to define the problem solution space. Of course if g(x) ≡ 0 and α = 0, then v(x) ≡ 0, which leaves only 2k solutions to be generated for the homogeneous problem. The vector c is obtained by requiring (4) to satisfy the boundary condition at the final point b, which leads to

    BU(b)c = β − Bv(b).                                     (7)
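As an illustration of the reduced superposition process (4)-(7), the sketch below solves a toy real two-point problem, y'' = y with y(0) = 1 and y(1) = 2 (so n = 2, k = 1), using a hand-rolled RK4 integrator. The problem and every name here are invented for the example; the boundary equation (7) reduces to a scalar.

```python
# Reduced superposition on a toy BVP: integrate one particular and one
# homogeneous solution, then enforce the terminal boundary condition (7).
import math

def rk4(f, x0, t0, t1, steps=200):
    """Classical fourth-order Runge-Kutta over [t0, t1]."""
    h = (t1 - t0) / steps
    x, t = list(x0), t0
    for _ in range(steps):
        k1 = f(t, x)
        k2 = f(t + h / 2, [xi + h / 2 * ki for xi, ki in zip(x, k1)])
        k3 = f(t + h / 2, [xi + h / 2 * ki for xi, ki in zip(x, k2)])
        k4 = f(t + h, [xi + h * ki for xi, ki in zip(x, k3)])
        x = [xi + h / 6 * (a + 2 * b + 2 * c + d)
             for xi, a, b, c, d in zip(x, k1, k2, k3, k4)]
        t += h
    return x

f = lambda t, x: [x[1], x[0]]          # y'' = y written as a first-order system

v1 = rk4(f, [1.0, 0.0], 0.0, 1.0)      # particular solution: satisfies y(0) = 1
u1 = rk4(f, [0.0, 1.0], 0.0, 1.0)      # homogeneous solution: y(0) = 0
c = (2.0 - v1[0]) / u1[0]              # eq. (7): BU(b)c = beta - Bv(b), scalar here
y_b = v1[0] + c * u1[0]                # superposed solution value at b = 1

assert abs(y_b - 2.0) < 1e-10
```

The exact solution is y = A e^x + (1 − A) e^{−x} with A = (2 − e^{−1})/(e − e^{−1}), so the computed superposition coefficient c should approximate y'(0) = 2A − 1.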

Unfortunately, this simple approach will not always work numerically. If the width of the eigenvalue spectrum of F is extremely large, it may not be possible to maintain numerical linear independence of the computed initial-value solutions over the complete interval [a, b]. To circumvent this difficulty, each time the solutions are in danger of becoming numerically linearly dependent, they are reorthonormalized before the integration proceeds. If {x_m} denote the orthonormalization points, the solution representation becomes

    y(x) = v_m(x) + U_m(x)c_m   for x ∈ [x_m, x_{m+1}].

At x_m, the Gram-Schmidt orthonormalization procedure defines

    U_m(x_m) = U_{m−1}(x_m)P_m^{−1},   v_m(x_m) = v_{m−1}(x_m) − U_m(x_m)w_m,

where P_m is obtained as an upper triangular transformation matrix and w_m contains the components of v_{m−1}(x_m) along the new orthonormal columns. Continuity of y is achieved by matching the solution representations over successive orthonormalization subintervals, which results in the c_m being defined recursively by

    c_{m−1} = P_m^{−1}(c_m − w_m).

Thus the solution process consists of the following steps: (a) the generation of starting values v(a), U(a) which satisfy the initial conditions in (5), (6); (b) the integration sweep from a to b, while storing the orthonormalization information at the points of orthonormalization, and homogeneous- and particular-solution values at all the designated output points of interest; (c) the backward sweep of computing the c_m and using the stored values of v_m and U_m to determine solution values y at the output points.

Further algorithmic and implementation details, such as the important aspect of how to choose the orthonormalization points, can be found in [1, 2, 9].
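The continuity matching in step (c) can be sketched on a toy example. The conventions assumed below (Gram-Schmidt factors the old columns as U_old = U_new P with P upper triangular, and the particular solution is purged of its components along the new columns, v_new = v_old − U_new w) are one consistent choice, not necessarily SUPORT's exact normalization:

```python
# Toy check that c_old = P^{-1}(c_new - w) keeps y = v + U c continuous
# across one orthonormalization point (conventions assumed, see text).

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def axpy(a, x, y):                      # returns y + a*x, elementwise
    return [yi + a * xi for xi, yi in zip(x, y)]

u1, u2 = [1.0, 0.0, 1.0, 0.0], [1.0, 1.0, 0.0, 2.0]   # old homogeneous columns
v_old = [0.5, -1.0, 2.0, 0.25]                        # old particular solution

# Gram-Schmidt: (u1 u2) = (q1 q2) * [[r11, r12], [0, r22]]
r11 = dot(u1, u1) ** 0.5
q1 = [x / r11 for x in u1]
r12 = dot(q1, u2)
t = axpy(-r12, q1, u2)
r22 = dot(t, t) ** 0.5
q2 = [x / r22 for x in t]

w = [dot(q1, v_old), dot(q2, v_old)]
v_new = axpy(-w[0], q1, axpy(-w[1], q2, v_old))

c_new = [0.3, -0.5]                     # arbitrary coefficients past the point
c2 = (c_new[1] - w[1]) / r22            # back-substitution: P c_old = c_new - w
c1 = (c_new[0] - w[0] - r12 * c2) / r11

y_old = axpy(c1, u1, axpy(c2, u2, v_old))
y_new = axpy(c_new[0], q1, axpy(c_new[1], q2, v_new))
assert max(abs(a - b) for a, b in zip(y_old, y_new)) < 1e-12
```

The assertion verifies that the two representations of y agree at the matching point, which is all the backward sweep needs.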

4. THE COMPLEX SOLUTION VIA THE REAL SOLUTION APPROACH

The method just described for the real system (2) can also be applied directly to the complex problem (1), resulting in

    w(x) = p(x) + W(x)e,                                    (8)

where p(x) is a particular solution of (1) and the columns of W constitute k independent complex solutions to the homogeneous system associated with (1). This representation contains 2n(k + 1) elements of information in the solution vectors, whereas the representation in (4) for the associated real system contains 2n(2k + 1) elements in the solution vectors. Hence, we see there is a possibility that solving the associated real system in the usual way leads to unnecessary effort being expended in the form of computing twice as many solutions to the homogeneous equations as are really needed. Let us rewrite (8) as

    w = (p_R + W_R e_R − W_I e_I) + i(p_I + W_I e_R + W_R e_I),


noting that for simplicity in notation, we have dropped the dependence upon x for the p, W, e variables. Now, associating the corresponding real and imaginary components leads to a solution of the real system (2) which is in the form of (4) with

    y = (w_R; w_I),   v = (p_R; p_I),   U = [W_R, −W_I; W_I, W_R],   c = (e_R; e_I).   (9)
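The association (9) can be checked directly in a few lines. The data below (p, W, e with n = 2, k = 1) are invented solely for the check:

```python
# Check of (9): given complex p, W, e with w = p + W e, form the real
# v, U, c and verify y = v + U c reproduces (w_R; w_I).  Data are made up.
p = [0.2 + 0.1j, -0.4 + 0.9j]                 # particular solution (n = 2)
W = [[1.0 + 2.0j], [0.5 - 1.0j]]              # k = 1 homogeneous column
e = [0.3 - 0.6j]                              # complex superposition coefficient

w = [p[i] + W[i][0] * e[0] for i in range(2)]

v = [z.real for z in p] + [z.imag for z in p]
c = [e[0].real, e[0].imag]
U = [[W[0][0].real, -W[0][0].imag],           # columns: (W_R; W_I), (-W_I; W_R)
     [W[1][0].real, -W[1][0].imag],
     [W[0][0].imag,  W[0][0].real],
     [W[1][0].imag,  W[1][0].real]]

y = [v[i] + U[i][0] * c[0] + U[i][1] * c[1] for i in range(4)]
target = [z.real for z in w] + [z.imag for z in w]
assert max(abs(a - b) for a, b in zip(y, target)) < 1e-12
```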

The form of U suggests that we do not have to compute (integrate) 2k independent solutions to the homogeneous system (6). Rather, we need only compute the k independent homogeneous solutions

    Z = (W_R; W_I),

and merely form the remaining vectors

    Z̃ = (−W_I; W_R),

which are also solutions of the homogeneous equations, since the block structure (3) of F implies that if z = (u; v) satisfies z' = Fz, then z̃ = (−v; u) satisfies z̃' = Fz̃. Hence, U = (Z Z̃) satisfies (6). This shows that advantage can be taken of the fact that the underlying problem of interest is complex, thereby allowing a reduction of one-half in the number of integrations of the homogeneous equations performed in the general case. To ensure that this technique defines a valid superposition solution to the complex problem, we need assurance that the k columns of the complex matrix W(x) remain independent whenever the 2k columns of the associated real matrix U(x) remain independent. This is an immediate consequence of the more general result: rank W = r if and only if rank U = 2r [10]. Thus a solution to the complex problem can in fact be achieved upon integration of the k solution columns of Z, provided the 2k columns of U = (Z Z̃) are independent. However, note that to keep the columns of U independent it is not sufficient to merely keep the columns of Z

independent. This can be seen from the example

    Z =
    [1  0]
    [0  0]
    [0  1]
    [0  0],

from which is obtained

    Z̃ =
    [0 −1]
    [0  0]
    [1  0]
    [0  0].

Clearly, rank Z = rank Z̃ = 2, yet rank(Z Z̃) = 2 as well, since z̃1 = z2 and z̃2 = −z1. From differential-equation theory, we know that the solution vectors z1(x), z̃1(x), ..., zk(x), z̃k(x) are linearly independent provided the initial vectors z1(a), z̃1(a), ..., zk(a), z̃k(a) are linearly independent. The question of how to obtain acceptable starting vectors will be answered in Section 6. From the standpoint of using simple superposition theory, this would then complete the mathematical justification of the new solution technique. However, for difficult problems (e.g., highly unstable mathematically), it will be necessary to combine superposition with a reorthonormalization process in order to maintain linear independence numerically. These complications will be dealt with in Section 7.
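Before turning to the formal properties, here is a small numeric spot-check (arbitrary T, illustrative names) that the mate z̃ = (−v; u) of a solution z = (u; v) of z' = Fz is again a solution, given the block structure (3) of F:

```python
# Numeric check that F z~ = (F z)~ for F built from a complex T as in (3),
# which is why mates of homogeneous solutions are again solutions.
T = [[2 - 1j, 1j], [0.5 + 0.5j, -1 + 2j]]       # arbitrary complex T, n = 2
n = len(T)
F = [[0.0] * (2 * n) for _ in range(2 * n)]
for i in range(n):
    for j in range(n):
        F[i][j] = F[i + n][j + n] = T[i][j].real
        F[i][j + n] = -T[i][j].imag
        F[i + n][j] = T[i][j].imag

def matvec(M, x):
    return [sum(r[j] * x[j] for j in range(len(x))) for r in M]

def mate(z):
    m = len(z) // 2
    return [-zi for zi in z[m:]] + list(z[:m])

z = [0.1, -0.3, 0.7, 0.2]            # an arbitrary state z = (u; v)
# derivative of the mate equals the mate of the derivative:
lhs = matvec(F, mate(z))
rhs = mate(matvec(F, z))
assert all(abs(a - b) < 1e-12 for a, b in zip(lhs, rhs))
```

So z̃'(x) = (Fz)~(x) = Fz̃(x) holds along the whole trajectory, not just at one point.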

5. PROPERTIES OF THE SOLUTION VECTORS

We shall be dealing with vectors having 2n components, which we partition according to

    z = (u; v),

where u and v each have n components. Associated with z, we shall define its "mate"

    z̃ = (−v; u) = Sz,

where

    S = [0, −I_n; I_n, 0].

Note that S is a skew-symmetric matrix satisfying Sᵀ = −S and SSᵀ = SᵀS = −S² = I_{2n}. There are several interesting and useful elementary properties associated with such vectors which we shall now enumerate. They can be readily established.


I. The mate of z̃ is −z.
II. (z, z̃) = zᵀz̃ = (u, −v) + (v, u) = 0.
III. (z̃1, z2) = −(z1, z̃2).
IV. (z̃1, z̃2) = (z1, z2).
V. ‖z̃‖₂ = ‖z‖₂.
VI. Suppose z1, z2, ..., zm form an orthonormal set. Then z̃1, z̃2, ..., z̃m also form an orthonormal set.
VII. If z = Σ_j c_j z_j, then z̃ = Σ_j c_j z̃_j.
VIII. Let w = u + iv be the complex vector which is associated with z, and similarly define w̃. Then w̃ = iw.
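These properties are easy to verify numerically. The following self-contained sketch (made-up vectors, illustrative helper names) spot-checks I-V and VIII:

```python
# Spot-check of the elementary mate properties from Section 5.
import random

random.seed(1)
n = 3
def mate(z):
    return [-zi for zi in z[n:]] + list(z[:n])

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

z1 = [random.uniform(-1, 1) for _ in range(2 * n)]
z2 = [random.uniform(-1, 1) for _ in range(2 * n)]

assert mate(mate(z1)) == [-x for x in z1]                   # I: mate of mate is -z
assert abs(dot(z1, mate(z1))) < 1e-12                       # II: (z, z~) = 0
assert abs(dot(mate(z1), z2) + dot(z1, mate(z2))) < 1e-12   # III
assert abs(dot(mate(z1), mate(z2)) - dot(z1, z2)) < 1e-12   # IV
assert abs(dot(z1, z1) - dot(mate(z1), mate(z1))) < 1e-12   # V (squared norms)

# VIII: the complex vector associated with z~ is i times that of z.
w = [complex(u, v) for u, v in zip(z1[:n], z1[n:])]
wm = [complex(u, v) for u, v in zip(mate(z1)[:n], mate(z1)[n:])]
assert all(abs(a - 1j * b) < 1e-12 for a, b in zip(wm, w))
```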

6. STARTING VECTORS

We need to obtain k + 1 starting vectors, defined by v and Z, which satisfy the given initial conditions at x = a and for which rank(Z Z̃) = 2k. Because of the structure of Z̃ and A from (3), if AZ = 0, then AZ̃ = 0. Taking v as a particular solution of Av = α, it then follows that the solution defined by Equations (4), (9) satisfies the boundary data at x = a; namely, Ay(a) = α.

To obtain the initial values for U = (Z Z̃) computationally, we make use of the following result. Let the 2k columns of the matrix Ω = (ω1 ω2 ⋯ ω_{2k}) define an orthonormal basis for the null space of A, AΩ = 0. Then it is possible to pick k columns from Ω, say z1 = ω_{i1}, z2 = ω_{i2}, ..., zk = ω_{ik}, so that upon defining Z = (z1 z2 ⋯ zk) and Z̃ = (z̃1 z̃2 ⋯ z̃k), we obtain rank U = rank(Z Z̃) = 2k [10].

In the code we have written, the complete solution of the underdetermined system of equations Ay(a) = α is first achieved by using orthogonal Householder transformations. That is, a particular vector v_p and a basis collection Ω are obtained which satisfy Av_p = α, AΩ = 0, ΩᵀΩ = I, and Ωᵀv_p = 0. In order to obtain the starting integration vectors v and Z for the new algorithm, a computationally stable "modified Gram-Schmidt procedure" [11, pp. 201-204] is applied to v_p and the 2k vectors of Ω along with the corresponding 2k mate vectors of Ω̃. Vector interchange (pivoting) is used to pick out the k vectors defining Z and the k corresponding mate vectors Z̃ which satisfy the desired independence criteria. In practice, it is not necessary to actually compute and use storage space for Ω̃ or Z̃. Rather, these vectors can be introduced one at a time until the starting vector set is completed. This is most easily accomplished when the vectors are visualized as being arranged in the order ω1, ω̃1, ω2, ω̃2, .... This process is guaranteed to work and ultimately produces


    (ΩE  v_p) = (U  v)[P, t; 0, 1],

where E is the implied column permutation, P is a nonsingular upper triangular matrix of order 2k, and t is a vector with 2k components. It is easy to see that we preserve AU = 0 and Av = α. Furthermore, the new starting vectors are orthogonal. This means of choosing starting vectors for the integration process was very natural in the boundary-value code we have constructed, because the same type of algorithm is utilized by the orthonormalization procedure. More will be said about this in the next section.

We have also examined in some detail another possibility for selecting appropriate starting vectors. Consider the symmetric Gram matrix G(u1, u2, ..., um) whose i, j element is the vector inner product (u_i, u_j). The vectors u1, u2, ..., um are linearly independent if and only if det G ≠ 0. Applying this to the problem at hand, we obtain

    det G(ω1, ..., ωk, ω̃1, ..., ω̃k) = det [I, M; −M, I],

where M is the skew-symmetric matrix defined by m_{ij} = (ω_i, ω̃_j) [10]. Expansion of this determinant is easily accomplished for small values of k. In fact,

    det G = 1                                  for k = 1,
    det G = (1 − m₁₂²)²                        for k = 2,
    det G = (1 − m₁₂² − m₁₃² − m₂₃²)²          for k = 3.

For larger k, additional terms are obtained, making the expression more complicated with no apparent general form emerging. Because of this, selection of independent starting vectors by this scheme is not as attractive for general-purpose computing. However, for problems having a small number of boundary conditions at x = b, the above results could be readily applied (e.g., the Orr-Sommerfeld equations).
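A quick numeric check of the k = 2 formula is possible with a made-up orthonormal pair ω1, ω2 chosen so that m12 ≠ 0; all names below are illustrative:

```python
# Check (with invented data) of the k = 2 Gram-determinant formula
# det G = (1 - m12^2)^2, where G is the Gram matrix of (w1, w2, w1~, w2~)
# and m12 = (w1, w2~) for an orthonormal pair w1, w2 in R^4 (n = 2).
import math

s = 1.0 / math.sqrt(2.0)
w1 = [1.0, 0.0, 0.0, 0.0]
w2 = [0.0, s, s, 0.0]

def mate(z):
    return [-z[2], -z[3], z[0], z[1]]

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

vecs = [w1, w2, mate(w1), mate(w2)]
G = [[dot(a, b) for b in vecs] for a in vecs]

def det(M):                      # determinant by Gaussian elimination with pivoting
    M = [row[:] for row in M]
    d = 1.0
    for i in range(len(M)):
        p = max(range(i, len(M)), key=lambda r: abs(M[r][i]))
        if p != i:
            M[i], M[p] = M[p], M[i]
            d = -d
        d *= M[i][i]
        for r in range(i + 1, len(M)):
            f = M[r][i] / M[i][i]
            for c2 in range(i, len(M)):
                M[r][c2] -= f * M[i][c2]
    return d

m12 = dot(w1, mate(w2))
assert abs(det(G) - (1.0 - m12 ** 2) ** 2) < 1e-12
```

Here m12² = 1/2, so both sides evaluate to 1/4.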

7. INTEGRATION AND REORTHONORMALIZATION

Recall that the integration aspect of the algorithm involves computing a particular solution vector v and k homogeneous solution vectors defined as the columns of Z. From a mathematical point of view, the solution to the boundary-value problem (2) has been defined and completely characterized for the superposition process, since the solution vectors v, Z, Z̃ are guaranteed to remain independent throughout the course of integration. Computationally, however, it may not be possible to achieve this because of the finite precision of the computer. One way to overcome this difficulty is to use the orthonormalization technique described in Section 3.

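The bookkeeping of introducing mate vectors only as they are needed, which serves both the starting-vector construction of Section 6 and the reorthonormalization discussed here, can be sketched with a drop-tolerance modified Gram-Schmidt pass. Everything below is illustrative and not taken from SUPORT; the degenerate data reuse the rank example of Section 4 (n = 2, k = 1, with ω2 = ω̃1):

```python
# Modified Gram-Schmidt with a drop tolerance over the interleaved order
# (omega1, omega1~, omega2, omega2~): dependent candidates are skipped.

def mate(z):
    m = len(z) // 2
    return [-x for x in z[m:]] + list(z[:m])

def mgs_select(candidates, tol=1e-10):
    """Return the indices of the candidates kept as an independent set."""
    basis, kept = [], []
    for idx, vec in enumerate(candidates):
        v = list(vec)
        for q in basis:                      # project out earlier directions
            c = sum(a * b for a, b in zip(q, v))
            v = [vi - c * qi for vi, qi in zip(v, q)]
        norm = sum(x * x for x in v) ** 0.5
        if norm > tol:                       # pivot: keep only if independent
            basis.append([x / norm for x in v])
            kept.append(idx)
    return kept

omega1 = [1.0, 0.0, 0.0, 0.0]
omega2 = mate(omega1)                        # degenerate case: omega2 = omega1~
cands = [omega1, mate(omega1), omega2, mate(omega2)]
kept = mgs_select(cands)
assert kept == [0, 1]        # z1 = omega1 and its mate already span everything
```

The pass correctly discards ω2 and ω̃2, which add nothing beyond the pair (z1, z̃1).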

Because of carrying along only k integration solutions for the homogeneous equation, one might ask if it is sufficient to just apply the orthonormalization process to Z while ignoring Z̃. This would surely simplify matters and result in additional savings of storage space. Unfortunately, this scheme is deficient for computational purposes. It is absolutely essential that the vectors of Z̃ be included in the orthonormalization process. Otherwise, when numerical linear dependence threatens, the orthogonalization scheme turns the "nearly dependent" vectors of Z into orthogonal vectors Z_new, some of which can be very nearly identical to vectors of Z̃_new. This happens because Z̃ also represents solutions to the homogeneous equation, and each z̃_i in Z̃ is already orthogonal to z_i in Z. Furthermore, the linear independence of the k vectors of Z is computationally achievable over a longer integration interval than is the linear independence of the 2k vectors of Z and Z̃. Thus, to ensure that the 2k columns of (Z Z̃) are independent to working accuracy, it is necessary

to include the vectors of Z̃ when generating the new orthogonal basis collection. As mentioned before, it is not actually necessary to compute and store the Z̃ vectors. Rather, they are defined and introduced as needed in the computations. Also, it turns out to be more convenient to think of the vectors as if they had been put in the order z1, z̃1, z2, z̃2, .... In other words, we may view this as applying (implicitly) a column permutation matrix E to (Z Z̃), so that the orthogonalization factors

    (Z Z̃)E = (ζ1 ζ̃1 ⋯ ζk ζ̃k)P.

If we denote the columns of Z_new, Z̃_new by ζ and ζ̃, respectively, then Λ is the diagonal matrix Λ = diag{1/‖ζ1‖₂, 1/‖ζ̃1‖₂, ..., 1/‖ζk‖₂, 1/‖ζ̃k‖₂}, and, by using the properties from Section 5, P turns out to be an upper triangular matrix built from 2 × 2 blocks of the form [a_{ij}, −b_{ij}; b_{ij}, a_{ij}] above the diagonal, with diagonal blocks a_{ii}I₂. (Computationally, we carry this out using the "modified Gram-Schmidt process" with vector interchanges.) This is obviously cheaper than the orthogonalization of 2k vectors resulting from the straightforward approach of solving the problem (2). In fact, the increase in cost over orthogonalization of k vectors is just the extra storage for P and the cost of computing (k − 1)k/2 additional inner products. This follows because of the properties of such vectors listed in Section 5; in particular,

    (ζ̃_i, z̃_j) = (ζ_i, z_j)   and   (ζ̃_i, z_j) = −(ζ_i, z̃_j).

Thus the elements in the even-numbered rows of P do not need to be recomputed.

8. FINAL BOUNDARY EQUATIONS

The superposition coefficients are obtained by solving the final boundary equation (7), where now U(b) = (Z(b) Z̃(b)). The existence and uniqueness of solutions to the boundary-value problem (2) depend on the solvability of Equation (7). But this is in direct correspondence with the situation for the complex problem (1). Using the complex solution form (8) in the final boundary equations of (1), we obtain

    DW(b)e = δ − Dp(b).

Since

    DW = (D_R W_R − D_I W_I) + i(D_I W_R + D_R W_I),

the first k columns of BU(b) are precisely ((DW)_R; (DW)_I) evaluated at b, and rank DW(b) = r if and only if rank BU(b) = 2r. For r = k, we have a unique solution. For r < k, many solutions are possible if the boundary equations are consistent (nonuniqueness), whereas no solution exists when the boundary equations are inconsistent (nonexistence). For eigenvalue problems, we have DW(b)e = 0, and nontrivial solutions exist only for the case of r < k, i.e., det[DW(b)] = 0, which yields the two real equations Re{det(DW)} = 0, Im{det(DW)} = 0. For the real-system case, using the above observations, we see that DW can be easily reconstructed


from the first k columns of BU(b). In general, complex arithmetic will be needed for the determinant evaluation, after which the real and imaginary parts can be extracted.

9. NUMERICAL RESULTS

We have modified the SUPORT code [1] in the manner described in this paper so that complex boundary-value problems could be handled efficiently. In this section, we shall compare the performance of the modified code with that of the unmodified SUPORT version. Our example is the Orr-Sommerfeld equation for plane Poiseuille flow,

    φ'''' − 2ω²φ'' + ω⁴φ = iωR[(u − λ)(φ'' − ω²φ) − u''φ].

The results we show are for the Reynolds number R = 10⁶, the wave number ω = 1, the eigenvalue parameter λ = 0.061859252 − 0.01398327i, and the velocity profile u(x) = 1 − x². Computation was carried out on a CDC 6600.

In Table 1, we present our comparisons for several integration tolerances indicated by TOL = log₁₀(tolerance) and for both a Runge-Kutta and an Adams integration method. These are versions of the variable-step-size codes RKF45 [12, 13] and STEP [14] (also described in the comparison study [6]) which have been incorporated into the SUPORT package. We also show the number of orthonormalization points, NO; the number of integration steps, NIS; the number of derivative function evaluations, NDE; and the computer execution times in seconds. The first line, for each tolerance, represents the modified code, whereas the second line shows values obtained from the unmodified code. Blanks in the second line indicate no change in the values. The points of reorthonormalization are chosen dynamically by the code. This important aspect of how and where they are selected has been described in detail in [1, 2, 9].

                         TABLE 1
         RESULTS FOR THE ORR-SOMMERFELD EQUATION^a

    Method    TOL    NO     NIS      NDE       Time
    RK        -2     34     515      7778       3.1
                            518      15828      5.6
              -4     34     1497     19672      7.5
                                     39344     13.2
              -6     35     4558     64270     23.7
                                     128712    42.7
    Adams     -2     36     940      5920       5.9
                            1502     12604     11.7
              -4     34     2327     15146     10.0
                            2481     …         20.1
              -6     35     3705     …         16.4
                            3769     …         32.4

    ^a For each tolerance, the first line corresponds to the modified SUPORT code, whereas the second line corresponds to the unmodified version.

In this example we have achieved reductions in execution times which range from 43 to 50 percent. The Adams integration scheme achieves a slightly better reduction than does the Runge-Kutta scheme. This is principally due to the significant change in the number of integration steps, which contributes to more than a factor-of-two variation in the number of derivative-function evaluations for the Adams method. One reason for this is that the vector norm used for error monitoring and step-size control in the Adams scheme is ‖·‖₂, which depends directly on the number of equations being integrated, whereas the Runge-Kutta scheme uses ‖·‖∞. [In the boundary-value code, all equations are integrated simultaneously as one large system, which is 2n(2k + 1) equations for the inhomogeneous problem (2), whereas the special treatment for the complex problem results in 2n(k + 1) equations.] Also contributing to the variation in the reduction rates is the fact that the Adams scheme has a substantially larger overhead cost, which is a function of the number of equations being integrated. In fact, overhead costs account for Runge-Kutta winning out on total execution costs until more than about five digits of accuracy are requested.

10. CONCLUSIONS

We have discussed a more efficient way of handling complex-valued differential equations when using superposition principles on the associated real system. The essence of the matter is that only one-half of the usual number of integrations are actually required to provide the needed basis of solutions for the homogeneous equations. Thus we can expect up to a 50-percent reduction in computational costs when using the approach described here. The percentage reduction achievable varies with the number k of boundary equations at the final integration point. In fact, for inhomogeneous problems, this approach leads to a reduction factor of (k + 1)/(2k + 1) in the number of integrations performed. The worst case of k = 1 predicts up to approximately a 33-percent reduction. As the integration expenses become the dominant cost in solving a problem, the savings approach the maximum achievable. Furthermore, this technique results in a reduction of storage for all integration-related arrays. Finally, the procedure has also been carefully examined in the context of combining superposition with an orthonormalization process. A computer code implementing these ideas is available from the authors.

REFERENCES

1. M. R. Scott and H. A. Watts, SUPORT: A computer code for two-point boundary value problems via orthonormalization, Sandia National Laboratories Report SAND75-0198, Albuquerque, N.Mex., 1975.
2. M. R. Scott and H. A. Watts, Computational solution of linear two-point boundary value problems via orthonormalization, SIAM J. Numer. Anal. 14:40-70 (1977).
3. C. C. Lin, The Theory of Hydrodynamic Stability, Cambridge U.P., 1955.
4. J. Smith and J. C. Whitson, On the numerical solution of the drift wave equations by means of invariant imbedding, J. Comput. Phys. 33:102-117 (1979).
5. A. Michalke, On the inviscid instability of the hyperbolic-tangent velocity profile, J. Fluid Mech. 19:543-556 (1964).
6. L. F. Shampine, H. A. Watts, and S. M. Davenport, Solving nonstiff ordinary differential equations: The state of the art, SIAM Rev. 18:376-411 (1976).
7. M. R. Scott and H. A. Watts, Computational solution of nonlinear two-point boundary value problems, in Proceedings of the Fifth Symposium on Computers in Chemical Engineering, Vysoke Tatry, Czechoslovakia, 1977, pp. 17-27.
8. S. D. Conte, The numerical solution of linear boundary value problems, SIAM Rev. 8:309-321 (1966).
9. M. E. Lord and H. A. Watts, Modifications of SUPORT, a linear boundary value problem solver: Part III, Orthonormalization improvements, Sandia National Laboratories Report SAND78-0522, Albuquerque, N.Mex., 1978.
10. H. A. Watts, M. R. Scott, and M. E. Lord, Solving complex valued differential systems, Sandia National Laboratories Report