Differential elimination with Dixon resultants


Applied Mathematics and Computation 218 (2012) 10679–10690


Lu Yang a,b, Zhenbing Zeng a, Weinian Zhang c,*

a Shanghai Key Laboratory of Trustworthy Computing, East China Normal University, Shanghai 200062, PR China
b Laboratory for Automated Reasoning, CICA, Academia Sinica, Chengdu, Sichuan 610041, PR China
c Yangtze Mathematics Center and Department of Mathematics, Sichuan University, Chengdu, Sichuan 610064, PR China

Supported by the National Basic Research Program of China Grant No. 2011CB302402, the National Natural Sciences Foundation of China Grant No. 61021004, and Shanghai Leading Academic Discipline Project B412.
* Corresponding author. E-mail addresses: [email protected], [email protected] (W. Zhang).


Keywords: Dixon resultant; Differential resultant; Differential elimination

Abstract: In this paper we apply the idea of the Dixon resultant to algebraic differential equations and introduce the Dixon differential resultant. We prove that a necessary condition for two algebraic differential equations to have a common solution is that the differential resultant equals zero; this provides a method of elimination that reduces a system of multivariate differential equations to a system of single-variate differential equations. The result is also generalized to systems of n differential polynomials. We give algorithms realizing our method of elimination for systems of differential equations, and demonstrate the results and algorithms with examples.

1. Introduction

The idea of using resultants to solve differential equations can be traced back to the 1930s. The first work on the "differential resultant" was done by Ore [23] in 1932. Later, the divisibility and elimination theory of algebraic differential equations was developed by Riquier [24], Janet [10], Ritt [25,26] and Kolchin [15]. The notion of the resultant of two nonlinear differential polynomials was introduced by Ritt [25] in 1932, under some hypotheses on the differential polynomials, by means of pseudo-division. In 1997 Carrà-Ferro [2] introduced a notion of differential resultant of two ordinary differential polynomials as a quotient of two determinants, based on Macaulay's resultant (see [9,20,22]), and proved that a necessary condition for two algebraic differential equations to have a common solution is that this differential resultant equals zero. Methods using characteristic set theory to solve first order autonomous ODEs and partial difference polynomial systems were proposed by Gao et al. in [8,6,31]. Other generalizations of the differential resultant can be found in [3,13].

In this paper we present our joint work on computing differential resultants via Dixon's resultant [5] for first order and higher order ODEs. Compared with the classical resultant of Sylvester, the advantage of Dixon's resultant is that it can eliminate a whole block of unknowns from a system of polynomial equations in one step, as Macaulay's resultant does. Moreover, there are works showing that Dixon's resultant is much faster than Macaulay's for certain specific problems (cf. [16,18,17,19]). So the motivation for using Dixon's resultant to compute differential resultants arises naturally. Our results in this paper show that this approach is worth developing; it handles many non-trivial problems listed in Kamke's book [11].

This paper is organized as follows. In Section 2 we give a brief description of the Dixon resultant. In Section 3 we prove that a necessary condition for two algebraic differential equations to have a common solution is that the differential resultant equals zero, and we show examples reducing a system of multivariate differential equations to a system of single-variate differential equations.


In Section 4 we discuss elimination for higher order systems. In Section 5 we generalize the method to systems of n differential polynomials by means of examples.

2. A brief description of the Dixon resultant

Since Bézout's investigation of resultants in two variables in 1779, many theories of resultants have been developed; one of them is the Dixon resultant [5,12]. As shown in [12], the following construction of the Bézout resultant is due to Cayley. Given two polynomials $f(x), g(x)$ of degree $n$, the determinant

$$D(x,a) = \begin{vmatrix} f(x) & g(x) \\ f(a) & g(a) \end{vmatrix}$$

is a polynomial in the variables $x$ and $a$. Note that $D(x,a) = 0$ when $x = a$, which implies that $(x-a) \mid D(x,a)$. Thus

$$d(x,a) := D(x,a)/(x-a)$$

is a polynomial in $a$ of degree $n-1$, i.e.,

$$d(x,a) = c_1(x)a^{n-1} + c_2(x)a^{n-2} + \cdots + c_n(x),$$

where the $c_i(x)$'s are polynomials in $x$ of degree $\le n-1$. If $x_0$ is a common zero of $f$ and $g$, then $d(x_0,a) = 0$ for all $a$, which implies that

$$c_1(x_0) = 0, \ \ldots, \ c_n(x_0) = 0. \tag{2.1}$$

Obviously, (2.1) is a system of linear equations in the $n$ unknowns $x_0^0, x_0^1, \ldots, x_0^{n-1}$. Such a linear system has a nontrivial solution because $x_0^0 = 1$. So (2.1) implies that the coefficient matrix of this linear system has zero determinant. This matrix and its determinant are called the Bézout matrix and the Bézout resultant of $f$ and $g$, respectively.
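As a small illustration of Cayley's construction (our own example, not taken from the computations below), take $f(x) = x^2 - 3x + 2$ and $g(x) = x^2 - 1$, which have the common zero $x = 1$. Then

$$D(x,a) = \begin{vmatrix} x^2-3x+2 & x^2-1 \\ a^2-3a+2 & a^2-1 \end{vmatrix} = 3(x-1)(a-1)(x-a), \qquad d(x,a) = \frac{D(x,a)}{x-a} = (3x-3)a + (-3x+3),$$

so $c_1(x) = 3x-3$, $c_2(x) = -3x+3$, and the Bézout matrix $\begin{pmatrix} -3 & 3 \\ 3 & -3 \end{pmatrix}$ (rows indexed by $c_1, c_2$, columns by $x^0, x^1$) has determinant $0$, as it must since $f$ and $g$ have a common zero.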

In 1908 Dixon generalized this idea to the case of three polynomials in two variables. In general, given $k+1$ polynomials in $k$ variables,

$$f(x_1,\ldots,x_k),\quad g(x_1,\ldots,x_k),\quad \ldots,\quad h(x_1,\ldots,x_k), \tag{2.2}$$

the determinant $D(x_1,\ldots,x_k;a_1,\ldots,a_k)$ defined by

$$\begin{vmatrix}
f(x_1,x_2,\ldots,x_k) & g(x_1,x_2,\ldots,x_k) & \cdots & h(x_1,x_2,\ldots,x_k) \\
f(a_1,x_2,\ldots,x_k) & g(a_1,x_2,\ldots,x_k) & \cdots & h(a_1,x_2,\ldots,x_k) \\
f(a_1,a_2,\ldots,x_k) & g(a_1,a_2,\ldots,x_k) & \cdots & h(a_1,a_2,\ldots,x_k) \\
\vdots & \vdots & & \vdots \\
f(a_1,a_2,\ldots,a_k) & g(a_1,a_2,\ldots,a_k) & \cdots & h(a_1,a_2,\ldots,a_k)
\end{vmatrix}$$

is again a polynomial in the variables $x_i, a_i$ ($i = 1,\ldots,k$), and

$$d(x_1,\ldots,x_k,a_1,\ldots,a_k) = \frac{D(x_1,\ldots,x_k;a_1,\ldots,a_k)}{(x_1-a_1)\cdots(x_k-a_k)}$$

is also a polynomial, called the Dixon polynomial of $f, g, \ldots, h$. Let $c_i(x_1,\ldots,x_k)$, $i = 1,\ldots,n$, where $n$ is a positive integer, denote the coefficients of $d(x_1,\ldots,x_k,a_1,\ldots,a_k)$ regarded as a polynomial in $a_1,\ldots,a_k$. Obviously, the system

$$c_1(x_1,\ldots,x_k),\quad c_2(x_1,\ldots,x_k),\quad \ldots,\quad c_n(x_1,\ldots,x_k) \tag{2.3}$$

consists of polynomials in $x_1,\ldots,x_k$ and is called the Dixon induced system of (2.2). It is known that a common zero of (2.2) is also a common zero of (2.3). Arranging all monomials $x_1^{\ell_1}x_2^{\ell_2}\cdots x_k^{\ell_k}$ lexicographically, the system (2.3) defines a system of linear equations in the variables $x_1^{\ell_1}x_2^{\ell_2}\cdots x_k^{\ell_k}$. Its coefficient matrix $D$ is called the Dixon matrix of (2.2), and the corresponding determinant, denoted by $\mathrm{res}(f,\ldots,h)$, is called the Dixon resultant of (2.2). Obviously, if (2.2) has a common zero then $\mathrm{res}(f,\ldots,h) = 0$. The Dixon resultant has proved to be a very useful tool in elimination theory over the past 20 years; see [12,7,16,18,32,33] for recent work. An algorithm, Gather-and-Sift, for computing Dixon resultants with a Maple program can be found in [29]. In the following computations we only need the gather procedure of the Gather-and-Sift algorithm, so we use the Maple program gps0 shown in Appendix A.
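To fix the calling convention of gps before turning to differential systems, here is a toy algebraic call (our own illustration; it assumes the file gps0 of Appendix A is in the working directory). The three polynomials have the common zeros (1,2) and (2,1), so by the property of the Dixon induced system recalled above every component of the returned vector vanishes at these points:

> # three polynomials in the two variables x, y (one more polynomial than variables)
> read gps0:
> gps([x^2 + y^2 - 5, x*y - 2, x + y - 3], [x, y]):
> rr := %:
> print(rr);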

3. Elimination for first order systems

We start with the system of differential equations

$$F_1(Dx, Dy, x, y) = 0, \qquad F_2(Dx, Dy, x, y) = 0, \tag{3.4}$$

where $F_1, F_2 \in \mathbb{Q}[v_4, v_3, v_2, v_1]$. Differentiating them, we get the following equations:


$$\partial_1 F_1(Dx,Dy,x,y)\,DDx + \partial_2 F_1(Dx,Dy,x,y)\,DDy + \partial_3 F_1(Dx,Dy,x,y)\,Dx + \partial_4 F_1(Dx,Dy,x,y)\,Dy = 0, \tag{3.5}$$

$$\partial_1 F_2(Dx,Dy,x,y)\,DDx + \partial_2 F_2(Dx,Dy,x,y)\,DDy + \partial_3 F_2(Dx,Dy,x,y)\,Dx + \partial_4 F_2(Dx,Dy,x,y)\,Dy = 0, \tag{3.6}$$

where $\partial_j$ denotes partial differentiation with respect to the $j$th variable. One can write (3.5) and (3.6) as

$$F_{11}(DDx, DDy, Dx, Dy, x, y) = 0, \qquad F_{21}(DDx, DDy, Dx, Dy, x, y) = 0, \tag{3.7}$$

where $F_{11}, F_{21} \in \mathbb{Q}[v_6, v_5, \ldots, v_2, v_1]$. Thus, any solution $(x(t), y(t))$ of (3.4) satisfies

$$\begin{cases} F_1(Dx, Dy, x, y) = 0, \\ F_2(Dx, Dy, x, y) = 0, \\ F_{11}(DDx, DDy, Dx, Dy, x, y) = 0, \\ F_{21}(DDx, DDy, Dx, Dy, x, y) = 0. \end{cases} \tag{3.8}$$

This also implies that

$$(v_6, v_5, v_4, v_3, v_2, v_1) := (DDx, DDy, Dx, Dy, x, y)$$

is a solution of the polynomial system

$$\begin{cases} F_1(v_4, v_3, v_2, v_1) = 0, \\ F_2(v_4, v_3, v_2, v_1) = 0, \\ F_{11}(v_6, v_5, v_4, v_3, v_2, v_1) = 0, \\ F_{21}(v_6, v_5, v_4, v_3, v_2, v_1) = 0. \end{cases} \tag{3.9}$$

Regarding $v_5, v_3, v_1$ as variables and the other three as parameters, we treat (3.9) as a system of 4 polynomials in 3 variables. As shown in [30, Section 29], a necessary condition for system (3.9) to have a solution is that the Dixon resultant

$$r_1(v_6, v_4, v_2) := \mathrm{res}[F_1, F_2, F_{11}, F_{21};\ v_5, v_3, v_1],$$

which is a polynomial in $v_6, v_4, v_2$, is equal to zero. This implies that

$$r_1(DDx, Dx, x) = 0, \tag{3.10}$$

that is, $x(t)$ satisfies the differential equation (3.10). Similarly, another necessary condition for system (3.9) to have a solution is that the Dixon resultant

$$r_2(v_5, v_3, v_1) := \mathrm{res}[F_1, F_2, F_{11}, F_{21};\ v_6, v_4, v_2],$$

which is a polynomial in $v_5, v_3, v_1$, is equal to zero. This implies that

$$r_2(DDy, Dy, y) = 0. \tag{3.11}$$

This yields the following result.

Theorem 1. Suppose that $C^2$ functions $x(t), y(t)$ are solutions of the system (3.4) of differential equations. Then $x(t)$ and $y(t)$ satisfy the differential equations (3.10) and (3.11), respectively.

We refer to $r_1(DDx, Dx, x)$ and $r_2(DDy, Dy, y)$ as differential resultants of the system of differential polynomials $\{\tilde F_1, \tilde F_2\}$, where

$$\tilde F_1 := F_1(Dx, Dy, x, y), \qquad \tilde F_2 := F_2(Dx, Dy, x, y).$$

This theorem suggests the following procedure for solving the differential system (3.4).

Step 1. Differentiate $\tilde F_1, \tilde F_2$ and let $\tilde F_{11}, \tilde F_{21}$ denote the two differential polynomials obtained, which are exactly the left hand sides of (3.7).
Step 2. Compute the Dixon resultants

$$\mathrm{res}[\tilde F_1, \tilde F_2, \tilde F_{11}, \tilde F_{21};\ DDy, Dy, y], \qquad \mathrm{res}[\tilde F_1, \tilde F_2, \tilde F_{11}, \tilde F_{21};\ DDx, Dx, x],$$

which lead to (3.10) and (3.11), respectively.
Step 3. Solve for $x(t)$ and $y(t)$ from the differential equations (3.10) and (3.11) separately.
Step 4. If $x(t), y(t)$ are checked to satisfy system (3.4), then we have found a solution of this system.

In what follows we apply the above algorithm to two simple examples collected in [11]. Both examples contain two unknown functions $x(t)$ and $y(t)$. In [11] elementary methods were employed to reduce them to equations with only one unknown function; those reductions can be regarded as elementary processes of elimination, and a different elementary process is needed for each system of differential equations. In contrast, we have a systematic method of differential elimination with Dixon resultants.


We use these examples, which have explicit answers, to demonstrate easily how the above algorithm works and how a system of differential equations can be eliminated mechanically. The first one is a cubic differential system ([11, Chapter 9, Item 9.1]) defined by

$$x'(t) = -x(t)[x(t) + y(t)], \qquad y'(t) = y(t)[x(t) + y(t)]. \tag{3.12}$$

In [11] the system is reduced to $yx' + xy' = 0$, i.e., $xy = C$, a constant, which on substitution into the first equation of (3.12) gives $x' + x^2 + C = 0$, an equation without $y$. Our Maple session is as follows:

> # write the system of the corresponding differential polynomials
> [Dx + x*(x + y), Dy - y*(x + y)]:
> # Differentiate both to obtain 4 polynomials
> [Dx + x*(x + y), Dy - y*(x + y), DDx + Dx*(x + y) + x*(Dx + Dy), DDy - Dy*(x + y) - y*(Dx + Dy)]:
> # Start the program "gps0" for resultant computation: it sets up the first matrix,
> # evaluates the first determinant, sets up the Dixon matrix, and does Gaussian elimination
> read gps0:
> # Treating the system as 4 polynomials with 3 variables y, Dy, DDy, compute its Dixon resultant
> gps(%, [DDy, Dy, y]):
> # Display the computational results
> rr := %:
> print(rr);

It reads:

$$[\,-xDx - x^3 - yx^2,\ \ -x^3 DDx - 2x^4 Dx\,], \tag{3.13}$$

in which the second polynomial does not contain $y$ and is in fact the polynomial $r_1$ of (3.10). So we consider the second polynomial and factor it as

$$-x^3(DDx + 2xDx),$$

from which we obtain $x = 0$ and a differential equation involving only the unknown function $x(t)$, i.e.,

$$x''(t) + 2x(t)x'(t) = 0. \tag{3.14}$$

Integrating (3.14) directly gives $x'(t) + x^2(t) + C = 0$, an equation with separable variables, where $C$ is an arbitrary constant. We see that $x(t)$ is the inverse function of $T(x) := -\int dx/(x^2 + C) + C_1$. From the first polynomial in (3.13) we get $y(t) = -(Dx + x^2)/x = C/x(t)$. Finally, we check that the functions $x(t)$ and $y(t)$ obtained in this way satisfy system (3.12).

Remark that, following [12], in the above computation and hereafter we only take account of the toric solutions, that is, we assume that none of the indeterminates is identically zero. Otherwise the problem reduces to trivial, simpler or easier cases.
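For this example, Steps 3 and 4 of the procedure can also be carried out directly in Maple. The following sketch is our own check (it assumes only the standard dsolve and odetest commands; the name y_sol is ours): it solves the eliminated equation (3.14) and recovers $y(t)$ from the first polynomial in (3.13).

> ode := diff(x(t), t$2) + 2*x(t)*diff(x(t), t) = 0:
> sol := dsolve(ode, x(t));
> odetest(sol, ode);                # returns 0 when sol satisfies ode
> # recover y from the first polynomial in (3.13): y = -(x' + x^2)/x
> y_sol := -(diff(x(t), t) + x(t)^2)/x(t);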

In the second example we consider a more general cubic differential system ([11, Chapter 9, Item 9.3])

$$x'(t) = [a(p\,x(t) + q\,y(t)) + \alpha]\,x(t), \qquad y'(t) = [b(p\,x(t) + q\,y(t)) + \beta]\,y(t), \tag{3.15}$$

which is the famous Lotka–Volterra system arising as a model in mathematical biology. In [11] there is no answer for its general solution but only the hint that $y^a x^{-b} = C e^{(a\beta - b\alpha)t}$, which can be used to eliminate $y$ in an elementary manner. Using our general algorithm, the Maple session is as follows:

> # write the system of the corresponding differential polynomials
> [Dx - (a*(p*x + q*y) + alpha)*x, Dy - (b*(p*x + q*y) + beta)*y]:
> # Differentiate both to obtain 4 polynomials
> [Dx - (a*(p*x + q*y) + alpha)*x, Dy - (b*(p*x + q*y) + beta)*y,
>  DDx - (a*(p*Dx + q*Dy))*x - (a*(p*x + q*y) + alpha)*Dx,
>  DDy - (b*(p*Dx + q*Dy))*y - (b*(p*x + q*y) + beta)*Dy]:
> # Start the program "gps0" for resultant computation
> read gps0:
> # Treating the system as 4 polynomials with 3 variables y, Dy, DDy, compute its Dixon resultant


> gps(%, [DDy, Dy, y]):
> # Display the computational results
> rr := %:
> print(rr);

It reads:

$$\big[\, -a^2 b q^3 x^2 y - a b q^2 \alpha\, x^2 - a^2 b p q^2 x^3 + a b q^2 x\,Dx,$$
$$\ \ -a^3 b q^4 x^2 Dx^2 + a^3 b q^4 x^3 DDx - a^3 b q^4 \beta\, x^3 Dx + a^3 b q^4 \alpha\beta\, x^4 + a^4 b q^4 p\beta\, x^5 - a^4 b q^4 p\, x^4 Dx + a^3 b^2 q^4 p\, x^4 Dx - a^3 b^2 q^4 \alpha p\, x^5 - a^2 b^2 q^4 \alpha^2 x^4 + 2 a^2 b^2 q^4 \alpha\, x^3 Dx - a^2 b^2 q^4 x^2 Dx^2 \,\big],$$

in which the second polynomial does not contain $y$ and is in fact the polynomial $r_1$ of (3.10). So we consider the second polynomial and factor it as

$$a^2 b q^4 x^2 \big( -a\,Dx^2 + a x\,DDx - a\beta x\,Dx + a\alpha\beta x^2 + a^2 p\beta x^3 - a^2 p x^2 Dx + a b p x^2 Dx - a b p\alpha x^3 - b\alpha^2 x^2 + 2 b\alpha x\,Dx - b\,Dx^2 \big),$$

from which we obtain $x = 0$ and a differential equation involving only the unknown function $x(t)$, i.e.,

$$a\,x(t)x''(t) - (a+b)(x'(t))^2 - \big(a\beta x(t) + a^2 p x^2(t) - a b p x^2(t) - 2 b\alpha x(t)\big)x'(t) + (a^2 p\beta - a b p\alpha)x^3(t) + (a\alpha\beta - b\alpha^2)x^2(t) = 0. \tag{3.16}$$

This completes the elimination of $y$.

4. Elimination for higher order systems

Consider a system of two $m$th order differential polynomials $\{f_1, f_2\}$, where



$$f_1 := F_1(D^m x, D^m y, \ldots, Dx, Dy, x, y), \qquad f_2 := F_2(D^m x, D^m y, \ldots, Dx, Dy, x, y), \tag{4.17}$$

and $F_1, F_2$ are polynomials in $2m+2$ variables. Note that a polynomial in fewer variables can be regarded as a polynomial in $2m+2$ variables. As in the last section, we consider only the variables $D^j y$ (for integers $j$) as the ones to be eliminated. In the system $\{f_1, f_2\}$ there are $m+1$ such variables but only 2 differential polynomials. Each differentiation of the system $\{f_1, f_2\}$ creates one new variable but two new differential polynomials. Suppose that we differentiate $k$ times, so that we finally get a system containing one more polynomial than variables, as required by the construction of a Dixon resultant. Clearly, the number of polynomials equals $2k+2$, while the number of variables after $k$ differentiations equals $k + (m+1)$. As required,

$$2k + 2 = k + (m+1) + 1,$$

from which we get $k = m$. Therefore, differentiating $m$ times, we get the $2m+2$ differential polynomials $f_1, Df_1, \ldots, D^m f_1, f_2, Df_2, \ldots, D^m f_2$.
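For instance (our own illustration of the count), when $m = 1$ this gives $k = 1$: one differentiation yields the four polynomials $f_1, Df_1, f_2, Df_2$ in the three variables $y, Dy, DDy$ to be eliminated, which is exactly the situation of Section 3.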

Then we obtain the Dixon resultant

$$\mathrm{res}[f_1, Df_1, \ldots, D^m f_1, f_2, Df_2, \ldots, D^m f_2;\ y, Dy, \ldots, D^m y, D^{m+1}y, \ldots, D^{2m}y],$$

which is a polynomial in $x, Dx, \ldots, D^m x, D^{m+1}x, \ldots, D^{2m}x$, denoted by

$$r_1(x, Dx, \ldots, D^m x, D^{m+1}x, \ldots, D^{2m}x).$$

As shown in [30, Section 29], a necessary condition for the system

$$\{f_1, Df_1, \ldots, D^m f_1, f_2, Df_2, \ldots, D^m f_2\}$$

to have a common zero is that

$$r_1(x, Dx, \ldots, D^m x, D^{m+1}x, \ldots, D^{2m}x) = 0. \tag{4.18}$$

Thus we conclude the following.

Theorem 2. Suppose that $C^{2m}$ functions $x(t), y(t)$ are solutions of the system (4.17) of differential equations. Then $x(t)$ satisfies the differential equation (4.18). $y(t)$ satisfies a similar differential equation.

Let us now consider, more generally, a system of $n$ differential polynomials $\{f_1, \ldots, f_n\}$ in $n$ unknown functions, where

$$\begin{cases} f_1 := F_1(D^{m_1}x_1, \ldots, D^{m_1}x_n, \ldots, x_1, \ldots, x_n), \\ \ \ \vdots \\ f_n := F_n(D^{m_n}x_1, \ldots, D^{m_n}x_n, \ldots, x_1, \ldots, x_n), \end{cases} \tag{4.19}$$


and each $F_j$ ($j = 1, \ldots, n$) is a polynomial in $n(m_j + 1)$ variables. Our purpose is to eliminate the $n-1$ unknown functions $x_2, \ldots, x_n$.

For each $j = 1, \ldots, n$, differentiating $f_j$

$$l_j := m_1 + \cdots + m_{j-1} + m_{j+1} + \cdots + m_n$$

times, we get $l_j$ new differential polynomials, involving in total $n(m_j + l_j + 1) = n\big(\sum_{j=1}^{n} m_j + 1\big)$ variables. Thus we obtain altogether $\sum_{j=1}^{n}(l_j + 1) = (n-1)\sum_{j=1}^{n} m_j + n$ differential polynomials

$$F := \begin{Bmatrix} f_1, & f_2, & \ldots, & f_n, \\ Df_1, & Df_2, & \ldots, & Df_n, \\ \vdots & \vdots & & \vdots \\ D^{l_1}f_1, & D^{l_2}f_2, & \ldots, & D^{l_n}f_n \end{Bmatrix}$$

in $n\big(\sum_{j=1}^{n} m_j + 1\big)$ variables. Since we only eliminate $x_2, \ldots, x_n$ and their derivatives, i.e., the $(n-1)\big(\sum_{j=1}^{n} m_j + 1\big)$ variables

$$V := \big(x_2, \ldots, x_n, \ldots, D^{m_2+l_2}x_2, \ldots, D^{m_n+l_n}x_n\big),$$

the system $F$ can be regarded as a system of $(n-1)\sum_{j=1}^{n} m_j + n$ differential polynomials in $(n-1)\big(\sum_{j=1}^{n} m_j + 1\big)$ variables, one fewer than the number of polynomials. Thus we can calculate the Dixon resultant

$$\mathrm{res}[F;\ V],$$

which gives a differential polynomial in $x_1$ and its derivatives only, denoted by $P(D)x_1$. Therefore the system is reduced, by elimination, to

$$P(D)x_1 = 0. \tag{4.20}$$
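(For instance, as an illustration of the count, when $n = 2$ and $m_1 = m_2 = m$ we have $l_1 = l_2 = m$, so $F$ consists of the $2m+2$ polynomials $f_1, Df_1, \ldots, D^m f_1, f_2, Df_2, \ldots, D^m f_2$ and $V$ collects the $2m+1$ variables $y, Dy, \ldots, D^{2m}y$, recovering the construction that led to (4.18).)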

Thus we conclude the following.

Theorem 3. Suppose that the functions $x_1(t), \ldots, x_n(t)$ are $C^m$, $m := m_1 + \cdots + m_n$, and satisfy the system (4.19) of differential equations. Then $x_1(t)$ satisfies the differential equation (4.20). The other $x_j(t)$'s ($j = 2, \ldots, n$) satisfy similar differential equations.

Let us consider an example of a $2\times 2$ system,

$$x''(t) = -kx/r^3, \qquad y''(t) = -ky/r^3, \qquad r^2 = x^2 + y^2. \tag{4.21}$$

This system, found in [11, Chapter 9, Item 9.16], describes the motion of two bodies under universal gravitation, for example the motion of the moon around the earth. The form of the system looks simple, but eliminating $y$ from it is complicated. In [11] the elimination is carried out in polar coordinates. In order to demonstrate our method, we start our computation from the equivalent form

$$(x^2 + y^2)^2\big\{(x''(t))^2 + (y''(t))^2\big\} - k^2 = 0, \qquad x''(t)\,y - y''(t)\,x = 0. \tag{4.22}$$

Then we have the following:

> f1 := (x^2 + y^2)^2*((D(D(x)))^2 + (D(D(y)))^2) - k^2:
> f2 := D(D(x))*y - D(D(y))*x:
> f3 := D(f1):
> f4 := D(%):
> f5 := D(f2):
> f6 := D(%):
> ff := [f1, f2, f3, f4, f5, f6]:
> pp := subs(D(x) = Dx, D(y) = Dy, `@@`(D,2)(x) = D2x, `@@`(D,3)(x) = D3x,
>    `@@`(D,4)(x) = D4x, `@@`(D,2)(y) = D2y, `@@`(D,3)(y) = D3y,
>    `@@`(D,4)(y) = D4y, D(k) = 0, `@@`(D,2)(k) = 0, ff):
> read gps0:
> gps(pp, [y, Dy, D2y, D3y, D4y]):
> rr := %:
> vectdim(rr);

It shows that there are 146 components. Then we continue to run

> factor(rr[67]);      [answer: 0]
> factor(rr[66]):
> nops(%);             [answer: 7]

Then

> p1 := op(1, op(7, %%)):
> p2 := op(6, %%%);

showing that $p_2 := 4k^4 + 68 x^8 (D2x)^4 - 387 k^2 x^4 (D2x)^2$.

> p1 := collect(p1, [D4x, D3x, D2x, Dx], distributed):
> p1 := sort(p1, [x, D4x, D3x, D2x, Dx], plex):
> F := subs(Dx = Diff(x,t), D2x = Diff(x,t$2), D3x = Diff(x,t$3),
>    D4x = Diff(x,t$4), p1);

showing that

$$\begin{aligned}
F :=\ & 144\,x'''(x'')^8(x')^3x^6 - 24\,x'''(x'')^7(x')^5x^5 - 540\,x''''(x''')^2(x'')^6x^9 - 135\,(x'''')^2(x''')^2(x'')^4x^{10} \\
& + 225\,x''''(x''')^4(x'')^3x^{10} - 324\,k^2x''''(x'')^7x^4 - 216\,k^2(x'')^8(x')^2x^2 - 27\,k^2(x'''')^3(x'')^3x^6 \\
& - 162\,k^2(x'''')^2(x'')^5x^5 - 54\,(x'''')^2(x'')^6(x')^2x^8 - \cdots - 144\,k^2x''''(x''')^4x''x^6 + 216\,(x'')^{11}x^7,
\end{aligned}$$

a polynomial in $x, x', x'', x''', x''''$ and $k^2$ (only the leading and trailing terms of the 60-term expression are displayed here). We further run

> nops(F);      [answer: 60]
> degree(F);    [answer: 18]

which indicates that the program produces a large differential equation in the variable $x$ of degree 18 with 60 terms.

Next, consider a $3\times 3$ system

$$\begin{cases} x' = -rx + axy + xz, \\ y' = by - xy, \\ z' = cz(1-z) - dxz, \end{cases} \tag{4.23}$$

which describes a simple model of a predator (density $x$) and two prey species (densities $y$ and $z$); it can be found in Exercise 3.65 of [14, p. 159], where $r, a, b, c, d$ are parameters. Our computation proceeds as follows:

> f1 := Dx + r*x - a*x*y - x*z:
> f2 := Dy - b*y + x*y:
> f3 := Dz - c*z*(1-z) + d*x*z:
> f4 := D(f1):
> f5 := D(f2):
> f6 := D(f3):
> f7 := D(f4):
> ff := [f1, f2, f3, f4, f5, f6, f7]:
> pp := subs(D(x) = Dx, D(y) = Dy, D(z) = Dz, `@@`(D,2)(x) = D2x,
>    `@@`(D,2)(y) = D2y, `@@`(D,2)(z) = D2z, D(r) = 0, `@@`(D,2)(r) = 0,
>    D(a) = 0, `@@`(D,2)(a) = 0, D(b) = 0, `@@`(D,2)(b) = 0,
>    D(c) = 0, `@@`(D,2)(c) = 0, D(d) = 0, `@@`(D,2)(d) = 0, ff):
> read gps0:
> gps(pp, [D2z, Dz, z, D2y, Dy, y]):
> rr := %;
> vectdim(rr);      [answer: 3]
> F := factor(rr[3]):
> nops(F);          [answer: 470]


It shows a large polynomial of 470 terms. Then we continue with

> indets(F);      [answer: b, c, d, r, x, x1, x2, x3]

It means that the polynomial of 470 terms contains nothing involving $y$, $z$ or their derivatives; that is, the variables $y$ and $z$ have been eliminated.

The following example contains second order derivatives and their powers. Consider

$$x''(t) = (y''(t))^2 + 3y'(t) + c\,x^2(t), \qquad y''(t) = 2(x''(t))^2 - 4x'(t) - c\,y^2(t), \tag{4.24}$$

where $c$ is a constant. Our computation proceeds as follows:

> f1 := D(D(x)) - (D(D(y)))^2 - 3*D(y) - c*x^2:
> f2 := D(D(y)) - 2*(D(D(x)))^2 + 4*D(x) + c*y^2:
> f3 := D(f1):
> f4 := D(%):
> f5 := D(f2):
> f6 := D(%):
> ff := [f1, f2, f3, f4, f5, f6]:
> pp := subs(D(x) = Dx, D(y) = Dy, `@@`(D,2)(x) = D2x, `@@`(D,3)(x) = D3x,
>    `@@`(D,4)(x) = D4x, `@@`(D,2)(y) = D2y, `@@`(D,3)(y) = D3y,
>    `@@`(D,4)(y) = D4y, D(c) = 0, `@@`(D,2)(c) = 0, ff):
> read gps0:
> gps(pp, [y, Dy, D2y, D3y, D4y]):
> rr := %:
> vectdim(rr);

It shows 23. Then we continue to run

> factor(rr[23]):
> f23 := %:
> nops(f23);      [answer: 3]
> op(1, f23);     [showing 1459166279268040704]
> op(2, f23);     [showing c^42]
> fa3 := op(3, f23):
> nops(fa3);      [answer: 104834]
> indets(fa3);    [showing c, x, Dx, D2x, D3x, D4x]

The above computation, carried out on a computer with a 2.8 GHz processor and 8 GB of RAM, takes about 1152 s (about twenty minutes) and uses about 692 MB of RAM to eliminate $y$; it gives a 4th order differential equation in $x$ consisting of 104834 terms.

5. More general cases

There are more difficulties in discussing the general form having $n$ equations and $\ell$ unknown functions, called $n\times\ell$ systems of differential polynomials, where $n$ may not equal $\ell$. System (4.19) is just the case $n = \ell$. We first consider a special $n\times 2$ system defined by $m$th order differential polynomials $\{f_1, \ldots, f_n\}$, where

$$\begin{cases} f_1 := F_1(D^m x, D^m y, \ldots, Dx, Dy, x, y), \\ \ \ \vdots \\ f_n := F_n(D^m x, D^m y, \ldots, Dx, Dy, x, y), \end{cases} \tag{5.25}$$

and $F_1, \ldots, F_n$ are polynomials in $2m+2$ variables. Regarding the $D^j y$'s as the variables to be eliminated, the system contains $n$ polynomials in $m+1$ variables. Differentiating each polynomial $k$ times, we obtain the $nk+n$ polynomials

$$\{f_1, Df_1, \ldots, D^k f_1, \ \ldots, \ f_n, Df_n, \ldots, D^k f_n\}$$

in the $k + (m+1)$ variables $y, Dy, \ldots, D^m y, \ldots, D^{m+k}y$. In order to form a resultant, we require that

$$nk + n = k + (m+1) + 1,$$

from which we get

$$k = \frac{m-n+2}{n-1} = \frac{m+1}{n-1} - 1. \tag{5.26}$$

Therefore, if $n-1$ divides $m+1$ exactly, i.e.,

$$(n-1) \mid (m+1), \tag{5.27}$$

we similarly obtain the resultant

$$\mathrm{res}[f_1, Df_1, \ldots, D^k f_1, \ \ldots, \ f_n, Df_n, \ldots, D^k f_n;\ y, Dy, \ldots, D^m y, \ldots, D^{m+k}y],$$

which is a polynomial

$$r_1(x, Dx, \ldots, D^m x, D^{m+1}x, \ldots, D^{m+k}x). \tag{5.28}$$

Then we have the following result.

Theorem 4. Suppose that the functions $x(t), y(t)$ are $C^{m+k}$, where $k$ is given by (5.26), and satisfy the system (5.25) of differential equations. If (5.27) holds, then $x(t)$ satisfies the differential equation $r_1 = 0$ with $r_1$ given in (5.28). $y(t)$ satisfies a similar differential equation.

Obviously, condition (5.27) is satisfied in the cases of Theorems 1 and 2, where $n = 2$. From (5.26) we easily obtain $k = m$ in the case of Theorem 2.
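For example (our own illustration of the count), for a $3\times 2$ system of third order differential polynomials ($n = 3$, $m = 3$), condition (5.27) holds since $2 \mid 4$, and (5.26) gives $k = 1$: differentiating each $f_j$ once produces 6 polynomials in the 5 variables $y, Dy, \ldots, D^4y$, one more polynomial than variables, as the Dixon construction requires.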

For the general $n\times\ell$ system of differential polynomials $\{f_1, \ldots, f_n\}$, where

$$\begin{cases} f_1 := F_1(D^{m_1}x_1, \ldots, D^{m_1}x_\ell, \ldots, x_1, \ldots, x_\ell), \\ \ \ \vdots \\ f_n := F_n(D^{m_n}x_1, \ldots, D^{m_n}x_\ell, \ldots, x_1, \ldots, x_\ell), \end{cases} \tag{5.29}$$

and $F_1, \ldots, F_n$ are multivariate polynomials, we take, without loss of generality, $x_2, \ldots, x_\ell$ as the variables to be eliminated. For each $j$, differentiating $f_j$ a total of $k_j$ times, we obtain $k_j + 1$ differential polynomials in $\ell(k_j + m_j + 1)$ variables. Thus we obtain in total $\sum_{j=1}^{n} k_j + n$ differential polynomials in $\max\{\ell(k_j + m_j + 1) : j = 1, \ldots, n\}$ variables, among which only $\max\{(\ell-1)(k_j + m_j + 1) : j = 1, \ldots, n\}$ variables need to be eliminated. This induced system of differential polynomials is denoted by $G_1$. The computation of Dixon resultants requires that

$$\sum_{j=1}^{n} k_j + n = (\ell-1)\max_{j=1,\ldots,n}\{k_j + m_j + 1\} + 1, \tag{5.30}$$

where $k_1, \ldots, k_n \ge 0$ are undetermined integers. If Eq. (5.30) is solvable, then we can calculate the Dixon resultant

$$\mathrm{res}[G_1;\ V_1], \qquad V_1 := \big(x_2, \ldots, x_\ell, \ldots, D^{m_2+k_2}x_2, \ldots, D^{m_\ell+k_\ell}x_\ell\big).$$

It gives a differential polynomial in $x_1$ and its derivatives only, denoted by $P_1(D)x_1$. Thus the system is reduced, by elimination, to $P_1(D)x_1 = 0$. The integer equation (5.30) in the variables $k_1, \ldots, k_n \ge 0$ is an important condition for elimination in the general differential polynomial system (5.29).

In this paper we have given a mechanized method to eliminate variables in systems of differential equations. The method can be programmed and effectively solves many problems, for instance the ones collected in [11]. Although some of those problems were solved separately in [11] by different elementary methods of integration, they serve to demonstrate our computation intuitively. Some examples with higher order derivatives look simple, but in the procedure of differential elimination we have to build the Dixon matrix by differentiating the differential polynomials more times, which generates large new polynomials; thus a seemingly simple system may generate a very large resultant. The method can be applied to complicated systems of differential equations with higher order derivatives and more unknown functions, and finally reduces them to differential equations each of which has exactly one unknown function.

Many methods, for example Gröbner bases [1], characteristic sets [26–28] and resultants [5,21], can be employed for elimination in polynomial systems, and we believe that they can all be made to work for differential elimination. As indicated in [4], each method has its restrictions and its merits for a specified class of polynomials. It is worth mentioning that, in contrast to other methods, the method based on the Dixon resultant has two advantages. First, less computation is needed: using Gröbner bases one has to compute every element of the basis, and using characteristic sets one cannot ignore any polynomial of the ascending set produced in the triangularization of variables, whereas the Dixon resultant, requiring no intermediate step-by-step procedure, eliminates all but one variable at once in the course of Gaussian elimination. Second, for many examples the sparsity of the reduced matrix, i.e., the fact that many entries vanish, can be fully exploited. For these reasons Lewis and his co-authors [16–19] chose the Dixon resultant to solve some challenging geometric problems. In this paper we have employed Dixon resultants, but we expect to see work on differential elimination with Gröbner bases in the future.


Appendix A. GPS program

with(linalg):
readlib(factors):
gps := proc(tsplst, tsvlst)
local i, j, k, lcvr, st, npols, nvars, tspowers, tsVr, tsVrlst, tsA, tsB, tsC, divby,
      subvar, subby, tsa, tsdp, tscl, tsvar, nxy, sxy, txy, nrs, trm, mu;
st := time();
npols := nops(tsplst);
nvars := nops(tsvlst);
tspowers := array(1..nvars);
if not (npols = nvars + 1) then
  print(`Number of equations must be number of variables plus one!`);
  return;
fi;
tsA := array(1..npols, 1..npols);
tsVr := array(1..nvars);
for i from 1 to nvars do
  lcvr := op(i, tsvlst);
  tsVr[i] := ts||lcvr||r
od;
tsVrlst := convert(tsVr, list);
#################################################################
# Setting up the first row of the matrix.
#################################################################
for i from 1 to npols do
  tsA[1, i] := op(i, tsplst);
od;
#################################################################
# Setting up the remaining rows of the matrix.
#################################################################
divby := 1;
for i from 2 to npols do
  subvar := op(i-1, tsvlst);
  subby := tsVr[i-1];
  for j from 1 to npols do
    tsA[i, j] := subs(subvar = subby, tsA[i-1, j]);
  od;
od;
for i from npols by -1 to 2 do
  subvar := op(i-1, tsvlst);
  subby := tsVr[i-1];
  for j from 1 to npols do
    tsA[i, j] := normal((tsA[i, j] - tsA[i-1, j])/(subvar - subby));
  od;
od;
print(`Set up the first matrix`);
#################################################################
# Compute the determinant of the matrix ... "dp".
#################################################################
tsa := normal(det(tsA));
tsdp := collect(tsa, tsVrlst, distributed);
print(`Evaluated the first determinant`);
#################################################################
# Setting up the actual Dixon matrix.
# tscl := map(primpart, [coeffs(tsdp, tsVrlst, 'trs')], tsvlst);
#################################################################
tscl := [coeffs(tsdp, tsVrlst, 'trs')];
coeffs(collect(tsdp, tsvlst, distributed), tsvlst, 'txy');
tsvar := tsvlst;
#################################################################


#txy := sort([txy], tsF);
#################################################################
nxy := nops([txy]);
sxy := sum(txy[q], q = 1..nxy);
txy := [op(sort(sxy, tsvlst, tdeg))];
nrs := nops(tscl);
tsB := array(1..nrs, 1..nxy);
for i from 1 to nxy do
  trm := op(i, txy);
  for j from 1 to nvars do
    tspowers[j] := 0;
    while divide(trm, tsvlst[j]^(1 + tspowers[j])) do
      tspowers[j] := tspowers[j] + 1;
    od;
  od;
  for j from 1 to nrs do
    tsB[j, i] := collect(tscl[j], tsvlst[1]);
    for k from 1 to nvars do
      tsB[j, i] := collect(tsB[j, i], tsvlst[k]);
      tsB[j, i] := coeff(tsB[j, i], tsvlst[k], tspowers[k]);
    od;
  od;
od;
print(`Set up the Dixon matrix`);
print(`Doing Gaussian Elimination`);
tsC := ffgausselim(tsB);
mu := multiply(tsC, txy);
print(`GPS is running about `, time() - st);
RETURN(mu)
end:

References

[1] T. Becker, V. Weispfenning, Gröbner Bases: A Computational Approach to Commutative Algebra, Springer-Verlag, New York, 1993.
[2] G. Carrà-Ferro, A resultant theory for the systems of two ordinary algebraic differential equations, Appl. Algebra Eng. Commun. Comput. 8 (1997) 539–560.
[3] G. Carrà-Ferro, Generalized differential resultant systems of algebraic ODEs and differential elimination theory, in: Trends in Mathematics: Differential Equations with Symbolic Computation, Birkhäuser, Basel, 2006, pp. 324–334.
[4] Xingwu Chen, Weinian Zhang, Decomposition of algebraic sets and applications to weak centers of cubic systems, J. Comput. Appl. Math. 232 (2) (2009) 565–581.
[5] A.L. Dixon, The eliminant of three quantics in two independent variables, Proc. London Math. Soc. 6 (1908) 468–478.
[6] R.Y. Feng, X.S. Gao, Polynomial general solutions for first order autonomous ODEs, in: Computer Algebra and Geometric Algebra with Applications, Lecture Notes in Computer Science, vol. 3519, Springer, Berlin/Heidelberg, 2005, pp. 5–17.
[7] Mao-Ching Foo, Eng-Wee Chionh, Corner edge cutting and Dixon resultant quotients, J. Symb. Comput. 37 (1) (2004) 101–119.
[8] X.S. Gao, M.B. Zhang, Decomposition of differential polynomials with constant coefficients, Proc. ISSAC (2004) 175–182.
[9] I.M. Gelfand, M.M. Kapranov, A.V. Zelevinsky, Discriminants, Resultants and Multidimensional Determinants, Birkhäuser, Berlin, 1994.
[10] M. Janet, Sur les systèmes d'équations aux dérivées partielles, J. Math. 3 (1920) 65–151.
[11] E. Kamke, Differentialgleichungen: Lösungsmethoden und Lösungen, Akademische Verlagsgesellschaft Geest & Portig, Leipzig, 1956.
[12] D. Kapur, T. Saxena, Lu Yang, Algebraic and geometric reasoning using Dixon resultants, Proc. ISSAC (1994) 99–107.
[13] A. Kasman, E. Previato, Commutative partial differential operators, Physica D 152–153 (2001) 66–77.
[14] W.G. Kelley, A.C. Peterson, The Theory of Differential Equations, second ed., Springer, New York, 2010.
[15] E.R. Kolchin, Differential Algebra and Algebraic Groups, Academic Press, London–New York, 1973.
[16] R.H. Lewis, P.F. Stiller, Solving the recognition problem for six lines using the Dixon resultant, Math. Comput. Simul. 49 (3) (1999) 205–220.
[17] R.H. Lewis, S. Bridgett, Conic tangency equations and Apollonius problems in biochemistry and pharmacology, Math. Comput. Simul. 61 (2) (2003) 101–114.
[18] R.H. Lewis, E.A. Coutsias, Algorithmic search for flexibility using resultants of polynomial systems, in: Lecture Notes in Computer Science, vol. 4869, Springer, 2007, pp. 68–79.
[19] R.H. Lewis, Comparing acceleration techniques for the Dixon and Macaulay resultants (abstract only), ACM Commun. Comput. Algebra 42 (1–2) (2008) 79–81.
[20] F.S. Macaulay, Some formulae in elimination, Proc. London Math. Soc. 35 (1902) 3–27.
[21] F.S. Macaulay, The Algebraic Theory of Modular Systems, Cambridge Tracts in Mathematics and Mathematical Physics, Cambridge University Press, Cambridge, 1916.
[22] F.S. Macaulay, Note on the resultant of a number of polynomials of the same degree, Proc. London Math. Soc. 21 (1921) 14–21.
[23] O. Ore, Formale Theorie der linearen Differentialgleichungen, J. Reine Angew. Math. 167 (1932) 221–234.
[24] C.H. Riquier, Les Systèmes d'Équations aux Dérivées Partielles, Gauthier-Villars, Paris, 1910.
[25] J.F. Ritt, Differential Equations from the Algebraic Standpoint, Coll. Publ., vol. 14, Amer. Math. Soc., New York, 1932.
[26] J.F. Ritt, Differential Algebra, Coll. Publ., vol. 33, Amer. Math. Soc., New York, 1950.
[27] W.-T. Wu, Basic principles of mechanical theorem proving in elementary geometries, J. Syst. Sci. Math. 4 (1984) 207–235.


[28] W.-T. Wu, Some remarks on characteristic-set formation, Math. Mech. Res. 3 (1989) 27–29.
[29] Lu Yang, X.R. Hou, Gather-and-Sift: a symbolic method for solving polynomial systems, Proc. ATCM (1995) 771–780.
[30] L. Yang, J.Z. Zhang, X.R. Hou, Nonlinear Algebraic Equation System and Automated Theorem Proving, Shanghai Sci-Tech Education Publisher, Shanghai, 1996 (in Chinese).
[31] G.L. Zhang, X.S. Gao, Properties of ascending chains for partial difference polynomial systems, in: Computer Mathematics, Lecture Notes in Artificial Intelligence, vol. 5081, Springer-Verlag, Berlin/Heidelberg, 2008, pp. 307–321.
[32] S.Z. Zhao, H.G. Fu, An extended fast algorithm for constructing the Dixon resultant matrix, Sci. China Ser. A Math. 48 (1) (2005) 131–143.
[33] S.Z. Zhao, H.G. Fu, Three kinds of extraneous factors of Dixon resultants, Sci. China Ser. A Math. 52 (1) (2009) 160–172.