Journal of Mathematical Analysis and Applications 224, 81-90 (1998)
Article No. AY985986
Analysis of Carleman Representation of Analytical Recursions

G. Berkolaiko*
Department of Physics, Bar-Ilan University, 52900 Ramat-Gan, Israel, and Department of Mathematics, University of Strathclyde, G1 1XH Glasgow, United Kingdom

and

S. Rabinovich and S. Havlin
Department of Physics, Bar-Ilan University, 52900 Ramat-Gan, Israel

Submitted by Ravi P. Agarwal

Received September 15, 1997
We study a general method to map a nonlinear analytical recursion onto a linear one. The solution of the recursion is represented as a product of matrices whose elements depend only on the form of the recursion and not on initial conditions. First we consider the method for polynomial recursions of arbitrary degree and then the method is generalized to analytical recursions. Some properties of these matrices, such as the existence of an inverse matrix and diagonalization, are also studied. © 1998 Academic Press
1. INTRODUCTION

In our recent work [6, 7] we found a correspondence between the logistic map and an infinite-dimensional linear recursion. This correspondence was exploited to obtain a rather general method for mapping a polynomial recursion onto a linear (but infinite-dimensional) one, which was reported in [7]. However, studying the literature, we found that our work is a rediscovery of a known approach called Carleman linearization [2, 5, 9, 10].

* E-mail: [email protected].
Some questions of Carleman linearization, such as the applicability of the method to analytical recursions, are still open. In this paper we thoroughly study this topic, prove convergence of the series used in the method, and reveal some useful properties of the linearization, which allow one to understand the correspondence between the original and linearized recursions. The basis of the method is the construction of a transfer matrix, $T = \{T_{ij}\}_{i,j=0}^{\infty}$. This allows one to represent the solution in the form $y_n = \tilde{\mathbf{e}} T^{n} \mathbf{y}$, where $\tilde{\mathbf{e}}$ is the row (transposed) vector $\{0, 1, 0, 0, \ldots\}^{T}$, $\mathbf{y}$ is the column vector of initial values $\{1, y, y^{2}, y^{3}, \ldots\}$, and $T^{n}$ is the $n$th power of the infinite-dimensional matrix $T$. For introductory purposes we explain the method first for polynomial recursions (see also [7]) and then prove its extension to the general case of analytical recursions.
2. POLYNOMIAL RECURSIONS

Here we consider a first-order recursion equation
$$y_{n+1} = P(y_n), \qquad (1)$$
where $P(x)$ is a polynomial of degree $m$:
$$P(x) = \sum_{k=0}^{m} a_k x^{k}, \qquad a_m \neq 0. \qquad (2)$$
Let $y_0 \equiv y$ be an initial value for the recursion (1). We denote by $\mathbf{y}$ the column vector of powers of $y$, $\mathbf{y} = \{y^{j}\}_{j=0}^{\infty}$, and set the vector $\tilde{\mathbf{e}}$ to be the row vector $\tilde{\mathbf{e}} = \{\delta_{j1}\}_{j=0}^{\infty}$. It should be emphasized that $j$ runs from 0, since in the general case $a_0 \neq 0$. In this notation $\tilde{\mathbf{e}}\mathbf{y}$ is a scalar product that yields
$$\tilde{\mathbf{e}}\mathbf{y} = y. \qquad (3)$$

THEOREM 1. For any recursion of type Eq. (1) there exists a matrix $T = \{T_{jk}\}_{j,k=0}^{\infty}$, defined by
$$[P(y)]^{j} = \Big(\sum_{i=0}^{m} a_i y^{i}\Big)^{j} = \sum_{k=0}^{jm} T_{jk} y^{k}, \qquad j = 0, 1, \ldots, \qquad (4)$$
such that
$$y_n = \tilde{\mathbf{e}} T^{n} \mathbf{y}. \qquad (5)$$
The matrix power $T^{n}$ exists for all $n$ and all the operations on the right-hand side of Eq. (5) are associative.
Proof. For $n = 0$ the statement of the theorem is valid (see Eq. (3)). We introduce the column vector $\mathbf{y}_1 \stackrel{\text{def}}{=} \{y_1^{j}\}_{j=0}^{\infty}$, where $y_1 = P(y)$. Equation (4) implies that
$$\mathbf{y}_1 = T\mathbf{y}. \qquad (6)$$
Now, by analogy to Eq. (3), we have $y_1 = \tilde{\mathbf{e}}\mathbf{y}_1 = \tilde{\mathbf{e}}T\mathbf{y}$. Therefore, the statement of the theorem is true for $n = 1$ as well. Assume that Eq. (5) is valid for $n = l$ for any initial value $y$. Then $y_{l+1}$ can be represented as $y_{l+1} = \tilde{\mathbf{e}}T^{l}\mathbf{y}_1$, where $y_1 = P(y)$ is considered as a new initial value of the recursion. Then, using Eq. (6), one obtains $y_{l+1} = \tilde{\mathbf{e}}T^{l}\mathbf{y}_1 = \tilde{\mathbf{e}}T^{l}T\mathbf{y} = \tilde{\mathbf{e}}T^{l+1}\mathbf{y}$. Note that for $j$ and $k$ satisfying $k > jm$ we have $T_{jk} \equiv 0$. Therefore, each row is finite (i.e., there is only a finite number of nonzero matrix elements in each row). This proves the existence of the powers of $T$ and the associativity in Eq. (5). Thus, the proof is complete.

EXAMPLE 1. As one can see, in the general case the elements of the matrix $T$ have the form of rather complicated sums. However, they reduce to a fairly simple expression when the polynomial (2) has only two terms. To demonstrate this, let us consider the recursion (for a good reference on this subject, see [1])
$$y_{n+1} = \lambda y_n (1 - y_n) \qquad \text{with } y_0 \equiv y. \qquad (7)$$
In this case one has
$$[P(y)]^{j} = (\lambda y - \lambda y^{2})^{j} = \sum_{i=0}^{j} \binom{j}{i} \lambda^{j-i} (-\lambda)^{i} y^{(j-i)+2i}.$$
Denoting $i = k - j$, we have
$$[P(y)]^{j} = \lambda^{j} \sum_{k=j}^{2j} (-1)^{k-j} \binom{j}{k-j} y^{k},$$
and the matrix elements $T_{jk}$ are
$$T_{jk} = (-1)^{k-j} \binom{j}{k-j} \lambda^{j}. \qquad (8)$$
Thus, we recover a known result [6, 5] for the logistic mapping. Note that, as in Riordan [8], we use the definition of binomial coefficients such that $\binom{l}{m} = 0$ for integers $l$ and $m$ whenever $m < 0$ or $m > l$.
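The construction of Theorem 1 and the closed form (8) are easy to verify numerically. The following Python sketch is our illustration and is not part of the original derivation; the helper transfer_matrix, the truncation size N, and the parameter values are arbitrary choices. The rows of $T$ are obtained by repeated polynomial multiplication, and since $T_{jk} = 0$ for $k > jm$, the truncated $N \times N$ block reproduces $y_n$ exactly whenever $m^{n} < N$ (here $m = 2$, $n = 3$, $N = 12$).

    import numpy as np
    from math import comb

    def transfer_matrix(coeffs, N):
        """First N rows and columns of T for y_{n+1} = P(y_n); coeffs[k] is a_k of Eq. (2)."""
        T = np.zeros((N, N))
        p = np.array([1.0])                      # coefficients of [P(y)]^0 = 1
        for j in range(N):
            T[j, :min(len(p), N)] = p[:N]        # T_{jk} = coefficient of y^k in [P(y)]^j
            p = np.convolve(p, coeffs)           # multiply by P(y) to get the next row
        return T

    lam, N, n = 3.7, 12, 3
    T = transfer_matrix(np.array([0.0, lam, -lam]), N)   # P(y) = lam*y - lam*y**2

    # closed form (8): T_{jk} = (-1)**(k - j) * C(j, k - j) * lam**j
    T8 = np.array([[(-1)**(k - j) * comb(j, k - j) * lam**j if j <= k <= 2*j else 0.0
                    for k in range(N)] for j in range(N)])
    assert np.allclose(T, T8)

    # Eq. (5): y_n is row 1 of T**n applied to the vector of powers of y_0
    y0, y_direct = 0.4, 0.4
    for _ in range(n):
        y_direct = lam * y_direct * (1.0 - y_direct)
    y_matrix = np.linalg.matrix_power(T, n)[1] @ (y0 ** np.arange(N))
    assert np.isclose(y_matrix, y_direct)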
3. RECURSION SOLUTION FOR ARBITRARY ANALYTIC FUNCTIONS IN THE RHS

In this section we consider complex maps analytic in a disk $B_r = \{z : |z| < r\}$ centered at the origin. Real analytic maps are considered as a special case of complex maps. Let the series
$$\sum_{k=0}^{\infty} a_k z^{k} \equiv f(z)$$
converge absolutely in the disk $B_r = \{z : |z| < r\}$. We construct the following series, absolutely convergent in the disk $B_r$ [4]:
$$\Big(\sum_{k=0}^{\infty} a_k z^{k}\Big)^{j} = a_0^{j} + \binom{j}{1} a_1 a_0^{j-1} z + \Big[\binom{j}{1} a_2 a_0^{j-1} + \binom{j}{2} a_1^{2} a_0^{j-2}\Big] z^{2} + \cdots = \sum_{k=0}^{\infty} T_{jk} z^{k}.$$
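Before this construction is formalized in Definition 1 below, the first few coefficients can be checked symbolically. The following sketch is our own illustration (it relies on the sympy library and on an arbitrary truncation of the series, neither of which appears in the paper): it verifies that the coefficient of $z^{2}$ in $[f(z)]^{j}$ agrees with the bracket displayed above for small $j$.

    import sympy as sp

    z, a0, a1, a2, a3 = sp.symbols('z a0 a1 a2 a3')
    f = a0 + a1*z + a2*z**2 + a3*z**3            # truncated Taylor series of f(z)
    for j in range(1, 6):
        Tj2 = sp.expand(f**j).coeff(z, 2)        # T_{j,2}: coefficient of z^2 in [f(z)]^j
        expected = sp.binomial(j, 1)*a2*a0**(j - 1) + sp.binomial(j, 2)*a1**2*a0**(j - 2)
        assert sp.simplify(Tj2 - expected) == 0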
DEFINITION 1. The matrix $T = \{T_{jk}\}_{j,k=0}^{\infty}$ is said to be the transfer matrix of the analytic function $f(z)$ if $\sum_{k=0}^{\infty} T_{jk} z^{k} = [f(z)]^{j}$.

THEOREM 2. Let $g(z)$ be analytic in the disk $B_R$ and $f\colon B_r \to B_R$ be analytic in the disk $B_r$; let the matrices $T$ and $S$ be the transfer matrices of the functions $f(z)$ and $g(z)$. Then the matrix $ST$ is the transfer matrix of the composition $g \circ f(z)$.

Proof. First we prove the existence of the matrix product $ST$. For arbitrary $\rho$, $0 < \rho < r$, one has $|f(z)| \le R - \varepsilon$ in the closed disk $|z| \le \rho$, for some $\varepsilon$, $0 < \varepsilon < R$. Thus, by the Cauchy estimate applied to the power $[f(z)]^{j}$, its coefficients satisfy $|T_{jk}| \le \rho^{-k}(R - \varepsilon)^{j}$, and the series
$$(ST)_{ik} = \sum_{j=0}^{\infty} S_{ij} T_{jk}, \qquad \Big|\sum_{j=0}^{\infty} S_{ij} T_{jk}\Big| \le \rho^{-k} \sum_{j=0}^{\infty} |S_{ij}| (R - \varepsilon)^{j},$$
converges absolutely, since, by definition, the series $\sum_{j=0}^{\infty} S_{ij} z^{j}$ is absolutely convergent in $B_R$. Thus, the matrix product $ST$ exists and, furthermore, we have
$$\sum_{k=0}^{\infty} \Big|\sum_{j=0}^{\infty} S_{ij} T_{jk}\Big|\, |z|^{k} \le \sum_{k=0}^{\infty} \sum_{j=0}^{\infty} |S_{ij}| \frac{(R - \varepsilon)^{j}}{\rho^{k}} |z|^{k} = \sum_{k=0}^{\infty} \frac{|z|^{k}}{\rho^{k}} \sum_{j=0}^{\infty} |S_{ij}| (R - \varepsilon)^{j},$$
and the series $\sum_{k=0}^{\infty} (ST)_{ik} z^{k}$ converges absolutely in the disk $B_\rho$. Changing the order of summation [4], one has
$$\sum_{k=0}^{\infty} (ST)_{jk} z^{k} = \sum_{i=0}^{\infty} S_{ji} \sum_{k=0}^{\infty} T_{ik} z^{k} = \sum_{i=0}^{\infty} S_{ji} [f(z)]^{i} = [g(f(z))]^{j} = [g \circ f]^{j}(z). \qquad (9)$$
Considering the limit $\rho \to r$, we infer that Eq. (9) holds in the disk $B_r$. This observation completes the proof.

In this way we can write down the solution of the recursion
$$z_{n+1} = f(z_n) \qquad \text{with } z_0 \equiv z. \qquad (10)$$
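Theorem 3 below states that, exactly as in the polynomial case, the solution of Eq. (10) is $z_n = \tilde{\mathbf{e}} T^{n} \mathbf{z}$. As a numerical illustration (our sketch, not part of the paper: the map $f(z) = z/(2 + z)$, which belongs to the family treated in Example 2 below, the truncation size N, and the helper names are our own choices), a truncated transfer matrix assembled from the Taylor coefficients of $f$ reproduces the iterates up to an error controlled by the discarded tail $z^{N}, z^{N+1}, \ldots$:

    import numpy as np

    N = 20
    c = np.zeros(N)
    k = np.arange(1, N)
    c[1:] = (-1.0)**(k - 1) / 2.0**k             # Taylor coefficients of f(z) = z/(2 + z)

    T = np.zeros((N, N))
    row = np.zeros(N)
    row[0] = 1.0                                 # [f(z)]^0 = 1
    for j in range(N):
        T[j] = row
        row = np.convolve(row, c)[:N]            # next power of f, truncated at degree N - 1

    z0, n = 0.3, 4
    z_direct = z0
    for _ in range(n):
        z_direct = z_direct / (2.0 + z_direct)
    z_matrix = np.linalg.matrix_power(T, n)[1] @ (z0 ** np.arange(N))
    assert np.isclose(z_matrix, z_direct)        # agreement up to the discarded z0**N tail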
THEOREM 3. Let $f\colon B_r \to B_r$ be analytic with transfer matrix $T$. Then, for any initial point $z \in B_r$,
$$z_n = \tilde{\mathbf{e}} T^{n} \mathbf{z},$$
where $\tilde{\mathbf{e}} = \{\delta_{i,1}\}_{i=0}^{\infty}$ and $\mathbf{z} = \{z^{i}\}_{i=0}^{\infty}$.

Since real analytic functions can be analytically continued to the complex plane, the above analysis can be employed in the real-function case as well. For practical use, instead of the condition $g(z)\colon B_R \to B_r$, the stronger condition
$$\sum_{i=0}^{\infty} |b_i| x^{i} < r \qquad \text{for } |x| < R$$
can be used, where the $b_i$ are the coefficients of the series expansion of the function $g$.

The following example presents a family of functions whose transfer matrices have a form that is invariant under multiplication.

EXAMPLE 2. We consider the function $f(x) = ax/(b + x)$. The components of the transfer matrix $T$ are
$$T_{jk} = \begin{cases} \delta_{j0}\delta_{k0} + (-1)^{k-j}\binom{k-1}{j-1} a^{j} b^{-k}, & 0 \le j \le k, \\ 0, & \text{otherwise.} \end{cases} \qquad (11)$$
If $g(x) = cx/(d + x)$ and $S$ is the transfer matrix of $g(x)$, then the components of the matrix $ST$ are
$$(ST)_{jk} = \sum_{i=0}^{\infty} S_{ji} T_{ik} = \delta_{j0}\delta_{k0} + \sum_{i=j}^{k} (-1)^{i-j}\binom{i-1}{j-1} c^{j} d^{-i}\, (-1)^{k-i}\binom{k-1}{i-1} a^{i} b^{-k}$$
$$= \delta_{j0}\delta_{k0} + (-1)^{k-j}\binom{k-1}{j-1} c^{j} b^{-k} \sum_{l=0}^{k-j} \binom{k-j}{l} \Big(\frac{a}{d}\Big)^{l+j}$$
$$= \delta_{j0}\delta_{k0} + (-1)^{k-j}\binom{k-1}{j-1} \Big(\frac{ac}{a+d}\Big)^{j} \Big(\frac{bd}{a+d}\Big)^{-k},$$
where we used the identity $\binom{k-1}{i-1}\binom{i-1}{j-1} = \binom{k-1}{j-1}\binom{k-j}{i-j}$. One can see that the matrix elements $(ST)_{jk}$ have the same form as in Eq. (11).
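Theorem 2 and the invariance just observed can be checked numerically. The sketch below is our illustration (the truncation size N, the parameter values, and the helper functions are our own choices, not the paper's): it builds truncated transfer matrices of $f(x) = ax/(b + x)$ and $g(x) = cx/(d + x)$ and compares their product $ST$ with the transfer matrix built directly from $g \circ f(x) = Ax/(B + x)$, $A = ac/(a + d)$, $B = bd/(a + d)$.

    import numpy as np

    def mobius_coeffs(a, b, N):
        """Taylor coefficients of a*x/(b + x): (-1)**(k - 1) * a / b**k for k >= 1."""
        k = np.arange(N)
        coeffs = (-1.0)**(k - 1) * a / b**k
        coeffs[0] = 0.0
        return coeffs

    def transfer_matrix(coeffs, N):
        T = np.zeros((N, N))
        row = np.zeros(N)
        row[0] = 1.0
        for j in range(N):
            T[j] = row
            row = np.convolve(row, coeffs)[:N]
        return T

    N, (a, b), (c, d) = 15, (1.0, 3.0), (2.0, 5.0)
    S = transfer_matrix(mobius_coeffs(c, d, N), N)         # transfer matrix of g
    T = transfer_matrix(mobius_coeffs(a, b, N), N)         # transfer matrix of f
    A, B = a*c / (a + d), b*d / (a + d)                    # g(f(x)) = A*x/(B + x)
    ST_expected = transfer_matrix(mobius_coeffs(A, B, N), N)
    assert np.allclose(S @ T, ST_expected)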
4. OPERATIONS WITH TRANSFER MATRICES

An interesting question to ask now is: What is the inverse of a transfer matrix, for example, of the matrix (8)? The answer to this question, together with a possible method for calculating the general form of the elements of the inverse matrix, is given by the following theorem.

THEOREM 4. Let $T$ be the transfer matrix of the analytic function $f(x)$, $f(0) = 0$, $f'(0) \neq 0$. Then $T^{-1}$ exists and is the transfer matrix of the inverse function $f^{-1}(x)$.

Note that by the inverse function $f^{-1}(x)$ we understand the formal Taylor series which, under substitution, produces $f^{-1} \circ f(x) = f \circ f^{-1}(x) = x$. The series converges if and only if the function defined by the implicit function theorem as a solution of $f(x) = y$ with initial value $f(0) = 0$ is analytic at the point $y = 0$. Thus, only local information about the function $f(x)$ is used in the theorem.

Proof. The statement of the theorem implies that $T$ is an upper triangular matrix. Indeed, $f(x)$ can be represented as $f(x) = x\varphi(x)$ with $\varphi$ analytic, and therefore $[f(x)]^{j} = x^{j}[\varphi(x)]^{j}$ and $T_{jk} = 0$ for $k < j$. Existence and uniqueness of the inverse matrix, which is also an upper triangular matrix, is proved in [3]. To prove the theorem, we consider the formal Taylor series
$$g(x) = \sum_{k=0}^{\infty} T^{-1}_{1k} x^{k}.$$
Now we can form the transfer matrix $S$ of the function $g(x)$. Then $(ST)_{1j} = \delta_{1j}$, since the first row of the matrix $S$ is the same as that of the matrix
$T^{-1}$. On the other hand, Theorem 2 implies $g \circ f(x) = \sum_{k=0}^{\infty} (ST)_{1k} x^{k} = x$. Therefore, the powers $(g \circ f(x))^{j}$ are equal to $x^{j} = \sum_{k=0}^{\infty} \delta_{jk} x^{k} = \sum_{k=0}^{\infty} (ST)_{jk} x^{k}$, which implies $(ST)_{jk} = \delta_{jk}$. Thus, $ST$ is the unity matrix, $S = T^{-1}$, and $g(x) = f^{-1}(x)$. The theorem is proved.

EXAMPLE 3. For the logistic mapping $f(x) = \lambda x(1 - x)$ we have $f^{-1}(y) = \big(1 - \sqrt{1 - 4y/\lambda}\big)/2$ (we eliminate the second branch $\big(1 + \sqrt{1 - 4y/\lambda}\big)/2$ of the inverse function because it violates the condition $f^{-1} \circ f(0) = 0$). To get an explicit form of the elements $T^{-1}_{jk}$, one can notice that $f^{-1}(y)$ is simply related to the generating function $c(x) = (2x)^{-1}\big(1 - \sqrt{1 - 4x}\big)$ of the Catalan numbers [8]: $f^{-1}(y) = (y/\lambda)\, c(y/\lambda)$. The powers $[c(x)]^{j}$ are decomposed with the aid of ballot numbers as follows:
$$[c(x)]^{j} = \sum_{k} a_{j+k-1,\,k}\, x^{k},$$
where
$$a_{l-1,\,m} = \frac{l - m}{l + m}\binom{l + m}{m}$$
are ballot numbers. This relation immediately gives $T^{-1}_{jk} = \lambda^{-k} a_{k-1,\,k-j}$, i.e.,
$$T^{-1}_{jk} = \lambda^{-k}\, \frac{j}{2k - j}\binom{2k - j}{k - j}.$$
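This formula is easy to test numerically. The following sketch is our own check, not part of the paper (the value of $\lambda$ and the truncation size N are arbitrary choices): it builds $T$ from Eq. (8) and $T^{-1}$ from the ballot-number formula above and confirms that, both matrices being upper triangular, the truncated products are identity blocks.

    import numpy as np
    from math import comb

    lam, N = 3.2, 12
    T = np.array([[(-1)**(k - j) * comb(j, k - j) * lam**j if j <= k <= 2*j else 0.0
                   for k in range(N)] for j in range(N)])
    Tinv = np.zeros((N, N))
    Tinv[0, 0] = 1.0                              # [f^{-1}(y)]^0 = 1
    for j in range(1, N):
        for k in range(j, N):                     # upper triangular part only
            Tinv[j, k] = lam**(-k) * j / (2*k - j) * comb(2*k - j, k - j)

    assert np.allclose(T @ Tinv, np.eye(N))       # T T^{-1} = E on the truncated block
    assert np.allclose(Tinv @ T, np.eye(N))       # T^{-1} T = E on the truncated block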
Writing down the orthogonality relation $TT^{-1} = E$ in matrix components, one finds the combinatorial identity
$$\sum_{k} (-1)^{k-j}\binom{j}{k - j}\, \frac{k}{2l - k}\binom{2l - k}{l - k} = \delta_{j,l}$$
(see also [8]).

The additional condition $f'(0) \neq 1$ makes possible the diagonalization of a transfer matrix.

THEOREM 5. Let $T$ be the transfer matrix of the analytic function $f(x)$, $f(0) = 0$, $f'(0) = \lambda \neq 0$, $\lambda \neq 1$. Then transfer matrices $D$ and $D^{-1}$ exist such that
$$T = D^{-1}\Lambda D, \qquad (12)$$
where $\Lambda$ is a diagonal matrix with elements $\Lambda_{jk} = \lambda^{j}\delta_{jk}$.
Proof. We rewrite Eq. (12) as $DT = \Lambda D$ and note that the structure of the matrices $T$ and $\Lambda$ is rather simple, and the equation gives a straightforward algorithm for calculating the matrix $D$. Indeed,
$$D_{jk} = (\lambda^{j} - T_{kk})^{-1} \sum_{i=1}^{k-1} T_{ik} D_{ji}.$$
This relation defines the off-diagonal elements of the matrix $D$, while the diagonal ones remain arbitrary. Thus, we have a family of suitable upper triangular matrices. To prove that the family contains a transfer matrix, we fix the undetermined element $D_{11}$ to be, say, $\alpha \neq 0$ and, as before, consider a formal series
$$h(x) = \sum_{k=1}^{\infty} D_{1k} x^{k}.$$
This allows one to construct the transfer matrix $S$ of the function $h(x)$ and, since $\Lambda$ is also the transfer matrix of the function $l(x) = \lambda x$, we can apply the reasoning from the previous proof to confirm that the equation $ST = \Lambda S$ holds. Indeed, the first rows of the matrices on the left- and right-hand sides are equal due to the fact that $S_{1k} = D_{1k}$. The rest of the equation is implied by Theorem 2. The theorem is proved.

COROLLARY 1. Let $f\colon B_r \to B_r$ be an analytic function. Then for any initial point $z \in B_r$ we have the following representation:
$$z_n = \tilde{\mathbf{e}} D^{-1}\Lambda^{n} D \mathbf{z}.$$

EXAMPLE 4. The correspondences $T \leftrightarrow f(x)$, $D \leftrightarrow h(x)$, $D^{-1} \leftrightarrow h^{-1}(x)$, and $\Lambda \leftrightarrow l(x)$ allow us to rewrite Eq. (12) in the "functional" form
$$f(x) = h^{-1}\big(\lambda h(x)\big),$$
where $h(x) = \sum_{k=1}^{\infty} D_{1k} x^{k}$. The elements $D_{1k}$ obey the linear recursion
$$D_{1k} = (\lambda - \lambda^{k})^{-1} \sum_{i=1}^{k-1} T_{ik} D_{1i}.$$
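The recursion above, together with the construction of $D$ as the transfer matrix of $h(x)$, gives a concrete algorithm for Theorem 5. The sketch below is our numerical illustration, not part of the paper ($\lambda = 0.6$, the truncation size N, and the helper names are our own choices): it computes $D_{1k}$ for the logistic map, assembles $D$ row by row as powers of $h$, and checks Eq. (12) in the form $DT = \Lambda D$ on the truncated block.

    import numpy as np
    from math import comb

    lam, N = 0.6, 12
    T = np.array([[(-1)**(k - j) * comb(j, k - j) * lam**j if j <= k <= 2*j else 0.0
                   for k in range(N)] for j in range(N)])

    h = np.zeros(N)                               # h[k] = D_{1k}; D_{11} fixed to 1
    h[1] = 1.0
    for k in range(2, N):
        h[k] = sum(T[i, k] * h[i] for i in range(1, k)) / (lam - lam**k)

    D = np.zeros((N, N))                          # D = transfer matrix of h(x)
    row = np.zeros(N)
    row[0] = 1.0
    for j in range(N):
        D[j] = row
        row = np.convolve(row, h)[:N]             # coefficients of h(x)**(j + 1)

    Lam = np.diag(lam ** np.arange(N))
    assert np.allclose(D @ T, Lam @ D)            # Eq. (12) in the form D T = Lambda D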
For some special cases this recursion admits an explicit solution for $D_{1k}$, which gives us information about $h(x)$. For example, using this result, it is possible to recover the well-known representation of the logistic map for $\lambda = 4$:
$$4x(1 - x) = \big(\sin 2\arcsin\sqrt{x}\big)^{2} = \sin^{2}\Big(\sqrt{4\big(\arcsin\sqrt{x}\big)^{2}}\Big),$$
which corresponds to $f(x) = 4x(1 - x)$ and $h(x) = \big(\arcsin\sqrt{x}\big)^{2}$ and implies the easy-to-analyze solution
$$f^{n}(x) = \big(\sin 2^{n}\arcsin\sqrt{x}\big)^{2},$$
where $f^{n}(x)$ is the $n$th iteration of the function $f(x)$.
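Both statements for $\lambda = 4$ can be confirmed numerically. The sketch below is our own check, not part of the paper (the truncation size N and the sample values are arbitrary): the recursion for $D_{1k}$ with $D_{11} = 1$ reproduces the Taylor coefficients $4^{k}/\big(2k^{2}\binom{2k}{k}\big)$ of $h(x) = (\arcsin\sqrt{x})^{2}$, and the closed-form iterate matches direct iteration of $f(x) = 4x(1 - x)$.

    import numpy as np
    from math import asin, comb, sin, sqrt

    lam, N = 4.0, 10
    T = np.array([[(-1)**(k - j) * comb(j, k - j) * lam**j if j <= k <= 2*j else 0.0
                   for k in range(N)] for j in range(N)])

    D1 = np.zeros(N)                              # D1[k] = D_{1k}, with D_{11} = 1
    D1[1] = 1.0
    for k in range(2, N):
        D1[k] = sum(T[i, k] * D1[i] for i in range(1, k)) / (lam - lam**k)

    # Taylor coefficients of h(x) = (arcsin(sqrt(x)))**2: 4**k / (2 * k**2 * C(2k, k))
    h_arcsin = np.array([0.0] + [4.0**k / (2 * k**2 * comb(2*k, k)) for k in range(1, N)])
    assert np.allclose(D1, h_arcsin)

    # f^n(x) = sin(2**n * arcsin(sqrt(x)))**2 versus direct iteration of f(x) = 4x(1 - x)
    x0, n = 0.2, 10
    x_direct = x0
    for _ in range(n):
        x_direct = 4.0 * x_direct * (1.0 - x_direct)
    assert np.isclose(x_direct, sin(2**n * asin(sqrt(x0)))**2)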
5. SUMMARY

In this work we prove convergence of the series used in the Carleman linearization of nonlinear analytic recursions. We establish a correspondence between analytic functions and infinite matrices of a certain type. This correspondence is proven to extend to operations with matrices. Namely, for an analytic function satisfying certain conditions and the corresponding matrix, the following is true: the inverse matrix corresponds to the formal inverse series of the function. A similar result is obtained for the diagonalization of the matrix, which corresponds to the formal representation $f(x) = h^{-1}(\lambda h(x))$ of the given function $f(x)$, where $\lambda = f'(0)$ and the functions $h(x)$ and $h^{-1}(x)$ are given in the form of series. Convergence of these series is a possible subject for future research.
ACKNOWLEDGMENTS

We thank Dany Ben-Avraham, Iain Stewart, and Michael Grinfeld for a critical reading of this manuscript.
REFERENCES

1. R. P. Agarwal, "Difference Equations and Inequalities," Dekker, New York, 1992.
2. T. Carleman, Application de la théorie des équations intégrales linéaires aux systèmes d'équations différentielles non linéaires, Acta Math. 59 (1932), 63-68.
3. R. G. Cooke, "Infinite Matrices and Sequence Spaces," Macmillan, London, 1950.
4. R. Courant, "Differential and Integral Calculus," Vol. 1, Blackie & Son, London, 1963.
5. K. Kowalski and W.-H. Steeb, "Nonlinear Dynamical Systems and Carleman Linearization," World Scientific, Singapore, 1991.
6. S. Rabinovich, G. Berkolaiko, S. Buldyrev, A. Shehter, and S. Havlin, Logistic map: An analytical solution, Phys. A 218 (1995), 457-460.
7. S. Rabinovich, G. Berkolaiko, and S. Havlin, Solving nonlinear recursions, J. Math. Phys. 37 (1996), 5828-5836.
8. J. Riordan, "Combinatorial Identities," Wiley, London, 1968.
9. C. A. Tsiligiannis and G. Liberatos, Steady state bifurcations and exact multiplicity conditions via Carleman linearization, J. Math. Anal. Appl. 126 (1987), 143-160.
10. C. A. Tsiligiannis and G. Liberatos, Normal forms, resonance and bifurcation analysis via the Carleman linearization, J. Math. Anal. Appl. 139 (1989), 123-138.