CHAPTER 3
Linear Systems of Difference Equations
3.0. Introduction

In this chapter, we shall treat systems of linear difference equations. Some results discussed in Chapter 2 are presented here in an elegant form in terms of matrix theory. After investigating the basic theory, the method of variation of constants, and higher order systems in Sections 3.1 to 3.3, we shall consider the case of periodic solutions in Section 3.4. Boundary value problems are dealt with in Section 3.5, where the classical theory of Poincaré is also included. The elements of matrix theory that are necessary for this chapter may be found in Appendix A. Some useful problems are given in Section 3.6.
3.1. Basic Theory

Let $A(n)$ be an $s \times s$ matrix whose elements $a_{ij}(n)$ are real or complex functions defined on $N_{n_0}^+$, and let $y_n \in R^s$ (or $C^s$) have components that are functions defined on the same set $N_{n_0}^+$. The linear equation

$$y_{n+1} = A(n)y_n + b_n, \qquad (3.1.1)$$

where $b_n \in R^s$ and $y_{n_0}$ is a given vector, is said to be a nonhomogeneous linear difference equation. The corresponding homogeneous linear difference equation is

$$y_{n+1} = A(n)y_n. \qquad (3.1.2)$$
When an initial vector $y_{n_0}$ is assigned, both (3.1.1) and (3.1.2) determine the solution uniquely on the set $N_{n_0}^+$, as can easily be seen by induction. For example, it follows from (3.1.2) that the solution takes the form

$$y_n = \left(\prod_{i=n_0}^{n-1} A(i)\right) y_{n_0}, \qquad (3.1.3)$$

from which follows the uniqueness of the solution passing through $y_{n_0}$, because $\prod_{i=n_0}^{n-1} A(i)$ is uniquely defined for all $n$. Sometimes, in order to avoid confusion, we shall denote by $y(n, n_0, y_{n_0})$ the solution of (3.1.1) or (3.1.2) having $y_{n_0}$ as initial vector.

Let us now consider the space $S$ of solutions of (3.1.2). It is a linear space, since by taking any two solutions of (3.1.2) we can show that any linear combination of them is a solution of the same equation. Let $E_1, E_2, \ldots, E_s$ be the unit vectors of $R^s$ and $y(n, n_0, E_i)$, $i = 1, 2, \ldots, s$, the $s$ solutions having $E_i$ as initial vectors.

Lemma 3.1.1. Any element of $S$ can be expressed as a linear combination of $y(n, n_0, E_i)$, $i = 1, 2, \ldots, s$.

Proof. Let $y(n, n_0, c)$ be a solution of (3.1.2) with $y_{n_0} = c \in R^s$. From the linearity of $S$ and from $c = \sum_{i=1}^{s} c_i E_i$, it follows that the vector

$$z_n = \sum_{i=1}^{s} c_i y(n, n_0, E_i)$$

satisfies (3.1.2) and has $c$ as initial vector. Then, by uniqueness, $z_n$ must coincide with $y(n, n_0, c)$. •
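The formula (3.1.3) is easy to check numerically. Below is a minimal sketch (assuming NumPy; the coefficient matrix $A(n)$ and the initial vector are illustrative choices, not taken from the text) showing that stepping the recurrence and accumulating the matrix product give the same solution:

    import numpy as np

    def A(n):
        # illustrative 2 x 2 time-dependent coefficient matrix
        return np.array([[0.5, 1.0 / (n + 1)],
                         [0.0, 0.8]])

    n0, n, y0 = 0, 10, np.array([1.0, -1.0])

    # step the recurrence y_{i+1} = A(i) y_i
    y = y0.copy()
    for i in range(n0, n):
        y = A(i) @ y

    # accumulate the product prod_{i=n0}^{n-1} A(i)  (left-multiplying)
    P = np.eye(2)
    for i in range(n0, n):
        P = A(i) @ P

    assert np.allclose(y, P @ y0)   # equation (3.1.3)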
Definition 3.1.1. Let $f_i(n)$, $i = 1, 2, \ldots, s$, be vector-valued functions defined on $N_{n_0}^+$. They are linearly dependent if there exist constants $a_i$, $i = 1, 2, \ldots, s$, not all zero, such that $\sum_{i=1}^{s} a_i f_i(n) = 0$ for all $n \ge n_0$.

Definition 3.1.2. The vectors $f_i(n)$, $i = 1, 2, \ldots, s$, are linearly independent if they are not linearly dependent.

Let us define the matrix $K(n) = (f_1(n), f_2(n), \ldots, f_s(n))$ whose columns are the vectors $f_i(n)$. Also let $a$ be the vector $(a_1, a_2, \ldots, a_s)^T$.

Theorem 3.1.1. If there exists an $\bar n \in N_{n_0}^+$ such that $\det K(\bar n) \ne 0$, then the vectors $f_i(n)$, $i = 1, 2, \ldots, s$, are linearly independent.
Proof. Suppose that for $n \ge n_0$

$$K(n)a = \sum_{i=1}^{s} a_i f_i(n) = 0.$$

Since $\det K(\bar n) \ne 0$, it follows that $a = 0$, and the functions $f_i(n)$ are not linearly dependent. •
Theorem 3.1.2. If $f_i(n)$, $i = 1, 2, \ldots, s$, are solutions of (3.1.2) with $\det A(n) \ne 0$ for $n \in N_{n_0}^+$, and if $\det K(n_0) \ne 0$, then $\det K(n) \ne 0$ for all $n \in N_{n_0}^+$.

Proof. For $n \ge n_0$,

$$\det K(n+1) = \det(f_1(n+1), f_2(n+1), \ldots, f_s(n+1)) = \det A(n) \det K(n), \qquad (3.1.4)$$

from which it follows that

$$\det K(n) = \left(\prod_{i=n_0}^{n-1} \det A(i)\right) \det K(n_0). \qquad (3.1.5)$$ •
Corollary 3.1.1. The solutions $y(n, n_0, E_i)$, $i = 1, 2, \ldots, s$, of (3.1.2) with $\det A(n) \ne 0$ for $n \ge n_0$ are linearly independent.

Proof. In this case $K(n_0) = I$, the identity matrix, so $\det K(n_0) = 1$, and by Theorem 3.1.1 the result follows. •

Corollary 3.1.2. If the columns of $K(n)$ are linearly independent solutions of (3.1.2) with $\det A(n) \ne 0$, then $\det K(n) \ne 0$ for all $n \ge n_0$.

Proof. The proof follows from the fact that there exists an $\bar n$ at which $\det K(\bar n) \ne 0$ and from the relation (3.1.4). •
The matrix $K(n)$, when its columns are solutions of (3.1.2), is called the Casorati matrix or fundamental matrix. We shall reserve the name fundamental matrix for a slightly different matrix, and call $K(n)$ the Casorati matrix. Its determinant is called the Casoratian and plays the same role as the Wronskian in the continuous case. The Casorati matrix satisfies the equation

$$K(n+1) = A(n)K(n). \qquad (3.1.6)$$

Theorem 3.1.3. The space $S$ of all solutions of (3.1.2) is a linear space of dimension $s$.
The proof is an easy consequence of Lemma 3.1.1 and Corollary 3.1.1.

Definition 3.1.3. Given $s$ linearly independent solutions of (3.1.2) and a vector $c \in R^s$ of arbitrary components, the vector-valued function $y_n = K(n)c$ is said to be the general solution of (3.1.2).

Fixing the initial condition $y_{n_0}$, it follows from Definition 3.1.3 that $c = K^{-1}(n_0)y_{n_0}$ and

$$y(n, n_0, y_{n_0}) = K(n)K^{-1}(n_0)y_{n_0}, \qquad (3.1.7)$$

and in general, for $s \in N_{n_0}^+$ and $y_s = c$,

$$y(n, s, c) = K(n)K^{-1}(s)c. \qquad (3.1.8)$$

The matrix

$$\Phi(n, s) = K(n)K^{-1}(s) \qquad (3.1.9)$$

satisfies the same equation as $K(n)$, i.e., $\Phi(n+1, s) = A(n)\Phi(n, s)$. Moreover, $\Phi(n, n) = I$ for all $n \ge n_0$. We shall call $\Phi$ the fundamental matrix. In terms of the fundamental matrix, (3.1.7) can be written as $y(n, n_0, y_{n_0}) = \Phi(n, n_0)y_{n_0}$. Other properties of the matrix $\Phi$ are

(i) $\Phi(n, s)\Phi(s, t) = \Phi(n, t)$;
(ii) if $\Phi^{-1}(n, s)$ exists, then $\Phi^{-1}(n, s) = \Phi(s, n)$. $\qquad$ (3.1.10)

The relation (3.1.10) allows us to define $\Phi(s, n)$ for $s < n$. Let us now consider the nonhomogeneous equation (3.1.1).

Lemma 3.1.2. The difference between any two solutions $y_n$ and $\bar y_n$ of (3.1.1) is a solution of (3.1.2).
Proof. From the fact that

$$y_{n+1} = A(n)y_n + b_n, \qquad \bar y_{n+1} = A(n)\bar y_n + b_n,$$

one obtains

$$y_{n+1} - \bar y_{n+1} = A(n)(y_n - \bar y_n),$$

which proves the lemma. •

Theorem 3.1.4. Every solution of (3.1.1) can be written in the form $y_n = \bar y_n + \Phi(n, n_0)c$, where $\bar y_n$ is a particular solution of (3.1.1) and $\Phi(n, n_0)$ is the fundamental matrix of the homogeneous equation (3.1.2).

Proof. From Lemma 3.1.2, $y_n - \bar y_n \in S$, and an element of this space can be written in the form $\Phi(n, n_0)c$. •

If the matrix $A$ is independent of $n$, the fundamental matrix simplifies, because $\Phi(n, n_0) = \Phi(n - n_0, 0)$.
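The objects of this section translate directly into a few lines of code. The following sketch (again NumPy, with an illustrative nonsingular $A(n)$) builds the Casorati matrix from the unit initial vectors, forms $\Phi(n, s) = K(n)K^{-1}(s)$ as in (3.1.9), and checks properties (i) and (ii):

    import numpy as np

    def A(n):
        return np.array([[1.0, 0.1],
                         [-0.2, 0.9 + 0.01 * n]])   # illustrative, nonsingular

    def K(n, n0=0):
        # Casorati matrix: columns are the solutions started from the unit
        # vectors, so K(n0) = I and K(n+1) = A(n) K(n)
        M = np.eye(2)
        for i in range(n0, n):
            M = A(i) @ M
        return M

    def Phi(n, s):
        return K(n) @ np.linalg.inv(K(s))            # equation (3.1.9)

    # property (i): Phi(n, s) Phi(s, t) = Phi(n, t)
    assert np.allclose(Phi(7, 4) @ Phi(4, 2), Phi(7, 2))
    # property (ii): Phi(n, s)^{-1} = Phi(s, n)
    assert np.allclose(np.linalg.inv(Phi(7, 4)), Phi(4, 7))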
3.2. Method of Variation of Constants
From the general solution of (3.1.2), it is possible to obtain the general solution of (3.1.1). The general solution of (3.1.2) is given by $y(n, n_0, c) = \Phi(n, n_0)c$. Let $c$ be a function defined on $N_{n_0}^+$ and let us impose the condition that $y(n, n_0, c_n)$ satisfy (3.1.1). We then have

$$y(n+1, n_0, c_{n+1}) = \Phi(n+1, n_0)c_{n+1} = A(n)\Phi(n, n_0)c_{n+1} = A(n)\Phi(n, n_0)c_n + b_n,$$

from which, supposing that $\det A(n) \ne 0$ for all $n \ge n_0$, we get

$$c_{n+1} = c_n + \Phi(n_0, n+1)b_n.$$

The solution of the above equation is

$$c_n = c_{n_0} + \sum_{j=n_0}^{n-1} \Phi(n_0, j+1)b_j.$$

The solution of (3.1.1) can now be written as

$$y(n, n_0, c_{n_0}) = \Phi(n, n_0)c_{n_0} + \sum_{j=n_0}^{n-1} \Phi(n, n_0)\Phi(n_0, j+1)b_j = \Phi(n, n_0)c_{n_0} + \sum_{j=n_0}^{n-1} \Phi(n, j+1)b_j,$$

from which, setting $c_{n_0} = y_{n_0}$, we have

$$y(n, n_0, y_{n_0}) = \Phi(n, n_0)y_{n_0} + \sum_{j=n_0}^{n-1} \Phi(n, j+1)b_j. \qquad (3.2.1)$$

By comparing (3.1.6) and (3.1.9), it follows that $\Phi(n, n_0) = \prod_{i=n_0}^{n-1} A(i)$, where $\prod_{i=n_0}^{n_0-1} A(i) = I$. We can rewrite (3.2.1) in the form

$$y(n, n_0, y_{n_0}) = \left(\prod_{i=n_0}^{n-1} A(i)\right)y_{n_0} + \sum_{j=n_0}^{n-1}\left(\prod_{s=j+1}^{n-1} A(s)\right)b_j. \qquad (3.2.2)$$

In the case where $A$ is a constant matrix, $\Phi(n, n_0) = A^{n-n_0}$ and, of course, $\Phi(n, n_0) = \Phi(n - n_0, 0)$. The equation (3.2.2) reduces to

$$y(n, n_0, y_{n_0}) = A^{n-n_0}y_{n_0} + \sum_{j=n_0}^{n-1} A^{n-j-1}b_j. \qquad (3.2.3)$$
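For a constant matrix, the formula (3.2.3) can be verified against direct iteration. A small sketch (NumPy; the matrix and forcing term are illustrative choices):

    import numpy as np
    from numpy.linalg import matrix_power as mp

    A = np.array([[0.5, 0.2],
                  [0.1, 0.3]])
    b = lambda j: np.array([1.0, np.sin(j)])   # illustrative forcing term
    y0, n0, n = np.array([2.0, 0.0]), 0, 12

    # direct recursion y_{j+1} = A y_j + b_j
    y = y0.copy()
    for j in range(n0, n):
        y = A @ y + b(j)

    # variation of constants formula (3.2.3)
    y_voc = mp(A, n - n0) @ y0 + sum(mp(A, n - j - 1) @ b(j)
                                     for j in range(n0, n))

    assert np.allclose(y, y_voc)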
Let us now consider the case where $A(n)$ as well as $b_n$ are defined on $N^\pm$.

Theorem 3.2.1. Suppose that $\sum_{j=-\infty}^{n-1} \|K^{-1}(j+1)\| < +\infty$ and $\|b_j\| < b$, $b \in R^+$, $j \in N^\pm$. Then

$$y_n = \sum_{s=0}^{\infty} K(n)K^{-1}(n-s)b_{n-s-1} \qquad (3.2.4)$$

is a solution of (3.1.1).
Proof. For $m \in N^\pm$ consider the solution, corresponding to $y_m = 0$,

$$y(n, m, 0) = \sum_{j=m}^{n-1} K(n)K^{-1}(j+1)b_j,$$

and the sequence $y(n, m-1, 0), y(n, m-2, 0), \ldots$. This sequence is a Cauchy sequence since, for $\tau > 0$, $\varepsilon > 0$, and $m_1$ chosen such that $\sum_{j=m_1-\tau}^{m_1-1} \|K^{-1}(j+1)\| < \varepsilon$,

$$\|y(n, m_1 - \tau, 0) - y(n, m_1, 0)\| = \left\|\sum_{j=m_1-\tau}^{m_1-1} K(n)K^{-1}(j+1)b_j\right\| \le \|K(n)\|\, b\, \varepsilon.$$

It follows that the sequence will converge as $m \to -\infty$. Let $y_n$ be the limit, given by

$$y_n = \sum_{j=-\infty}^{n-1} K(n)K^{-1}(j+1)b_j,$$

which is again a solution of (3.1.1). By setting $s = n - j - 1$, we obtain

$$y_n = \sum_{s=0}^{\infty} K(n)K^{-1}(n-s)b_{n-s-1}. \qquad \bullet$$

In the case of constant coefficients this solution takes the form

$$y_n = \sum_{s=0}^{\infty} A^s b_{n-s-1}, \qquad (3.2.5)$$

which exists if the eigenvalues of $A$ are inside the unit circle.
Let us close this section by giving the solution in a form that corresponds to the one given using the formal series in the scalar case. Let

$$y_{n+1} = Ay_n + b_n. \qquad (3.2.6)$$

By multiplying by $z^n$, with $z \in C$, and summing formally from zero to infinity, one has

$$\sum_{n=0}^{\infty} y_{n+1}z^n = A\sum_{n=0}^{\infty} y_n z^n + \sum_{n=0}^{\infty} b_n z^n.$$

Letting

$$Y(z) = \sum_{n=0}^{\infty} y_n z^n, \qquad B(z) = \sum_{n=0}^{\infty} b_n z^n,$$

and substituting, one obtains

$$z^{-1}(Y(z) - y_0) = AY(z) + B(z),$$

from which

$$(I - zA)Y(z) = y_0 + zB(z)$$

and

$$Y(z) = (I - zA)^{-1}(y_0 + zB(z)). \qquad (3.2.7)$$

When the formal series is convergent, the previous formula furnishes the solutions as the coefficient vectors of $Y(z)$. The matrix $R(z^{-1}, A) = (z^{-1}I - A)^{-1}$ is called the resolvent of $A$ (see A.3). Its properties reflect the properties of the solution $y_n$.
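For a homogeneous system with constant coefficients, the identity $Y(z) = (I - zA)^{-1}y_0$ can be tested numerically: when $|z|$ times the spectral radius of $A$ is below 1, the truncated series $\sum_n z^n A^n y_0$ approaches the closed form. A sketch under these assumptions (illustrative data):

    import numpy as np
    from numpy.linalg import matrix_power as mp

    A = np.array([[0.4, 0.3],
                  [0.0, 0.5]])        # spectral radius 0.5 < 1
    y0 = np.array([1.0, 2.0])
    z = 0.9                           # |z| * rho(A) < 1, so the series converges

    # truncated series sum_n y_n z^n with y_n = A^n y0 (homogeneous case, b_n = 0)
    series = sum(z**n * (mp(A, n) @ y0) for n in range(200))

    # closed form (3.2.7) with B(z) = 0
    closed = np.linalg.solve(np.eye(2) - z * A, y0)

    assert np.allclose(series, closed)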
3.3. Systems Representing High Order Equations

Any $k$th-order scalar linear difference equation

$$y_{n+k} + p_1(n)y_{n+k-1} + \cdots + p_k(n)y_n = g_n \qquad (3.3.1)$$

can be written as a first order system in $R^k$ by introducing the vectors

$$Y_n = \begin{pmatrix} y_n \\ y_{n+1} \\ \vdots \\ y_{n+k-1} \end{pmatrix}, \qquad Y_0 = \begin{pmatrix} y_0 \\ y_1 \\ \vdots \\ y_{k-1} \end{pmatrix}, \qquad G_n = \begin{pmatrix} 0 \\ 0 \\ \vdots \\ g_n \end{pmatrix} \qquad (3.3.2)$$

and the matrix

$$A(n) = \begin{pmatrix} 0 & 1 & 0 & \cdots & 0 \\ 0 & 0 & 1 & \cdots & 0 \\ \vdots & & & \ddots & \vdots \\ 0 & 0 & 0 & \cdots & 1 \\ -p_k(n) & -p_{k-1}(n) & \cdots & \cdots & -p_1(n) \end{pmatrix}. \qquad (3.3.3)$$

Using this notation, equation (3.3.1) becomes

$$Y_{n+1} = A(n)Y_n + G_n, \qquad (3.3.4)$$

where $Y_0$ is the initial condition. The matrix $A(n)$ is called the companion (or Frobenius) matrix, and some of its interesting properties that characterize the solution of (3.3.4) are given:

(i) The determinant of $A(n) - \lambda I$ is the polynomial $(-1)^k(\lambda^k + p_1(n)\lambda^{k-1} + \cdots + p_k(n))$. When $A$ is independent of $n$, this polynomial coincides with the characteristic polynomial;
(ii) $\det A(n) = (-1)^k p_k(n)$, and $A(n)$ is nonsingular if (3.3.1) is really a $k$th order equation;

(iii) No multiple eigenvalue of $A(n)$ is semisimple (see Appendix A): every eigenvalue of a companion matrix has geometric multiplicity one, whatever its algebraic multiplicity. This property is important in determining the qualitative behavior of the solutions;

(iv) When $A$ is independent of $n$ and has simple eigenvalues $z_1, z_2, \ldots, z_k$, it can be diagonalized by the similarity transformation $A = VDV^{-1}$, where $V$ is the Vandermonde matrix $V(z_1, z_2, \ldots, z_k)$ and $D = \mathrm{diag}(z_1, z_2, \ldots, z_k)$.

The solution of (3.3.4) is deduced from (3.2.1), which in the present notation becomes

$$Y_n = \Phi(n, n_0)Y_0 + \sum_{j=n_0}^{n-1} \Phi(n, j+1)G_j.$$
The fundamental matrix $\Phi(n, n_0)$ is given by $\Phi(n, n_0) = K(n)K^{-1}(n_0)$, where the Casorati matrix $K(n)$ is given in terms of $k$ independent solutions $f_1(n), f_2(n), \ldots, f_k(n)$ of the homogeneous equation corresponding to (3.3.4), i.e.,

$$K(n) = \begin{pmatrix} f_1(n) & f_2(n) & \cdots & f_k(n) \\ f_1(n+1) & f_2(n+1) & \cdots & f_k(n+1) \\ \vdots & & & \vdots \\ f_1(n+k-1) & f_2(n+k-1) & \cdots & f_k(n+k-1) \end{pmatrix}.$$
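The reduction of a scalar equation to the companion form (3.3.3)-(3.3.4) is mechanical. The following sketch (NumPy; the coefficients and forcing are illustrative) builds $A$, steps the system, and cross-checks the last component against the scalar recursion:

    import numpy as np

    def companion(p):
        # p = [p1, ..., pk] from y_{n+k} + p1 y_{n+k-1} + ... + pk y_n = g_n
        k = len(p)
        A = np.zeros((k, k))
        A[:-1, 1:] = np.eye(k - 1)          # superdiagonal of ones
        A[-1, :] = -np.array(p[::-1])       # last row: -pk, ..., -p1
        return A

    p = [-1.5, 0.56]                        # illustrative: y_{n+2} - 1.5 y_{n+1} + 0.56 y_n = g_n
    A = companion(p)
    g = lambda n: 1.0                       # illustrative forcing
    Y = np.array([0.0, 1.0])                # (y_0, y_1)

    ys = [Y[0], Y[1]]
    for n in range(20):
        Y = A @ Y + np.array([0.0, g(n)])   # equation (3.3.4)
        ys.append(Y[1])                     # last component carries the new value

    # cross-check against the scalar recursion
    y = [0.0, 1.0]
    for n in range(20):
        y.append(1.5 * y[-1] - 0.56 * y[-2] + g(n))
    assert np.allclose(ys, y)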
The solution $Y_n$ of (3.3.4) has redundant information concerning the solution of (3.3.1). It is enough to consider any component of $Y_n$ for $n \ge n_0 + k$ to get the solution of (3.3.1). For example, if we take the case $Y_0 = 0$, from (3.2.1) we have

$$Y_n = \sum_{j=n_0}^{n-1} \Phi(n, j+1)G_j, \qquad (3.3.5)$$

where, by (3.1.9), $\Phi(n, j+1) = K(n)K^{-1}(j+1)$. To obtain the solution $y(n+k; n_0, 0)$ of (3.3.1), it is sufficient to consider the last component of the vector $Y_{n+1}$, and we get

$$y_{n+k} = \sum_{j=n_0}^{n} E_k^T\Phi(n+1, j+1)G_j = \sum_{j=n_0}^{n} E_k^T K(n+1)K^{-1}(j+1)E_k\, g_j, \qquad (3.3.6)$$

where $E_k = (0, 0, \ldots, 0, 1)^T$.
Introducing the functions

$$H(n+k, j) = E_k^T K(n+1)K^{-1}(j+1)E_k, \qquad (3.3.7)$$

the solution (3.3.6) can be written as

$$y_{n+k} = \sum_{j=n_0}^{n} H(n+k, j)g_j. \qquad (3.3.8)$$

The function $H(n+k, j)$, which is called the one-sided Green's function, has some interesting properties. For example, it follows easily from (3.3.7) that

$$H(n+k, n) = 1. \qquad (3.3.9)$$
In order to obtain additional properties, let us consider the identity

$$I = \sum_{i=1}^{k} E_i E_i^T, \qquad (3.3.10)$$

where the $E_i$ are the unit vectors in $R^k$ and $I$ is the identity matrix. From (3.3.7), one has

$$H(n+k, j) = \sum_{i=1}^{k} E_k^T K(n+1)E_i E_i^T K^{-1}(j+1)E_k, \qquad (3.3.11)$$

which represents the sum of the products of the elements of the last row of $K(n+1)$ and the elements of the last column of $K^{-1}(j+1)$. By observing that the elements in the last column of the matrix $K^{-1}(j+1)$ are the cofactors of the elements of the last row of the matrix $K(j+1)$, divided by $\det K(j+1)$, it follows that

$$H(n+k, j) = \frac{1}{\det K(j+1)}\det\begin{pmatrix} f_1(j+1) & \cdots & f_k(j+1) \\ f_1(j+2) & \cdots & f_k(j+2) \\ \vdots & & \vdots \\ f_1(j+k-1) & \cdots & f_k(j+k-1) \\ f_1(n+k) & \cdots & f_k(n+k) \end{pmatrix}. \qquad (3.3.12)$$

As a consequence, one finds

$$H(n+k, n) = 1, \qquad (3.3.13)$$

$$H(n+k-i, n) = 0, \qquad i = 1, 2, \ldots, k-1, \qquad (3.3.14)$$

and

$$H(n+k, n+k) = (-1)^{k-1}\frac{\det K(n+k)}{\det K(n+k+1)} = -\frac{1}{p_k(n+k)}. \qquad (3.3.15)$$
Proposition 3.3.1. The function $H(n, j)$, for fixed $j$, satisfies the homogeneous equation associated with (3.3.1), that is,

$$\sum_{i=0}^{k} p_i(n)H(n+k-i, j) = 0.$$

Proof. It follows easily from (3.3.12) and the properties of the determinant. •
The solution (3.3.8) can also be written as

$$y_n = \sum_{j=n_0}^{n-k} H(n, j)g_j, \qquad (3.3.16)$$

with the usual convention $\sum_{j=n_0}^{n_0-1} = 0$. For the case of arbitrary initial conditions together with equation (3.3.1), one can proceed in a similar way. From the solution

$$Y_{n+1} = K(n+1)K^{-1}(n_0)Y_0 + \sum_{j=n_0}^{n} K(n+1)K^{-1}(j+1)G_j,$$

by taking the $k$th component we have

$$y_{n+k} = E_k^T K(n+1)K^{-1}(n_0)Y_0 + \sum_{j=n_0}^{n} H(n+k, j)g_j.$$
In the case of constant coefficients, the expression for $H(n, j)$ can be simplified. Suppose that the roots $z_i$ of the characteristic polynomial are all distinct. We then have, from (3.3.11),

$$H(n+k, j) = \sum_{i=1}^{k} \frac{z_i^{n-j+k-1}\,V_i(z_1, \ldots, z_k)}{\det K(0)} = \sum_{i=1}^{k} \frac{z_i^{n-j+k-1}}{p'(z_i)}, \qquad (3.3.17)$$
where the $V_i(z_1, \ldots, z_k)$ are the cofactors of the $i$th elements of the last row of $\det K(0)$ and $p'(z_i)$ is the derivative of the characteristic polynomial evaluated at $z_i$. In this case, as can be expected, one has $H(n+k, j) = H(n+k-j, 0)$. By denoting by $H(n+k-j)$ the function $H(n+k-j, 0)$, the solution of the equation

$$\sum_{i=0}^{k} p_i y_{n+k-i} = g_n, \qquad p_0 = 1,$$

such that $y_i = 0$, $i = 0, 1, \ldots, k-1$, is given by

$$y_n = \sum_{j=0}^{n-k} H(n-j)g_j. \qquad (3.3.18)$$

The properties (3.3.13), (3.3.14), and (3.3.15) reduce to $H(k) = 1$; $H(k-s) = 0$, $s = 1, \ldots, k-1$; and $H(0) = -\dfrac{1}{p_k}$, respectively.
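For constant coefficients with distinct roots, the formula (3.3.17) gives $H$ directly from the roots, and the reduced properties above serve as a quick test. A sketch (NumPy; the polynomial and forcing are illustrative):

    import numpy as np

    p = [1.0, -1.1, 0.3]               # p(z) = z^2 - 1.1 z + 0.3, roots 0.5 and 0.6
    roots = np.roots(p)
    dp = np.polyder(p)                 # p'(z)

    def H(m):
        # one-sided Green's function, constant coefficients, distinct roots (3.3.17)
        return sum(z**(m - 1) / np.polyval(dp, z) for z in roots).real

    k = 2
    assert np.isclose(H(k), 1.0)       # H(k) = 1
    assert np.isclose(H(k - 1), 0.0)   # H(k - s) = 0, s = 1, ..., k-1
    assert np.isclose(H(0), -1.0 / p[-1])

    # y_n = sum_{j=0}^{n-k} H(n-j) g_j solves y_{n+2} - 1.1 y_{n+1} + 0.3 y_n = g_n
    g = lambda n: np.cos(n)            # illustrative forcing
    y = lambda n: sum(H(n - j) * g(j) for j in range(0, n - k + 1))
    for n in range(0, 10):
        assert np.isclose(y(n + 2) - 1.1 * y(n + 1) + 0.3 * y(n), g(n))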
We shall now state two classical theorems on the growth of solutions of (3.3.1) with $g_n = 0$.

Theorem 3.3.1 (Poincaré). If $\lim_{n\to\infty} p_i(n) = p_i$, $i = 1, 2, \ldots, k$, and if the roots of

$$\sum_{i=0}^{k} p_i\lambda^{k-i} = 0, \qquad p_0 = 1, \qquad (3.3.19)$$

have distinct moduli, then for every nontrivial solution $y_n$,

$$\lim_{n\to\infty} \frac{y_{n+1}}{y_n} = \lambda_s,$$

where $\lambda_s$ is a root of (3.3.19).

Proof. Let $p_i(n) = p_i + \eta_i(n)$, where, by hypothesis, $\eta_i(n) \to 0$ as $n \to \infty$. The matrix $A(n)$ can be split as $A(n) = A + E_k\eta^T(n)$, where

$$A = \begin{pmatrix} 0 & 1 & & & \\ & 0 & 1 & & \\ & & \ddots & \ddots & \\ & & & 0 & 1 \\ -p_k & -p_{k-1} & \cdots & & -p_1 \end{pmatrix},$$

$\eta^T(n) = (-\eta_k(n), -\eta_{k-1}(n), \ldots, -\eta_1(n))$ and $E_k^T = (0, 0, \ldots, 1)$. Equation (3.3.4) then becomes $Y_{n+1} = AY_n + E_k\eta^T(n)Y_n$. Now $A = V\Lambda V^{-1}$, where $V$ is the Vandermonde matrix made up of the eigenvalues $\lambda_1, \lambda_2, \ldots, \lambda_k$ of $A$ (which are the roots of (3.3.19)) and $\Lambda = \mathrm{diag}(\lambda_1, \lambda_2, \ldots, \lambda_k)$. We suppose that $|\lambda_1| < |\lambda_2| < \cdots < |\lambda_k|$. Changing variables $u(n) = V^{-1}Y_n$ and letting $\Gamma_n = V^{-1}E_k\eta^T(n)V$, we get

$$u(n+1) = \Lambda u(n) + \Gamma_n u(n).$$

The elements of $\Gamma_n$, being linear combinations of the $\eta_i(n)$, tend to zero as $n \to \infty$. This implies that for any matrix norm we have $\|\Gamma_n\| \to 0$.
Suppose now that $\max_{1\le i\le k}|u_i(n)| = |u_s(n)|$. The index $s$ will depend on $n$. We shall show that an $n_0$ can be chosen such that for $n \ge n_0$ the function $s(n)$ is not decreasing. In fact, we know that for $i < j$, $|\lambda_i|/|\lambda_j| < 1$. Take $\varepsilon > 0$ small enough that

$$\frac{|\lambda_i| + \varepsilon}{|\lambda_j| - \varepsilon} < 1, \qquad i < j,$$

and choose $n_0$ so that for $n \ge n_0$, $\|\Gamma_n\|_\infty < \varepsilon$. Setting $s(n+1) = j$, it follows that

$$|u_j(n+1)| \le |\lambda_j||u_j(n)| + \varepsilon|u_s(n)| \le (|\lambda_j| + \varepsilon)|u_s(n)|$$

and

$$|u_j(n+1)| \ge |\lambda_j||u_j(n)| - \varepsilon|u_s(n)|.$$

Consequently, if $s(n+1) = j$ were less than $s(n)$, the following would be true:

$$\frac{|u_j(n+1)|}{|u_s(n+1)|} \le \frac{|\lambda_j| + \varepsilon}{|\lambda_s| - \varepsilon} < 1,$$

which is in contradiction with the definition of $j$. For $n > N$ suitably chosen, the function $s(n)$ will then assume a fixed value less than or equal to $k$. We shall show now that the ratios

$$\frac{|u_j(n)|}{|u_s(n)|}, \qquad j \ne s, \qquad (3.3.20)$$

tend to zero. In fact, we know that for $n > N$ the ratios (3.3.20) do not exceed 1; let $a$ be their upper limit. We extract for $n \ge N$ a subsequence $n_1, n_2, \ldots$ for which (3.3.20) converges to $a$. Suppose first that $j > s$. Then

$$\frac{|u_j(n_p+1)|}{|u_s(n_p+1)|} \ge \frac{|\lambda_j||u_j(n_p)| - \varepsilon|u_s(n_p)|}{(|\lambda_s| + \varepsilon)|u_s(n_p)|}.$$

We take the limit along the subsequence, obtaining

$$\lim_{p\to\infty} \frac{|u_j(n_p+1)|}{|u_s(n_p+1)|} \ge \frac{|\lambda_j|a - \varepsilon}{|\lambda_s| + \varepsilon}.$$

This implies

$$a \ge \frac{|\lambda_j|a - \varepsilon}{|\lambda_s| + \varepsilon}$$

for arbitrarily small $\varepsilon$. Since $|\lambda_j|/|\lambda_s| > 1$, the previous relation holds only for $a = 0$. In the case $j < s$, similar arguments, starting from

$$\frac{|u_j(n_p+1)|}{|u_s(n_p+1)|} \le \frac{|\lambda_j||u_j(n_p)| + \varepsilon|u_s(n_p)|}{(|\lambda_s| - \varepsilon)|u_s(n_p)|},$$

lead to the same conclusion that $a = 0$. Consider now the original solution $y_n$. Considering the first two rows of $Y_n = Vu(n)$, we have

$$y_n = \sum_{i\ne s} u_i(n) + u_s(n) = u_s(n)\Bigl(1 + \sum_{i\ne s}\frac{u_i(n)}{u_s(n)}\Bigr)$$

and

$$y_{n+1} = \sum_{i\ne s} \lambda_i u_i(n) + \lambda_s u_s(n) = u_s(n)\Bigl(\lambda_s + \sum_{i\ne s}\lambda_i\frac{u_i(n)}{u_s(n)}\Bigr).$$

One easily verifies, by using the previous results, that

$$\lim_{n\to\infty}\frac{y_{n+1}}{y_n} = \lambda_s. \qquad \bullet$$
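Poincaré's theorem is easy to observe numerically: with coefficients tending to constants, the ratio $y_{n+1}/y_n$ settles on a root of the limit polynomial. A small sketch with illustrative coefficients:

    # y_{n+2} + p1(n) y_{n+1} + p2(n) y_n = 0 with p_i(n) -> p_i
    p1 = lambda n: -0.9 + 1.0 / (n + 2)      # -> -0.9
    p2 = lambda n: 0.14 + 1.0 / (n + 2)**2   # ->  0.14
    # limit polynomial z^2 - 0.9 z + 0.14 has roots 0.2 and 0.7 (distinct moduli)

    y = [1.0, 1.0]
    for n in range(400):
        y.append(-p1(n) * y[-1] - p2(n) * y[-2])

    print(y[-1] / y[-2])   # approaches 0.7, a root of the limit polynomial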
We shall now state, without proof, the following theorem due to Perron, which improves the previous one.

Theorem 3.3.2 (Perron). Suppose that the hypotheses of Theorem 3.3.1 are verified and, moreover, $p_k(n) \ne 0$ for $n \ge n_0$. Then $k$ solutions $f_1, f_2, \ldots, f_k$ can be found such that

$$\lim_{n\to\infty}\frac{f_i(n+1)}{f_i(n)} = \lambda_i, \qquad i = 1, 2, \ldots, k.$$

3.4. Periodic Solutions
Let $N$ be a positive integer greater than 1, $A(n)$ real nonsingular $s \times s$ matrices, and $b_n$ vectors of $R^s$. Suppose that $A(n)$ and $b_n$ are periodic of period $N$; that is, $A(n+N) = A(n)$ and $b_{n+N} = b_n$. A periodic solution of period $N$ is a solution for which $y_{n+N} = y_n$. The trivial example of a periodic solution of the homogeneous equation (3.1.2) is $y_n = 0$. For the nonhomogeneous equation (3.1.1), if there exists a constant vector $\bar y$ such that $(A(n) - I)\bar y + b_n = 0$ for all $n$, then $\bar y$ is trivially periodic. In this section, we shall look for the existence of nontrivial periodic solutions.
Proposition 3.4.1. If $A(n)$ is periodic, then the fundamental matrix $\Phi$ satisfies the relations

$$\Phi(n+N, N) = \Phi(n, 0), \qquad \Phi(n+N, 0) = \Phi(n, 0)\Phi(N, 0). \qquad (3.4.1)$$
Proof. The proof follows easily from the definition of $\Phi$ (see Section 3.2) and the hypothesis of periodicity of $A(n)$. •

Theorem 3.4.1. If the homogeneous equation (3.1.2) has only $y_n = 0$ as a periodic solution, then the nonhomogeneous equation (3.1.1) has a unique periodic solution of period $N$, and vice versa.
Proof. If $y_n$ is a periodic solution of (3.1.2) or (3.1.1), it must satisfy, respectively,

$$y_N = y_0 = \Phi(N, 0)y_0,$$

$$y_N = y_0 = \Phi(N, 0)y_0 + \sum_{j=0}^{N-1}\Phi(N, j+1)b_j,$$

from which it follows that $y_0$ must satisfy

$$By_0 = 0, \qquad (3.4.2)$$

$$By_0 = \sum_{j=0}^{N-1}\Phi(N, j+1)b_j, \qquad (3.4.3)$$

where $B = I - \Phi(N, 0)$. The solution of each of the two equations (3.4.2) and (3.4.3) will give the initial condition for the periodic solutions of the equations (3.1.2) and (3.1.1). If (3.4.2) has the unique solution $y_0 = 0$, it follows that $\det B \ne 0$, and this implies that (3.4.3) has a unique solution. The converse is proved similarly. •

Suppose now that $\det B = 0$ and that $N(B)$, the null space of $B$, has dimension $k < s$. This means that the equation (3.4.2) has $k$ linearly independent solutions, to which will correspond $k$ periodic solutions of (3.1.2). The problem (3.4.3) has solutions if the vector $\sum_{j=0}^{N-1}\Phi(N, j+1)b_j$ is orthogonal to $N(B^T)$. Suppose that $v^{(1)}, v^{(2)}, \ldots, v^{(k)}$ is a basis of $N(B^T)$; then we have

$$v^{(i)T} - v^{(i)T}\Phi(N, 0) = 0, \qquad i = 1, 2, \ldots, k, \qquad (3.4.4)$$

from which

$$v^{(i)T} = v^{(i)T}\Phi(N, 0), \qquad i = 1, 2, \ldots, k. \qquad (3.4.5)$$
By imposing the orthogonality conditions we get

$$v^{(i)T}\sum_{j=0}^{N-1}\Phi(N, j+1)b_j = \sum_{j=0}^{N-1} v^{(i)T}\Phi(N, j+1)b_j = 0, \qquad i = 1, 2, \ldots, k. \qquad (3.4.6)$$

Let

$$x_j^{(i)T} = v^{(i)T}\Phi(N, j+1), \qquad i = 1, 2, \ldots, k, \qquad (3.4.7)$$

so that we obtain

$$\sum_{j=0}^{N-1} x_j^{(i)T}b_j = 0, \qquad i = 1, 2, \ldots, k. \qquad (3.4.8)$$

From this result we get the following theorem.

Theorem 3.4.2. If the homogeneous equation (3.1.2) has $k$ periodic solutions of period $N$ and if the conditions (3.4.8) are verified, then the nonhomogeneous equation (3.1.1) has periodic solutions of period $N$.
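In practice, the periodic initial condition is found by solving (3.4.3). A sketch (NumPy; the periodic $A(n)$ and $b_n$ are illustrative choices for which $B$ happens to be nonsingular):

    import numpy as np

    N = 4
    A = lambda n: np.array([[0.0, 1.0],
                            [-0.5, np.cos(2 * np.pi * n / N)]])  # period N, nonsingular
    b = lambda n: np.array([1.0, (-1.0) ** n])                   # period 2, hence period N

    def Phi(n, s):   # fundamental matrix, Phi(s, s) = I
        M = np.eye(2)
        for i in range(s, n):
            M = A(i) @ M
        return M

    B = np.eye(2) - Phi(N, 0)
    rhs = sum(Phi(N, j + 1) @ b(j) for j in range(N))
    y0 = np.linalg.solve(B, rhs)      # equation (3.4.3); det B != 0 here

    # check periodicity: step the solution N times and compare
    y = y0.copy()
    for n in range(N):
        y = A(n) @ y + b(n)
    assert np.allclose(y, y0)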
Let us now consider the functions defined by (3.4.7), and let $x_j$, $j \in N$, be one of them. Then

$$x_{j-1}^T = v^T\Phi(N, j) = v^T\Phi(N, j+1)\Phi(j+1, j).$$

The vectors $x_j$ are periodic with period $N$ and satisfy the equation

$$x_{j-1}^T = x_j^T A(j), \qquad (3.4.9)$$

which is called the adjoint equation of (3.1.1). The fundamental matrix $\Psi(t, s)$ for this equation satisfies

$$\Psi(t-1, s) = \Psi(t, s)A(t), \qquad \Psi(s, s) = I.$$

Using (3.4.9), Theorem 3.4.2 can be rephrased as follows.

Theorem 3.4.2'. If the homogeneous equation (3.1.2) has $k$ periodic solutions of period $N$ and if the given vector $b = (b_0, b_1, \ldots, b_{N-1})^T$ is orthogonal to the periodic solutions of the adjoint equation, then the nonhomogeneous equation (3.1.1) has periodic solutions of period $N$.

Theorem 3.4.3. Suppose $A(n)$ and $b_n$ are periodic with period $N$. If the nonhomogeneous equation (3.1.1) does not have periodic solutions of period $N$, it cannot have bounded solutions.
Proof. If (3.1.1) has no periodic solutions, by Theorem 3.4.1 the equation (3.1.2) has such solutions, and of course the conditions (3.4.8) are not verified. Let $v^T$ be a solution of $v^TB = 0$ such that $v^T\sum_{j=0}^{N-1}\Phi(N, j+1)b_j \ne 0$. Consequently, for every solution $y_n$ of (3.1.1), we have

$$v^Ty_N = v^T\Phi(N, 0)y_0 + v^T\sum_{j=0}^{N-1}\Phi(N, j+1)b_j = v^Ty_0 + v^T\sum_{j=0}^{N-1}\Phi(N, j+1)b_j.$$

Moreover, by considering the periodicity of $b_n$ and (3.4.1), we get

$$v^Ty_{2N} = v^Ty_N + v^T\sum_{j=0}^{N-1}\Phi(N, j+1)b_j = v^Ty_0 + 2v^T\sum_{j=0}^{N-1}\Phi(N, j+1)b_j,$$

and in general, for $k > 0$,

$$v^Ty_{kN} = v^Ty_0 + kv^T\sum_{j=0}^{N-1}\Phi(N, j+1)b_j,$$

showing that $y_n$ cannot be bounded. •
The matrix $U \equiv \Phi(N, 0)$ has relevant importance in discussing the stability of periodic solutions. From

$$\Phi(n+N, 0) = \Phi(n+N, N)\Phi(N, 0)$$

and (3.4.1), it follows that

$$\Phi(n+N, 0) = \Phi(n, 0)U \qquad (3.4.10)$$

and in general, for $k > 0$,

$$\Phi(n+kN, 0) = \Phi(n, 0)U^k. \qquad (3.4.11)$$

Suppose that $\rho$ is an eigenvalue of $U$ and $v$ the corresponding eigenvector. Then

$$\Phi(n+N, 0)v = \Phi(n, 0)Uv = \rho\Phi(n, 0)v.$$

Letting $\Phi(n, 0)v = v_n$, for $n \ge 0$, we get

$$v_{n+N} = \rho v_n. \qquad (3.4.12)$$

This means that the solution of the homogeneous equation having $v$ as initial value, after one period, is multiplied by $\rho$. For this reason the eigenvalues of $U$ are usually called multiplicators. The converse is also true: if $y_n$ is a solution such that $y_{n+N} = \rho y_n$ for all $n$, then in particular $y_N = \rho y_0$, and that means $Uy_0 = \rho y_0$, from which it follows that $y_0$ is an eigenvector of $U$.
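The multiplicators are computed as ordinary eigenvalues of $U = \Phi(N, 0)$, and (3.4.12) can be checked by stepping a solution started at an eigenvector. A sketch with illustrative period-3 coefficients:

    import numpy as np

    N = 3
    mats = [np.array([[0.0, 1.0], [-0.3, 0.4]]),
            np.array([[0.5, 0.2], [0.0, 0.8]]),
            np.array([[1.0, 0.0], [0.3, 0.5]])]
    A = lambda n: mats[n % N]                     # periodic coefficients

    U = mats[2] @ mats[1] @ mats[0]               # U = Phi(N, 0)
    rho, V = np.linalg.eig(U)                     # the multiplicators

    v = V[:, 0]
    y = v.copy()
    for n in range(N):
        y = A(n) @ y                              # homogeneous equation (3.1.2)
    assert np.allclose(y, rho[0] * v)             # y_{n+N} = rho y_n at n = 0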
3.5. Boundary Value Problems
The discrete analog of the Sturm-Liouville problem is given by a second order difference equation (3.5.1) together with boundary conditions (3.5.2), where all the sequences are of real numbers, $r_k > 0$, $a_0 \ne 0$, $a_M \ne 0$, and $0 \le k \le M$. The problem can be treated by using arguments very similar to the continuous case. We shall transform the problem into a vector form and reduce it to a problem of linear algebra. Note that, for $k = 2, \ldots, M-1$, the equation (3.5.1) can be rewritten in terms of the unknowns $y_1, \ldots, y_M$ alone. Let

$$y = (y_1, y_2, \ldots, y_M)^T, \qquad R = \mathrm{diag}(r_1, r_2, \ldots, r_M),$$

and

$$A = \begin{pmatrix} a_1 & -p_1 & & & \\ -p_1 & a_2 & -p_2 & & \\ & \ddots & \ddots & \ddots & \\ & & -p_{M-2} & a_{M-1} & -p_{M-1} \\ & & & -p_{M-1} & a_M \end{pmatrix}; \qquad (3.5.3)$$

then the problem (3.5.1), (3.5.2) is equivalent to

$$Ay = \lambda Ry. \qquad (3.5.4)$$

This is a generalized eigenvalue problem for the matrix $A$. The condition for the existence of nontrivial solutions to this problem is

$$\det(A - \lambda R) = 0, \qquad (3.5.5)$$
which is a polynomial equation in $\lambda$.

Theorem 3.5.1. The generalized eigenvalues of (3.5.4) are real and distinct.

Proof. Let $S = R^{-1/2}$. It then follows that the roots of (3.5.5) are the roots of

$$\det(SAS - \lambda I) = 0. \qquad (3.5.6)$$

The matrix $SAS$ is symmetric, so its eigenvalues are real; since it is also tridiagonal with nonzero off-diagonal elements, its eigenvalues are distinct. •
For each eigenvalue $\lambda_j$ there is an eigenvector $y^j$, which is a solution of (3.5.4). By using standard arguments, it can be proved that if $y^i$ and $y^j$ are two eigenvectors associated with two distinct eigenvalues, then

$$(y^i, Ry^j) = \sum_{s=1}^{M} r_s y_s^i y_s^j = 0. \qquad (3.5.7)$$

Definition 3.5.1. Two vectors $u$ and $v$ such that $(u, Rv) = 0$ are called $R$-orthogonal.

Since the Sturm-Liouville problem (3.5.1), (3.5.2) is equivalent to (3.5.4), we have the following result.

Theorem 3.5.2. Two solutions of the discrete Sturm-Liouville problem corresponding to two distinct eigenvalues are $R$-orthogonal.
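Numerically, the reduction used in Theorem 3.5.1 is also the convenient way to solve (3.5.4). A sketch (NumPy; the coefficients $a_k$, $p_k$, $r_k$ are illustrative) that forms $SAS$ with $S = R^{-1/2}$ and checks that the eigenvalues are real and distinct and the eigenvectors $R$-orthogonal:

    import numpy as np

    M = 6
    a = 2.0 + 0.1 * np.arange(1, M + 1)    # illustrative diagonal a_k
    p = 1.0 + 0.05 * np.arange(1, M)       # illustrative off-diagonal p_k > 0
    r = 1.0 + 0.2 * np.arange(1, M + 1)    # weights r_k > 0

    A = np.diag(a) - np.diag(p, 1) - np.diag(p, -1)
    R = np.diag(r)

    # reduce A y = lambda R y to the symmetric problem (S A S) u = lambda u
    S = np.diag(1.0 / np.sqrt(r))
    lam, U = np.linalg.eigh(S @ A @ S)
    Y = S @ U                              # eigenvectors of the original problem

    assert np.all(np.diff(lam) > 0)        # real and distinct (eigh sorts ascending)
    assert np.allclose(Y.T @ R @ Y, np.eye(M))   # R-orthogonality (3.5.7)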
Consider now the more general problem

$$y_{n+1} = A(n)y_n + b_n, \qquad (3.5.8)$$

where $y_n, b_n \in R^s$ and $A(n)$ is an $s \times s$ matrix. Assume the boundary conditions are given by

$$\sum_{i=0}^{N} L_i y_{n_i} = w, \qquad (3.5.9)$$

where $n_i \in N^+$, $n_i < n_{i+1}$, $n_0 = 0$, $w$ is a given vector in $R^s$, and the $L_i$ are given $s \times s$ matrices. Let $\Phi(n, j)$ be the fundamental matrix of the homogeneous problem

$$y_{n+1} = A(n)y_n, \qquad (3.5.10)$$

such that $\Phi(0, 0) = I$. The solutions of (3.5.8) are given by

$$y_n = \Phi(n, 0)y_0 + \sum_{j=0}^{n-1}\Phi(n, j+1)b_j, \qquad (3.5.11)$$
where $y_0$ is the unknown initial condition. The conditions (3.5.9) will be satisfied if

$$\sum_{i=0}^{N} L_i y_{n_i} = \sum_{i=0}^{N} L_i\Phi(n_i, 0)y_0 + \sum_{i=0}^{N} L_i\sum_{j=0}^{n_i-1}\Phi(n_i, j+1)b_j = w,$$

which can be written as

$$\sum_{i=0}^{N} L_i\Phi(n_i, 0)y_0 + \sum_{j=0}^{n_N-1}\sum_{i=0}^{N} L_i\Phi(n_i, j+1)T(j+1, n_i)b_j = w, \qquad (3.5.12)$$

where the step matrix $T(j, n)$ is defined by

$$T(j, n) = \begin{cases} I & \text{for } j \le n, \\ 0 & \text{for } j > n. \end{cases}$$

By introducing the matrix $Q = \sum_{i=0}^{N} L_i\Phi(n_i, 0)$, the previous formula becomes

$$Qy_0 = w - \sum_{j=0}^{n_N-1}\sum_{i=0}^{N} L_i\Phi(n_i, j+1)T(j+1, n_i)b_j. \qquad (3.5.13)$$
Theorem 3.5.3. If the matrix $Q$ is nonsingular, then the problem (3.5.8) with boundary conditions (3.5.9) has only one solution, given by

$$y_n = \Phi(n, 0)Q^{-1}w + \sum_{s=0}^{n_N-1} G(n, s)b_s, \qquad (3.5.14)$$

where the matrices $G(n, j)$ are defined by

$$G(n, j) = \Phi(n, j+1)T(j+1, n) - \Phi(n, 0)Q^{-1}\sum_{i=0}^{N} L_i\Phi(n_i, j+1)T(j+1, n_i). \qquad (3.5.15)$$
Proof. Since $Q$ is nonsingular, from (3.5.13) we see that (3.5.11) solves the problem if the initial condition is given by

$$y_0 = Q^{-1}w - Q^{-1}\sum_{j=0}^{n_N-1}\sum_{i=0}^{N} L_i\Phi(n_i, j+1)T(j+1, n_i)b_j. \qquad (3.5.16)$$

By substituting in (3.5.11) one has

$$y_n = \Phi(n, 0)Q^{-1}w - \Phi(n, 0)Q^{-1}\sum_{j=0}^{n_N-1}\sum_{i=0}^{N} L_i\Phi(n_i, j+1)T(j+1, n_i)b_j + \sum_{j=0}^{n_N-1}\Phi(n, j+1)T(j+1, n)b_j,$$

from which, by using the definition (3.5.15) of $G(n, j)$, the conclusion follows. •
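The constructive content of Theorem 3.5.3 is a linear solve for $y_0$ from (3.5.13). The following sketch (NumPy; a two-point condition $y_0 + y_{n_1} = w$ with illustrative data) does exactly that and verifies (3.5.9):

    import numpy as np

    n1 = 8
    A = lambda n: np.array([[1.0, 0.1],
                            [0.0, 0.9]])           # illustrative coefficient matrix
    b = lambda n: np.array([0.1, 0.2])             # illustrative forcing
    L0, L1 = np.eye(2), np.eye(2)                  # boundary conditions y_0 + y_{n1} = w
    w = np.array([1.0, 2.0])

    def Phi(n, s):
        M = np.eye(2)
        for i in range(s, n):
            M = A(i) @ M
        return M

    # (3.5.13): Q y_0 = w - sum_i L_i sum_{j < n_i} Phi(n_i, j+1) b_j
    Q = L0 @ Phi(0, 0) + L1 @ Phi(n1, 0)
    rhs = w - L1 @ sum(Phi(n1, j + 1) @ b(j) for j in range(n1))
    y0 = np.linalg.solve(Q, rhs)

    # step the recurrence and verify the boundary conditions (3.5.9)
    ys = [y0]
    for n in range(n1):
        ys.append(A(n) @ ys[-1] + b(n))
    assert np.allclose(L0 @ ys[0] + L1 @ ys[n1], w)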
The matrix $G(n, j)$ is called the Green's matrix, and it has some interesting properties. For example: (1) for fixed $j$, the function $G(n, j)$ satisfies the boundary conditions $\sum_{i=0}^{N} L_iG(n_i, j) = 0$; (2) for fixed $j$ and $n \ne j$, the function $G(n, j)$ satisfies the homogeneous equation $G(n+1, j) = A(n)G(n, j)$; and (3) for $n = j$, one has $G(j+1, j) = A(j)G(j, j) + I$. The proofs of the above statements are left as exercises (see Problems 3.19, 3.20).

If the matrix $Q$ is singular, then the equation (3.5.13) can have either an infinite number of solutions or no solution. If, for simplicity, we indicate by $b$ the right-hand side of (3.5.13), the problem is reduced to establishing the existence of solutions of the equation

$$Qy_0 = b. \qquad (3.5.17)$$
Let $R(Q)$ and $N(Q)$ be, respectively, the range and the null space of $Q$. Then (3.5.17) will have solutions if $b \in R(Q)$. In this case, if $c$ is any vector in $N(Q)$ and $y_0$ any solution of (3.5.17), the vector $c + y_0$ will also be a solution. Otherwise, if $b \notin R(Q)$, the problem will not have solutions. In the first case ($b \in R(Q)$), a solution can be obtained by introducing the generalized inverse of $Q$, defined as follows. Let $r = \operatorname{rank} Q$. The generalized inverse $Q^I$ of $Q$ is the only matrix satisfying the relations

$$QQ^I = P, \qquad (3.5.18)$$

$$Q^IQ = P_1, \qquad (3.5.19)$$

where $P$ and $P_1$ are the projections on $R(Q)$ and $R(Q^*)$ ($Q^*$ is the conjugate transpose of $Q$), respectively. It is well known that if $F$ is an $s \times r$ matrix whose columns span $R(Q)$, then $P$ is given by

$$P = F(F^*F)^{-1}F^*. \qquad (3.5.20)$$

By using $Q^I$, the solution $y_0$ of (3.5.17) when $b \in R(Q)$ is given by

$$y_0 = Q^Ib. \qquad (3.5.21)$$

In fact, we have $Qy_0 = QQ^Ib = Pb = b$. A solution $y_n$ of the boundary value problem can now be given in a form similar to (3.5.14) and (3.5.15) with $Q^{-1}$ replaced by $Q^I$. This solution, as we have already seen, is not unique.
In fact, if $c \in N(Q)$, then $\tilde y_n = \Phi(n, 0)c + y_n$ will also be a solution satisfying the boundary conditions, since

$$\sum_{i=0}^{N} L_i\tilde y_{n_i} = \sum_{i=0}^{N} L_i\Phi(n_i, 0)c + \sum_{i=0}^{N} L_iy_{n_i} = Qc + w = w.$$

When $b \notin R(Q)$, the relation (3.5.21) has the meaning of a least squares solution, because $y_0$ minimizes the quantity $\|Qy_0 - b\|_2$, and the sequence $y_n$ defined from it may serve as an approximate solution.
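In floating-point practice, the generalized inverse defined by (3.5.18)-(3.5.19) can be realized as NumPy's pseudoinverse, and the least squares interpretation of (3.5.21) can be checked directly. A sketch with an illustrative singular $Q$:

    import numpy as np

    Q = np.array([[1.0, 2.0],
                  [2.0, 4.0]])            # singular: rank 1
    b = np.array([1.0, 0.0])              # not in R(Q)

    y0 = np.linalg.pinv(Q) @ b            # generalized-inverse solution (3.5.21)

    # y0 minimizes ||Q y - b||_2; compare with lstsq
    y_ls, *_ = np.linalg.lstsq(Q, b, rcond=None)
    assert np.allclose(Q @ y0, Q @ y_ls)

    # any element c of N(Q) can be added without changing the residual
    c = np.array([-2.0, 1.0])             # Q c = 0
    assert np.allclose(Q @ (y0 + c), Q @ y0)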
3.6. Problems

3.1. Show that the vectors $f_1(n) = (1, n)^T$ and $f_2(n) = (n, n^2)^T$ are linearly independent even if the matrix $K(n) = (f_1(n), f_2(n))$ has determinant always zero. Can the two vectors be solutions of (3.1.2)?
3.2. Prove that if $A(n)$ is nonsingular for $n \ge n_0$, then $\Phi^{-1}(n, s)$ exists for $n, s \ge n_0$. (Hint: write $\Phi(n, s) = \prod_{i=s}^{n-1} A(i)$ and invert.)

3.3. Show that if $K(n)$ satisfies (3.1.6), with $\det K(n) \ne 0$ for all $n$, its columns form a set of linearly independent solutions of (3.1.2).

3.4. Prove that if $A$ is a constant matrix, $\Phi(n, n_0) = \Phi(n - n_0, 0)$, either directly from the explicit expression of $\Phi(n, n_0)$ or as a solution of (3.1.6).

3.5. Prove the relations (3.3.14) and (3.3.15).

3.6. Prove Proposition 3.3.1 directly by using the expression (3.3.11).

3.7. Verify that (3.3.16) satisfies equation (3.3.1).

3.8. Verify that $y_n = \sum_{s=0}^{\infty} A^sb_{n-s-1}$ is a solution of $y_{n+1} = Ay_n + b_n$. When does this solution have a meaning?

3.9. Verify that $H(n)$ is a solution of $\sum_{i=0}^{k} p_iy_{n+k-i} = 0$.

3.10. Supposing that $\sum_{j=0}^{n-k} H(n-j)g_j$ has a meaning, show that it is the solution of the autonomous linear scalar difference equation. When does it have a meaning?

3.11. As in the previous exercise, with the function $y_n = \sum_{j=-\infty}^{n-k} H(n-j)g_j$, where $H(n)$ is the solution of $\sum_{i=0}^{k} p_iH(n+k-i) = \delta_{n0}$, find the conditions on the roots of $p(z)$ in order to have $y_n$ as the only bounded solution.

3.12. Consider the second order difference equation $y_{n+2} + p_1y_{n+1} + p_2y_n = g_n$. Construct the function $H(n)$ of the previous exercise.
3.13. Find the function $H(n-j)$ and the solution of $y_{n+2} - 2(\cos t)y_{n+1} + y_n = g_n$, $y_0 = y_1 = 0$.
3.14. By using the one-sided Green's function $H$ defined by (3.3.17), find the inverse of the $N \times N$ tridiagonal matrix

$$\begin{pmatrix} \alpha & \gamma & & \\ \beta & \alpha & \gamma & \\ & \beta & \alpha & \ddots \\ & & \ddots & \ddots \end{pmatrix}.$$

When does the inverse exist for $N \to \infty$ (infinite matrix)?
3.15. Deduce the adjoint equation in the scalar case, as defined in Section 2.1, from (3.4.9).

3.16. Show that $\Delta^2 y_n + 4\sin^2\frac{\pi}{2M}\,y_n = 0$, $y_0 = 0$, $y_M = E$ $(\ne 0)$, has no solution. It is interesting to notice that the continuous analog of this problem, namely $y'' + 4y\sin^2\frac{\pi}{2M} = 0$, $y(0) = 0$, $y(M) = E$, has the solution

$$y(t) = \frac{\sin\bigl(2t\sin\frac{\pi}{2M}\bigr)}{\sin\bigl(2M\sin\frac{\pi}{2M}\bigr)}\,E.$$
3.17. Find the solution of $\Delta^2 y_n + 4y_n\sin^2\frac{\pi}{2M} = 0$, $y(0) = y(M) = 0$, for $M > 2$.
3.18. Find the eigenvalues of $\Delta^2 y_n + \lambda y_n = 0$, $y_0 = 0$, $y_{M+1} = 0$. Show that the problem is equivalent to finding the eigenvalues of the $M \times M$ matrix

$$A = \begin{pmatrix} 2 & -1 & & \\ -1 & 2 & -1 & \\ & \ddots & \ddots & \ddots \\ & & -1 & 2 \end{pmatrix}.$$
3.19. Show that for fixed $j$ and $n \ne j$, $G(n, j)$ satisfies the homogeneous equation $G(n+1, j) = A(n)G(n, j)$.

3.20. Show that for $j+1$ and $j$ in the same interval $[n_{i-1}, n_i - 1]$ one has $G(j+1, j) = A(j)G(j, j) + I$.

3.21. Show that for fixed $j$, $G(n, j)$ satisfies the boundary conditions $\sum_{i=0}^{N} L_iG(n_i, j) = 0$.
85
3.22. Verify that (3.5.14) satisfies the Equation (3.5.8) and Conditions (3.5.9).
3.7. Notes Most of the material of Sections 3.1, 3.2 and 3.3 appears in several different books. The major references are Miller [115] and Hahn [65]. The Poincare and Perron's theorems can be found in Gelfond [57,58]. The periodic solutions are treated in Halanay [69], Halanay and Wexler [68] and Corduneanu [31]. The boundary value problems are treated in Fort [49], Mattheij [104, 105], Agarwal [3,4] and Hildebrand [78]. Theorem 3.5.2 has been taken from Agarwal [3].