Statistics & Probability Letters 50 (2000) 401 – 407
Random dynamical systems arising from iterated function systems with place-dependent probabilities

Anna A. Kwiecińska*, Wojciech Słomczyński

Institute of Mathematics, Jagiellonian University, ul. Reymonta 4, 30-059 Kraków, Poland

Received November 1999; received in revised form March 2000
Abstract

We show that if an iterated function system with place-dependent probabilities admits an invariant and attractive measure, then it has the structure of a random dynamical system (in the sense of Ludwig Arnold). © 2000 Elsevier Science B.V. All rights reserved.

Keywords: Random dynamical system; Iterated function system; Markov operator; Invariant measure; Attractive measure
1. Introduction

Under the assumption that a given IFS admits an invariant and attractive measure we construct a probability space $(\Omega, \mathcal F, P)$ and an ergodic mapping $\theta$ related to this IFS. We prove that the IFS considered has the structure of a random dynamical system, over the ergodic dynamical system generated by $\theta$, with time $T = \mathbb N$ (Theorem 1). If, in addition, it is an IFS of homeomorphisms, then the time can be extended to $T = \mathbb Z$ (Theorem 2). On the one hand, this construction provides a new example of a random dynamical system; on the other hand, it enables us to apply the theory of random dynamical systems to iterated function systems with place-dependent probabilities. As an illustration we give an example of a linear IFS with place-dependent probabilities, to which we apply the multiplicative ergodic theorem of Oseledec and compute its Lyapunov exponents and the Oseledec splitting.

IFSs with constant probabilities have been studied from the viewpoint of random dynamical systems in Arnold and Crauel (1992). In the case of constant probabilities our construction agrees with that of Arnold and Crauel (1992).
Research partially supported by KBN Grant 2 P03B 060 13.
∗ Corresponding author.
E-mail addresses: [email protected] (A.A. Kwiecińska), [email protected] (W. Słomczyński).
There are available constructions of one-sided RDSs with prescribed transition probabilities (see Kifer, 1986; Quas, 1991), but they are not invertible (i.e. the time cannot be extended to $T = \mathbb Z$).

2. Preliminaries

2.1. Iterated function systems with place-dependent probabilities

Let $(X, d)$ be a metric space in which sets of finite diameter are relatively compact, so that $X$ is locally compact and $\sigma$-compact. Let $\{F_i\}_{i=1}^N$ be Borel measurable functions from $X$ to $X$ and let $\{p_i\}_{i=1}^N$ be a nonnegative Borel measurable partition of unity on $X$. For a given $x \in X$ and a Borel subset $B \subseteq X$, the probability of transfer from $x$ to $B$ is defined by
$$P(x, B) = \sum_{i=1}^{N} p_i(x)\, 1_B(F_i(x)).$$
Closely connected with $P$ is the Markov operator $T$, defined for real-valued Borel measurable functions on $X$ by
$$(Tf)(x) = \int f(y)\, P(x, dy) = \sum_{i=1}^{N} p_i(x)\, f(F_i(x)).$$
Let $M(X)$ be the space of finite signed Borel measures on $X$ and let $P(X) \subset M(X)$ be the probability measures. We consider the operator $V: M(X) \to M(X)$ defined by
$$(V\nu)(B) = \int P(x, B)\, d\nu(x) = \sum_{i=1}^{N} \int_{F_i^{-1}(B)} p_i(x)\, d\nu(x).$$
$V$ is the adjoint operator $T^{*}$ restricted to $M(X)$, i.e. for each $\nu \in M(X)$ we have
$$\int f\, d(V\nu) = \int Tf\, d\nu. \tag{1}$$
A Borel probability measure $\mu$ is said to be invariant (or stationary) if $V\mu = \mu$. It is said to be attractive if for all $\nu \in P(X)$,
$$\int f\, d(V^n \nu) \to \int f\, d\mu \quad (n \to \infty) \tag{2}$$
for all $f \in C(X)$, where $C(X)$ denotes the space of bounded, real-valued, continuous functions on $X$; that is, $V^n \nu$ converges to $\mu$ in distribution.

For $\nu \in P(X)$, let $\{Z_n^{\nu};\ n = 0, 1, \dots\}$ be the Markov stochastic process (unique up to distribution) having initial distribution $\nu$ and transition probability as above. For $x \in X$, let $\{Z_n^{x}\}$ be the process with initial distribution concentrated at $x$, that is, $Z_n^{x} = Z_n^{\delta_x}$. Let $\Omega = \{1, \dots, N\}^{\mathbb N} = \{\mathbf i := (i_1, i_2, \dots):\ 1 \le i_j \le N\}$. For $x \in X$, let $P_x$ be the probability measure defined on cylinders of $\Omega$ by $P_x(\{\mathbf i:\ j\text{th coordinate of } \mathbf i \text{ is } i_j,\ j = 1, \dots, n\}) = p_{i_1}(x)\, p_{i_2}(F_{i_1} x) \cdots p_{i_n}(F_{i_{n-1}} \cdots F_{i_1} x)$. The process $\{Z_n^{x}\}$ may be realized on $(\Omega, P_x)$ as $Z_n^{x}(\mathbf i) = F_{i_n} \cdots F_{i_1} x$. This is the interpretation of the process as a random walk in $X$: an $N$-sided die (which may depend on the position) is thrown to draw the transformation that carries us to the following position. For more details see Barnsley et al. (1988).

Sufficient conditions for the existence of an invariant and attractive measure can be found, among others, in Barnsley et al. (1988) – Theorem 2.1; Elton and Piccioni (1992) – Proposition 2.2; and Lasota and Yorke (1994) – Theorem 8.1 and Corollary 8.1.

2.2. Random dynamical systems

The definitions will be given in a version for discrete time. In this case they are simpler because of the existence of generators of dynamical systems. For a more general setting see Arnold (1998).

Let $(\Omega, \mathcal F, P)$ be a probability space, $X$ a topological space and $\mathcal B$ its Borel $\sigma$-algebra generated by the open sets. Let $T = \mathbb N$ ($\mathbb N = \{0, 1, 2, \dots\}$). A continuous random dynamical system (random dynamical system will later be abbreviated to RDS) over a metric dynamical system (later abbreviated to DS) $((\Omega, \mathcal F, P), \{\theta^n\}_{n \in \mathbb N})$ is
a family of mappings $\{\varphi(n, \omega): X \to X\}_{n \in \mathbb N,\, \omega \in \Omega}$ of the form
$$\varphi(n, \omega) = \begin{cases} \psi(\theta^{n-1}\omega) \circ \cdots \circ \psi(\omega), & n \ge 1,\\ \mathrm{id}_X, & n = 0, \end{cases}$$
where the mappings $\psi(\omega): X \to X$ are such that $(\omega, x) \mapsto \psi(\omega)x$ is measurable and $x \mapsto \psi(\omega)x$ is continuous for any fixed $\omega$.

Let $T = \mathbb Z$. A continuous random dynamical system over a metric DS $((\Omega, \mathcal F, P), \{\theta(n)\}_{n \in \mathbb Z})$ is a family of mappings $\{\varphi(n, \omega): X \to X\}_{n \in \mathbb Z,\, \omega \in \Omega}$ of the form
$$\varphi(n, \omega) = \begin{cases} \psi(\theta(n-1)\omega) \circ \cdots \circ \psi(\omega), & n \ge 1,\\ \mathrm{id}_X, & n = 0,\\ \psi(\theta(n)\omega)^{-1} \circ \cdots \circ \psi(\theta(-1)\omega)^{-1}, & n \le -1, \end{cases}$$
where the mappings $\psi(\omega): X \to X$ are such that $(\omega, x) \mapsto \psi(\omega)x$ and $(\omega, x) \mapsto \psi(\omega)^{-1}x$ are measurable and $\psi(\omega)$ is a homeomorphism for any fixed $\omega$. In both cases we say that $\varphi$ is generated by $\psi$.
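Before turning to the construction, a small computational illustration of the random-walk interpretation from Section 2.1 may be helpful. The Python sketch below is not taken from the paper: the maps, the probability functions and the test function are placeholder choices, intended only to show how a place-dependent IFS is simulated and how $(T^n f)(x) = \mathbb E\, f(Z_n^x)$ can be estimated by Monte Carlo.

```python
import random
import math

# Placeholder IFS on X = R: two affine contractions with place-dependent
# probabilities.  These particular choices are illustrative only; any Borel
# maps F_i and any measurable partition of unity p_i fit the framework.
F = [lambda x: 0.5 * x, lambda x: 0.5 * x + 1.0]

def p(x):
    """Place-dependent probabilities p_1(x), p_2(x) (a partition of unity)."""
    q = 0.25 + 0.5 / (1.0 + x * x)   # p_1 depends on the current position
    return [q, 1.0 - q]

def step(x, rng):
    """One transition of the chain: draw index i with probability p_i(x),
    then move to F_i(x).  This realizes P(x, .) = sum_i p_i(x) delta_{F_i(x)}."""
    i = rng.choices(range(len(F)), weights=p(x))[0]
    return F[i](x)

def T_n_f(x, f, n, runs, seed=0):
    """Monte Carlo estimate of (T^n f)(x) = E[f(Z_n^x)]."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(runs):
        y = x
        for _ in range(n):
            y = step(y, rng)
        total += f(y)
    return total / runs

# If an attractive invariant measure mu exists, T^n f(x) tends to int f d(mu),
# so this estimate stabilizes as n grows, independently of the starting point x.
print(T_n_f(0.0, math.cos, n=50, runs=10_000))
```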
3. The construction

We first assume that $T = \mathbb N$. We consider the Markov process described in Section 2.1. Moreover, we assume that there exists an invariant and attractive measure for this process, which we will denote by $\mu$. Such a measure is obviously unique. We put $\Omega = \{1, \dots, N\}^{\mathbb N}$ and consider the product $\sigma$-algebra, which will be denoted by $\mathcal F$. We define a measure $P$ on $\Omega$ in the following way. First, for the cylinder sets of the form
$$C_{i_0 \dots i_n} = \{\omega \in \Omega:\ \omega_0 = i_0, \dots, \omega_n = i_n\},$$
where $1 \le i_j \le N$, $j = 0, \dots, n$ and $n \in \mathbb N$, we set
$$P(C_{i_0 \dots i_n}) = \int_X p_{i_0}(x)\, p_{i_1}(F_{i_0} x) \cdots p_{i_n}(F_{i_{n-1}} \cdots F_{i_0} x)\, d\mu(x);$$
then we extend it to the whole $\mathcal F$. Having constructed the probability space we consider the "left shift" on it, i.e. the mapping $\theta: \Omega \to \Omega$ defined by $(\theta\omega)_i = \omega_{i+1}$.

Proposition 1. Under the above assumptions $\theta$ is a measure preserving and strong-mixing mapping on $(\Omega, \mathcal F, P)$.

Proof. We will first prove that $\theta$ is measure preserving. Obviously, it is enough to prove it for the cylinder sets. The mapping $\theta$ is $P$-preserving, as $\mu$ is invariant. Namely, we have
$$\theta^{-1} C_{i_0 \cdots i_n} = \bigcup_{i=1}^{N} C_{i\, i_0 \cdots i_n}.$$
Therefore, using (1), we deduce
$$P(\theta^{-1} C_{i_0 \cdots i_n}) = P\Bigl(\bigcup_{i=1}^{N} C_{i\, i_0 \cdots i_n}\Bigr) = \sum_{i=1}^{N} \int_X p_i(x)\, p_{i_0}(F_i x) \cdots p_{i_n}(F_{i_{n-1}} \cdots F_{i_0} F_i x)\, d\mu(x)$$
$$= \int_X p_{i_0}(x) \cdots p_{i_n}(F_{i_{n-1}} \cdots F_{i_0} x)\, d\mu(x) = P(C_{i_0 \cdots i_n}).$$
The fact that $\theta$ is strong-mixing follows from the invariance and attractiveness of $\mu$. First notice that, by (1), condition (2)
$$\int_X f\, d(V^n \nu) \to \int_X f\, d\mu \quad (n \to \infty)$$
is equivalent to
$$\int_X T^n f\, d\nu \to \int_X f\, d\mu \quad (n \to \infty)$$
for any measure $\nu$. In particular, taking as $\nu$ the measure $\delta_x$ concentrated at $x$, we get for any $x \in X$:
$$T^n f(x) \to \int_X f\, d\mu \quad (n \to \infty).$$
To prove that $\theta$ is strong-mixing, we have to show that for all cylinder sets $A, B$:
$$\lim_{n \to \infty} P(\theta^{-n} A \cap B) = P(A)\, P(B)$$
(see Walters, 1982, p. 41, Theorem 1.17). We denote by $\mathcal N$ the set $\{1, \dots, N\}$. Let $A = \{a_1\} \times \{a_2\} \times \cdots \times \{a_k\} \times \mathcal N \times \mathcal N \times \cdots$ and let $B = \{b_1\} \times \{b_2\} \times \cdots \times \{b_m\} \times \mathcal N \times \mathcal N \times \cdots$, for $k, m \in \mathbb N$. For $M$ big enough, $C_M = \theta^{-M} A \cap B$ will have the form
$$C_M = \{b_1\} \times \cdots \times \{b_m\} \times \mathcal N \times \cdots \times \mathcal N \times \{a_1\} \times \cdots \times \{a_k\} \times \mathcal N \times \cdots.$$
For $\alpha \in \{1, \dots, N\}^l$, $\alpha = (\alpha_1, \dots, \alpha_l)$ and $x \in X$ let us denote
$$p_\alpha(x) = p_{\alpha_1}(x) \cdots p_{\alpha_l}(F_{\alpha_{l-1}} \cdots F_{\alpha_1} x) \quad\text{and}\quad F_\alpha(x) = F_{\alpha_l} \cdots F_{\alpha_1} x.$$
We put $b = (b_1, \dots, b_m)$ and $a = (a_1, \dots, a_k)$. From the definition of $P$ we can write
$$P(C_M) = \int_X p_b(x) \sum_{\alpha \in \{1, \dots, N\}^{M-m}} p_\alpha(F_b x)\, p_a(F_\alpha F_b x)\, d\mu(x).$$
From the definition of $T$ we obtain
$$\sum_{\alpha \in \{1, \dots, N\}^{M-m}} p_\alpha(F_b x)\, p_a(F_\alpha F_b x) = (T^{M-m} p_a)(F_b x). \tag{3}$$
Then by (3),
$$(T^{M-m} p_a)(F_b x) \to \int_X p_a\, d\mu \quad (M \to \infty).$$
Therefore, using the Lebesgue Theorem on Dominated Convergence, we deduce
$$P(C_M) \to \int_X p_b \Bigl(\int_X p_a\, d\mu\Bigr) d\mu = \int_X p_a\, d\mu \int_X p_b\, d\mu = P(A)\, P(B) \quad (M \to \infty),$$
which was to be proved.

Remark 1. A related construction has been considered in a different context by Gadde (in Gadde, 1994).

Remark 2. Strong-mixing implies ergodicity (see Walters, 1982) and therefore $\theta$ generates an ergodic dynamical system.

Theorem 1. If an IFS admits an invariant and attractive measure, then this IFS has the structure of a random dynamical system over the ergodic dynamical system generated by $\theta$, with time $T = \mathbb N$.

Proof. The generator $\psi$ of the random dynamical system is the same as in the case of IFSs with constant probabilities, i.e.
$$\psi: \Omega \to \{F_1, \dots, F_N\} \subset X^X, \qquad \omega \mapsto F_{\omega_0}.$$

Let us now recall the construction from Arnold and Crauel (1992). There $\Omega = \{1, \dots, N\}^{\mathbb N}$, $\mathcal F$ is the product $\sigma$-algebra and $P$ the product measure on $\Omega$, $\theta: \Omega \to \Omega$ is the "left shift" and the generator of the RDS is as above. Obviously, in the case of constant probabilities both constructions agree.

Next, we assume that $T = \mathbb Z$ and that, in addition to the previous assumptions, all the $F_i$'s are homeomorphisms. We will extend the ergodic dynamical system $\{\theta(n)\}_{n \in \mathbb N}$ into the past, making use of a construction from Doob (1990, p. 456).

Proposition 2. There exist a probability space $(\tilde\Omega, \tilde{\mathcal F}, \tilde P)$ and an ergodic dynamical system $\{\tilde\theta(n)\}_{n \in \mathbb Z}$ defined on $(\tilde\Omega, \tilde{\mathcal F}, \tilde P)$, such that $\{\tilde\theta(n)\}_{n \in \mathbb N}$ has the same distribution as $\{\theta(n)\}_{n \in \mathbb N}$.

Proof. We define the new probability space $\tilde\Omega$ as $\tilde\Omega = \{1, \dots, N\}^{\mathbb Z}$ and $\tilde{\mathcal F}$ as the product $\sigma$-algebra. Let $\tilde x_n$ be the $n$th coordinate variable of $\tilde\Omega$ ($n \in \mathbb Z$), so that $\tilde x_n(\tilde\omega) = i_n$ if $\tilde\omega = (\dots, i_{-1}, i_0, i_1, \dots)$. To the set
$$\{\tilde x_{m_j} = i_j,\ j = 1, \dots, n\},$$
where $n \in \mathbb N$, $1 \le i_j \le N$ and $m_j \in \mathbb Z$, we assign the probability
$$P\{x_{m_j + h} = i_j,\ j = 1, \dots, n\},$$
where $h \in \mathbb N$ is chosen such that $m_j + h \ge 0$ for $j = 1, \dots, n$. Obviously, as $\theta$ is $P$-preserving, the above probability does not depend on $h$ satisfying the above condition. Kolmogorov's theorem ensures that these finite-dimensional distributions determine the probability measure $\tilde P$ on $\tilde{\mathcal F}$.
We put $(\tilde\theta\tilde\omega)_i = \tilde\omega_{i+1}$. From the definition of $(\tilde\Omega, \tilde{\mathcal F}, \tilde P)$ and from Proposition 1, it follows easily that both $\tilde\theta$ and $\tilde\theta^{-1}$ are strong-mixing mappings, and therefore $\tilde\theta$ is the generator of an ergodic dynamical system with time $T = \mathbb Z$. In this case, $\tilde\theta$ is the "left shift" and $\tilde\theta^{-1}$ is the "right shift" on the space $\tilde\Omega$. From the construction, $\{\tilde\theta(n)\}_{n \in \mathbb N}$ has the same distribution as $\{\theta(n)\}_{n \in \mathbb N}$.

Theorem 2. If an IFS of homeomorphisms admits an invariant and attractive measure, then this IFS has the structure of a random dynamical system over the ergodic dynamical system generated by $\tilde\theta$, with time $T = \mathbb Z$.

Proof. The generator of the RDS $\tilde\varphi$ over $\{\tilde\theta(n)\}_{n \in \mathbb Z}$ is the mapping $\tilde\psi$ given by the formula
$$\tilde\psi: \tilde\Omega \to \{F_1, \dots, F_N\} \subset X^X, \qquad \tilde\omega \mapsto F_{\tilde\omega_0}.$$

Remark 3. The assumption that $\mu$ is an attractive measure was only necessary to prove that $\theta$ (resp. $\tilde\theta$) is ergodic. If we had only the invariance of $\mu$, the above construction would give an RDS $\varphi$ (resp. $\tilde\varphi$) over the metric DS $\{\theta(n)\}_{n \in \mathbb N}$ (resp. $\{\tilde\theta(n)\}_{n \in \mathbb Z}$).

4. An example

Let us consider an iterated function system consisting of two linear mappings on $\mathbb R^2$,
$$F_1: \begin{pmatrix} x_1 \\ x_2 \end{pmatrix} \mapsto \begin{pmatrix} 1 & 0 \\ 0 & \tfrac12 \end{pmatrix} \begin{pmatrix} x_1 \\ x_2 \end{pmatrix} \quad\text{and}\quad F_2: \begin{pmatrix} x_1 \\ x_2 \end{pmatrix} \mapsto \begin{pmatrix} \tfrac13 & 0 \\ 0 & 1 \end{pmatrix} \begin{pmatrix} x_1 \\ x_2 \end{pmatrix},$$
with probabilities $p_1(x, y)$ and $p_2(x, y)$ continuous, piecewise linear functions, defined in the following way: let $r = \sqrt{x^2 + y^2}$; $p_1(x, y)$ is equal to $\tfrac12$ if $r = 0$, then grows linearly to $\tfrac34$ for $r = 1$, then decreases linearly to $\tfrac14$ for $r = 3$, then increases linearly to $\tfrac34$ for $r = 5$, then decreases linearly to $\tfrac14$ for $r = 7$, and so on. We put $p_2(x, y) = 1 - p_1(x, y)$. The important properties of the $p_i$'s so defined are that they are Lipschitz and bounded: $\tfrac14 \le p_i(x, y) \le \tfrac34$, $i = 1, 2$. It is easy to verify that the average contractivity assumption from Theorem 2.1 of Barnsley et al. (1988) is satisfied for $q = 2$ and $r = \sqrt{15}/4$, so all the assumptions of Theorem 2.1 are satisfied. This yields the existence of an attractive invariant probability measure $\mu$. One can easily check that the measure $\delta_0$ (i.e. the measure concentrated at $0$) is invariant. It is thus the unique invariant and attractive measure. A simple computation shows that the measure $P$ on all the cylinder sets is equal to
$$P(C_{i_0 \cdots i_n}) = \frac{1}{2^{n+1}}$$
for any $i_0, \dots, i_n \in \{1, \dots, N\}$. The RDS with $T = \mathbb Z$ generated by $F_1$ and $F_2$ is linear and therefore the multiplicative ergodic theorem (MET) of Oseledec holds (see Arnold, 1998). The integrability assumption of the MET, $\log^+ \|\tilde\psi(\cdot)^{\pm 1}\| \in L^1(\tilde P)$,
is obviously satisfied, since $\|\tilde\psi(\omega)\| \equiv 1$ and $\|\tilde\psi(\omega)^{-1}\|$ takes two values: 2 and 3. Using the strong law of large numbers we can compute the Lyapunov exponents:
$$\lambda_1 = \tfrac12 \log \tfrac12 \quad\text{and}\quad \lambda_2 = \tfrac12 \log \tfrac13.$$
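For the reader's convenience, here is the strong-law computation sketched above (a reconstruction; the frequency notation $\nu_i(n, \tilde\omega)$ is introduced here and is not from the paper). Both matrices are diagonal, so along $e_2 = (0, 1)$ each application of $F_1$ multiplies the length by $\tfrac12$ and each application of $F_2$ leaves it unchanged; writing $\nu_i(n, \tilde\omega)$ for the number of indices $0 \le j < n$ with $\tilde\omega_j = i$,
$$\lambda_1 = \lim_{n \to \infty} \frac1n \log \|\tilde\varphi(n, \tilde\omega)\, e_2\| = \lim_{n \to \infty} \frac{\nu_1(n, \tilde\omega)}{n} \log \tfrac12 = \tfrac12 \log \tfrac12 \quad \tilde P\text{-a.s.},$$
because under $\tilde P$ the coordinates $\tilde\omega_j$ are i.i.d. with $\tilde P(\tilde\omega_j = 1) = \tfrac12$. The computation of $\lambda_2$ along $e_1 = (1, 0)$ is analogous, with $F_2$ contributing the factor $\tfrac13$.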
The Lyapunov exponents are constant due to the ergodicity of $\tilde\theta$. The Oseledec splitting is deterministic:
$$E_1 = \mathbb R\, (0, 1) \quad\text{and}\quad E_2 = \mathbb R\, (1, 0).$$
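As an informal numerical check (not part of the original argument), the Python sketch below samples an i.i.d. fair-coin index sequence, which is how $\tilde P$ looks on cylinders in this example, and estimates the growth rates along $E_1$ and $E_2$; the sample size and variable names are arbitrary choices.

```python
import numpy as np

# The two linear maps of the example (diagonal, so the Oseledec directions
# E1 = R(0,1) and E2 = R(1,0) are invariant under every product).
F1 = np.diag([1.0, 0.5])
F2 = np.diag([1.0 / 3.0, 1.0])

rng = np.random.default_rng(0)
n = 200_000

# Under P~ the coordinates are i.i.d. with probability 1/2 each, because the
# invariant measure is delta_0 and p_1(0) = p_2(0) = 1/2.
omega = rng.integers(1, 3, size=n)

# Logarithm of the diagonal entry picked up along each direction at every step.
log_e2 = np.where(omega == 1, np.log(F1[1, 1]), np.log(F2[1, 1]))  # direction (0, 1)
log_e1 = np.where(omega == 1, np.log(F1[0, 0]), np.log(F2[0, 0]))  # direction (1, 0)

print("lambda_1 ~", log_e2.mean(), " exact:", 0.5 * np.log(0.5))
print("lambda_2 ~", log_e1.mean(), " exact:", 0.5 * np.log(1.0 / 3.0))
```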
Acknowledgements

This work was partially done while the first author stayed at Tokyo Metropolitan University; she therefore wishes to thank Prof. Kunio Nishioka and the other members of the Mathematics Department for providing excellent working conditions.

References

Arnold, L., 1998. Random Dynamical Systems. Springer, Berlin.
Arnold, L., Crauel, H., 1992. Iterated function systems and multiplicative ergodic theory. In: Pinsky, M.A., Vihstutz, V. (Eds.), Diffusion Processes and Related Problems in Analysis, Vol. II, Stochastic Flows, Progress in Probability 27. Birkhäuser, Boston, pp. 283–305.
Barnsley, M.F., Demko, S.G., Elton, J.H., Geronimo, J.S., 1988. Invariant measures for Markov processes arising from iterated function systems with place-dependent probabilities. Ann. Inst. Henri Poincaré 24 (3), 367–394.
Doob, J.L., 1990. Stochastic Processes. Wiley Classics Library. Wiley, New York.
Elton, J.H., Piccioni, M., 1992. Iterated function systems arising from recursive estimation problems. Probab. Theory Relat. Fields 91, 103–114.
Gadde, E., 1994. Stable IFSs with probabilities. An ergodic theorem. University of Umeå, Department of Mathematics, Research Reports 1994, No. 10.
Kifer, Y., 1986. Ergodic theory of random transformations. Progress in Probability and Statistics 10. Birkhäuser, Boston.
Lasota, A., Yorke, J.A., 1994. Lower bound technique for Markov operators and iterated function systems. Random & Computational Dynamics 2 (1), 41–77.
Quas, A., 1991. On representation of Markov chains by smooth maps. Bull. London Math. Soc. 23, 487–492.
Walters, P., 1982. An Introduction to Ergodic Theory. Graduate Texts in Mathematics 79. Springer, New York.