J. Math. Anal. Appl. 408 (2013) 623–637
Continuous random dynamical systems
Katarzyna Horbacz
Institute of Mathematics, Silesian University, 40-007 Katowice, Bankowa 14, Poland
Article history: Received 29 October 2012; available online 29 June 2013. Submitted by Yu Huang.
Keywords: Dynamical systems; Markov operators; Stability; Biological models
Abstract. We study a dynamical system generalizing continuous iterated function systems and stochastic differential equations disturbed by Poisson noise. The main results provide sufficient conditions for the existence and uniqueness of an invariant measure for the considered system. Since the dynamical system is defined on an arbitrary (possibly infinite-dimensional) Banach space, to prove the existence of an invariant measure and its stability we make use of the lower bound technique developed by Lasota and Yorke and recently extended to infinite-dimensional spaces by Szarek. Finally, it is shown that many systems appearing in models of cell division or gene expression are exactly of the type studied here, so we obtain their stability as well.
1. Introduction

In this paper, we propose a new model generalizing random dynamical systems [11] and continuous iterated function systems [13]. Random dynamical systems cover some very important and widely studied cases, namely dynamical systems generated by learning systems [2,15,16,23], Poisson driven stochastic differential equations [10,22,31,32], iterated function systems with an infinite family of transformations [20,33,34], random evolutions [8,26], randomly controlled dynamical systems, and irreducible Markov systems [35]. A large class of applications of such models, both in physics and biology, is worth mentioning here: shot noise, photoconductive detectors, the growth of the size of structured populations, and the motion of relativistic particles, both fermions and bosons (see [7,12,17,19]). The generalized stochastic process was introduced in the recent model of gene expression by Lipniacki et al. [3,24]. On the other hand, it should be noted that most Markov chains may be represented by continuous iterated function systems, which have turned out to be a very useful tool in the theory of cell cycles, for example in a general d-dimensional model for the intracellular biochemistry of a generic cell with a probabilistic division hypothesis (see [20]). Recently, iterated function systems have been used in studying invariant measures for the Ważewska partial differential equation, which describes the process of the reproduction of red blood cells [20,21]. Similar nonlinear first-order partial differential equations frequently appear in hydrodynamics [27]. The so-called irreducible Markov systems introduced by Werner (see [35]) are used for the computer modeling of various stochastic processes. The aim of this paper is to study stochastic processes whose paths follow deterministic dynamics between random times, called jump times, at which they change their position randomly.
Hence, we analyze stochastic processes in which randomness appears at times t0 < t1 < t2 < · · ·. We assume that a point x0 ∈ Y moves according to one of the dynamical systems Πi : R+ × Y → Y from some set {Π1 , . . . , ΠN }. The motion of the process is governed by the equation X (t ) = Πi (t , x0 ) until the first jump time t1 . Then we choose a transformation qs : Y → Y from a family {qs : s ∈ Θ = [0, T ]}, and define x1 = qs (Πi (t1 , x0 )). The process restarts from that new point x1 and continues as before. This gives the stochastic process
E-mail addresses: [email protected], [email protected].
http://dx.doi.org/10.1016/j.jmaa.2013.06.050
{X(t)}t≥0 with jump times {t1, t2, . . .} and post-jump positions {x1, x2, . . .}. The probability determining the frequency with which the dynamical systems Πi are chosen is described by a matrix of probabilities [pij], i, j = 1, . . . , N, pij : Y → [0, 1]. The maps qs are randomly chosen with a place-dependent, absolutely continuous distribution. We are interested in the evolution of the distributions of these random dynamical systems, and we formulate criteria for stability and for the existence of an invariant measure for such systems. There is a substantial literature devoted to the problem of stability and of the existence of an invariant measure for Markov processes [25]. Different classes of Markov processes have been studied therein, for example random dynamical systems based on skew product flows [1]. Our model is not such a system; it is similar to the so-called piecewise-deterministic Markov process introduced by Davis [4]. There are some stability results for such systems based on the theory of Meyn and Tweedie [25]. However, the method of proving the existence of an invariant measure used by Meyn and Tweedie is not well adapted to general Polish spaces: it is difficult to ensure that the process under consideration satisfies all the required ergodic properties on a compact set. On the other hand, the assumption of compactness is restrictive if we want to apply our model in physics and biology, where the phase space is usually a function space. Our work is based instead on the theory of concentrating Markov operators on a Polish space (see [28]). The results of this paper are related to our previously published papers [9,11,14]. The simplest case, when Θ is equal to the finite set {1, . . . , K} and the qs, s ∈ Θ, are randomly chosen with a discrete distribution, is considered in [11]. Continuous iterated function systems are considered in [13,20].
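The jump mechanism described above can be sketched in a short simulation. The concrete space Y = R, the two flows, the jump maps qs(x) = x/2 + s, the uniform switching matrix, and all parameter values below are illustrative choices, not taken from the paper.

```python
import math
import random

LAM = 1.0   # intensity of the exponential waiting times t_{n+1} - t_n
T = 1.0     # parameter range Theta = [0, T]

def flow(i, t, x):
    # Two illustrative semidynamical systems on R:
    # Pi_1 contracts toward 0, Pi_2 relaxes toward 1.
    return x * math.exp(-t) if i == 1 else 1.0 + (x - 1.0) * math.exp(-t)

def q(x, s):
    # Jump map q_s, Lipschitz in x with constant 1/2 (illustrative).
    return 0.5 * x + s

def simulate(x0, n_jumps, rng):
    x, i = x0, 1
    for _ in range(n_jumps):
        dt = rng.expovariate(LAM)    # waiting time between jumps
        y = flow(i, dt, x)           # deterministic motion until the jump
        s = rng.uniform(0.0, T)      # eta_n drawn from [0, T]
        x = q(y, s)                  # post-jump position x_{n+1}
        i = rng.choice([1, 2])       # next flow; here p_ij = 1/2 for all i, j
    return x

rng = random.Random(0)
samples = [simulate(5.0, 50, rng) for _ in range(200)]
print(min(samples), max(samples))
```

With these contractive choices the post-jump positions remain in a bounded set, which is the kind of behavior the stability theory below formalizes.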
The examples below show that our model generalizes some very important and widely studied objects, namely dynamical systems generated by continuous iterated function systems and dynamical systems generated by Poisson driven stochastic differential equations.

Example 1.1 (Continuous Iterated Function Systems). We will consider a stochastically perturbed dynamical system
xn+1 = S(xn, sn) for n = 0, 1, 2, . . . .    (1.1)
We make the following assumptions.
(1) The function S : Y × [0, T] → Y is continuous.
(2) The sn are random variables with values in [0, T], and the distribution of sn conditional on yn = y is given by
Prob(sn < t | yn = y) = ∫_0^t p(y, u) du for 0 < t ≤ T,
where p : Y × [0, T] → R+ (R+ = [0, ∞)) is a measurable function such that
∫_0^T p(y, u) du = 1 for y ∈ Y.
From a statistical point of view, the evolution of system (1.1) is described by the sequence of distributions
µn(A) = Prob(xn ∈ A) for n = 0, 1, 2, . . . ,    (1.2)
where A denotes an arbitrary Borel subset of Y. It can be proved (see [5,6]) that
µn+1(A) = ∫_Y ∫_0^T 1_A(S(x, t)) p(x, t) dt µn(dx).
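As a numerical sanity check of this evolution formula, one can compare a Monte Carlo step of (1.1) with a moment predicted by the transfer formula. The map S(x, t) = x/2 + t and the uniform density p ≡ 1 on [0, 1] (so T = 1) are hypothetical choices, not from the paper.

```python
import random

rng = random.Random(5)

def S(x, t):
    # Illustrative perturbed map: contraction plus the random shift t.
    return x / 2.0 + t

xs = [rng.gauss(0.0, 1.0) for _ in range(50000)]   # sample from mu_0 = N(0, 1)
xs1 = [S(x, rng.random()) for x in xs]             # one step of (1.1)

# The transfer formula predicts E[x_1] = E[x_0]/2 + E[s] = 0 + 1/2.
mean = sum(xs1) / len(xs1)
print(mean)
```

The empirical mean of the pushed-forward sample should be close to 0.5, matching the integral of the identity map against µ1.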
Defining an operator P by
P µ(A) = ∫_Y ∫_0^T 1_A(S(x, t)) p(x, t) dt µ(dx),
Eq. (1.2) may be rewritten as
µn+1 = P µn for n = 0, 1, . . . .
It is evident that a continuous iterated function system is a particular example of a continuous random dynamical system: consider the system with I = {1}, Π1(t, x) = x for x ∈ Y, and qs(x) = S(x, s) for x ∈ Y, s ∈ [0, T]. Moreover, assume that p1(x) = 1 and p11(x) = 1 for x ∈ Y.

Example 1.2 (Poisson Driven Stochastic Differential Equation). Let (Ω, F, P) be a probability space and I = {1, . . . , N}. Consider the stochastic differential equation
dX(t) = a(X(t), ξ(t)) dt + b(X(t)) dp(t) for t ≥ 0    (1.3)
with the initial condition
X(0) = x0,    (1.4)
where a : Y × I → Y and b : Y → Y are Lipschitzian functions, Y is a separable Banach space, {p(t)}t≥0 is a Poisson process, and {ξ(t)}t≥0, ξ(t) : Ω → I, is a stochastic process describing random switching at the random moments tn. This is a particular example of a continuous random dynamical system in which qs(x) = q(x) = x + b(x) for s ∈ [0, T] and, for every i ∈ I, Πi(t, x) = vi(t), where vi is the solution of the unperturbed Cauchy problem
vi′(t) = a(vi(t), i), vi(0) = x, x ∈ Y.    (1.5)
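A trajectory of (1.3) can be sketched directly from this description: solve the unperturbed Cauchy problem between Poisson jumps and apply q(x) = x + b(x) at each jump. The drift a(x, i) = −x + i, the jump coefficient b(x) = −x/2, and the switching rule below are illustrative stand-ins, not the paper's data.

```python
import math
import random

LAM = 2.0  # Poisson intensity

def solve_flow(x, i, t):
    # Exact solution of v' = -v + i, v(0) = x (the unperturbed Cauchy problem).
    return i + (x - i) * math.exp(-t)

def trajectory(x0, horizon, rng):
    x, t, i = x0, 0.0, 1
    while True:
        dt = rng.expovariate(LAM)          # waiting time to the next jump of p(t)
        if t + dt > horizon:
            return solve_flow(x, i, horizon - t)
        x = solve_flow(x, i, dt)           # deterministic evolution between jumps
        x = x + (-x / 2.0)                 # jump: X <- X + b(X) = X / 2
        i = rng.choice([1, 2])             # resample the switching state xi(t)
        t += dt

rng = random.Random(1)
xs = [trajectory(10.0, 20.0, rng) for _ in range(100)]
print(sum(xs) / len(xs))
```

Because both the flows and the jump map are contractive here, the terminal values stay in a bounded region regardless of the initial condition.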
It is easy to check that µn = P^n µ, where P is the transition operator corresponding to the above stochastic equation, given by
P µ(A) = Σ_{j∈I} ∫_{Y×I} ∫_{R+} λe^{−λt} 1_A(q(Πj(t, x)), j) pij(x) dt µ(dx, di)    (1.6)
for A ∈ B(Y × I) and µ ∈ M1.

2. Auxiliary results

Let (Y, ϱ) be a Polish space, i.e. a separable complete metric space. We denote by B(x, r) the open ball with center at x and radius r. For any set A ⊂ Y, cl A, diamϱ A, and 1A stand for the closure of A, the diameter of A, and the indicator function of A, respectively. We denote by B(Y) the σ-algebra of Borel subsets of Y, by M = M(Y) the family of all finite Borel measures on Y, and by Ms the space of all finite signed Borel measures on Y. We write M1 = M1(Y) for the family of all µ ∈ M such that µ(Y) = 1. The elements of M1 are called distributions. As usual, B(Y) denotes the space of all bounded Borel measurable functions f : Y → R and C(Y) the subspace of all continuous functions. Both spaces are considered with the supremum norm ∥ · ∥0. For f ∈ B(Y) and µ ∈ Ms, we write
⟨f, µ⟩ = ∫_Y f(x) µ(dx).
We introduce in Ms the Fortet–Mourier norm ∥ · ∥ϱ (see [5,6]) given by
∥µ∥ϱ = sup{|⟨f, µ⟩| : f ∈ Fϱ} for µ ∈ Ms,
where Fϱ is the set of all f ∈ C(Y) such that |f(x)| ≤ 1 and |f(x) − f(y)| ≤ ϱ(x, y) for x, y ∈ Y. We say that a sequence {µn}n≥1, µn ∈ M, converges weakly to a measure µ ∈ M if
lim_{n→∞} ⟨f, µn⟩ = ⟨f, µ⟩ for every f ∈ C(Y).
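For two discrete measures on a small finite metric space, the supremum defining ∥ · ∥ϱ can be approximated by brute force over a grid of admissible functions f. The three-point space, the measures, and the grid below are purely illustrative; a linear program would be the systematic tool.

```python
import itertools

points = [0.0, 0.5, 1.0]                     # support, with rho(x, y) = |x - y|
mu = [0.5, 0.5, 0.0]                         # weights of mu on the support
nu = [0.0, 0.0, 1.0]                         # weights of nu on the support
grid = [i / 20.0 for i in range(-20, 21)]    # candidate values of f in [-1, 1]

def feasible(f):
    # f must be 1-Lipschitz with respect to rho (|f| <= 1 holds by the grid).
    return all(abs(f[i] - f[j]) <= abs(points[i] - points[j]) + 1e-12
               for i in range(3) for j in range(3))

best = max(abs(sum(f[i] * (mu[i] - nu[i]) for i in range(3)))
           for f in itertools.product(grid, repeat=3) if feasible(f))
print(best)  # approximates ||mu - nu||_rho; the exact LP value here is 0.75
```

The optimum is attained at f = (0, −1/2, −1), which lies on the grid, so the brute-force value coincides with the exact norm in this toy case.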
It is well known (see [5,6]) that convergence in the Fortet–Mourier norm ∥ · ∥ϱ is equivalent to weak convergence. We introduce the class Φ of functions ϕ : R+ → R+ satisfying the following conditions.
(i) ϕ is continuous and ϕ(0) = 0.
(ii) ϕ is nondecreasing and concave, i.e.,
Σ_{k=1}^n αk ϕ(yk) ≤ ϕ(Σ_{k=1}^n αk yk), where αk ≥ 0 and Σ_{k=1}^n αk = 1.
(iii) ϕ(x) > 0 for x > 0 and lim_{x→∞} ϕ(x) = ∞.
We denote by Φ0 the family of all functions satisfying conditions (i) and (ii). A necessary and sufficient condition for a concave function ϕ to be subadditive on (0, +∞) is that ϕ(0+) ≥ 0. From this result, we immediately obtain the triangle inequality for ϱϕ = ϕ ◦ ϱ. Thus for every ϕ ∈ Φ the function ϱϕ is again a metric on Y. For notational convenience, we write Fϕ and ∥ · ∥ϕ instead of Fϱϕ and ∥ · ∥ϱϕ, respectively. In our considerations, an important role is played by the inequality
w(t) + ϕ(at) ≤ ϕ(t) for t ≥ 0,    (2.1)
where w ∈ Φ0 is a given function and a ∈ [0, 1). The inequality may be studied by classical methods of the theory of functional equations (see [18]). Lasota and Yorke [23] discuss precisely the cases for which functional inequality (2.1) has a solution belonging to Φ, and prove the following.

Proposition 2.1. Assume that a function w ∈ Φ0 satisfies the Dini condition
∫_0^ϵ (w(t)/t) dt < ∞ for some ϵ > 0.    (2.2)
Let a ∈ [0, 1). Then inequality (2.1) admits a solution in Φ.
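One standard way to produce a solution of (2.1) is the series ϕ(t) = Σ_{k≥0} w(a^k t), which converges under the Dini condition (2.2) and satisfies w(t) + ϕ(at) = ϕ(t). The following sketch checks this numerically for the illustrative choice w(t) = √t; the series construction itself is an assumption of this example, not quoted from the paper.

```python
import math

a = 0.25
w = math.sqrt   # w(t) = sqrt(t) satisfies the Dini condition (2.2)

def phi(t, terms=200):
    # Truncated series phi(t) = sum_{k >= 0} w(a^k t).
    return sum(w(a**k * t) for k in range(terms))

# Check the functional equation w(t) + phi(a t) = phi(t) pointwise.
for t in [0.1, 1.0, 7.5]:
    assert abs(w(t) + phi(a * t) - phi(t)) < 1e-9

print(phi(1.0))  # for w = sqrt this geometric series sums to 1/(1 - sqrt(a)) = 2
```

Here ϕ inherits continuity, concavity, and monotonicity from w, so it indeed lies in Φ for this choice.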
We say that a vector (p1, . . . , pN), where pi : Y → [0, 1], is a probability vector if
Σ_{i=1}^N pi(x) = 1 for x ∈ Y.
Analogously, a matrix [pij]i,j, where pij : Y → [0, 1] for i, j ∈ {1, . . . , N}, is a probability matrix if
Σ_{j=1}^N pij(x) = 1 for x ∈ Y and i ∈ {1, . . . , N}.
We say that a metric ϱ̂ is equivalent to ϱ if the classes of bounded sets and of convergent sequences in the spaces (Y, ϱ̂) and (Y, ϱ) coincide. Obviously, if (Y, ϱ) is a Polish space and ϱ, ϱ̂ are equivalent, then the space (Y, ϱ̂) is still a Polish space. An operator P : M → M is called a Markov operator if
P(λ1µ1 + λ2µ2) = λ1 P µ1 + λ2 P µ2 for λ1, λ2 ∈ R+ and µ1, µ2 ∈ M
and
P µ(Y) = µ(Y) for µ ∈ M.
It is easy to prove that every Markov operator can be extended to a linear operator on the space of all signed measures Ms. A linear operator U : B(Y) → B(Y) is called dual to P if
⟨Uf, µ⟩ = ⟨f, P µ⟩ for f ∈ B(Y) and µ ∈ M.    (2.3)
A measure µ∗ is called invariant (or stationary) with respect to P if P µ∗ = µ∗. A Markov operator P is called asymptotically stable if there exists a stationary measure µ∗ ∈ M1 such that
lim_{n→∞} ∥P^n µ − µ∗∥ϱ = 0 for every µ ∈ M1.    (2.4)
A sequence of distributions {µn}n≥1 (µn ∈ M1) is called tight if, for every ε > 0, there exists a compact set K ⊂ Y such that µn(K) ≥ 1 − ε for every n ∈ N. We say that a Markov operator P : M → M is tight if, for every µ ∈ M1, the sequence of iterates {P^n µ}n≥1 is tight. A Markov operator P : M → M is called essentially nonexpansive if there exists a metric ϱ̂ equivalent to ϱ such that P is nonexpansive with respect to the norm ∥ · ∥ϱ̂, i.e.,
∥P µ1 − P µ2∥ϱ̂ ≤ ∥µ1 − µ2∥ϱ̂ for µ1, µ2 ∈ M1.    (2.5)
An operator P is called concentrating if, for every ε > 0, there exist a set A ∈ B(Y) with diamϱ A ≤ ε and a number θ > 0 such that
lim inf_{n→∞} P^n µ(A) > θ for µ ∈ M1.    (2.6)
Proposition 2.2. If P is an essentially nonexpansive and concentrating Markov operator, then P is asymptotically stable.

The proof can be found in [28] in the case when Y is a Polish space. It should be noted that the definition of asymptotic stability consists of two almost independent statements: the existence of an invariant measure µ∗ and convergence condition (2.4). It turns out that, even if the set A in condition (2.6) depends on the choice of initial measures, the proof in Lasota and Yorke [23] carries over to a Polish space and leads to the following result.

Proposition 2.3. Let P be a nonexpansive Markov operator. Assume that P satisfies the lower bound condition: for every ε > 0 there is a number ∆ > 0 such that, for every µ1, µ2 ∈ M1, there exist A ∈ B(Y) with diamϱ A ≤ ε and n0 ∈ N for which
P^{n0} µi(A) ≥ ∆ for i = 1, 2.    (2.7)
Then
lim_{n→∞} ∥P^n µ1 − P^n µ2∥ϱ = 0 for µ1, µ2 ∈ M1.
In the setting of Polish spaces it might be difficult, or even impossible, to prove that a given Markov operator is concentrating. We now describe results concerning asymptotic stability of Markov operators on infinite-dimensional spaces obtained by Szarek [28] and based on the concept of tightness and the well-known Prohorov theorem. He introduced the class of globally and semi-concentrating Markov operators, and gave conditions ensuring the existence of an invariant measure for nonexpansive Markov operators. It is important to emphasize that the nonexpansiveness in this consideration is crucial: Szarek [30] constructed an example which shows that nonexpansivity cannot be omitted.
We denote by Cε(Y), ε > 0 (Cε for short), the family of all closed sets C for which there exists a finite set {z1, z2, . . . , zn} ⊂ Y such that C ⊂ ∪_{i=1}^n B(zi, ε). An operator P is called semi-concentrating if, for every ε > 0, there exist C ∈ Cε(Y) and θ > 0 such that
lim inf_{n→∞} P^n µ(C) > θ for µ ∈ M1.    (2.8)
Remark 2.1. A concentrating Markov operator is semi-concentrating.

For µ ∈ M1 we consider the limit set
L(µ) = {ν ∈ M1 : there exists {nk} ⊂ {n} such that lim_{k→∞} ∥P^{nk} µ − ν∥ϱ = 0}    (2.9)
and
L(M1) = ∪_{µ∈M1} L(µ).    (2.10)
The following results are proved in [29].

Proposition 2.4. Let P be a nonexpansive and semi-concentrating Markov operator. Then (a) P has an invariant measure; (b) L(µ) ̸= ∅ for arbitrary µ ∈ M1; (c) L(M1) is tight.

Let A ∈ B(Y). We say that a measure µ ∈ M is concentrated on A if µ(Y \ A) = 0. We denote by M1^A the set of all probability measures concentrated on A. An operator P is called globally concentrating if, for every ε > 0 and every bounded Borel set A, there exist a bounded Borel set B and a number n0 ∈ N such that
P^n µ(B) ≥ 1 − ε for n ≥ n0 and µ ∈ M1^A.
A continuous function V : Y → [0, +∞) is called a Lyapunov function if
lim_{ϱ(x,z0)→∞} V(x) = ∞ for some z0 ∈ Y.

Proposition 2.5. Let P be a Markov operator, and let U be its dual. Assume that there exists a Lyapunov function V, bounded on bounded sets, such that
UV(x) ≤ aV(x) + b for x ∈ Y,
where a, b ∈ R+ and a < 1. Then P is globally concentrating. Moreover, for every ε > 0 there exists a bounded Borel set B ⊂ Y such that
lim inf_{n→∞} P^n µ(B) ≥ 1 − ε for µ ∈ M1.
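The drift condition UV ≤ aV + b of Proposition 2.5 can be checked by Monte Carlo for a toy chain. The chain x_{n+1} = x_n/2 + ξ_n with ξ_n uniform on [0, 1] and the Lyapunov function V(x) = |x| below are illustrative choices, not from the paper.

```python
import random

rng = random.Random(42)

def UV(x, n=20000):
    # Empirical estimate of U V(x) = E[ V(x/2 + xi) ] for the toy chain.
    return sum(abs(x / 2.0 + rng.random()) for _ in range(n)) / n

# Here E|xi| <= 1, so U V(x) <= |x|/2 + 1, i.e. a = 1/2 < 1 and b = 1.
a, b = 0.5, 1.0
for x in [-4.0, 0.0, 3.0, 10.0]:
    assert UV(x) <= a * abs(x) + b + 0.01   # small slack for sampling error
print("drift condition verified on sample points")
```

By Proposition 2.5, such a drift bound already yields the global concentration of mass on bounded sets for this chain.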
Define
E(P) = {ε > 0 : inf_{µ∈M1} lim inf_{n→∞} P^n µ(A) > 0 for some A ∈ Cε(Y)}.    (2.11)
Remark 2.2. If a Markov operator P is globally concentrating, then E(P) ̸= ∅.

Remark 2.3. If inf E(P) = 0, then P is semi-concentrating.

By Propositions 2.3 and 2.4, we obtain the following.

Theorem 2.1. A nonexpansive semi-concentrating Markov operator satisfying the lower bound condition (2.7) is asymptotically stable.

3. Continuous random dynamical systems

Let (Y, ∥ · ∥) be a separable Banach space, R+ = [0, +∞), and I = {1, . . . , N}. We first define our system. Let Πi : R+ × Y → Y, i ∈ I, be a finite sequence of semidynamical systems, i.e.,
Πi(0, x) = x, Πi(s + t, x) = Πi(s, Πi(t, x)) for s, t ∈ R+, i ∈ I and x ∈ Y,
and let q : Y × [0, T] → Y be a continuous transformation.
We are given probability vectors pi : Y → [0, 1], i ∈ I, and a matrix of probabilities [pij]i,j∈I, pij : Y → [0, 1], i, j ∈ I. Let (Ω, Σ, P) be a probability space, and let {tn}n≥0 be an increasing sequence of random variables tn : Ω → R+ with t0 = 0 and such that the increments ∆tn = tn − tn−1, n ∈ N, are independent and have the same density g(t) = λe^{−λt}, t ≥ 0. We will consider continuous random dynamical systems
xn+1 = q(Πξn(tn+1 − tn, xn), ηn) for n = 0, 1, 2, . . . ,    (3.1)
where ξn : Ω → I, ηn : Ω → [0, T], and yn : Ω → Y are random variables related by
P(ξ0 = i | x0 = x) = pi(x),    (3.2)
P(ξn = k | xn = x and ξn−1 = i) = pik(x),
and yn = Πξn−1(tn − tn−1, xn−1). The distribution of ηn conditional on yn = y is given by
P(ηn < s | yn = y) = ∫_0^s p(y, u) du for 0 < s ≤ T,    (3.3)
where p : Y × [0, T] → R+ (R+ = [0, ∞)) is a measurable function such that
∫_0^T p(x, u) du = 1 for x ∈ Y.
In what follows, we denote the system by (Π, q, [pij], p). Assume that {ξn}n≥0 and {ηn}n≥0 are independent of {tn}n≥0, and that for every n ∈ N the variables η1, . . . , ηn−1, ξ1, . . . , ξn−1 are also independent. The following assumptions are assumed to hold throughout this paper. The function q : Y × [0, T] → Y satisfies the Lipschitz-type inequality
∥q(x, s) − q(y, s)∥ ≤ β(x, s)∥x − y∥ for x, y ∈ Y, s ∈ [0, T],    (3.4)
where β : Y × [0, T] → R+ is a Borel measurable nonnegative function such that
∫_0^T β(x, s) p(x, s) ds ≤ γ for x ∈ Y.    (3.5)
We assume moreover that there are constants L ≥ 1 and α ≥ 0 such that
Σ_{j∈I} pij(y) ∥Πj(t, x) − Πj(t, y)∥ ≤ Le^{αt} ∥x − y∥ for x, y ∈ Y, i ∈ I, t ≥ 0,    (3.6)
and that the functions p : Y × [0, T] → R+ and pij : Y → [0, 1], i, j ∈ I, satisfy the following conditions:
Σ_{j∈I} |pij(x) − pij(y)| ≤ ψ(∥x − y∥) for x, y ∈ Y, i ∈ I,    (3.7)
∫_0^T |p(x, s) − p(y, s)| ds ≤ θ ∥x − y∥ for x, y ∈ Y and some θ > 0,
where ψ ∈ Φ0 is a continuous function such that
∫_0^ε (ψ(t)/t) dt < +∞ for some ε > 0.
Finally, we assume that there exists x∗ ∈ Y such that
∫_{R+} e^{−λt} ∥Πj(t, x∗) − x∗∥ dt < ∞ for j ∈ I and sup_{s∈[0,T]} ∥q(x∗, s) − x∗∥ < ∞,    (3.8)
and there exists i0 ∈ I such that
inf{pii0(x) : i ∈ I, x ∈ Y} > 0.    (3.9)
It is easy to see that {xn}n≥0 is not a Markov process. In order to use the theory of Markov operators, we must redefine the process {xn}n≥0 in such a way that the redefined process becomes Markov. For this purpose, consider the space Y × I endowed with the metric ϱ given by
ϱ((x, i), (y, j)) = ∥x − y∥ + ϱc(i, j) for x, y ∈ Y, i, j ∈ I,    (3.10)
where
ϱc(i, j) = c if i ̸= j and ϱc(i, j) = 0 if i = j,    (3.11)
and the constant c will be chosen later. Then the stochastic process {(xn, ξn)}n≥0, (xn, ξn) : Ω → Y × I, has the required Markov property. To begin our study of the stochastic process {(xn, ξn)}n≥0, consider the sequence of distributions
µn(A) = P((xn, ξn) ∈ A) for A ∈ B(Y × I), n ≥ 0.
It is easy to see that there exists a Markov–Feller operator P : M → M such that
µn+1 = P µn for n ≥ 0.
The operator P is given by the formula
P µ(A) = Σ_{j∈I} ∫_{Y×I} ∫_0^{+∞} λe^{−λt} ∫_0^T 1_A(q(Πj(t, x), s), j) pij(x) p(Πj(t, x), s) ds dt µ(dx, di),    (3.12)
and its dual operator U is given by
Uf(x, i) = Σ_{j∈I} ∫_0^{+∞} λe^{−λt} ∫_0^T f(q(Πj(t, x), s), j) pij(x) p(Πj(t, x), s) ds dt,    (3.13)
where λ is the intensity of the Poisson process which governs the increments ∆tn of the random variables {tn}n≥0. The operator P given by (3.12) is called the transition operator for this system. The first result ensures the existence of an invariant distribution for the transition operator P.

Theorem 3.1. Assume that the system (Π, q, [pij], p) satisfies conditions (3.4)–(3.8). If
Lγ + α/λ < 1,    (3.14)
then the operator P defined by (3.12) has an invariant measure.

The proof of Theorem 3.1 is based on Proposition 2.4. Therefore we have to show that the operator P is essentially nonexpansive and semi-concentrating. These properties are interesting in their own right, and will be stated separately in the next two lemmas.

Lemma 3.1. If conditions (3.4)–(3.7) and (3.14) are satisfied, then the operator P given by (3.12) is essentially nonexpansive.

Proof. Let ψ ∈ Φ0 be given by condition (3.7). Define ψ̄ : R+ → R by
ψ̄(t) = ψ(t) + (λLθ/(λ − α)) t for t ≥ 0.
It is evident that ψ̄ ∈ Φ0 and that it satisfies the hypotheses of Proposition 2.1; thus there exists ϕ ∈ Φ such that the inequality
ψ̄(t) + ϕ(at) ≤ ϕ(t) for t ≥ 0
holds with
a = λLγ/(λ − α) < 1.
Since ϕ ∈ Φ, we may choose c ∈ R+ such that ϕ(c) > 2. Consider the metric ϱ (see (3.10)) with this choice of c, i.e.,
ϱ((x, i), (y, j)) = ∥x − y∥ + ϱc(i, j) for x, y ∈ Y, i, j ∈ I.    (3.15)
Fix f ∈ Fϕ. To complete the proof, it is enough to show that
|Uf(x, i) − Uf(y, j)| ≤ ϕ(ϱ((x, i), (y, j))) for (x, i), (y, j) ∈ Y × I,    (3.16)
where the operator U is given by (3.13). Since ϱc(i, j) = c for i ̸= j, ϕ(c) > 2, and |f| ≤ 1, condition (3.16) is satisfied for i ̸= j. On the other hand, for i = j, we have
|Uf(x, i) − Uf(y, i)|
≤ Σ_{j∈I} ∫_0^{+∞} λe^{−λt} ∫_0^T |f(q(Πj(t, x), s), j) − f(q(Πj(t, y), s), j)| pij(x) p(Πj(t, x), s) ds dt
+ Σ_{j∈I} ∫_0^{+∞} λe^{−λt} ∫_0^T |pij(x) p(Πj(t, x), s) − pij(y) p(Πj(t, y), s)| ds dt
≤ Σ_{j∈I} ∫_0^{+∞} λe^{−λt} ∫_0^T ϕ(∥q(Πj(t, x), s) − q(Πj(t, y), s)∥) pij(x) p(Πj(t, x), s) ds dt
+ Σ_{j∈I} ∫_0^{+∞} λe^{−λt} ∫_0^T |pij(x) − pij(y)| p(Πj(t, x), s) ds dt
+ Σ_{j∈I} ∫_0^{+∞} λe^{−λt} pij(y) ∫_0^T |p(Πj(t, x), s) − p(Πj(t, y), s)| ds dt.
Using consecutively (3.4), (3.6), the Jensen inequality, (3.7), and (3.15), we obtain
|Uf(x, i) − Uf(y, i)|
≤ ϕ(Σ_{j∈I} ∫_0^{+∞} λe^{−λt} ∫_0^T β(Πj(t, x), s) ∥Πj(t, x) − Πj(t, y)∥ pij(x) p(Πj(t, x), s) ds dt)
+ Σ_{j∈I} |pij(x) − pij(y)| + Σ_{j∈I} ∫_0^{+∞} λe^{−λt} pij(y) θ ∥Πj(t, x) − Πj(t, y)∥ dt
≤ ϕ(∫_0^{+∞} λe^{−λt} γ Le^{αt} dt ∥x − y∥) + ψ(∥x − y∥) + ∫_0^{+∞} λe^{−λt} θ Le^{αt} dt ∥x − y∥
= ϕ(a∥x − y∥) + ψ(∥x − y∥) + (λLθ/(λ − α)) ∥x − y∥ ≤ ϕ(∥x − y∥),
by the choice of ϕ. □
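The contraction behind this nonexpansiveness estimate can be observed pathwise in a scalar example. The single expanding flow Π(t, x) = x e^{αt} (so L = 1), the contracting jump maps q(x, s) = γx + s, and the numerical values below are illustrative; for these choices condition (3.14) reads γ + α/λ < 1.

```python
import math
import random

LAM, ALPHA, GAMMA = 2.0, 0.5, 0.5
assert GAMMA + ALPHA / LAM < 1  # condition (3.14) with L = 1

def step(x, dt, s):
    # One cycle of the system: flow for time dt, then apply q(., s).
    return GAMMA * (x * math.exp(ALPHA * dt)) + s

rng = random.Random(7)
x, y = 100.0, -100.0          # two initial conditions driven by the SAME noise
for _ in range(200):
    dt = rng.expovariate(LAM)
    s = rng.uniform(0.0, 1.0)
    x, y = step(x, dt, s), step(y, dt, s)

print(abs(x - y))  # the coupled copies have merged
```

Under the same noise, the difference of the two copies is multiplied by γe^{αΔt} at every step, and its expected logarithm is negative precisely under a condition of the type (3.14), so the copies coalesce.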
Lemma 3.2. Assume that the system (Π, q, [pij], p) satisfies conditions (3.4)–(3.8) and (3.14). Then the operator P given by (3.12) is semi-concentrating.

Proof. Define V(x, i) = ∥x∥ for (x, i) ∈ Y × I. Let us first show that there exist a, b ∈ R+, a < 1, such that
UV(x, i) ≤ aV(x, i) + b for (x, i) ∈ Y × I.    (3.17)
By (3.13) and the definition of V, we have
UV(x, i) ≤ Σ_{j∈I} ∫_0^{+∞} λe^{−λt} ∫_0^T ∥q(Πj(t, x), s) − q(Πj(t, x∗), s)∥ pij(x) p(Πj(t, x), s) ds dt
+ Σ_{j∈I} ∫_0^{+∞} λe^{−λt} ∫_0^T ∥q(Πj(t, x∗), s) − q(x∗, s)∥ pij(x) p(Πj(t, x), s) ds dt
+ Σ_{j∈I} ∫_0^{+∞} λe^{−λt} ∫_0^T ∥q(x∗, s)∥ pij(x) p(Πj(t, x), s) ds dt,
where x∗ is given by condition (3.8). Further, using (3.4)–(3.6) and (3.8), we obtain
UV(x, i) ≤ Σ_{j∈I} ∫_0^{+∞} λe^{−λt} ∫_0^T β(Πj(t, x), s) pij(x) p(Πj(t, x), s) ∥Πj(t, x) − Πj(t, x∗)∥ ds dt
+ Σ_{j∈I} ∫_0^{+∞} λe^{−λt} sup_{s∈[0,T]} ∥q(Πj(t, x∗), s) − q(x∗, s)∥ dt + sup_{s∈[0,T]} ∥q(x∗, s) − x∗∥ + ∥x∗∥
≤ γ Σ_{j∈I} ∫_0^{+∞} λe^{−λt} pij(x) ∥Πj(t, x) − Πj(t, x∗)∥ dt + b̄
≤ (λLγ/(λ − α)) ∥x − x∗∥ + b̄ ≤ a∥x∥ + b,    (3.18)
where
a = λγL/(λ − α),
b̄ = Σ_{j∈I} ∫_0^{+∞} λe^{−λt} sup_{s∈[0,T]} ∥q(Πj(t, x∗), s) − q(x∗, s)∥ dt + sup_{s∈[0,T]} ∥q(x∗, s) − x∗∥ + ∥x∗∥,
and b = b̄ + a∥x∗∥. From (3.8) and the fact that the set I is finite, it follows that b is finite. Since a < 1, the proof of (3.17) is complete. By Proposition 2.5, we conclude that there exists a bounded set A ⊂ Y × I such that
inf_{µ∈M1} lim inf_{n→∞} P^n µ(A) > 0,
which implies that E(P), given by (2.11), is not empty. We now claim that inf E(P) = 0. Suppose, contrary to our claim, that ε = inf E(P) > 0. Since α ≥ 0, by (3.14) we have γL < 1. Choose γ0 > γ, δ > 0, and t∗ > 0 such that
γ0(1 + δ)Le^{αt∗} < 1.
Finally, choose ε0 > ε such that
ε̄ = γ0(1 + δ)Le^{αt∗} ε0 < ε.
By the definition of E(P), there exists A ∈ Cε0 such that
κ = inf_{µ∈M1} lim inf_{n→∞} P^n µ(A) > 0.    (3.19)
Without loss of generality, we can assume that
A = ∪_{k=1}^m B(zk, ε0) × I.    (3.20)
We now define
Cε̄ = ∪_{j∈I} ∪_{t∈[0,t∗]} ∪_{s∈[0,T]} ∪_{k=1}^m B(q(Πj(t, zk), s), ε̄) × I.
Fix µ ∈ M1. From (3.19) and (3.20), it follows that there exists k(n) ∈ {1, . . . , m} such that
P^n µ(B(zk(n), ε0) × I) ≥ κ/m.    (3.21)
For x ∈ B(zk(n), ε0) and t < t∗, we define
J(x, t) = {j ∈ I : ∥Πj(t, x) − Πj(t, zk(n))∥ ≤ (1 + δ)Le^{αt} ∥x − zk(n)∥}.
Since
Σ_{j∈I} pij(x) = 1 for i ∈ I,
from (3.6) we obtain
Σ_{j∈J(x,t)} pij(x) ≥ δ/(1 + δ) for i ∈ I.
From (3.5), it follows that, for every x ∈ Y,
∫_{{t∈[0,T] : β(x,t)>γ0}} p(x, t) dt ≤ γ/γ0 < 1.    (3.22)
Hence, setting σ = (γ0 − γ)/γ0 and Tx = {t ∈ [0, T] : β(x, t) ≤ γ0}, we obtain
∫_{Tx} p(x, t) dt ≥ σ.    (3.23)
Fix x ∈ B(zk(n), ε0) and t < t∗. Set J1 = J(x, t) and let j ∈ J1. Then, for every s ∈ S1 = TΠj(t,x), we have
∥q(Πj(t, x), s) − q(Πj(t, zk(n)), s)∥ ≤ β(Πj(t, x), s) ∥Πj(t, x) − Πj(t, zk(n))∥
≤ β(Πj(t, x), s)(1 + δ)Le^{αt} ∥x − zk(n)∥ ≤ γ0(1 + δ)Le^{αt∗} ε0 = ε̄.
Thus (q(Πj(t, x), s), j) ∈ Cε̄ and
P^{n+1} µ(Cε̄) ≥ ∫_{B(zk(n),ε0)×I} ∫_0^{t∗} λe^{−λt} Σ_{j∈J1} pij(x) ∫_{S1} p(Πj(t, x), s) ds dt P^n µ(dx, di)
≥ (1 − γ/γ0) (δ/(1 + δ)) (1 − e^{−λt∗}) P^n µ(B(zk(n), ε0) × I).
Combining this with (3.21) gives
lim inf_{n→∞} P^n µ(Cε̄) ≥ (1 − γ/γ0) (δκ/((1 + δ)m)) (1 − e^{−λt∗}) > 0,
but µ ∈ M1 was arbitrary and ε̄ < ε = inf E(P), which is impossible. □
The next result gives sufficient conditions for asymptotic stability.

Theorem 3.2. Under the hypotheses of Theorem 3.1, assume moreover that there exist η > 0 and δ > 0 such that for every x ∈ Y there exists a time τx ∈ [0, T] satisfying
p(x, t) = 0 for 0 ≤ t < τx, and p(y, t) > η for τx ≤ t ≤ T, ∥y − x∥ ≤ δ,    (3.24)
and p(x, ·) : [τx, T] → R+ is continuous. If, in addition, sup_{x∈Y} τx < T, then the Markov operator P given by (3.12) is asymptotically stable.

Proof. By Theorem 3.1, the operator P admits an invariant measure. By virtue of Theorem 2.1, it is sufficient to show that for given ε > 0 there exists κ > 0 such that, for every two measures µ1, µ2 ∈ M1, there exist a Borel measurable set A ⊂ Y × I with diamϱϕ A < ε and an integer ñ such that
P^ñ µk(A) ≥ κ for k = 1, 2.
By Proposition 2.4, the set L(M1) is tight. Thus there exists a compact set F ⊂ Y × I such that
µ(F) ≥ 4/5 for every µ ∈ L(M1).
We introduce some further notation. Namely, for s ∈ [0, T]^n, i ∈ I^n and τ ∈ R+^n (i.e., s = (s1, . . . , sn), i = (i1, . . . , in) and τ = (τ1, . . . , τn)), we set
q1(x, s1) = q(x, s1),
qn(x, s1, . . . , sn) = q(qn−1(x, s1, . . . , sn−1), sn),
qn ◦ Πn(i, τ, s, x) = q(Πin(τn, qn−1 ◦ Πn−1(i1, . . . , in−1, τ1, . . . , τn−1, s1, . . . , sn−1, x)), sn);
dτ = dτ1 · · · dτn, ds = ds1 · · · dsn.
Next, for n ≥ 2, consider the probabilities Pn : Y × I^{n+1} × R+^{n−1} × [0, T]^{n−1} → [0, 1] and P̄n : Y × I^n × R+^n × [0, T]^n → [0, 1] given by
Pn(x, i, i1, . . . , in−1, in, τ1, . . . , τn−1, s1, . . . , sn−1) = pii1(x) pi1i2(q1(Πi1(τ1, x), s1)) · · · pin−1in(qn−1 ◦ Πn−1(i, τ, s, x))
and
P̄n(x, i1, . . . , in−1, in, τ1, . . . , τn−1, τn, s1, . . . , sn−1, sn) = p(Πi1(τ1, x), s1) p(Πi2(τ2, q(Πi1(τ1, x), s1)), s2) · · · p(Πin(τn, qn−1 ◦ Πn−1(i, τ, s, x)), sn),
where s = (s1, . . . , sn−1), τ = (τ1, . . . , τn−1), i = (i1, . . . , in−1). Since α ≥ 0 and L ≥ 1, condition (3.14) implies that γ < 1. Let n ∈ N be such that
γ^n · diamϱ F < ε/2.    (3.25)
By continuity and compactness, there exists δ̄ > 0 such that
∥qn ◦ Πn(i, τ, s, x) − qn(x, s)∥ < ε/8    (3.26)
for every i ∈ I^n, s ∈ [0, T]^n, τ ∈ [0, δ̄]^n and x ∈ FY, where FY = {x ∈ Y : (x, i) ∈ F for some i ∈ I}. Given x ∈ Y, define
O(x) = {z ∈ FY : ∥qn(x, s1, . . . , sn) − qn(z, s1, . . . , sn)∥ < ε/8 for s ∈ [0, T]^n}.    (3.27)
Clearly, O(x) is an open neighborhood of x. Let z1, . . . , zm0 ∈ FY be such that F ⊂ G, where
G = ∪_{l=1}^{m0} O(zl) × I.
Let µ1, µ2 ∈ M1 and set µ = (µ1 + µ2)/2. Since L(µ) ̸= ∅ (see Proposition 2.4), there exist a subsequence {nk} of {n} and a measure ν ∈ L(µ) such that P^{nk} µ → ν weakly. Hence there exist n0 ∈ N, l1, l2 ∈ {1, . . . , m0} and i1, i2 ∈ I such that
P^{n0} µk(Vk) ≥ 1/(2m0 N) for k = 1, 2,    (3.28)
where
Vk = O(zlk) × {ik}, k = 1, 2.
Write for simplicity x̄ = zl1 and ȳ = zl2, and consider the pair (x̄, ȳ). From condition (3.24), it follows that there exist τx̄, τȳ such that
p(x̄, s) > η for τx̄ ≤ s ≤ T
and
p(ȳ, s) > η for τȳ ≤ s ≤ T.
Without any loss of generality, we may assume that τȳ < τx̄. Since
∫_{τx̄}^T β(x̄, s) p(x̄, s) ds ≤ γ < 1,
there exists s1 > τx̄ such that
β(x̄, s1) ≤ γ.
Thus, according to (3.4), we have
∥q(x̄, s1) − q(ȳ, s1)∥ ≤ γ ∥x̄ − ȳ∥.
By an induction argument, we may construct a sequence (s1, . . . , sm), si ∈ [τ̄i, T], where
τ̄1 = max{τx̄, τȳ} and τ̄i = max{τ_{qi−1(x̄,s1,...,si−1)}, τ_{qi−1(ȳ,s1,...,si−1)}} for i = 2, . . . , m,
such that
∥qm(x̄, s1, . . . , sm) − qm(ȳ, s1, . . . , sm)∥ ≤ γ^m ∥x̄ − ȳ∥.    (3.29)
Fix i1 = i0 such that (3.9) holds. By continuity of Πi, there exist t̄1 and δ̄1 such that
∥Πi0(t, x̄) − Πi0(t̄1, x̄)∥ ≤ δ for |t − t̄1| ≤ δ̄1.
From condition (3.24), it follows that there exist τ_{Πi1(t̄1,x̄)}, τ_{Πi1(t̄1,ȳ)} such that
p(Πi1(t, x̄), s) > η for max{τ_{Πi1(t̄1,x̄)}, τx̄} ≤ s ≤ T, |t − t̄1| ≤ δ̄1,
and
p(Πi1(t, ȳ), s) > η for max{τ_{Πi1(t̄1,ȳ)}, τȳ} ≤ s ≤ T, |t − t̄1| ≤ δ̄1.
By an induction argument, we may construct for x̄, ȳ a sequence
τ̄j = max{τ̄j, τ_{Πij(t̄j, qj−1◦Πj−1(i, t̄, s, x̄))}, τ_{Πij(t̄j, qj−1◦Πj−1(i, t̄, s, ȳ))}},
where s = (s1, . . . , sj−1) and t̄ = (t̄1, . . . , t̄j−1), such that
P̄n(x̄, i1, . . . , in−1, in, u1, . . . , un−1, un, s1, . . . , sn−1, sn) ≥ η^n    (3.30)
for |ui − t̄i| ≤ δ̄ = min{δ̄j : j = 1, . . . , n} and si ≥ τ̄i. Define
A = (B(qn(x̄, s1, . . . , sn), ε/4) ∪ B(qn(ȳ, s1, . . . , sn), ε/4)) × {i0},
where i0 is given by condition (3.9). From (3.25) and (3.29), it follows that diamϱϕ A < ε. For x ∈ O(x̄), i0 = (i0, . . . , i0) ∈ I^n and τ ∈ [0, δ̄]^n, by (3.26) and (3.27), we have
∥qn ◦ Πn(i0, τ, s, x) − qn(x̄, s)∥ ≤ ∥qn ◦ Πn(i0, τ, s, x) − qn(x, s)∥ + ∥qn(x, s) − qn(x̄, s)∥ ≤ ε/4.
This gives (qn ◦ Πn(i0, τ, s, x), i0) ∈ A for x ∈ O(x̄) and τ ∈ [0, δ̄]^n.
Combining this with (3.9), (3.12) and (3.28), we obtain
P^{n0+n} µk(A) = ∫_{Y×I} Σ_{j=(j1,...,jn)∈I^n} ∫_{R+^n} ∫_{[0,T]^n} 1_A(qn ◦ Πn(j, τ, s, x), jn) · Pn(x, i, j, τ1, . . . , τn−1, s1, . . . , sn−1) · P̄n(x, j, τ, s) λ^n e^{−λ(τ1+···+τn)} dτ ds P^{n0} µk(dx, di)
≥ ∫_{Vk} σ^n ∫_{[0,δ̄]^n} ∫_{[0,T]^n} λ^n e^{−λ(τ1+···+τn)} 1_A(qn ◦ Πn(i0, τ, s, x), i0) · P̄n(x, i0, τ, s) dτ ds P^{n0} µk(dx, di)
≥ (γσ)^n (∫_0^{δ̄} λe^{−λτ} dτ)^n P^{n0} µk(Vk) ≥ (γσ)^n (1 − e^{−λδ̄})^n / (2m0 N) for k = 1, 2,
where i0 = (i0, . . . , i0) ∈ I^n and
σ = inf_{x∈Y, i∈I} pii0(x);
consequently, the right-hand side does not depend on µk for k = 1, 2. □
4. Biological consequences

Example 4.1 (Cell Division). Equations similar to (3.1) and (3.3) are discussed in the mathematical theory of the cell cycle [20]. For example, in [20], Lasota and Mackey considered the following model. Let Y = ℝ₊^d. They assume that each cell contains d substances, whose masses are denoted by the vector y(t) = (y₁(t), . . . , y_d(t)), where t denotes the age of the cell, that is, the time that has elapsed since its birth. We assume further that the evolution of the vector y(t) is given by the formula y(t) = S(x, t), where S(x, 0) = x. Here, S : Y × [0, T] → Y is a given function. Let the initial value of substances x = y(0) in the nth generation be denoted by y_n, and the mitotic time in the nth generation by s_n. We assume that in every generation the distribution of mitotic times is given by

Prob(s_n < t | y_n = y) = ∫₀ᵗ p(y, u) du for 0 < t ≤ T.   (4.1)
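Numerically, a mitotic time obeying (4.1) can be drawn by inverting the cumulative distribution ∫₀ᵗ p(y, u) du on a grid. In the Python sketch below the density p and its dependence on y are hypothetical stand-ins (the paper leaves p general), and the mass on (0, T] is normalised, i.e. we sample conditionally on division occurring before T:

```python
import math
import random
import bisect

T = 10.0

def p(y, u):
    # Hypothetical mitotic-time density: an exponential density whose rate
    # grows with the total mass of the cell state y. This choice is purely
    # illustrative; only the structure of (4.1) is taken from the text.
    rate = 1.0 + 0.1 * sum(y)
    return rate * math.exp(-rate * u)

def sample_mitotic_time(y, rng, n_grid=500):
    """Sample s_n with Prob(s_n < t | y_n = y) = int_0^t p(y, u) du, eq. (4.1)."""
    us = [T * k / n_grid for k in range(n_grid + 1)]
    # cumulative trapezoid approximation of the CDF on the grid
    cdf = [0.0]
    for k in range(1, n_grid + 1):
        cdf.append(cdf[-1] + 0.5 * (p(y, us[k - 1]) + p(y, us[k])) * (us[k] - us[k - 1]))
    total = cdf[-1]
    target = rng.random() * total   # normalise: condition on division before T
    k = bisect.bisect_left(cdf, target)
    return us[min(k, n_grid)]

rng = random.Random(0)
samples = [sample_mitotic_time((1.0, 0.5), rng) for _ in range(2000)]
```

The grid inversion is crude but adequate for simulation; any quadrature of the CDF followed by a monotone search works the same way.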
Then the vector y(s_n) = S(y_n, s_n) represents the amount of intracellular substance just before mitosis in the nth generation. At division, we assume that each daughter cell receives exactly half of the component constituents of the mother cell, so

y_{n+1} = (1/2) S(y_n, s_n) for n = 0, 1, 2, . . . .   (4.2)

We assume that

∥S(x, t) − S(y, t)∥ ≤ e^{α(x)t} ∥x − y∥.   (4.3)

Substituting β(x, t) = (1/2) exp(α(x)t) into (3.5) gives

∫₀ᵀ p(x, s) e^{α(x)s} ds ≤ 2γ.
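The generation-to-generation recursion y_{n+1} = S(y_n, s_n)/2 is easy to simulate. The sketch below assumes, purely for illustration, one-dimensional dynamics S(x, t) = e^{at}x + (e^{at} − 1)b (exponential growth plus production) and uniformly distributed mitotic times; neither choice comes from [20], but they satisfy E[e^{as}] ≤ 2, the analogue of the bound above, so the generations settle into a stationary regime:

```python
import math
import random

# Assumed illustrative dynamics: exponential growth plus production.
a, b = 0.5, 1.0

def S(x, t):
    return math.exp(a * t) * x + (math.exp(a * t) - 1.0) * b

rng = random.Random(1)
y = 1.0
trajectory = []
for n in range(20000):
    s_n = rng.uniform(0.5, 1.5)   # assumed mitotic-time law of generation n
    y = 0.5 * S(y, s_n)           # each daughter gets half, eq. (4.2)
    trajectory.append(y)

# Since E[exp(a*s)/2] < 1 the recursion contracts on average, so the
# trajectory stays bounded rather than growing without limit.
long_run_mean = sum(trajectory[1000:]) / len(trajectory[1000:])
```

With these parameters the stationary mean solves m = E[e^{as}/2]·m + E[(e^{as} − 1)/2], which gives m ≈ 2.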
As before, we assume that p : Y × [0, T] → ℝ₊ is positive for τ_x < t < T and vanishes for 0 ≤ t ≤ τ_x. Further, p is lower semicontinuous, satisfies (3.7) and

∫₀ᵀ |p(x, s) − p(y, s)| ds ≤ θ∥x − y∥ for x, y ∈ Y and some θ > 0,

and S : Y × [0, T] → Y is continuous. From Theorem 3.2, we immediately obtain the following result, due to Lasota and Mackey [20].

Theorem 4.1. If inequality (4.3) is satisfied, and if

sup_x ∫₀ᵀ ∥S(0, s)∥ p(x, s) ds < ∞ and sup_x ∫₀ᵀ p(x, s) e^{α(x)s} ds ≤ 2,   (4.4)

then the dynamical system (4.1), (4.2) is asymptotically stable.

The factor of 2 that appears on the right-hand side of (4.4) is a consequence of the fact that the process of cell division produces two daughter cells, and it is quite important for the eventual stability of the system.

Example 4.2 (A Model of Stochastic Gene Expression). The model introduced in [24] involves three classes of processes: allele activation/inactivation, mRNA transcription/decay, and protein translation/decay. It is assumed that, due to binding or dissociation of protein molecules, each of the gene's alleles may be transformed, independently of the remaining ones, into an active state or into an inactive state.
In [24], the authors considered the following model of stochastic gene expression:

dx₁/dt = γ(t) − x₁,
dx₂/dt = r(x₁ − x₂),   (4.5)

where x₁(t) is the number of mRNA molecules, x₂(t) is the number of protein molecules at time t, γ(t) ∈ {0, 1} is a discrete random variable, and r is the protein degradation rate. System (4.5) generates a stochastic process which is an example of a continuous random dynamical system. For fixed i ∈ {0, 1}, let us consider the following system of ordinary differential equations:

dx₁/dt = i − x₁,
dx₂/dt = r(x₁ − x₂),   (4.6)
with initial condition x = (x₁, x₂) ∈ ℝ², where r > 0 is a given constant. Its solution is given by

Π_i(t, x) = iv + e^{Mt}(x − iv),

where v = (1, 1) and

M = [ −1  0 ;  r  −r ],

and so

e^{Mt} = [ e^{−t}  0 ;  (r/(r − 1))(e^{−t} − e^{−rt})  e^{−rt} ]  for r ≠ 1,

e^{Mt} = [ e^{−t}  0 ;  t e^{−t}  e^{−t} ]  for r = 1.

The transition probabilities are

p_{ij} = 1 if i + j = 1, and p_{ij} = 0 if i + j ≠ 1.
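The two closed-form expressions for e^{Mt} can be checked against a truncated Taylor series Σ_k (Mt)^k/k!. The short stdlib-only sketch below does this for r < 1, r = 1 and r > 1; the series routine is only a verification device, not part of the model:

```python
import math

def mat_mul(A, B):
    # product of two 2x2 matrices stored as nested lists
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def expm_series(M, t, terms=60):
    """Truncated Taylor series sum_{k} (Mt)^k / k! for a 2x2 matrix."""
    result = [[1.0, 0.0], [0.0, 1.0]]   # identity
    power = [[1.0, 0.0], [0.0, 1.0]]
    Mt = [[M[i][j] * t for j in range(2)] for i in range(2)]
    for k in range(1, terms):
        power = mat_mul(power, Mt)
        result = [[result[i][j] + power[i][j] / math.factorial(k)
                   for j in range(2)] for i in range(2)]
    return result

def expm_closed(r, t):
    """The closed form of exp(Mt) for M = [[-1, 0], [r, -r]]."""
    if abs(r - 1.0) < 1e-12:            # the r = 1 case
        return [[math.exp(-t), 0.0], [t * math.exp(-t), math.exp(-t)]]
    off = r / (r - 1.0) * (math.exp(-t) - math.exp(-r * t))
    return [[math.exp(-t), 0.0], [off, math.exp(-r * t)]]

for r in (0.5, 1.0, 2.0):
    M = [[-1.0, 0.0], [r, -r]]
    A, B = expm_series(M, 0.7), expm_closed(r, 0.7)
    assert all(abs(A[i][j] - B[i][j]) < 1e-9 for i in range(2) for j in range(2))
```

The r = 1 entry t·e^{−t} is exactly the limit of (r/(r − 1))(e^{−t} − e^{−rt}) as r → 1, which the series comparison confirms numerically.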
We note that

Π₀(t, v − x) = v − Π₁(t, x).

Let {t_n} be an increasing sequence of random variables, with t₀ = 0 and such that the increments Δt_n = t_n − t_{n−1}, n ∈ ℕ, are independent and have the same density g(t) = λe^{−λt}, t ≥ 0. Consider a stochastic process {γ(t)}_{t≥0} such that

γ(t) = γ_n for t_{n−1} ≤ t < t_n,

where

P{γ_n = j | γ_{n−1} = i} = p_{ij}.

At t = 0, the solution of process (4.5), X(t) = (x₁(t), x₂(t)), starts at the point (x, i) ∈ ℝ² × {0, 1}. Next, we define

X(t) = (Π_i(t, x), i) for t < t₁, and X(t₁) = (Π_i(t₁, x), 1 − i).

After time t₁, we restart the whole procedure with (x, i) replaced by the new initial condition X(t₁), so that the process moves along the integral curves of one of systems (4.6) until the time t₂ of the second jump, and so on. Thus the solution of (4.5) is now given by

X(t) = Π_{γ_n}(t − t_n, X(t_n)) for t_n ≤ t < t_{n+1}.
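The construction above is a piecewise-deterministic simulation recipe: flow along Π_i for an exponentially distributed holding time, then flip i ↦ 1 − i. A minimal Python sketch follows; the values r = 2, λ = 1 and the initial point are illustrative assumptions, not taken from [24]:

```python
import math
import random

r, lam = 2.0, 1.0        # assumed parameter values for the sketch
v = (1.0, 1.0)

def Pi(i, t, x):
    """Flow of system (4.6): Pi_i(t, x) = i*v + exp(Mt)(x - i*v)."""
    e_t, e_rt = math.exp(-t), math.exp(-r * t)
    d1, d2 = x[0] - i * v[0], x[1] - i * v[1]
    off = r / (r - 1.0) * (e_t - e_rt) if r != 1.0 else t * e_t
    return (i * v[0] + e_t * d1, i * v[1] + off * d1 + e_rt * d2)

rng = random.Random(2)
x, i = (0.2, 0.3), 0
t_total, integral_x1 = 0.0, 0.0
for _ in range(50000):
    dt = rng.expovariate(lam)   # increment t_n - t_{n-1}
    # accumulate int x1 dt along the flow, since x1(s) = i + (x1 - i) e^{-s}
    integral_x1 += i * dt + (x[0] - i) * (1.0 - math.exp(-dt))
    x = Pi(i, dt, x)            # flow until the jump time
    i = 1 - i                   # jump: gamma flips 0 <-> 1 since p_ij = 1 iff i+j = 1
    t_total += dt

mean_x1 = integral_x1 / t_total  # long-run average mRNA level
```

Since γ(t) spends on average half the time in each state and x₁ relaxes toward γ, the long-run average of x₁ is close to 1/2; the symmetry Π₀(t, v − x) = v − Π₁(t, x) can likewise be checked numerically with Pi.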
References

[1] L. Arnold, Random Dynamical Systems, Springer-Verlag, Berlin, 1998.
[2] M.F. Barnsley, S.G. Demko, J.H. Elton, J.S. Geronimo, Invariant measures arising from iterated function systems with place dependent probabilities, Ann. Inst. H. Poincaré 24 (1988) 367–394.
[3] A. Bobrowski, Degenerate convergence of semigroups related to a model of eukaryotic gene expression, Semigroup Forum 73 (2006) 343–366.
[4] M.H.A. Davis, Markov Models and Optimization, Chapman and Hall, London, 1993.
[5] R.M. Dudley, Probabilities and Metrics, Aarhus Universitet, 1976.
[6] S. Ethier, T. Kurtz, Markov Processes, Wiley, New York, 1986.
[7] K.U. Frisch, Wave propagation in random media, in: A.T. Bharucha-Reid (Ed.), Probabilistic Methods in Applied Mathematics, Academic Press, 1986.
[8] R.J. Griego, R. Hersh, Random evolutions, Markov chains and systems of partial differential equations, Proc. Natl. Acad. Sci. USA 62 (1969) 305–308.
[9] K. Horbacz, Random dynamical systems with jumps, J. Appl. Probab. 41 (2004) 890–910.
[10] K. Horbacz, Asymptotic stability of a semigroup generated by randomly connected Poisson driven differential equations, Boll. Unione Mat. Ital. (8) 9-B (2006) 545–566.
[11] K. Horbacz, Invariant measures for random dynamical systems, Dissertationes Math. 451 (2008).
[12] K. Horbacz, J. Myjak, T. Szarek, On stability of some general random dynamical system, J. Stat. Phys. 119 (2005) 35–60.
[13] K. Horbacz, T. Szarek, Continuous iterated function systems on Polish spaces, Bull. Pol. Acad. Sci. Math. 49 (2) (2001) 191–202.
[14] K. Horbacz, T. Szarek, Randomly connected dynamical systems on Banach spaces, Stoch. Anal. Appl. 19 (4) (2001) 519–543.
[15] M. Iosifescu, R. Theodorescu, Random Processes and Learning, Springer-Verlag, New York, 1969.
[16] S. Karlin, Some random walks arising in learning models, Pacific J. Math. 3 (1953) 725–756.
[17] J.B. Keller, Stochastic equations and wave propagation in random media, Proc. Sympos. Appl. Math. 16 (1964) 1456–1470.
[18] M. Kuczma, B. Choczewski, R. Ger, Iterative Functional Equations, Cambridge Univ. Press, New York, 1990.
[19] T. Kudo, I. Ohba, Derivation of relativistic wave equation from the Poisson process, Pramana—J. Phys. 59 (2002) 413–416.
[20] A. Lasota, M.C. Mackey, Cell division and the stability of cellular population, J. Math. Biol. 38 (1999) 241–261.
[21] A. Lasota, T. Szarek, Dimension of measures invariant with respect to Ważewska partial differential equations, J. Differential Equations 196 (2) (2004) 448–465.
[22] A. Lasota, J. Traple, Invariant measures related with Poisson driven stochastic differential equation, Stochastic Process. Appl. 106 (1) (2003) 81–93.
[23] A. Lasota, J.A. Yorke, Lower bound technique for Markov operators and iterated function systems, Random Comput. Dyn. 2 (1994) 41–77.
[24] T. Lipniacki, P. Paszek, A. Marciniak-Czochra, A.R. Brasier, M. Kimmel, Transcriptional stochasticity in gene expression, J. Theoret. Biol. 238 (2006) 348–367.
[25] S. Meyn, R. Tweedie, Markov Chains and Stochastic Stability, Springer-Verlag, Berlin, 1993.
[26] M.A. Pinsky, Lectures on Random Evolution, World Scientific, 1991.
[27] G. Prodi, Teoremi Ergodici per le Equazioni della Idrodinamica, C.I.M.E., Roma, 1960.
[28] T. Szarek, The stability of Markov operators on Polish spaces, Studia Math. 143 (2000) 145–152.
[29] T. Szarek, Invariant measures for nonexpansive Markov operators on Polish spaces, Dissertationes Math. 415 (2003).
[30] T. Szarek, Feller processes on non-locally compact spaces, Ann. Probab. 34 (5) (2006).
[31] T. Szarek, S. Wedrychowicz, Markov semigroups generated by Poisson driven differential equation, Nonlinear Anal. 50 (2002) 41–54.
[32] J. Traple, Markov semigroup generated by Poisson driven differential equations, Bull. Pol. Acad. Sci. Math. 44 (1996) 161–182.
[33] J. Tyrcha, Asymptotic stability in a generalized probabilistic/deterministic model of the cell cycle, J. Math. Biol. 26 (1988) 465–475.
[34] J.J. Tyson, K.B. Hannsgen, Cell growth and division: a deterministic/probabilistic model of the cell cycle, J. Math. Biol. 23 (1986) 231–246.
[35] I. Werner, Contractive Markov system, J. Lond. Math. Soc. (2) 71 (2005) 236–258.