Mathematics and Computers in Simulation 30 (1988) 231-242

EVOLUTION OF THE nTH PROBABILITY DENSITY AND ENTROPY FUNCTION IN STOCHASTIC SYSTEMS

Riccardo RIGANTI
Dipartimento di Matematica, Politecnico di Torino, 10129 Torino, Italy
The time evolution of dynamical systems with random initial conditions is considered, by deriving the nth order probability density of the stochastic process which describes the response of the system, and the entropy function related to this distribution. A constructive theorem is proved, which enables the explicit calculation of the nth order probability density in terms of the statistics of the initial conditions. Some monotonicity properties of the entropy are derived, and the results are applied in two examples. The same analysis can be applied to the study of the probabilistic response of dynamical systems with constant random parameters and deterministic initial conditions.
1. Introduction

Dynamical systems with random initial conditions are encountered in many fields of scientific interest, such as statistical mechanics, kinetic theory, biological sciences and various engineering applications. They are described by ordinary, deterministic differential equations for the state variable x, with initial conditions x0 which are known in probabilistic terms by means of a well-defined probability distribution P0(x0) of the state variable at the initial state. We will assume that the solution of the evolution equation exists uniquely in some time interval T, for every realization of the initial condition x0, and that the evolution map x = φt x0 has been determined in some exact or approximate form by application of known methods for the treatment of stochastic differential equations, see for instance [3,5]. The dynamical response of the system is then represented by a stochastic process x(ω, t), t ∈ T, ω ∈ Ω, defined in T and in a suitable probability space (Ω, F, P). The statistical measures of such a response are often confined to the evaluation of the expectation, of the variance and of the density distribution of x(ω, t) at a given time t. Nevertheless, these may not be sufficient to describe the behaviour of a stochastic system. In fact, in many cases an adequate description of the response requires the knowledge of the statistics of x(ω, t) at discrete times t1, t2, ..., tn ∈ T, which implies the determination of the nth order probability distributions of the stochastic process. This paper deals with the explicit determination of the said nth order probability density of the response x(ω, t), in terms of the assumed statistical properties of the initial conditions x0(ω).
The degree of uncertainty associated with such a statistical description of the solution process is also evaluated by deriving the evolution in time of the Shannon entropy function related to the nth order distribution, and some monotonicity properties of the entropy will be derived by studying the evolution of the Jacobian of the transformation map φt.

0378-4754/88/$3.50 © 1988, Elsevier Science Publishers B.V. (North-Holland)
The analysis developed here is mainly concerned with scalar stochastic processes, but it can easily be extended to the study of the nth order statistics of multi-dimensional dynamic responses. The results are illustrated through simple examples, and can also be applied to the study of the response of stochastic systems with deterministic initial conditions, described by ordinary differential equations in which the randomness enters through a random parameter with known probability density, constant with respect to time.
2. Computation of the nth probability density

Consider a dynamical system whose evolution is described by the stochastic process

    x(ω, t) = φt x0 = h[x0(ω), t]: T × D0 → Dx ⊂ R,  t ∈ T ⊂ R+,  ω ∈ (Ω, F, P)    (1)
where h is a known, deterministic function of time t and of the random initial condition x0(ω) = x(ω, t = 0): Ω → D0 ⊂ R, joined to a known probability density P0(x0). Let [t1, t2, ..., tn], n = 1, 2, ..., be a finite set of fixed values of t ∈ T, and let

    xi(ω) = x(ω, ti) = h(x0(ω), ti),  i = 1, 2, ..., n    (2)
be the sequence of random variables, defined by the map φt, characterizing the properties of the stochastic process (1) at the fixed values t1, t2, ..., tn of t. As known from the theory of stochastic processes, the nth order statistics of x(ω, t) are completely specified by the joint probability density Pn(x1, ..., xn; t1, ..., tn) of the sequence (2), which is called the nth order probability density of x(ω, t). The random variables (2) are completely correlated, in the sense that a given realization of the initial condition x0(ω) completely defines, at each fixed time ti, the whole set of xi(ω) through the relations

    xi = φti(x0) = h(x0, t = ti),  i = 1, 2, ..., n.    (3)
These relations can be regarded as the parametric equations of a line γ of R^n, and define a set of conditions which must be satisfied for every set [ti] ∈ T by the joint probability density Pn(x1, ..., xn; t1, ..., tn) of the sequence (2). It means that Pn will be defined on the line γ of parametric equations (3), with x0 ∈ D0 as a random parameter. The evaluation of the said nth order distribution, in terms of the probability density P0(x0) of the initial state, can be obtained, under suitable conditions to be satisfied by the functions h and P0, by applying the theorems which follow.

Theorem 2.1. If the initial condition x0(ω) is a continuous random variable, and the functions hi(x0; ti) are continuous together with their first derivatives with respect to each realization of x0(ω) ∈ D0, then the nth probability density of the process (1) is given by

    Pn(x1, ..., xn; t1, ..., tn) = P0(x0) [Σ_{i=1}^n (dhi(x0; ti)/dx0)²]^(-1/2),    (4)
    xi = hi(x0; ti),  i = 1, 2, ..., n.
Proof. Let f(x1, ..., xn) be an arbitrary function of the random variables (2), continuous with respect to its arguments. Owing to (3), it is a composite function of x0(ω), continuous in D0, and its first-order moment at a given set [ti] of the time t ∈ T can be calculated as

    E[f](t1, ..., tn) = ∫_{D0} f(h1(x0; t1), ..., hn(x0; tn)) P0(x0) dx0.    (5)
On the other hand, the definition of E[f] yields

    E[f](t1, ..., tn) = ∫...∫ f(x1, ..., xn) Pn(x1, ..., xn; t1, ..., tn) dx1 ... dxn
                      = ∫_γ f(x1, ..., xn) Pn(x1, ..., xn; t1, ..., tn) ds    (6)

where the line γ is defined by the relations (3) and

    ds = dx0 [(dh1/dx0)² + ... + (dhn/dx0)²]^(1/2).
By changing the integration variable, equation (6) becomes

    E[f](t1, ..., tn) = ∫_{D0} f(h1(x0; t1), ..., hn(x0; tn)) Pn(h1(x0; t1), ..., hn(x0; tn); t1, ..., tn) [Σ_{i=1}^n (dhi(x0; ti)/dx0)²]^(1/2) dx0

and equating with the right member of (5) yields

    ∫_{D0} f(h1(x0; t1), ..., hn(x0; tn)) ρ(x0; t1, ..., tn) dx0 = 0    (7)

with

    ρ = P0(x0) − Pn(h1(x0; t1), ..., hn(x0; tn); t1, ..., tn) [Σ_{i=1}^n (dhi(x0; ti)/dx0)²]^(1/2).    (8)
Since the function f is arbitrary and ρ is continuous with respect to x0 as a consequence of the assumptions made for x0(ω) and hi(x0; ti), it follows from (7) that ρ(x0; t1, ..., tn) must be identically null in D0 for every set [t1, ..., tn] of t ∈ T. Taking into account the expression (8) of the function ρ, equation (4) follows, and the theorem is proved. □

The above theorem supplies the procedure for the calculation of Pn. Namely, given a finite number of points x0k of the domain D0, application of (4) yields the corresponding points on the line γ and the values which are assumed on γ by the probability density Pn. The explicit expression of Pn can be determined under the additional condition that the evolution x = φt x0 defines a one-to-one mapping. This is shown by the following theorem.
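As an aside, the pointwise procedure just described can be sketched numerically in a few lines. The sketch below is illustrative only: the map h (a simple linear evolution), the uniform density P0 and all numerical values are assumptions chosen for the example, not part of the theorem.

```python
import numpy as np

# Pointwise calculation of Pn via Theorem 2.1, eq. (4): given realizations
# x0 of the initial state, build the corresponding points on the line gamma
# and the values taken there by the nth-order density.
c = 0.5                                              # illustrative constant
def h(x0, t):                                        # illustrative linear map
    return x0 * (1.0 + c * t)

def dh_dx0(x0, t):                                   # its derivative w.r.t. x0
    return (1.0 + c * t) * np.ones_like(x0)

def P0(x0):                                          # illustrative P0: uniform on [0, 1]
    return np.where((x0 >= 0.0) & (x0 <= 1.0), 1.0, 0.0)

def Pn_on_gamma(x0, times):
    """Points on gamma and the values of the density (4) at those points."""
    x0 = np.asarray(x0, dtype=float)
    pts = np.array([h(x0, t) for t in times])        # x_i = h_i(x0; t_i)
    S = sum(dh_dx0(x0, t) ** 2 for t in times)       # sum_i (dh_i/dx0)^2
    return pts, P0(x0) * S ** (-0.5)                 # eq. (4)

pts, Pn = Pn_on_gamma([0.2, 0.5, 0.8], [1.0, 2.0, 3.0])
```

For this map the factor [Σ (dhi/dx0)²]^(-1/2) is independent of x0, so Pn is constant along γ wherever P0 is.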
Theorem 2.2. If, in addition to the conditions of Theorem 2.1, the functions hi(x0; ti) are also invertible with respect to each realization of x0(ω), then the nth probability density of process (1) is given by

    Pn(x1, ..., xn; t1, ..., tn) = P0(x0 = h^{-1}(x1; t1)) |dh^{-1}(x1; t1)/dx1| [1 + Σ_{j=2}^n (dgj/dx1)²]^(-1/2)    (9)

on the line γ which is defined by

    xj = hj(h^{-1}(x1; t1); tj) = gj(x1; t1, tj),  j = 2, 3, ..., n    (10)
whereas Pn = 0 for all the values of (x1, ..., xn) which do not satisfy equation (10).

Proof. Inversion of the map φt at times ti,

    x0 = h1^{-1}(x1; t1) = h2^{-1}(x2; t2) = ... = hn^{-1}(xn; tn)    (11)

and substitution of the first equation into (3) yield the (n − 1) relations between x1 and xj which are given by (10). Moreover, owing to the existence of the inverse functions hi^{-1}(xi; ti) and using relations (10), we have

    [Σ_{i=1}^n (dhi/dx0)²]^(-1/2) = |dh1^{-1}(x1; t1)/dx1| [1 + Σ_{j=2}^n (dgj/dx1)²]^(-1/2).

Substituting into the first of equations (4) yields the following expression of the density:

    Pn(x1, ..., xn; t1, ..., tn) = P0(x0 = h^{-1}(x1; t1)) |dh^{-1}(x1; t1)/dx1| [1 + Σ_{j=2}^n (dgj/dx1)²]^(-1/2)

which, considering the equivalence between (11) and (10), proves the theorem. □
Remark. Equation (9) shows that the nth order probability density of the process x(ω, t) can be computed from the knowledge of its first-order density at a given time t = t1. In fact, for n = 1, (9) is reduced to the expression of the first-order probability density

    P1(x1; t1) = P0(x0 = h^{-1}(x1; t1)) |dh^{-1}(x1; t1)/dx1|
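This first-order formula is easy to validate by simulation. The sketch below uses an illustrative map and initial density (neither taken from the examples of this paper): h(x0, t) = x0 exp(t) with x0 uniform on [0, 1], so that h^{-1}(x; t) = x exp(−t) and the predicted density is the constant exp(−t) on [0, exp(t)].

```python
import numpy as np

# Monte Carlo check of P1(x1; t1) = P0(h^{-1}(x1; t1)) |dh^{-1}/dx1|
# for the illustrative map h(x0, t) = x0*exp(t), x0 ~ U[0, 1].
rng = np.random.default_rng(1)
t = 0.7
x0 = rng.random(200_000)                      # samples from P0 = U[0, 1]
x = x0 * np.exp(t)                            # transported samples x = h(x0, t)

# Histogram estimate of the first-order density versus the prediction:
# P1(x; t) = P0(x*exp(-t)) * exp(-t) = exp(-t) on the support [0, exp(t)].
hist, edges = np.histogram(x, bins=50, range=(0.0, np.exp(t)), density=True)
P1_pred = np.exp(-t)
```

The histogram values should scatter around exp(−0.7) ≈ 0.497 with only sampling noise.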
which represents a well-known result in the theory of stochastic processes. Therefore the same equation may be rewritten as

    Pn(x1, ..., xn; t1, ..., tn) = P1(x1; t1) [1 + Σ_{j=2}^n (dgj/dx1)²]^(-1/2),  xj = gj(x1; t1, tj)

which shows that the nth order statistics of x(ω, t) can be obtained in terms of P1 and of the evolution map φt. The extension of the above results to multi-dimensional processes x(ω, t) = φt x0 ∈ Dx ⊂ R^m is a matter of more extended calculations, and it can be substantially performed by replacing in the above analysis the derivatives of the functions hi^{-1}(xi; ti) by the Jacobians

    J^{-1}(xi; ti) = det[∂φti^{-1}/∂xi]

of the inverse map x0 = φt^{-1} x, calculated at times t1, t2, ..., tn.

Example 2.3. In the motion of an ensemble of particles, each particle has a constant velocity, proportional to its initial displacement x0. The field of the particle displacements may be described by the stochastic process

    x(ω, t) = h(x0(ω), t) = x0(ω)(1 + ct),  t ∈ [0, +∞)    (12)
where x0(ω) ∈ D0 defines the distribution of the initial displacements, by means of a given probability density P0(x0), and c is a positive deterministic constant. Inversion of the map x0 → x at times t1, ..., tn yields

    x0 = hi^{-1}(xi; ti) = xi/(1 + cti),  i = 1, 2, ..., n,    (13)
    xj = gj(x1; t1, tj) = x1(1 + ctj)/(1 + ct1),  j = 2, 3, ..., n    (14)

and consequently

    Pn(x1, ..., xn; t1, ..., tn) = P0(x0 = x1/(1 + ct1)) (1 + ct1)^{-1} [1 + Σ_{j=2}^n ((1 + ctj)/(1 + ct1))²]^(-1/2)
                                 = P0(x0 = x1/(1 + ct1)) [Σ_{i=1}^n (1 + cti)²]^(-1/2)    (15)
on the line γ of R^n which is defined by the (n − 1) relations (14).

Example 2.4. Consider the population dynamics model described by the Malthus-Verhulst differential equation

    dx/dt = ax − bx²,  t ∈ [0, +∞)    (16)

with a > 0 and b ∈ R deterministic parameters. If the initial conditions are random, the solution of (16) is the stochastic process

    x(ω, t) = h(x0(ω), t) = ax0/(a exp(−at) + bx0(1 − exp(−at)))    (17)

where x0(ω) = x(ω, t = 0) ∈ D0 is a random variable with known probability density P0(x0).
Given a set [ti], i = 1, 2, ..., n, of t ∈ R+ and assuming that the domain D0 does not include the value

    x0* = −a exp(−ati)/(b(1 − exp(−ati))),

the functions hi(x0; ti) = h(x0, t = ti) are continuous and invertible with respect to x0:

    x0 = hi^{-1}(xi; ti) = axi exp(−ati)/(a + bxi(exp(−ati) − 1)),  i = 1, 2, ..., n,    (18)

and the random variables xi(ω) = hi(x0(ω), ti) are completely correlated through the relations

    xj = gj(x1; t1, tj) = ax1 exp(−at1)/(a exp(−atj) + bx1(exp(−at1) − exp(−atj))),  j = 2, 3, ..., n.    (19)
Then the nth order density of x(ω, t) is calculated by differentiating the functions h1^{-1}(x1; t1) and gj(x1; t1, tj) with respect to x1 and by making use of Theorem 2.2, with the result

    Pn(x1, ..., xn; t1, ..., tn) = P0(x0 = h1^{-1}(x1; t1)) · a² exp(−at1)/(a + bx1(exp(−at1) − 1))²
        · [1 + a⁴ Σ_{j=2}^n exp(−2a(t1 + tj))/(a exp(−atj) + bx1(exp(−at1) − exp(−atj)))⁴]^(-1/2)    (20)

for all the values of (x1, ..., xn) satisfying (19), whereas Pn = 0 elsewhere. The time evolution of the first- and second-order densities calculated from (20) is shown in Figs. 1 and 2, at some values of times t and (t1, t2), respectively, by assuming a = 1, b = 0.2 and x0(ω) distributed in D0 = [0, 1] with probability density

    P0(x0) = 140 x0³(1 − x0)³.    (21)

Fig. 1. First-order probability densities of the stochastic process (17), for b = 0.2 and b = 1.

Fig. 2. Second-order probability densities of the process (17), for b = 0.2, at t1 = 1 and at some values of t2.
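The first-order case of (20) can be checked numerically: the transported density must still integrate to one over the admissible range of x. A sketch, with a, b and P0 as in the text; the time t = 1 and the grid resolution are illustrative choices:

```python
import numpy as np

# First-order density of the process (17), i.e. (20) with n = 1:
# P1(x; t) = P0(h^{-1}(x; t)) * a^2 exp(-a t)/(a + b x (exp(-a t) - 1))^2,
# with a = 1, b = 0.2 and the initial density (21) on D0 = [0, 1].
a, b, t = 1.0, 0.2, 1.0

def P0(x0):
    return 140.0 * x0**3 * (1.0 - x0)**3            # density (21)

def P1(x, t):
    e = np.exp(-a * t)
    x0 = a * x * e / (a + b * x * (e - 1.0))        # inverse map (18)
    return P0(x0) * a**2 * e / (a + b * x * (e - 1.0))**2

x_max = a / (a * np.exp(-a * t) + b * (1.0 - np.exp(-a * t)))   # h(1, t)
x = np.linspace(0.0, x_max, 4001)
f = P1(x, t)
mass = np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(x))  # trapezoidal integral
```

The total mass over [0, h(1, t)] should equal one to within quadrature error, confirming that (20) is a properly normalized density on its support.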
In Fig. 1, the comparison of the results with b = 0.2 and b = 1 shows that the probabilistic evolution of the response is qualitatively different in the two cases. The measure of Dx, the domain of all admissible values of x(ω, t), increases with time when b = 0.2, and it decreases when b = 1; nevertheless, this does not necessarily imply an analogous behaviour of the dispersion of the distributions, or measure of the 'lack of information' about the system. This remark suggests that a deeper knowledge of the probabilistic nature of the response can be gained through the analysis of the entropy functions of the above distributions, which is developed in the next section.

3. Entropy of the nth distributions

Following Shannon [4], the degree of uncertainty associated with the stochastic process x(ω, t) at times t1, t2, ..., tn may be measured by the entropy function

    H(t1, ..., tn) ≜ −∫...∫ Pn(x1, ..., xn; t1, ..., tn) ln Pn(x1, ..., xn; t1, ..., tn) dx1 ... dxn.    (22)
If x1, ..., xn depend on the same random variable x0(ω), as occurs for the response we are considering, and if the hypotheses of Theorem 2.1 are satisfied, then using expression (4) of Pn we obtain

    H(t1, ..., tn) = H0 + (1/2) ∫_{D0} ln[Σ_{i=1}^n (dhi(x0; ti)/dx0)²] P0(x0) dx0    (23)

where

    H0 ≜ −∫_{D0} P0(x0) ln P0(x0) dx0    (24)

is the entropy of the initial state x0(ω) of the system, and hi(x0; ti) are defined by the evolution map φt calculated at t = ti. Thus, the entropy associated with the nth order probability density of the response can be calculated in terms of the statistics of the initial condition x0(ω), and the map φt defines its evolution at a given set of times [t1, ..., tn]. According to the discussion in [6], the entropy (22) is a real-valued function of the n independent variables t1, ..., tn, in general non-monotonic, which may be negative, and it goes to −∞ in the limiting case when Pn is a multi-dimensional delta function (that is, if the response is deterministic). For any given realization of x0, the derivatives in the integral of (23) are ordinary functions of the time independent variables ti:
    ∀x0 ∈ D0:  dhi(x0, ti)/dx0 = χi(ti; x0),  i = 1, 2, ..., n.
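Formula (23) lends itself to a direct Monte Carlo check: sample x0 from P0 and average (1/2) ln Σ χi². The sketch below uses the linear map of Example 2.3, for which the χi do not depend on x0, so the estimate must reproduce a closed form exactly; c, the times and the sampling density are illustrative choices.

```python
import numpy as np

# Monte Carlo evaluation of the entropy increment H(t1,...,tn) - H0 of
# eq. (23). For h(x0, t) = x0*(1 + c*t) the derivatives chi_i = 1 + c*t_i
# are constant in x0, so the estimate equals (1/2) ln sum_i (1 + c*t_i)^2
# regardless of the sampling density P0.
rng = np.random.default_rng(0)
c, times = 0.3, np.array([1.0, 2.0, 5.0])

x0 = rng.beta(4.0, 4.0, size=10_000)          # samples from an illustrative P0
chi = np.array([(1.0 + c * t) * np.ones_like(x0) for t in times])
increment_mc = 0.5 * np.mean(np.log(np.sum(chi**2, axis=0)))   # eq. (23), MC
increment_exact = 0.5 * np.log(np.sum((1.0 + c * times) ** 2))
```

For a map whose derivative does depend on x0, such as (17), the same estimator applies with χi evaluated at the sampled realizations.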
Assuming that the χi are differentiable with respect to ti, and that all the derivatives dχi/dti do not vanish simultaneously, the partial derivatives of H(t1, ..., tn) are given by

    ∂H/∂ti = ∫_{D0} [Σ_{j=1}^n χj²]^{-1} χi (dχi/dti) P0(x0) dx0

and consequently

    ∀x0 ∈ D0, ti > 0:  χi(ti; x0) dχi/dti ≥ 0 (≤ 0)  ⟹  ∂H/∂ti ≥ 0 (≤ 0).    (25)
Equation (25) states a sufficient condition in order that the entropy function (23) be monotonically increasing (or decreasing) with respect to the n independent time variables ti ∈ T. In particular, by setting n = 1, (23) reduces to

    H(t) = H0 + ∫_{D0} ln |dh(x0; t)/dx0| P0(x0) dx0    (26)
and defines the time evolution of the entropy associated with the first order probability distribution of the stochastic process (1). Analogously, the entropy H(t) for a multi-dimensional process x(ω, t) = φt x0(ω) ∈ Dx ⊂ R^m will be given by

    H(t) = H0 + ∫_{D0} ln |J(x0; t)| P0(x0) dx0,  H0 ≜ −∫_{D0} P0(x0) ln P0(x0) dx0    (27)

where

    J(x0; t) = det[∂(φt x0)/∂x0]    (28)

is the (m × m) Jacobian of the direct map x0 → x. Therefore, within the class of dynamic systems which is considered here, the time evolution of the Jacobian (28) determines the behaviour of the entropy function (27). Namely, the condition

    ∀x0 ∈ D0, t > 0:  (dJ(t; x0)/dt)/J(t; x0) ≥ 0 (≤ 0)    (29)

implies a monotonic behaviour (increase or decrease, respectively) of the entropy function H(t) for every t > 0. In this connection, a useful approach to verify the above condition consists in considering the Jacobian of the inverse map φt^{-1} (if it exists),

    J^{-1}(x; t) = det[∂(φt^{-1} x)/∂x] = 1/J(x0; t),  x0 = φt^{-1} x,

and solving its evolution equation, see for instance [1,2]. Let us apply the above results to determine the entropy functions of the probability distributions derived in the examples of the preceding section.
Example 3.1. The time evolution of the entropy associated with the process (12) is obtained by applying (26):

    H(t) = H0 + ln(1 + ct)

and the entropy of the nth order probability density (15) at the times t1, ..., tn is deduced from (23):

    H(t1, ..., tn) = H0 + (1/2) ln Σ_{i=1}^n (1 + cti)².

Since c > 0, both the entropy functions above are monotonically increasing with respect to their independent time variables. This may be verified by considering that the quantity (dχi/dti)/χi = c/(1 + cti), which is independent of x0, is positive for every ti ∈ T; since χi = 1 + cti is positive as well, the conditions (25) and (29) are satisfied.

Example 3.2. In the population dynamics model described by the process (17), the entropy function H(t) at the time t is obtained by inserting the derivative
    dh/dx0 = a² exp(−at)/(a exp(−at) + bx0(1 − exp(−at)))²

into (26), with the following result:

    H(t) = H0 + 2 ln a + at − 2 ∫_{D0} ln |a + bx0(exp(at) − 1)| P0(x0) dx0.    (30)
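Equation (30) is straightforward to evaluate by quadrature. The sketch below uses a = 1 and the initial density (21) on D0 = [0, 1], as in Fig. 3; the grid and the probed times are illustrative. The qualitative behaviour checked is the one described for Fig. 3: for b in (0, 1) the entropy first increases and later decreases, while for b = 1 it decreases from the start.

```python
import numpy as np

# Numerical evaluation of the entropy increment H(t) - H0 from eq. (30)
# for the logistic process (17), with a = 1 and P0 given by (21).
a = 1.0
x0 = np.linspace(0.0, 1.0, 2001)
P0 = 140.0 * x0**3 * (1.0 - x0)**3            # density (21)

def trap(f):                                  # trapezoidal rule on the x0 grid
    return np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(x0))

def H_minus_H0(t, b):                         # eq. (30) without the H0 term
    integrand = np.log(a + b * x0 * (np.exp(a * t) - 1.0)) * P0
    return 2.0 * np.log(a) + a * t - 2.0 * trap(integrand)

rise, mid, late = (H_minus_H0(t, b=0.2) for t in (0.5, 1.0, 6.0))
```

For b = 0.2 the increment is positive at small t and eventually turns down, consistent with a maximum at a finite time; for b = 1 it is already negative at t = 1.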
Analogously, the entropy function associated with the response at the times t1, ..., tn is obtained as

    H(t1, ..., tn) = H0 + (1/2) ∫_{D0} ln[a⁴ Σ_{i=1}^n exp(−2ati)/(a exp(−ati) + bx0(1 − exp(−ati)))⁴] P0(x0) dx0.    (31)
Fig. 3. Entropy function of the first order density of the process (17), for different values of the coefficient b.
Equation (30) gives the entropy of the first-order probability densities which are shown in Fig. 1. This entropy, normalized with respect to its initial value H0 (see (24)), is plotted versus t in Fig. 3 for different values of the parameter b, assuming a = 1 and P0(x0) as in (21). It shows a monotonic behaviour of the entropy for b < 0 (where dH/dt > 0) and for b ≥ 1 (where dH/dt < 0), while for b ∈ (0, 1) the entropy reaches a maximum at a finite value of t > 0. The above results may be commented as follows. When b < 0, the evolution of the response is characterized by an increasing degree of uncertainty of its dynamical states (the system "goes towards chaos") and, on the contrary, when b > 0 it evolves towards a "pure" (i.e. deterministic) state, to be reached as t → +∞. Nevertheless, this trend is not monotonic if b ∈ (0, 1), because the asymptotic deterministic state is reached after an initial increase of the uncertainty associated with the probabilistic nature of the response. Analogous considerations can be developed with regard to the entropy (31) and, in particular, to the entropy function H(t1, t2) associated with the second-order probability density of the response, which is shown in Figs. 4 and 5 for b = 0.2 and b = 1, respectively.

Concluding, let us remark that the above analysis, which aims to provide a more detailed knowledge of the probabilistic behaviour of dynamical systems with random initial conditions, may also be applied to the study of systems with deterministic initial conditions, whose evolution equation is characterized by the presence of a random parameter α(ω) with known probability density Pα(α). In this occurrence, the analysis must be developed by considering the map α → x defined by x(ω, t) = φt α = h[α(ω), t; x0], instead of the map x0 → x introduced in (1). In so doing, the results obtained above hold by replacing the initial state x0 by the random parameter α(ω).
In particular, it must be observed that the evolution of the entropy functions will now be derived with respect to the reference constant quantity

    Hα ≜ −∫_{Dα} Pα(α) ln Pα(α) dα,

which is a measure of the entropy associated with the probabilistic distribution of the known parameter α(ω).
Fig. 4. Entropy function of the second-order density of the process (17), for b = 0.2.
Fig. 5. Entropy function of the second-order density of the process (17), for b = 1.
Acknowledgment

This paper has been realized within the activities of the Italian Council for Research, G.N.F.M., under project AITM, and with partial support of M.P.I.
References

[1] N. Bellomo, E. Cafaro and G. Rizzi, On the mathematical modelling of physical systems by ordinary differential stochastic equations, Math. Comput. Simulation 26 (1984) 361-367.
[2] N. Bellomo and G. Pistone, Time-evolution of the probability density under the action of a deterministic dynamical system, J. Math. Anal. Appl. 77 (1980) 215-224.
[3] N. Bellomo and R. Riganti, Nonlinear Stochastic Systems in Physics and Mechanics (World Scientific, Singapore, 1987).
[4] C. Shannon and W. Weaver, The Mathematical Theory of Communication (University of Illinois Press, Urbana, 1949).
[5] T.T. Soong, Random Differential Equations in Science and Engineering (Academic Press, New York, 1973).
[6] A. Wehrl, General properties of entropy, Rev. Modern Phys. 50 (1978) 221-260.