Mathematics and Computers in Simulation XXIV (1982) 507-514, North-Holland Publishing Company

STOCHASTIC VERSUS DETERMINISTIC

G.S. LADDE, Department of Mathematics, University of Texas at Arlington, Arlington, TX 76019, U.S.A.*
M. SAMBANDHAM, School of Mathematics, Georgia Institute of Technology, Atlanta, GA 30332, U.S.A.**

One of the fundamental problems in the stochastic modeling of dynamic systems is to what extent stochastic analysis differs from the corresponding deterministic analysis. This work focuses attention on finding estimates of the deviation between the stochastic solutions and the corresponding solution of the mean of the dynamic system. This is analyzed in the context of random polynomials and differential equations. Certain analytic and computational studies are made with regard to random differential equations and algebraic polynomials.
1. INTRODUCTION
A system of differential equations, as a dynamic model of a system of n interacting elements, is often based on underlying assumptions about the dynamic laws of the system. These dynamic laws can be decomposed into two categories, namely, (i) deterministic dynamic laws and (ii) stochastic dynamic laws. Deterministic dynamic laws that describe a system determine the evolution of the system completely. On the other hand, stochastic dynamic laws that describe a system of n interacting elements do not determine the evolution of the system with certainty. In the mathematical modeling of dynamic processes, one can approximate a stochastic mathematical model by a deterministic mathematical model. This can be achieved by approximating the stochastic dynamic laws by corresponding deterministic dynamic laws. This kind of approximation leads us to the problem of determining to what extent the deterministic mathematical model differs from the corresponding stochastic model. Problems of this nature have been investigated for stochastic differential equations [7], for stochastic boundary value problems [1,2], for integral equations [3,4] and for random polynomials [5]. In this article we analyze, analytically and numerically, to what extent the study of random polynomials and stochastic differential systems differs from the study of the corresponding deterministic problems.

We organize our article as follows. In Section 2 we discuss a few analytical results for random polynomials and stochastic differential systems. In Section 3 we present a few numerical results for random polynomials and random differential equations. Section 4 contains some of our observations.
The numerical examples and figures in Section 3 are obtained by using subroutines contained in the IMSL scientific library [8] and the CALCOMP software.

2. ANALYTIC RESULTS
2.1 Random Algebraic Polynomials

The mathematical study of the dynamics of n species represented by elementary differential equations reduces to the study of algebraic polynomials. Therefore, in this section, we propose to present some mathematical results concerning a random algebraic polynomial and its corresponding mean deterministic polynomial. Consider a random polynomial of degree n of the form

(2.1.1)   F_n(z,ω) = a_0(ω) + a_1(ω)z + ... + a_n(ω)z^n,

where z ∈ C, and C stands for the set of complex numbers; the coefficients a_i(ω), for i = 0,1,...,n, are complex-valued random variables defined on a complete probability space (Ω,A,P), with a_n(ω) ≠ 0 with probability one. Any z ∈ C that satisfies F_n(z,ω) = 0 is called a zero of (2.1.1). We note that the zeros of the polynomial are the solutions of the corresponding algebraic equation F_n(z,ω) = 0. Christensen and Bharucha-Reid [5] obtained analytical and numerical estimates for the difference between the sample roots of (2.1.1) and the roots of the average of (2.1.1), whenever the samples are independent and the coefficients are independent, identically distributed and symmetric about their means. We find here similar estimates when the random coefficients a_i are dependent random variables.

For each a = (a_0,a_1,...,a_n) in C^{n+1}, let ζ_1(a),...,ζ_n(a) denote the roots of F_n(z,ω). Further we assume that E|a| < ∞ and Var(a) < ∞.
If N independent samples of a are taken, say a^(j), j = 1,2,...,N, we see that the values of the roots ζ_i^j = ζ_i(a^(j)), for i = 1,2,...,n and j = 1,2,...,N, will be independent for fixed i and variable j, and dependent for fixed j and variable i. We note that

(2.1.2)   |ζ_i| ≤ 1 + Σ_{i=0}^{n-1} |a_i/a_n|.

We assume that the a_i are such that

(2.1.3)   E|ζ| = E|ζ_1| ≤ E(1 + |a_0/a_n| + ... + |a_{n-1}/a_n|) < ∞

and

(2.1.4)   Var(ζ^j) = Var(ζ_1) ≤ E|ζ_1|^2 ≤ E(1 + Σ_{i=0}^{n-1} |a_i/a_n|)^2 < ∞.

Then

(2.1.5)   (1/N) Σ_{j=1}^{N} ζ^j → E(ζ_1) with probability 1

by the strong law of large numbers, and, by the Lindeberg-Feller central limit theorem,

(2.1.6)   Σ_{j=1}^{N} (ζ_k^j - E(ζ_k^j)) / √(N Var(ζ_k^1)) → N(0,1)

in distribution for every fixed k.

The strong law of large numbers (2.1.5) gives a method of approximating the mean of the solution of a random equation. The error made in this kind of approximation is exhibited by the central limit property (2.1.6). In view of this, we would like to determine E(ζ_i(a)) and compare it to ζ_i(E(a)). In the following we denote ζ_1 by ζ; that is, we want to estimate

(2.1.7)   |E(ζ(a)) - ζ(E(a))|.

To estimate (2.1.7), we assume ζ is analytic on H_R, where H_R is a suitably large polydisc with center at E(a),

(2.1.8)   H_R = {z ∈ C^n : |z_i - E(a_i)| < R, i = 0,1,...,n-1}.

Define

(2.1.9)   H_A = {z ∈ C^n : |z_i - E(a_i)| < A, i = 0,1,...,n-1},   0 < A < R.

For a multi-index k = (k_0,k_1,...,k_{n-1}) we write |k| = Σ_{i=0}^{n-1} k_i, k! = k_0! k_1! ... k_{n-1}!, a^k = a_0^{k_0} a_1^{k_1} ... a_{n-1}^{k_{n-1}} and D^k = ∂^{|k|}/(∂a_0^{k_0} ∂a_1^{k_1} ... ∂a_{n-1}^{k_{n-1}}). Using the Taylor expansion we get

ζ(z) = Σ_k (1/k!) D^k ζ(E(a)) (z - E(a))^k,

and we set

(2.1.10)   |X| = |(z_0 - E(a_0))(z_1 - E(a_1)) ... (z_{n-1} - E(a_{n-1}))|.

Let I_A and I_A' denote the indicator functions of H_A and of its complement H_A', for 0 < A < R. Then we can write

(2.1.11)   E(ζ(a)) = E(I_A ζ(a)) + E(I_A' ζ(a)).

Using (2.1.10) and (2.1.11), we estimate (2.1.7) in the following theorem. We note that the terms of the sequence {x_i} = {z_i - E(a_i)} are dependent random variables.

Theorem 2.1.1. Let ζ_1(a), ζ_2(a),...,ζ_n(a) be the n roots of the random polynomial (2.1.1). Suppose that the dependent random variables a_i are such that (2.1.3) and (2.1.4) are satisfied. Assume that ζ is analytic on H_R, the polydisc (2.1.8) centered at E(a), and let 0 < A < R. Then

(2.1.12)   |E(ζ(a)) - ζ(E(a))| ≤ E_R(A),

where

E_R(A) = E(I_A' max_{0≤p≤n-1} |a_p|) + E(I_A') |1 - ζ(E(a))| + M Σ_{k=1}^{∞} E(I_A (|X|/R)^k)

and

M ≤ 1 + max_{0≤p≤n-1} (R + E|a_p|).

Proof: Since H_A ⊂ H_R and ζ(z) is analytic, applying the Taylor series we have

(2.1.13)   E(I_A ζ(a)) = Σ_k (1/k!) D^k ζ(E(a)) E(I_A X^k) = ζ(E(a)) E(I_A) + Σ_{|k|≥1} (1/k!) D^k ζ(E(a)) E(I_A X^k).

Using the Cauchy estimate, namely

(2.1.14)   |D^k ζ(E(a))| ≤ M k! R^{-|k|},

in (2.1.13) we get

(2.1.15)   E(I_A ζ(a)) ≤ ζ(E(a)) E(I_A) + M Σ_{k=1}^{∞} E(I_A (|X|/R)^k).

Now from (2.1.11), (2.1.13) and (2.1.15), and using the estimate

(2.1.16)   |ζ(a)| ≤ 1 + max_p |a_p|,

we obtain

|E(ζ(a)) - ζ(E(a))| ≤ E(I_A' max_p |a_p|) + E(I_A') |1 - ζ(E(a))| + M Σ_{k=1}^{∞} E(I_A (|X|/R)^k),

where M ≤ 1 + max_p (R + E|a_p|) and |X| ≤ |(z_0 - E(a_0))(z_1 - E(a_1)) ... (z_{n-1} - E(a_{n-1}))|.

Corollary 2.1.1. If in Theorem 2.1.1 the a_i are independent, identically distributed and symmetric, then the x_i are independent and

(2.1.18)   |E(ζ(a)) - ζ(E(a))| ≤ E_R(A),

where

E_R(A) = 2 max_i E(I_A' |a_i|) + E(I_A') |1 - ζ(E(a))| + M((I_2 + I_3)^n - I_2^n),

with I_2 = E(I_A/(1 - t^2)), I_3 = E(I_A t^2/(1 - t^2)) and t = |z_i - E(a_i)|/R. This corollary is essentially due to Christensen and Bharucha-Reid [5].
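The strong-law approximation (2.1.5) is easy to demonstrate numerically. The following NumPy sketch is only illustrative (it is not the authors' IMSL code): the degree-2 mean polynomial z^2 - 1, the noise level sigma and the sample size are our own assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def average_roots(mean_coeffs, sigma, n_samples):
    """Monte Carlo estimate of E(zeta_i(a)) in the sense of (2.1.5):
    draw coefficient samples, extract the roots of each sample
    polynomial, and average them index-by-index after sorting."""
    total = np.zeros(len(mean_coeffs) - 1, dtype=complex)
    for _ in range(n_samples):
        a = mean_coeffs + sigma * rng.standard_normal(len(mean_coeffs))
        total += np.sort_complex(np.roots(a))
    return total / n_samples

# Mean polynomial z^2 - 1 (numpy orders coefficients highest degree first).
mean_coeffs = np.array([1.0, 0.0, -1.0])
est = average_roots(mean_coeffs, sigma=0.05, n_samples=2000)
exact = np.sort_complex(np.roots(mean_coeffs))  # roots of the mean polynomial
# Theorem 2.1.1 bounds |E(zeta(a)) - zeta(E(a))|; here the gap is small.
print(np.abs(est - exact).max())
```

Sorting the complex roots before averaging keeps each root index matched to the same branch across samples; for roots that collide or cross, a more careful pairing would be needed.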
2.2 Random Differential Equations

In this section we present some results with regard to random differential equations; these results will appear in [7]. For our discussion we consider the initial value problem

(2.2.1)   y'(t,ω) = F(t,y(t,ω),ω),   y(t_0,ω) = y_0(ω),

and the corresponding mean system of differential equations

(2.2.2)   m'(t) = f(t,m(t)),   m(t_0) = m_0 = E(y_0(ω)),

where f(t,z) = E[F(t,z,ω)]. From (2.2.1) and (2.2.2) we have

(2.2.3)   y'(t,ω) = f(t,y(t,ω)) + R(t,y(t,ω),ω),   y(t_0,ω) = y_0(ω),

where R(t,y(t,ω),ω) = F(t,y(t,ω),ω) - E[F(t,y(t,ω),ω)]. In our presentation we will also be using the initial value problem

(2.2.4)   x' = f(t,x),   x(t_0,ω) = y_0(ω) = x_0(ω),   t ≥ t_0.

Hereafter, the notations and definitions are adopted from [6]. Without further mention we assume that all inequalities and relations involving random quantities are valid with probability 1 (w.p. 1). Now we assume the following hypotheses:

(H1) f ∈ C[R_+ × R^n, R^n]; f_x exists for fixed t ∈ R_+ and f_x ∈ D[R_+ × R^n, R^{n^2}]; R ∈ M[R_+ × R^n, R[Ω,R^n]] and R is almost surely sample continuous.

(H2) The random function F in (2.2.1) satisfies suitable regularity conditions so that the initial value problem (2.2.1) has a sample solution process existing for t ≥ t_0.

The above conditions imply that x(t,ω) = x(t,t_0,x_0(ω)) is a unique solution of (2.2.4), and further that x(t,ω) is sample continuously differentiable with respect to (t_0,x_0). Let Φ(t,t_0,x_0(ω)) be the fundamental solution of the variational system associated with (2.2.4). Further assume that

(H3) V(t,x) ∈ C[R_+ × R^n, R^m], and V_x exists and is continuous for (t,x) ∈ R_+ × R^n.

We now state the following lemmas; for details see [7].

Lemma 2.2.1. Let hypotheses (H1)-(H3) be satisfied, and let x(t,ω) = x(t,t_0,x_0(ω)) be the sample solution of (2.2.4) and y(t,ω) = y(t,t_0,y_0(ω),ω) be the sample solution of (2.2.3) for t ≥ t_0. Then

(2.2.5)   V(t,y(t,ω)) = V(t,x(t,ω)) + ∫_{t_0}^{t} V_x(t,x(t,s,y(s,ω))) Φ(t,s,y(s,ω)) R(s,y(s,ω),ω) ds.

The next lemma gives the expression for the difference between the solution of the perturbed system (2.2.3) and the solution of the mean system of equations (2.2.2).

Lemma 2.2.2. Suppose that all the hypotheses of Lemma 2.2.1 are satisfied. Then

(2.2.6)   V(t,y(t,ω)-m(t)) = V(t,x(t,ω)-m(t)) + ∫_{t_0}^{t} V_x[t,x(t,s,y(s,ω)) - x(t,s,m(s))] Φ(t,s,y(s,ω)) R(s,y(s,ω),ω) ds,

where m(t) = m(t,t_0,m_0) is a solution of (2.2.2).

A particular case of Lemma 2.2.2 is illustrated in the following remark, which will be very useful in studying the statistical properties of solution processes.

Remark 2.2.1. If V(t,x) = ||x||^2, then from (2.2.6) we obtain

||y(t,ω)-m(t)||^2 = ||x(t,ω)-m(t)||^2 + ∫_{t_0}^{t} [x(t,s,y(s,ω)) - x(t,s,m(s))] Φ(t,s,y(s,ω)) R(s,y(s,ω),ω) ds.

To develop theorems on the estimation of solutions we assume the following.

(H4)   (2.2.9)   ||V_x[t,x(t,s,y(s,ω)) - x(t,s,m(s))] Φ(t,s,y(s,ω)) R(s,y(s,ω),ω)|| ≤ C(||y(s,ω)-m(s)||) g(s,ω),

where C ∈ C[R_+,R_+] is nondecreasing on R_+, and g ∈ M[R_+,R[Ω,R_+]] is sample Lebesgue integrable.

(H5)   b(||x||) ≤ ||V(t,x)|| ≤ a(||x||), where a,b ∈ C[R_+,R_+] and a is differentiable; and H, defined by dH(s)/ds = 1/h(s) with h(s) = C(b^{-1}(s)), is such that H^{-1} exists and is nondecreasing and continuous on R_+.

Theorem 2.2.1. Suppose that the hypotheses (H1)-(H5) are satisfied. Then

b(||y(t,ω)-m(t)||) ≤ H^{-1}[H(N(t,ω)) + ∫_{t_0}^{t} g(s,ω) ds]   w.p. 1,

where N(t,ω) = a(||x(t,ω)-m(t)||). Moreover, if H^{-1} is a concave function, then

E[b(||y(t,ω)-m(t)||)] ≤ H^{-1}[E(H(N(t,ω))) + E(∫_{t_0}^{t} g(s,ω) ds)].

In the following corollary we prove another inequality, which gives the estimates in terms of the initial conditions for (2.2.2) and (2.2.4).

Corollary 2.2.1. Suppose that all the hypotheses of Theorem 2.2.1 are satisfied and ||Φ(t,s,y(s,ω))|| ≤ K, where K is a positive constant. Then

(2.2.7)   b(||y(t,ω)-m(t)||) ≤ H^{-1}[H(N(t_0,ω)) + ∫_{t_0}^{t} g(s,ω) ds],

where N(t_0,ω) = a(K||x_0(ω)-m_0||). Moreover, if H^{-1} is concave, then

(2.2.8)   E(b(||y(t,ω)-m(t)||)) ≤ H^{-1}[E(H(N(t_0,ω))) + E(∫_{t_0}^{t} g(s,ω) ds)].

To illustrate the feasibility of assumption (H4), and the fruitfulness of the above results, we present a particular example.

Illustration 2.2.1. Let V(t,x) = ||x||^2 and ||Φ(t,s,y(s,ω))|| ≤ K. Note that (H4) is feasible provided that the solution processes of (2.2.1) are bounded w.p. 1; this can be tested by using the boundedness results in [7]. Since ||V_x(t,x)|| ≤ 2||x||, we obtain

(2.2.10)   ||V_x[t,x(t,s,y(s,ω)) - x(t,s,m(s))] Φ(t,s,y(s,ω)) R(s,y(s,ω),ω)|| ≤ 2||x(t,s,y(s,ω)) - x(t,s,m(s))|| ||Φ(t,s,y(s,ω))|| ||R(s,y(s,ω),ω)||.

From the boundedness assumption on the solution processes of (2.2.1), together with (2.2.9) and (2.2.10), one can find g such that

(2.2.11)   ||V_x[t,x(t,s,y(s,ω)) - x(t,s,m(s))] Φ(t,s,y(s,ω)) R(s,y(s,ω),ω)|| ≤ 2||x(t,s,y(s,ω)) - x(t,s,m(s))|| g(s,ω),

where g(s,ω) ∈ M[R_+,R[Ω,R]]. Now, by an application of Corollary 2.2.1, we get

(2.2.12)   b(||y(t,ω)-m(t)||) ≤ H^{-1}[K||x_0(ω)-m_0|| + ∫_{t_0}^{t} g(s,ω) ds],

since H(s) = s^{1/2}, where h(s) = C(b^{-1}(s)). By noting the fact that b(u) = u^2 = a(u), (2.2.12) reduces to the following form:

(2.2.13)   ||y(t,ω)-m(t)||^2 ≤ [K||x_0(ω)-m_0|| + ∫_{t_0}^{t} g(s,ω) ds]^2.

Taking expectations on both sides of (2.2.13) we get

(2.2.14)   E||y(t,ω)-m(t)||^2 ≤ E[K||x_0(ω)-m_0|| + ∫_{t_0}^{t} g(s,ω) ds]^2
           = E[K^2||x_0(ω)-m_0||^2 + 2K||x_0(ω)-m_0|| ∫_{t_0}^{t} g(s,ω) ds + (∫_{t_0}^{t} g(s,ω) ds)^2].

Remark 2.2.2. We remark that by selecting various conditions on the process g(s,ω) and on ||x_0(ω)-m_0||, one can obtain more attractive estimates. For example:

(i) Taking the square root and then the expectation on both sides of (2.2.13), we obtain

(2.2.15)   E||y(t,ω)-m(t)|| ≤ K E[||x_0(ω)-m_0||] + E[∫_{t_0}^{t} g(s,ω) ds].

(ii) When ||x_0(ω)-m_0|| and g(s,ω) are independent random processes, from (2.2.14) we get

(2.2.16)   E||y(t,ω)-m(t)||^2 ≤ K^2 E(||x_0(ω)-m_0||^2) + 2K E(||x_0(ω)-m_0||) E(∫_{t_0}^{t} g(s,ω) ds) + E[∫_{t_0}^{t} ∫_{t_0}^{t} g(s,ω) g(u,ω) ds du].

Further, if g(s,ω) is a stationary Gaussian process and E g(s,ω) is a constant (which we can take to be zero), then E(g(s,ω) g(u,ω)) depends only on s - u. Therefore, if E g(s,ω) = 0, then (2.2.16) reduces to

E||y(t,ω)-m(t)||^2 ≤ K^2 E(||x_0(ω)-m_0||)^2 + ∫_{t_0}^{t} ∫_{t_0}^{t} c(u-s) du ds,

where E(g(s,ω) g(u,ω)) = c(u-s).

From the above estimates, one can discuss the stochastic versus deterministic problem in the mathematical modeling of real-world dynamic processes.
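The reduction used in Illustration 2.2.1 can be checked directly. Taking a(u) = b(u) = u^2, and identifying C(r) = 2r in (2.2.9) (an illustrative identification consistent with the linear bound (2.2.11)), one gets

```latex
h(s) = C\bigl(b^{-1}(s)\bigr) = 2\sqrt{s}, \qquad
H(s) = \int \frac{ds}{h(s)} = \int \frac{ds}{2\sqrt{s}} = \sqrt{s}, \qquad
H^{-1}(s) = s^{2},
```

so that H(N(t_0,ω)) = √(a(K||x_0(ω)-m_0||)) = K||x_0(ω)-m_0||, which is exactly the first term inside H^{-1} in (2.2.12). Note also that H^{-1}(s) = s^2 is convex rather than concave, which is why the expectation estimate (2.2.14) is obtained by expanding the square in (2.2.13) directly.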
3. NUMERICAL RESULTS

3.1 Random Algebraic Polynomials
We developed a computer code to generate a sample of random algebraic polynomials of the form a_0 + a_1 z + ... + a_n z^n, where a_0 is N(-1,σ), a_1,...,a_{n-1} are N(0,σ), a_n is unity, and the correlation between any two random coefficients is 0.5. The roots are calculated and stored against the roots of the expected polynomial z^n - 1. Further, the roots of the averaged sample coefficients are also calculated. We have used GGNML to generate the random variables and ZRPOLY to find the roots. These sample statistics of the roots are graphed on the CALCOMP plotter.

The numerical example we present involves a polynomial of degree 19 and a sample of size 10. Figures 1 and 2 illustrate the distribution of the roots of the sample polynomials and of the average polynomial, respectively. In Figure 2 the roots of the average polynomial are plotted together with the average roots (denoted by *).

[Figure 1: The roots of the sample polynomials]

[Figure 2: The roots of the average polynomial and the average roots]

Table 3.1.1: The roots of the average polynomial

REAL PART    IMAG. PART
 .9479776    0.0000000
 .9385939     .3304983
 .7929141     .6154910
 .5444317     .8274597
 .2418973     .9768263
-.0678546    1.0020031
-.3957972     .9160294
-.6833067     .7380051
-.8846311     .4762088
-.9729657     .1703126
-.9729657    -.1703126
-.8846311    -.4762088
-.6833067    -.7380051
-.3957972    -.9160294
-.0678546   -1.0020031
 .2418973    -.9768263
 .5444317    -.8274597
 .7929141    -.6154910
 .9385939    -.3304983
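The coefficient-generation step described above can be sketched as follows (NumPy in place of IMSL's GGNML/ZRPOLY; the value sigma = 0.1 is our own assumption, since the text does not state sigma explicitly):

```python
import numpy as np

rng = np.random.default_rng(1)

def sample_polynomial_roots(n=19, sigma=0.1, n_samples=10):
    """Sample polynomials a_0 + a_1 z + ... + a_n z^n with
    a_0 ~ N(-1, sigma), a_1..a_{n-1} ~ N(0, sigma), a_n = 1,
    and correlation 0.5 between any two random coefficients
    (the setup of Section 3.1; sigma = 0.1 is an assumption)."""
    mean = np.zeros(n)
    mean[0] = -1.0
    # Variance sigma^2 on the diagonal, covariance 0.5*sigma^2 elsewhere.
    cov = sigma**2 * (0.5 * np.ones((n, n)) + 0.5 * np.eye(n))
    roots = []
    for _ in range(n_samples):
        a = rng.multivariate_normal(mean, cov)      # a_0 .. a_{n-1}
        coeffs = np.concatenate(([1.0], a[::-1]))   # np.roots wants a_n first
        roots.append(np.roots(coeffs))
    return np.array(roots)

roots = sample_polynomial_roots()
# The expected polynomial is z^19 - 1, whose roots lie on the unit circle,
# so the sampled root moduli should stay close to 1.
print(np.abs(np.abs(roots) - 1.0).mean())
```

With these draws the sample root clouds should sit near the 19th roots of unity, mirroring Figures 1-2 and Table 3.1.1.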
Table 3.1.2: The average of the roots and their standard deviations

REAL PART    IMAG. PART    STANDARD DEVIATION
 .9353027    0.0000000     .0768116
 .9416229     .3299450     .0397804
 .7950052     .6133575     .0354159
 .5434202     .8275003     .0214688
 .2416081     .9723315     .0453742
-.0675997    1.0043470     .0370409
-.3948250     .9150530     .0356639
-.6857177     .7372559     .0347107
-.8823378     .4740675     .0304207
-.9715570     .1651164     .0394782
-.9715570    -.1651164     .0394782
-.8823378    -.4740675     .0304207
-.6857177    -.7372559     .0347107
-.3948250    -.9150530     .0356639
-.0675997   -1.0043470     .0370409
 .2416081    -.9723315     .0453742
 .5434202    -.8275003     .0214688
 .7950052    -.6133575     .0354159
 .9416229    -.3299450     .0397804

Table 3.1.3: The deflection of the roots (the difference between the average roots and the roots of the average polynomial)

DEFLECTION OF THE ROOTS
.0126749
.0030791
.0029874
.0010123
.0045041
.0023577
.0013779
.0025248
.0031376
.0053837
.0053837
.0031376
.0025248
.0013779
.0023577
.0045041
.0010123
.0029874
.0030791

Table 3.1.4: The value of E_R(A) in (2.1.12) for a given A

   A       E_R(A)
 .1397     .3044
 .1472     .3015
 .1547     .2990
 .1622     .2969
 .1697     .2953
 .1772     .2941
 .1847     .2934
 .1922     .2932
 .1997     .2936
 .2072     .2945
 .2147     .2959
 .2222     .2979
 .2297     .3005
 .2372     .3036
 .2447     .3073
 .2522     .3115
 .2597     .3163
 .2672     .3218
 .2747     .3277
 .2822     .3343

3.2 Random Differential Equations

To show the scope of the analytical results in [7], we present some examples. We developed a computer code to generate samples of the random differential equations

(3.2.1)   y'(t,ω) = a(t,ω) y(t,ω),   y(t_0,ω) = y_0(ω),
          m'(t) = E(a(t,ω)) m(t),   m(t_0) = m_0 = E(y_0(ω)),

and

(3.2.2)   y'(t,ω) = -a(t,ω) y^3(t,ω),   y(t_0,ω) = y_0(ω),
          m'(t) = -E(a(t,ω)) m^3(t),   m(t_0) = m_0 = E(y_0(ω)).

For our calculations we have taken a(t,ω) to be uniformly distributed in (0, 0.5). GGUBFS generates a sample of 10 random variables. For each random variable, DVERK solves the differential equations (3.2.1) and (3.2.2) at t = 0.1, 0.2,...,1.0 with y(t_0,ω) = 1, and the solutions are stored. Further, the solutions of the mean differential equations are also calculated.

From [7] we note that for (3.2.1) and (3.2.2) the following estimates hold:

(3.2.3)   E(|y(t,ω) - m(t)|) ≤ K[E(|x_0(ω) - m_0|) + E ∫_{t_0}^{t} |a(s,ω) - E a(s,ω)| ds]

and

(3.2.4)   E(|y(t,ω) - m(t)|^2) ≤ K^2 E[|x_0(ω) - m_0| + ∫_{t_0}^{t} |a(s,ω) - E a(s,ω)| ds]^2.

In our calculations we have taken K = 2 for the linear equation (3.2.1) and K = 1 for the nonlinear equation (3.2.2).

Table 3.2.1: The mean of the solutions, the solution of the mean, and the absolute difference between them, for equation (3.2.1)

  t     Mean of the    Solution of    Absolute
        Solutions      the Mean       Difference
 0.1    1.0237         1.0236         .0000
 0.2    1.0480         1.0478         .0002
 0.3    1.0731         1.0726         .0004
 0.4    1.0988         1.0980         .0008
 0.5    1.1252         1.1239         .0013
 0.6    1.1524         1.1505         .0019
 0.7    1.1803         1.1777         .0026
 0.8    1.2091         1.2055         .0035
 0.9    1.2386         1.2340         .0046
 1.0    1.2690         1.2632         .0058
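The experiment behind Tables 3.2.1-3.2.2 can be re-run in a few lines. This sketch is not the IMSL GGUBFS/DVERK code: it uses the exact solution e^{at} of the linear equation (3.2.1) with a constant in t, replaces expectations by sample averages, and checks the bound (3.2.3) with K = 2.

```python
import numpy as np

rng = np.random.default_rng(2)

# One random coefficient per sample path, a ~ U(0, 0.5), as in Section 3.2.
a = rng.uniform(0.0, 0.5, size=10)
t = np.arange(1, 11) * 0.1              # t = 0.1, 0.2, ..., 1.0

y = np.exp(np.outer(a, t))              # exact sample solutions e^{a t}, y(0) = 1
mean_of_solutions = y.mean(axis=0)
solution_of_mean = np.exp(a.mean() * t) # solution of the mean equation

# Estimate (3.2.3) with K = 2, x_0 = m_0, and sample averages in place
# of expectations: E|y - m| <= K * E Int |a - Ea| ds = 2 t E|a - Ea|.
lhs = np.abs(y - solution_of_mean).mean(axis=0)
rhs = 2.0 * np.abs(a - a.mean()).mean() * t
print(np.all(lhs <= rhs))
```

The bound holds pathwise here because |e^{at} - e^{abar t}| ≤ e^{0.5} t |a - abar| < 2 t |a - abar| for t ≤ 1, which is why K = 2 suffices for the linear equation.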
Table 3.2.2: E|y(t,ω) - m(t)| and the analytical upper bound in (3.2.3) for equation (3.2.1)

  t     E|y(t,ω) - m(t)|    Upper bound
 0.1    .0089               .0174
 0.2    .0182               .0348
 0.3    .0280               .0522
 0.4    .0383               .0696
 0.5    .0490               .0870
 0.6    .0602               .1044
 0.7    .0720               .1218
 0.8    .0842               .1392
 0.9    .0971               .1566
 1.0    .1105               .1740

Table 3.2.3: E|y(t,ω) - m(t)|^2 and the analytical upper bound in (3.2.4) for equation (3.2.1)

  t     E|y(t,ω) - m(t)|^2   Upper bound
 0.1    .0001                .0004
 0.2    .0004                .0015
 0.3    .0009                .0033
 0.4    .0018                .0058
 0.5    .0029                .0091
 0.6    .0044                .0131
 0.7    .0062                .0178
 0.8    .0085                .0233
 0.9    .0113                .0294
 1.0    .0147                .0363

Table 3.2.4: The mean of the solutions, the solution of the mean, and the absolute difference between them, for equation (3.2.2)

  t     Mean of the    Solution of    Absolute
        Solutions      the Mean       Difference
 0.1    .9775          .9774          .0001
 0.2    .9567          .9563          .0004
 0.3    .9374          .9365          .0009
 0.4    .9193          .9179          .0014
 0.5    .9023          .9003          .0020
 0.6    .8864          .8837          .0027
 0.7    .8714          .8680          .0033
 0.8    .8571          .8532          .0040
 0.9    .8436          .8390          .0046
 1.0    .8308          .8255          .0053

Table 3.2.5: E|y(t,ω) - m(t)| and the analytical upper bound in (3.2.3) for equation (3.2.2)

  t     E|y(t,ω) - m(t)|    Upper bound
 0.1    .0081               .0087
 0.2    .0152               .0174
 0.3    .0214               .0261
 0.4    .0269               .0348
 0.5    .0318               .0435
 0.6    .0361               .0522
 0.7    .0400               .0609
 0.8    .0434               .0696
 0.9    .0466               .0783
 1.0    .0493               .0870

Table 3.2.6: E|y(t,ω) - m(t)|^2 and the analytical upper bound in (3.2.4) for equation (3.2.2)

  t     E|y(t,ω) - m(t)|^2   Upper bound
 0.1    .0001                .0001
 0.2    .0003                .0004
 0.3    .0006                .0008
 0.4    .0009                .0015
 0.5    .0012                .0023
 0.6    .0016                .0033
 0.7    .0019                .0045
 0.8    .0023                .0058
 0.9    .0026                .0074
 1.0    .0030                .0091
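The root-clustering behaviour discussed in the observations that follow can be reproduced with a short sketch; iid standard normal coefficients are an illustrative choice here (not the correlated coefficients of Section 3.1).

```python
import numpy as np

rng = np.random.default_rng(3)

def fraction_near_circle(n, delta=0.1):
    """Fraction of the roots of a degree-n random polynomial with
    iid N(0,1) coefficients lying in the annulus 1-delta < |z| < 1+delta.
    By the Sparo-Sur theorem [9] this fraction tends to 1 as n grows."""
    coeffs = rng.standard_normal(n + 1)
    r = np.abs(np.roots(coeffs))
    return np.mean((r > 1 - delta) & (r < 1 + delta))

for n in (10, 50, 200):
    print(n, fraction_near_circle(n))
```

As the degree increases, the printed fractions should approach 1, matching observation (1) of Section 4.1.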
4. OBSERVATIONS

4.1 Random Algebraic Polynomials

From the numerical results of Section 3 we obtain the following:

(1) As the degree of the polynomial increases, the roots tend to concentrate around the circumference of the unit circle. In other words, the limit theorem due to Šparo and Šur [9], namely: "Let the coefficients a_k(ω), k = 0,1,...,n, of the random algebraic polynomial F_n(z,ω) be independent and identically distributed complex-valued random variables, and assume E{max(0, log|a_k|)} < ∞ for k = 0,1,...,n. Also, for real α and β satisfying 0 ≤ α < β ≤ 2π and positive δ, define C = {z ∈ C : 1-δ < |z| < 1+δ} and B = {z ∈ C : α ≤ arg z < β}; then, as n → ∞, we shall have (i) N_n(C,ω)/n → 1 in probability, and (ii) N_n(B,ω)/n → (β-α)/2π in probability, where N_n(A,ω) is the number of roots of F_n(z,ω) in the set A for the realization ω", appears to hold also when the a_k are dependent random variables (it needs analytical proof).

(2) As the sample size increases, the average of the roots of the sample approaches the roots of the average polynomial.

4.2 Random Differential Equations

The observations concerning random differential equations are as follows:

(1) As the sample size increases, the average of the solutions of the sample approaches the solution of the average equation.

(2) The analytical upper bounds on the average and the variance of the absolute difference between the solution and the mean of the solutions are slightly higher than the calculated values.

* Research partially supported by U.S. Army Research Grant #DAAG-29-81-G-0008.
** Research supported by U.S. Army Research Grant #DAAG29-77-G-0164 and Government of India #6-21/79-NS-5.

REFERENCES:

[1] Chandra, J., Ladde, G.S. and Lakshmikantham, V., On the fundamental theory of nonlinear second order stochastic boundary value problems, J. Stochastic Anal. Appl. 1, No. 1 (1983).
[2] Chandra, J., Ladde, G.S. and Lakshmikantham, V., Stochastic analysis of compressible gas lubricated slider bearing problem, Technical Report #171, Dept. of Mathematics, Univ. of Texas at Arlington, Arlington, Texas 76019.
[3] Christensen, M.J. and Bharucha-Reid, A.T., Numerical solution of random integral equations I: Fredholm equations of the second kind, J. Integral Equations 3 (1981) 217-229.
[4] Christensen, M.J. and Bharucha-Reid, A.T., Numerical solution of random integral equations II: Fredholm equations with degenerate kernels, J. Integral Equations 3 (1981) 333-344.
[5] Christensen, M.J. and Bharucha-Reid, A.T., Stability of the roots of random algebraic polynomials, Comm. Statist. Simulation Comput. B9(2) (1980) 179-192.
[6] Ladde, G.S. and Lakshmikantham, V., Random Differential Inequalities (Academic Press, New York, 1980).
[7] Ladde, G.S. and Sambandham, M., Error estimates of solutions and mean of solutions of stochastic differential systems, J. Math. Physics (to appear).
[8] IMSL, International Mathematical and Statistical Libraries, Edition 8.
[9] Šparo, D.I. and Šur, M.G., On the distribution of roots of random polynomials (Russian), Vestnik Moskov. Univ. Ser. I Mat. Meh. (1962) 40-48.