Limit laws of a sequence determined by a random difference equation governing a one-compartment system


MATHEMATICAL BIOSCIENCES 13, 325-333 (1972)



A. S. PAULSON
The University of Tennessee, Knoxville, Tennessee 37916

AND

V. R. R. UPPULURI*
Oak Ridge National Laboratory, Oak Ridge, Tennessee 37830

Communicated by R. Bellman

ABSTRACT

Random sequences which embody an additive and multiplicative structure are of both theoretical and practical interest. The limiting behavior of such sequences sheds light on the retention of a contaminant in an organism after a long period of time. We consider the theoretical aspects of the limiting behavior of these sequences. We give general conditions for the existence of limiting laws of the sequence {Y_n} determined by Y_n = U_n + V_n Y_{n-1}, where {(U_n, V_n)} is a given sequence of random variables. In the special case when {U_n}, {V_n} are mutually independent a general result is given. Further, necessary and sufficient conditions for the existence of a solution to the functional equation satisfied by the Laplace transform of the limiting law are given. Some illustrations of the solutions of the functional equation are discussed.

1. INTRODUCTION

In this paper we shall consider the random difference equation

    Y_n = U_n + V_n Y_{n-1},    (1)

where Y_0 = 0, and {(U_n, V_n)} is a sequence of mutually independent random variables with common distribution function G(u, v) = P[U_1 ≤ u, V_1 ≤ v]; U_n and V_n have marginal distributions G_1(u), G_2(v) respectively. Here Y_n has distribution function F_n(y) = P[Y_n ≤ y]. A study of Eq. (1) may arise in a number of ways. If 0 < |ρ| < 1 and

    P[V_n = ρ] = 1,    (2)

then (1) reduces to the classical autoregressive scheme which has been extensively studied in econometric contexts [3, Chapter 14]. Thus we are

* Research sponsored in part by the U.S. Atomic Energy Commission under contract with the Union Carbide Corporation.

Copyright © 1972 by American Elsevier Publishing Company, Inc.


naturally led to (1) if there are departures from the assumption (2). In this case there is considerable interest in the limit laws, if any, of the sequence {Y_n} defined by (1), since it embodies an additive-multiplicative structure rather than the strictly additive structure which obtains when (2) holds. These limit laws can be expected to be rather different from those induced by {Y_n} when (2) holds.

Equation (1) also arises in the study of the retention of a substance in a system when the substance is periodically introduced in random quantities and the system periodically eliminates a random proportion of this substance. We are then interested in the behavior of the amount of substance in the system after a "long" period of time. For an application to the retention of radioactive materials when P[U_1 = 1] = 1 see [5]. We considerably extend some of their results.

More specifically, let Y_{n-1} be the amount (in appropriate units) of a given substance present in a system at the end of epoch n - 1, n = 1, 2, ..., with Y_0 = 0. Suppose an amount U_n of this substance is introduced during the time interval (n - 1, n] and during this same interval a modification of the amount Y_{n-1} to V_n Y_{n-1} takes place, where the random variables U_n and V_n are not necessarily independent of each other. Hence the total amount of the substance present at epoch n is described by (1). Concretely, suppose the substance considered is mercury and the system is a human being; then (1) may be used to describe the total amount of mercury present at the nth epoch under the conditions of ingestion and elimination. In this case considerable interest will be attached to lim P[Y_n > c], where c is a critical level (constant). Hence our investigation will principally involve limit laws of {Y_n} determined by (1) and their properties.
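The retention scheme just described is straightforward to simulate. The sketch below is a minimal Monte Carlo illustration, not taken from the paper: the intake U_n is taken to be exponential and the retained proportion V_n Beta-distributed (both purely illustrative choices), and P[Y_n > c] is estimated for a large n.

```python
import numpy as np

def simulate_retention(n_steps, n_paths, rng):
    """Simulate Y_n = U_n + V_n * Y_{n-1}, Y_0 = 0, over n_paths paths.

    The distributions of U_n (intake) and V_n (retained proportion)
    are illustrative choices only; the model allows any joint law.
    """
    y = np.zeros(n_paths)
    for _ in range(n_steps):
        u = rng.exponential(1.0, size=n_paths)  # amount introduced in (n-1, n]
        v = rng.beta(2.0, 5.0, size=n_paths)    # proportion retained, in [0, 1]
        y = u + v * y
    return y

rng = np.random.default_rng(0)
y = simulate_retention(n_steps=200, n_paths=100_000, rng=rng)
c = 3.0  # a hypothetical critical level
print("sample mean of Y_n:", y.mean())
print("estimated P[Y_n > c]:", (y > c).mean())
```

With these choices E[V_n] = 2/7, so the long-run mean is E[U]/(1 - E[V]) = 1.4; increasing n_steps beyond a few dozen no longer changes the estimates, consistent with the existence of a limit law.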

2. EXISTENCE OF LIMIT LAWS

In this section we give conditions which ensure the existence of limit laws of {Y_n} determined by (1) and derive a functional equation satisfied by the characteristic function of the limit law. The functional equation is of interest in its own right. Let the characteristic function φ_n(t) of F_n(y) determined by (1) be defined by

    φ_n(t) = E[e^{itY_n}] = ∫_{-∞}^{∞} e^{ity} dF_n(y),    (3)

where i = √(-1) and t is an arbitrary real number. (We shall not assume here that U_n and V_n are nonnegative.) Y_n is said to converge in law (or distribution) to a random variable Y, say, if there exists a distribution


function F such that F_n → F in every point of continuity of F. Y_n is said to converge in probability to Y if, for arbitrary ε > 0, lim_{n→∞} P[|Y_n - Y| ≥ ε] = 0. We now state and prove

THEOREM 1. Let {(U_n, V_n)} be a sequence of mutually independent random variables with common distribution function G(u, v) = P[U_1 ≤ u, V_1 ≤ v] such that for some α > 0, E|U_1|^α < ∞, E|V_1|^α < 1. Then there exists a distribution F such that {F_n(y)} defined by (1) converges to F in every point of continuity of F.

Proof. Using the defining relation (1) and taking conditional expectations we find

    φ_n(t) = E[E{exp[it(U_n + V_n Y_{n-1})] | U_n, V_n}] = E[exp(itU_n) φ_{n-1}(V_n t)],    (4)

since (U_n, V_n) is independent of Y_{n-1}. Consider now for fixed but arbitrary real t and any h = 0, 1, 2, ...,

    |φ_{n+h}(t) - φ_n(t)| = |E{exp(itU_{n+h}) φ_{n+h-1}(V_{n+h} t)} - E{exp(itU_n) φ_{n-1}(V_n t)}|    (5)

    = |∫∫_{R²} exp(itu)[φ_{n+h-1}(vt) - φ_{n-1}(vt)] dG(u, v)|,    (6)

since (U_{n+h}, V_{n+h}) has the same distribution, G(u, v), as (U_n, V_n) by hypothesis. Hence (6) becomes

    |φ_{n+h}(t) - φ_n(t)| ≤ E|φ_{n+h-1}(Vt) - φ_{n-1}(Vt)|,    (7)

where the expectation is with respect to G_2(v). Using (6) and iterating in (7) we get

    |φ_{n+h}(t) - φ_n(t)| ≤ E|φ_h(V_{h+1} ··· V_{n+h} t) - φ_0(V_1 ··· V_n t)| = E|φ_h(V_{h+1} ··· V_{n+h} t) - 1|,    (8)

since φ_0(t) = 1 by (1). Note that

    φ_h(t ∏_{j=h+1}^{n+h} V_j) = E{exp[it Y_h (∏_{j=h+1}^{n+h} V_j)]},

where the expectation is with respect to the distribution F_h. For notational convenience let ∏_{j=h+1}^{n+h} V_j = Z_n. But for arbitrary ε > 0 we have by Markov's inequality

    P[|Z_n Y_h| ≥ ε] ≤ (1/ε^α) E|Z_n Y_h|^α.    (9)

By the moment inequality [1, p. 155]

    E|Y_n|^α ≤ E|U_n|^α + E|V_n|^α E|Y_{n-1}|^α.    (10)

Since by hypothesis there exists α > 0 such that E|U_1|^α < ∞ and E|V_1|^α < 1, it follows from (10) that E|Y_n|^α may be uniformly bounded. Further, E|Z_n|^α = (E|V_1|^α)^n and so it follows by (9) that Z_n Y_h → 0 in probability; hence for any t, |t| ≤ K, K > 0, and h = 1, 2, ...,

    φ_h(Z_n t) = E[exp it(Z_n Y_h)] → 1    (11)

in probability since φ_h is (uniformly) continuous [6, Theorem 4.3.5]. We conclude from (8) and the dominated convergence theorem that

    lim_{n→∞} |φ_{n+h}(t) - φ_n(t)| = 0    (12)

independent of |t| ≤ K, and h = 1, 2, ...; hence there exists a φ(t) such that lim_{n→∞} φ_n(t) = φ(t) uniformly for |t| ≤ K. For each n = 0, 1, 2, ..., φ_n(t) is a uniformly continuous function of t, |t| ≤ K. Since φ_n(t) converges to φ(t) uniformly in |t| ≤ K as n → ∞ and each φ_n(t) is continuous, it follows that φ(t), the limit function, is also continuous in |t| ≤ K and, in particular, at t = 0. Since K was arbitrary, an appeal to the Cramér-Lévy continuity theorem enables us to conclude that there exists an F such that F_n → F in every point of continuity of F. ∎

Let (U, V) be a random variable with distribution function G(u, v) and independent of the sequence {(U_n, V_n)}.
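Theorem 1 lends itself to a numerical check, not part of the original paper: when E|V_1|^α < 1 the empirical laws of Y_n should stabilize as n grows. The sketch below (with the illustrative choices U ~ Exponential(1) and V ~ Uniform(0, 0.9)) compares empirical distribution functions at different horizons via the two-sample Kolmogorov-Smirnov distance.

```python
import numpy as np

def sample_Y(n_steps, n_paths, rng):
    """Draw n_paths realizations of Y_n with Y_0 = 0.

    Illustrative choices: U ~ Exponential(1) and V ~ Uniform(0, 0.9),
    so that E|V|^a < 1 for every a > 0 and Theorem 1 applies.
    """
    y = np.zeros(n_paths)
    for _ in range(n_steps):
        y = rng.exponential(1.0, n_paths) + rng.uniform(0.0, 0.9, n_paths) * y
    return y

def ks_distance(a, b):
    """Two-sample Kolmogorov-Smirnov distance between empirical CDFs."""
    grid = np.sort(np.concatenate([a, b]))
    Fa = np.searchsorted(np.sort(a), grid, side="right") / len(a)
    Fb = np.searchsorted(np.sort(b), grid, side="right") / len(b)
    return np.abs(Fa - Fb).max()

rng = np.random.default_rng(1)
d_far = ks_distance(sample_Y(2, 50_000, rng), sample_Y(200, 50_000, rng))
d_near = ks_distance(sample_Y(20, 50_000, rng), sample_Y(200, 50_000, rng))
print("KS(F_2,  F_200) =", d_far)    # F_2 is still far from the limit F
print("KS(F_20, F_200) =", d_near)   # F_20 has essentially converged
```

The distance between F_20 and F_200 is already at sampling-noise level, while F_2 remains visibly different, mirroring the geometric contraction that drives the proof.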

COROLLARY 1. Under the same conditions on G(u, v) as stated in Theorem 1 there exists a unique solution φ(t) of the characteristic-functional equation

    φ(t) = E[e^{itU} φ(Vt)].    (13)

Proof. Since F_n → F in every point of continuity of F, an application of the dominated convergence theorem to (4) yields (13). Further, by the uniqueness properties of characteristic functions, solutions to (13) are unique. ∎

In the special case that {U_n} and {V_n} are mutually independent sequences with common distribution functions G_1(u) and G_2(v) respectively, (13) reduces to

    φ(t) = ψ(t) E[φ(Vt)],    (14)

where ψ(t) = E[e^{itU}]. Thus, for example, we may by the corollary assert the existence of a solution φ(t) to the functional equation

    φ(t) = e^{-|t|} λ ∫_0^∞ e^{-λv} φ(vt) dv,    (15)

whenever λ > 1. Functional equations of the type (13) and (14) do not seem to have received much attention in the literature; and solutions φ(t)


of (13) seem to be very hard to come by, and so are the corresponding distribution functions F. Solutions to (14) appear to be more readily attainable than those to (13). It is for this reason that we devote the remainder of the paper to discussion of properties of solutions of (14).

3. ANALYTIC NATURE OF φ(t)

In this section we shall specialize further and assume that U_n and V_n are nonnegative random variables, since this assumption would usually be met in the applications of the model (1). We now give conditions under which the solution φ(t) to (14) is analytic in a neighborhood of t = 0. In this case all the moments of F exist and the distribution function F can at least be approximated.

THEOREM 2. Let {U_n}, {V_n} be sequences of mutually independent, nonnegative random variables with common distribution functions G_1(u), G_2(v) respectively. Let t = σ + iτ. Then the solution φ(t) of (14) is an analytic function of t in |t| < R' ≤ R, R' > 0, if, and only if, P[0 ≤ V_1 ≤ 1] = 1, P[V_1 = 1] < 1, and ψ(t) is analytic in |t| < R.

Remark. We are, for the purposes of sufficiency of this theorem only, letting t denote a complex number. No confusion should arise from this. Everything else remains the same.

Proof. We first show sufficiency through the sequence {φ_n(t)} defined by (4). By (1), φ_1(t) = ψ(t) and φ_1(t) is analytic in |t| < R; suppose φ_{n-1}(t) is analytic in |t| < R. Then we have by analogy to (4) and mutual independence of {U_n} and {V_n},

    φ_n(t) = ψ(t) ∫_0^1 φ_{n-1}(vt) dG_2(v);    (16)

since on the induction hypothesis φ_{n-1}(t) is analytic in |t| < R, so also is ∫_0^1 φ_{n-1}(vt) dG_2(v), for φ_{n-1}(vt) may be represented as a power series which may be integrated termwise. Thus φ_n(t) is analytic in |t| < R for every n = 1, 2, ....

We now show that {φ_n(t)} converges uniformly. Let |t| ≤ R; then

    |ψ(t)| = |E[e^{itU}]| ≤ E[e^{|t|U}] = ψ(-i|t|),    (17)

and ψ(-i|t|) is real. From (16) we find

    |φ_n(t) - φ_{n-1}(t)| = |ψ(t) E[φ_{n-1}(Vt) - φ_{n-2}(Vt)]|.    (18)

Then from (17) and (18)

    |φ_n - φ_{n-1}| ≤ ψ(-i|t|) E|φ_{n-1}(Vt) - φ_{n-2}(Vt)|.    (19)

In analogy with (17),

    |ψ(Vt)| ≤ E[e^{|Vt|U}] = ψ(-i|Vt|) ≤ ψ(-i|t|),    (20)

where the expectation in (20) is with respect to G_1(u) and the last inequality uses P[0 ≤ V ≤ 1] = 1 together with the monotonicity of ψ(-is) in s ≥ 0. Indeed,

    |ψ(V_n V_{n-1} ··· V_j t)| ≤ ψ(-i|t|)    (21)

for j = n, n - 1, ..., 2. By repeated application of (18) and (21) we find

    |φ_n - φ_{n-1}| ≤ [ψ(-i|t|)]^{n-1} E|φ_1(V_n ··· V_2 t) - 1|.    (22)

But P[0 ≤ V ≤ 1] = 1, P[V = 1] < 1, and so E[V] = λ < 1. Since φ_1 = ψ is analytic we may expand φ_1 in (22) and integrate term by term to obtain

    E|φ_1(V_n ··· V_2 t) - 1| ≤ E Σ_{j=1}^∞ (V_n ··· V_2)^j |t|^j E(U_1^j)/j!    (23)

    ≤ λ^{n-1} Σ_{j=1}^∞ |t|^j E(U_1^j)/j! = λ^{n-1}[ψ(-i|t|) - 1],    (24)

where we have used (V_n ··· V_2) ≥ (V_n ··· V_2)^j, j = 1, 2, ..., which holds because of P[0 ≤ V ≤ 1] = 1. Since ψ(t) is analytic in |t| ≤ R, ψ(0) = 1, and 0 < λ < 1, there exists R' > 0 such that 0 < λψ(-i|t|) < 1 whenever |t| < R' ≤ R. Also ψ(-i|t|) - 1 is nonnegative and uniformly bounded by K, say, in |t| ≤ R'' < R'. Hence

    Σ_{n=2}^∞ |φ_n(t) - φ_{n-1}(t)| ≤ K Σ_{n=2}^∞ [λψ(-i|t|)]^{n-1} < ∞,    (25)

whenever |t| ≤ R'', uniformly in t. But (25) implies that there exists a function φ(t) such that lim φ_n(t) = φ(t). Each φ_n(t) is analytic in |t| < R'' and hence the uniform convergence of {φ_n(t)} in |t| < R'' allows us to conclude by [4, p. 95] that φ(t) is analytic in |t| < R''.

Necessity. In order to avoid considering trivialities we suppose ψ(t) ≢ 1, φ(t) ≢ 1, P[V = 0] ≠ 1. Suppose φ(t) is analytic in |t| < R'. Since φ(t) satisfies the functional equation (14) we have by Theorem 8.5.1 of Lukacs [2, p. 190] that ψ(t) and E[φ(Vt)] are both analytic in at least |t| < R', say |t| < R' ≤ R.


Hence it remains to be shown that P[0 ≤ V ≤ 1] = 1, P[V = 1] < 1. Suppose the contrary. Then there exists an integer p > 0 such that E[V^p] ≥ 1. Now φ(t) = ψ(t)E[φ(Vt)] and ψ(t) are analytic at least in |t| < R'. The pth moment of F is given by m_p = i^{-p} φ^{(p)}(0), and m_p > 0 since F is non-zero only for nonnegative real numbers by hypothesis. Thus

    φ^{(p)}(0) = (d^p/dt^p){ψ(t) E[φ(Vt)]}|_{t=0}    (26)

involves p + 1 terms. We incorporate the first p terms on the right hand side of (26) and write

    m_p = ν_p + E[V^p] m_p,    (27)

where ν_p > 0 since ψ(t) ≢ 1, φ(t) ≢ 1, P[V = 0] ≠ 1. But E[V^p] ≥ 1 implies by (27) that ν_p = m_p(1 - E[V^p]) ≤ 0, a contradiction. Hence P[0 ≤ V ≤ 1] = 1, P[V = 1] < 1. ∎
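The sufficiency argument above is constructive: iterating φ_n(t) = ψ(t)E[φ_{n-1}(Vt)] converges geometrically. For nonnegative U and V the same iteration can be run on the Laplace transform L(s) = E[e^{-sY}] on a real grid. The sketch below does this for the illustrative (hypothetical) choices U ~ Exponential(1), so that E[e^{-sU}] = 1/(1 + s), and V ~ Uniform(0, 1):

```python
import numpy as np

# Fixed-point iteration for the functional equation (14), written for
# nonnegative U, V in Laplace-transform form
#     L(s) = psi(s) * E[L(V s)],   psi(s) = E[exp(-s U)],  s >= 0.
# Illustrative choices: U ~ Exponential(1), so psi(s) = 1/(1 + s), and
# V ~ Uniform(0, 1), with E over V approximated by a midpoint rule.

s = np.linspace(0.0, 10.0, 2001)
psi = 1.0 / (1.0 + s)
v = (np.arange(200) + 0.5) / 200.0        # midpoint nodes for V ~ Uniform(0, 1)
vs = np.outer(v, s)                       # all products v*s, shape (200, 2001)

L = np.ones_like(s)                       # L_0(s) = 1, i.e. Y_0 = 0
for _ in range(60):
    # Evaluate L(v*s) by linear interpolation on the grid, then average over v.
    Lvs = np.interp(vs.ravel(), s, L).reshape(vs.shape)
    L_new = psi * Lvs.mean(axis=0)        # psi(s) * E[L(V s)]
    gap = np.abs(L_new - L).max()
    L = L_new

print("last sup-norm update:", gap)
# -L'(0) approximates E[Y]; here E[U]/(1 - E[V]) = 1/(1 - 1/2) = 2.
print("estimated mean of limit law:", (1.0 - L[1]) / s[1])
```

The update size decays geometrically, mirroring the bound (25), and the slope of L at the origin reproduces the stationary mean E[U]/(1 - E[V]).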

4. LIMITING LAWS

We consider first limit laws defined on the nonnegative integers. The first result involves the impossibility of obtaining a Poisson distribution from the functional equation (14).

THEOREM 3. The characteristic function φ(t) = exp[λ(e^{it} - 1)] cannot be a solution of (14) if P[V = 0] = a < 1.

Proof. We suppose that ψ(t) ≢ 1 to avoid trivialities. Since the right hand side of (14) designates a convolution, and the convolution of a discrete and a continuous distribution is always continuous, we see that ψ(t) and E[φ(Vt)] are necessarily characteristic functions of discrete distributions. Now the Poisson distribution whose characteristic function is φ(t) = exp[λ(e^{it} - 1)] has its discontinuity points at the nonnegative integers; hence the distributions of ψ(t) and E[φ(Vt)] have their discontinuity points at the nonnegative integers. This implies in addition that the distribution of V be defined on the nonnegative integers. But φ(t) is analytic in a neighborhood of the origin; hence we have by Theorem 2 that P[V = 0] = a, P[V = 1] = b, a + b = 1, and b < 1. Thus a > 0. The hypothesis a < 1 then gives us

    φ(t) = ψ(t)(a + b φ(t)).    (28)

By [2, Theorem 8.5.3] it must be the case that

    ψ(t) = exp[λ_1(e^{it} - 1) + iμ_1 t],    (29)

    a + b φ(t) = exp[λ_2(e^{it} - 1) + iμ_2 t],    (30)

in a neighborhood of t = 0, where λ_j ≥ 0 and μ_j are real numbers, μ_1 + μ_2 = 0. But with φ(t) = exp[λ(e^{it} - 1)], 0 < a < 1, it cannot be the case that a + b φ(t) is of the form (30). Hence we have a contradiction. ∎

We have thus seen that the Poisson distribution cannot be the inverse transform of (14) except in the trivial (and obvious) case. We turn our attention now to the geometric distribution. We will state Theorem 4, which involves a characterization of the geometric distribution, but present the details of the proof elsewhere. We do this because this result has a further, more specialized interest in that the functional equation (14) allows for a definition of a multivariate geometric distribution.

THEOREM 4. Let ψ(t) be a characteristic function and V a random variable with distribution G_2(v). The solution φ(t) of the equation φ(t) = ψ(t)E[φ(tV)] is the characteristic function of a geometric distribution if, and only if, ψ(t) is the characteristic function of a geometric distribution and G_2(v) is such that P[V = 0] = a, P[V = 1] = b, a + b = 1, 0 ≤ b < 1.

THEOREM 5. Let ψ(t) be a characteristic function and V a random variable with distribution G_2(v). The solution φ(t) of (14) is the characteristic function of a normal distribution, φ(t) = exp[-½σ²t²], if, and only if, ψ(t) = exp[-½σ_1²t²] and G_2(v) is such that P[V = c] = a, P[V = -c] = b, a + b = 1, 0 < c < 1; in this case σ² = σ_1²/(1 - c²).

Proof. Sufficiency follows immediately from hypothesis and (14). We find σ² = σ_1²/(1 - c²).

Necessity. Suppose on the other hand that φ(t) = exp[-½σ²t²] is a solution to (14). Then by [2, Theorem 8.5.2] we have

    ψ(t) = exp[iμ_1 t - ½σ_1²t²],    (31)

    E[φ(Vt)] = exp[iμ_2 t - ½σ_2²t²].    (32)

Suppose it is not the case that P[0 ≤ |V| ≤ 1] = 1, P[|V| = 1] < 1. We again suppose that ψ(t) ≢ 1 in order to avoid trivialities. Now φ(t) is analytic in a neighborhood of t = 0, so ψ(t) and E[φ(Vt)] are also analytic in that neighborhood. Proceeding exactly as we did in the proof of necessity of Theorem 2 we are led to the contradiction that for an even integer p,

    i^{-p} φ^{(p)}(0) < 0.    (33)

Hence P[0 ≤ |V| ≤ 1] = 1, P[|V| = 1] < 1. With φ(t) = exp[-½σ²t²] we obtain

    E[φ(Vt)] = ∫_{-1}^{1} exp[-½σ²v²t²] dG_2(v).    (34)

On expanding the right hand side of (34) and integrating term by term we find

    E[φ(Vt)] = 1 - ½σ²E[V²]t² + ···.    (35)

Expand the right hand side of (32) and compare coefficients of (it)^k, k = 0, 1, ..., to conclude that μ_2 = 0. (This implies by (14) that μ_1 = 0.) Thus

    Σ_{k=0}^∞ (1/k!)(-½σ_2²t²)^k = 1 - ½σ²E[V²]t² + ···,

or

    E[V^{2k}] = (σ_2/σ)^{2k},  k = 0, 1, 2, ...,

and c² = σ_2²/σ² < 1. But these moments of V uniquely determine [6, Theorem 5.5.1] the distribution G_2(v) given by P[V = c] = a, P[V = -c] = b, where σ_2/σ = c. Hence σ_1² + σ_2² = σ² by (14), (31), and (32). ∎

If ψ(t) is the characteristic function of the exponential distribution and P[V = 0] = a, P[V = 1] = b, a + b = 1, b < 1, then φ(t) is also the characteristic function of the exponential distribution (but with a different parameter). In analogy with Theorem 4 it is felt that (14) also characterizes the exponential distribution but we have not succeeded in proving this.
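The exponential closure property just stated is easy to confirm: with P[V = 0] = a, P[V = 1] = b, Eq. (14) becomes φ(t) = ψ(t)(a + bφ(t)), so φ = aψ/(1 - bψ), and for exponential ψ this is again an exponential characteristic function with the rate scaled by a. A quick numerical check (the parameter values are arbitrary):

```python
import numpy as np

# Verify: if psi is the c.f. of Exponential(lam) and P[V = 0] = a,
# P[V = 1] = b = 1 - a, then (14) reads phi = psi * (a + b*phi), whose
# solution phi = a*psi / (1 - b*psi) is the c.f. of Exponential(a*lam).

lam, a = 2.0, 0.3
b = 1.0 - a
t = np.linspace(-20.0, 20.0, 4001)

psi = lam / (lam - 1j * t)                      # c.f. of Exponential(lam)
phi = a * psi / (1.0 - b * psi)                 # solution of phi = psi*(a + b*phi)
phi_expected = (a * lam) / (a * lam - 1j * t)   # c.f. of Exponential(a*lam)

print("max |phi - phi_expected| =", np.abs(phi - phi_expected).max())
```

The agreement is exact up to floating-point roundoff, and phi satisfies the functional equation by construction.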

5. CONCLUSIONS

We have shown the existence of limiting laws for sequences {Y_n} defined by (1) and have, at the same time, given conditions under which solutions φ(t) to (13), and hence to (14), exist. We have also given conditions under which the solution to (14) is a Laplace transform and hence provided conditions under which numerical inversion may be applied. We have provided some explicit solutions to (14) and have given some characterizations involving functional equations. It was shown that the Poisson distribution cannot be obtained as a limiting law of (1). Explicit expressions for the limiting laws F of (1) and explicit solutions φ(t) of (13) and (14) do not appear to be easily obtainable. Even if φ(t) could be obtained there is no guarantee that the inverse transform for F could be obtained.

REFERENCES

1. M. Loève, Probability Theory, Van Nostrand, New York (1955).
2. E. Lukacs, Characteristic Functions, Hafner, New York (1960).
3. E. Malinvaud, Statistical Methods of Econometrics, North-Holland, Amsterdam (1970).
4. E. C. Titchmarsh, The Theory of Functions, Oxford Univ. Press, London (1964).
5. V. R. R. Uppuluri, P. I. Feder, and L. R. Shenton, Math. Biosci. 1, 143 (1967).
6. S. S. Wilks, Mathematical Statistics, Wiley, New York (1962).