
U.S.S.R. Comput. Maths. Math. Phys., Vol.25, No.3, pp.15-23, 1985. Printed in Great Britain


0041-5553/85 $10.00+0.00 Pergamon Journals Ltd.

THE MONTE-CARLO METHOD FOR ESTIMATING FUNCTIONALS FROM EIGENMEASURES OF LINEAR INTEGRAL OPERATORS

S.M. YERMAKOV and A.A. ZHIGLYAVSKII

The problem of estimating linear functionals from eigenmeasures, corresponding to the maximum eigenvalue, of positive linear integral operators is considered. Several algorithms are proposed and justified, one of which can be related to the "generation methods with a constant number of particles".

1. Formulation of the problem.

Suppose X is a compact metric space, ℬ is the σ-algebra of Borel subsets of X, Λ is the space of regular, completely additive set functions on ℬ of bounded variation, Λ⁺ is the set of finite measures on ℬ (Λ⁺ is a cone in the space Λ), Λ̃⁺ is the set of probability measures on ℬ (Λ̃⁺ ⊂ Λ⁺), C⁺(X) is the set of continuous non-negative functions on X (C⁺(X) is the cone in the space C(X)), and C̊⁺(X) is the set of continuous positive functions on X (C̊⁺(X) is the interior of the cone C⁺(X)). The function K: X × ℬ → R¹ possesses the properties: K(·, A) ∈ C⁺(X) for all A ∈ ℬ, and K(x, ·) ∈ Λ⁺ for all x ∈ X. The analytical form of the function K can be unknown, but it must be possible, for any x ∈ X, to obtain a realization of a random quantity ξ(x) with the set of values [0, +∞), such that Eξ(x) = g(x) = K(x, X) and Dξ(x) ≤ σ² < ∞, and (if g(x) > 0) to have a method of modelling the probability measure Q(x, dz) = K(x, dz)/g(x). We shall denote by 𝒦 the linear integral operator from Λ into Λ which acts in accordance with the formula

𝒦ν(·) = ∫ ν(dx) K(x, ·)    (1)

(the domain of integration is not shown when integrating with respect to X). The operator conjugate to it is defined thus: 𝒦*: C(X) → C(X),

𝒦*f(·) = ∫ K(·, dz) f(z).    (2)

It is well-known from the general theory of linear operators (see /1, p.528/) that any bounded linear operator acting from some Banach space into C(X) can be represented in the form (2).

In addition, by virtue of the fact that X is compact and K(·, A) ∈ C(X), the operators 𝒦 and 𝒦* are completely continuous (see /1, pp.522 and 534/). It is known from the theory of linear operators in a space with a cone (see /2, p.402/) that if the operator 𝒦* is completely continuous and strongly positive, then it has an eigenvalue λ of maximum modulus, which is positive and simple. In this case the operator 𝒦* is strongly positive if for any non-zero f ∈ C⁺(X) there exists n = n(f) such that 𝒦*ⁿf(·) ∈ C̊⁺(X), where 𝒦*ⁿ is the operator with the kernel

∫...∫ K(·, dx_1) K(x_1, dx_2) ... K(x_{n-2}, dx_{n-1}) K(x_{n-1}, ·).

Thus, if the operator 𝒦* is strongly positive (which henceforth is assumed to hold), then the operator 𝒦 has the eigenvalue λ of maximum modulus, which is positive and simple, and to which the single eigenmeasure P ∈ Λ̃⁺ corresponds:

λP(dz) = ∫ P(dx) K(x, dz).    (3)

It is obvious that P(dx) is the unique solution in the set Λ̃⁺ of the integral equation (3) and, in addition,

P(dx) = [∫ g(z) P(dz)]⁻¹ ∫ P(dz) K(z, dx),    λ = ∫ g(z) P(dz).    (4)
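On a finite state space the operator in (3) reduces to a matrix and the eigenmeasure to a left Perron eigenvector, so (3) and (4) can be checked by a deterministic method of successive approximations. The following sketch uses a hypothetical 2×2 kernel chosen only for illustration; the function name and the kernel values are assumptions, not taken from the paper:

```python
# Power iteration for the eigenmeasure equation (3): lambda * P = P K.
# The 2x2 kernel below is a hypothetical example.

K = [[1.0, 0.5],
     [0.5, 2.0]]  # K[x][z]: mass carried from state x to state z

def eigenmeasure(K, iters=200):
    n = len(K)
    p = [1.0 / n] * n                       # start from the uniform measure
    lam = 0.0
    for _ in range(iters):
        q = [sum(p[i] * K[i][j] for i in range(n)) for j in range(n)]
        lam = sum(q)                        # since sum(p) == 1, this approximates lambda
        p = [qj / lam for qj in q]          # renormalize so P stays a probability measure
    return lam, p

lam, p = eigenmeasure(K)
```

At the fixed point, λ equals ∫g dP exactly as in (4), since summing q gives Σ_x p(x)g(x) with g(x) the row sum of K.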

Let us assume that for any x_1, x_2, ... from X we can calculate the values of the random quantities ξ(x_1), ξ(x_2), ..., which are mutually independent, and that for any x ∈ X

Eξ(x) = g(x),    Dξ(x) ≤ σ² < ∞.

Suppose also that h ∈ C(X) is a given function. Below we construct and examine algorithms of the Monte-Carlo method for estimating the functional

J(h, P) = ∫ h(x) P(dx).    (5)

The problem of estimating functional (5) is often met when solving various kinds of problems in computational mathematics, particularly when examining queueing processes and algorithms for seeking a global extremum of functions (see /3/). Since (4) holds, this problem includes the problem of estimating the maximum eigenvalue of the integral operator (1), called the problem of calculating critical systems, or the problem of estimating the critical parameter of a branching process (see /4-10/); to solve this problem, so-called "generation methods with a constant number of particles" were developed and investigated. Algorithm 4, which is examined in Sect.2, can also be regarded as a generation method and is a generalization of Lieberoth's algorithm (see /4/), for which only a semiheuristic basis was originally provided. Recent papers /6-8/ have shown that for the most widely used generation methods the rate of convergence of the estimates for λ and P has the order O(N⁻¹) as the number of particles N → ∞. The procedure of this paper enables us to obtain estimates of the type O(N^{-1/2}) only, under fairly general assumptions (though ones distinct from the assumptions in /6-8/). On the other hand, the approach developed here has the following two advantages: using it, we reveal a number of qualitative features of the behaviour of eigenmeasures; and it is more general, so that it can be used to investigate algorithms which differ from generation methods (for example, the sequential algorithm described in Sect.3).

2. Generation methods.

Suppose P₀(dx) is some probability distribution on X (we usually choose this distribution as uniform), g(x) = K(x, X), and

Q(x, dz) = K(x, dz)/g(x)  if g(x) ≠ 0,
Q(x, dz) = P₀(dz)         if g(x) = 0.

When describing algorithms 1-3 it is assumed that for any x ∈ X the random quantity ξ(x) takes only integer values, X ⊂ R^k, k ≥ 1, and P₀(dx) is the uniform distribution on X. Algorithm 1 consists of an N₀-fold modelling of the general branching process (see /11, p.94/) determined by the function K(x, dz), where the points of the initial generation are chosen as independent realizations of a random vector with the distribution P₀(dx).

Algorithm 1.

Step 1. We model the distribution P₀(dx) N₀ times, obtain x_1^{(0)}, ..., x_{N₀}^{(0)}, and set s = 0.

Step 2. We set i = 1, N_{s+1} = 0.

Step 3. We model the random quantity ξ(x_i^{(s)}) and obtain the sample value k_i^{(s)}.

Step 4. We model the distribution Q(x_i^{(s)}, dz) k_i^{(s)} times and obtain the points x_{N_{s+1}+1}^{(s+1)}, ..., x_{N_{s+1}+k_i^{(s)}}^{(s+1)}.

Step 5. We set N_{s+1} = N_{s+1} + k_i^{(s)}.

Step 6. If i < N_s, we set i = i + 1 and proceed to Step 3.

Step 7. If s < S, we set s = s + 1 and proceed to Step 2.
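A minimal sketch of Algorithm 1 in Python, under illustrative assumptions: X = [0, 1], the kernel K(x, dz) = 1.5·dz (so g ≡ 1.5 and Q(x, dz) is uniform), and ξ(x) Poisson with mean g(x). The kernel and all names are hypothetical; the quantity returned is the pooled ratio (total offspring)/(total parents), a version of the estimate N_{s+1}/N_s for λ:

```python
import math, random

def poisson(lam, rng):
    # Knuth's method: integer sample with mean lam (plays the role of xi(x))
    limit, k, p = math.exp(-lam), 0, 1.0
    while p > limit:
        k += 1
        p *= rng.random()
    return k - 1

def algorithm1(g, N0, S, rng):
    """Branching-process simulation; g is the (constant) value of E xi(x)."""
    points = [rng.random() for _ in range(N0)]   # initial generation ~ P0 = uniform
    parents = offspring = 0
    for s in range(S):
        nxt = []
        for x in points:
            k = poisson(g, rng)                  # number of descendants of x
            nxt.extend(rng.random() for _ in range(k))  # Q(x, dz) is uniform here
        parents += len(points)
        offspring += len(nxt)
        if not nxt:                              # process degenerated (lambda < 1 case)
            break
        points = nxt
    return offspring / parents, points

rng = random.Random(1)
lam_hat, final_gen = algorithm1(1.5, N0=2000, S=4, rng=rng)
```

With λ = 1.5 > 1 the population grows geometrically, which is exactly the storage inconvenience discussed below.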

Here and below the quantity S determines the number of iterations of the corresponding algorithm. Since (see /11, p.105/) the random vectors x_i^{(s)} have, in the limit (as N₀ → ∞, s → ∞), the distribution P(dx), algorithm 1 can be used to estimate functional (5). The estimate is constructed in the following way:

Ĵ = [Σ_{s=s₀}^{S} N_s]⁻¹ Σ_{s=s₀}^{S} Σ_{i=1}^{N_s} h(x_i^{(s)}),    0 ≤ s₀ ≤ S.    (6)

In particular, when s₀ = S we obtain the following well-known estimate (see /11/) for λ: λ̂ = N_{S+1}/N_S. From the computing point of view, algorithm 1 is inconvenient in that when λ < 1 the process rapidly degenerates (all the particles die), while when λ > 1 the number of particles (i.e. the number of points x_i^{(s)}) increases with s at the rate of a geometric progression and,

beginning from a certain instant, it is difficult to store them in the memory. The generation methods (algorithm 4 to the greatest degree) are to a considerable extent free from this disadvantage. Following /9/, we shall present two fairly widely used generation methods.

Algorithm 2. If, at the s-th step of algorithm 1, the number of descendants (i.e. the number of points x_i^{(s+1)}) satisfies N_{s+1} > N₀, then N = N₀ particles of the following generation can be chosen at random from among them. Since (see /9/) the distributions of the random vectors x_i^{(s)} weakly converge towards P(dx) as N → ∞, s → ∞, estimate (6) is justified for functional (5). Obviously, the economy of algorithm 2 still depends on the quantity λ; in addition, this algorithm is not Markovian and is therefore difficult to analyse.

Algorithm 3 is more attractive, and a special procedure (see /6-8/) was developed to study its rate of convergence. We shall slightly modify it and write it in a form more suitable for analysis. For this we note that, at the s-th step of algorithm 3, a random selection is produced N times with restoration (i.e. with replacement) from the set {x_1^{(s+1)}, ..., x_{N_{s+1}}^{(s+1)}}, which is equivalent to N-fold modelling of the discrete distribution concentrated at the points x_1^{(s)}, ..., x_N^{(s)} with the probabilities

k_i^{(s)} [Σ_{j=1}^{N} k_j^{(s)}]⁻¹,    i = 1, 2, ..., N,

followed by modelling of the distribution Q(x_i^{(s)}, dz).

Such an interpretation of algorithm 3 does not require the integrality of the random quantities ξ(x), and this requirement is removed in the description of the following algorithm.

Algorithm 4.

Step 1. We choose a probability distribution Q₀(dz) on ℬ and set s = 0.

Step 2. We model the distribution Q_s(dz) a specified number N of times and obtain x_1^{(s)}, ..., x_N^{(s)}.

Step 3. By modelling the random quantities ξ(x_i^{(s)}), i = 1, 2, ..., N, we obtain their sample values k_1^{(s)}, ..., k_N^{(s)}. If it is found that Σ_{i=1}^{N} k_i^{(s)} = 0, then we repeat the modelling until this sum becomes non-zero.

Step 4. We set

Q_{s+1}(dz) = [Σ_{i=1}^{N} k_i^{(s)}]⁻¹ Σ_{i=1}^{N} k_i^{(s)} Q(x_i^{(s)}, dz).

Step 5. If s < S, we set s = s + 1 and proceed to Step 2.

Although algorithms 3 and 4 agree in the probability plan, their interpretations in terms of particles can differ. Indeed, as a result of the collision at a point x of some particle with a nucleus of matter, this particle is absorbed (dies) but, with some probability (depending on x), produces a random number of new particles, each of which evolves according to the law Q(x, dz) until collision or escape from the set X. If escape is considered to be a collision as a result of which new particles are not produced, the transfer probability Q can be considered to be Markovian (i.e. one for which Q(x, X) = 1 holds for all x ∈ X); normally, however, we consider the escape separately and assume that the transfer probability Q is sub-Markovian, i.e. Q(·, X) ≤ 1.

The distributions Q_s(dz) in algorithm 4 are conditional distributions of the random elements x_i^{(s)}, i = 1, 2, ..., N, s = 0, 1, ..., for the fixed values x_j^{(s-1)}, ξ(x_j^{(s-1)}) = k_j^{(s-1)}, j = 1, 2, ..., N. It is proved below, under certain assumptions, that as s → ∞, N → ∞ the probability distributions P_{s,N}(dx) — the absolute (unconditional) distributions of the random elements x_i^{(s)}, i = 1, 2, ..., N, s = 0, 1, ... — weakly converge to P(dx); therefore, when using algorithm 4 to estimate functional (5), we can again use estimate (6). To avoid unimportant complications, we shall further assume that the following condition holds:

a) ξ(x) ≥ c₁ > 0 for all x ∈ X with probability 1.

We shall first prove two additional statements.
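A sketch of the generation method with a constant number of particles, for the same illustrative setting as before (K(x, dz) = 1.5·dz on X = [0, 1], so Q(x, dz) is uniform and ξ(x) is Poisson with mean 1.5). It assumes the mixture form Q_{s+1}(dz) ∝ Σ_i k_i^{(s)} Q(x_i^{(s)}, dz): resampling a parent with probability proportional to k_i^{(s)} and then moving it by Q is one modelling of that mixture. All names, the kernel, and the burn-in parameter are assumptions of this sketch:

```python
import math, random

def poisson(lam, rng):
    limit, k, p = math.exp(-lam), 0, 1.0
    while p > limit:
        k += 1
        p *= rng.random()
    return k - 1

def algorithm4(g, N, S, rng, s0=5):
    """Generation method with a constant number N of particles.

    Returns estimates of lambda (mean of the xi samples) and of
    J = integral of h dP for h(x) = x, pooling generations s0..S-1.
    """
    pts = [rng.random() for _ in range(N)]       # Step 1: Q0 = uniform on [0, 1]
    lam_samples, h_samples = [], []
    for s in range(S):
        ks = [poisson(g, rng) for _ in pts]      # Step 3: samples of xi(x_i)
        while sum(ks) == 0:                      # repeat until the sum is non-zero
            ks = [poisson(g, rng) for _ in pts]
        if s >= s0:                              # discard the burn-in generations
            lam_samples.extend(ks)
            h_samples.extend(pts)
        parents = rng.choices(pts, weights=ks, k=N)   # resample proportional to k_i
        pts = [rng.random() for _p in parents]   # Q(x, dz) is uniform here, so the
                                                 # move does not depend on the parent
    lam_hat = sum(lam_samples) / len(lam_samples)
    J_hat = sum(h_samples) / len(h_samples)      # estimate (6) with h(x) = x
    return lam_hat, J_hat

rng = random.Random(2)
lam_hat, J_hat = algorithm4(1.5, N=1000, S=25, rng=rng)
```

For this kernel the eigenmeasure P is uniform on [0, 1], so ∫h dP = 0.5 and λ = 1.5, which the two estimates should approach.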

Lemma 1. Suppose condition a), and also the following conditions, hold:

b) for all x ∈ X the random quantity ε(x) = ξ(x) − g(x) has a distribution F(x, dε) with zero expectation which is concentrated on the finite interval [−d, d], and for any x_1, x_2, ... from X the random quantities ε(x_1), ε(x_2), ... are mutually independent;

c) Q(y, dx) = q(y, x) μ(dx), where μ ∈ Λ⁺ and sup_{x,y} q(y, x) ≤ M₀ < ∞;

d) the random elements x_1, ..., x_N with the distribution R_N(dx_1, ..., dx_N), specified on the σ-algebra ℬ_N = σ(ℬ × ... × ℬ), are symmetrically dependent;

e) the probability distribution P_N(dx_1, ..., dx_N) on ℬ_N is expressed in terms of the distribution R_N(dx_1, ..., dx_N) by the formula

P_N(dx_1, ..., dx_N) = ∫_{Y_N} Π(dΞ_N) ∏_{i=1}^{N} [α(Ξ_N) Σ_{j=1}^{N} A(y_j, e_j, dx_i)],    (7)

where

Ξ_N = {y_1, ..., y_N, e_1, ..., e_N},    Y_N = X^N × [−d, d]^N,
Π(dΞ_N) = R_N(dy_1, ..., dy_N) F(y_1, de_1) ... F(y_N, de_N),
α(Ξ_N) = [Σ_{i=1}^{N} (g(y_i) + e_i)]⁻¹,    A(y, e, dx) = [g(y) + e] Q(y, dx).

Then: 1) the random elements x_1, ..., x_N with the distribution P_N(dx_1, ..., dx_N) are symmetrically dependent; 2) the marginal distribution P_N(dx) = P_N(dx, X, ..., X) can be represented in the form

P_N(dx) = [∫ g(z) R_N(dz)]⁻¹ ∫ R_N(dz) g(z) Q(z, dx) + Δ_N(dx),    (8)

where R_N(dz) = R_N(dz, X, ..., X), Δ_N ∈ Λ, and Δ_N → 0 with respect to variation as N → ∞.

Proof. The symmetric dependence of the random elements x_1, ..., x_N follows from (7) and from the definition of symmetric dependence:

P_N(dx_{i_1}, ..., dx_{i_N}) = P_N(dx_1, ..., dx_N),

where (i_1, i_2, ..., i_N) is an arbitrary permutation of (1, 2, ..., N). The marginal distribution P_N(dx) is expressed in the following way:

P_N(dx) = ∫_{Y_N} Π(dΞ_N) α(Ξ_N) Σ_{i=1}^{N} A(y_i, e_i, dx).

The relation obtained is written in the form (8) with

Δ_N(dx) = a_N(x) μ(dx),    (9)

where

a_N(x) = ∫_{Y_N} Π(dΞ_N) [g(y_1) + e_1] q(y_1, x) {N α(Ξ_N) − [∫ g(z) R_N(dz)]⁻¹}.

We shall show that Δ_N → 0 with respect to variation as N → ∞. Taking account of condition c) and of /12, p.118/, the convergence mentioned is equivalent to the uniform smallness of a_N(x) for large N.


To prove this, we shall show that for any δ > 0 and x ∈ X there exists N₀ = N₀(δ), independent of x, such that when N ≥ N₀

|a_N(x)| < δ.    (10)

It follows from condition d) that the random quantities ξ(x_i) = g(x_i) + ε(x_i), i = 1, 2, ..., N, are symmetrically dependent, and therefore (see /13, pp.421, 422/) the random quantities

θ_N = N⁻¹ Σ_{i=1}^{N} [g(x_i) + ε(x_i)]

converge in the mean as N → ∞ to some random quantity θ. This can be formulated thus: for any δ₁ > 0 there exists N₀ ≥ 1 such that E|θ_N − θ| < δ₁ whenever N ≥ N₀. We shall set ψ = [g(y_1) + e_1] q(y_1, x). Using condition a), by virtue of which θ_N ≥ c₁ and θ ≥ c₁, and the fact that

vraisup ψ ≤ (max_x g(x) + d) M₀ = M,

we obtain

|a_N(x)| = |E(θ_N⁻¹ψ) − (Eθ)⁻¹Eψ| ≤ c₁⁻² vraisup ψ · E|θ − θ_N| ≤ c₁⁻² M E|θ − θ_N|.

Thus, if we set δ₁ = δc₁²/M, inequality (10) will hold when N ≥ N₀. The lemma is proved.

Corollary 1. Suppose the conditions of Lemma 1 hold. Then ‖Δ_N‖ ≤ c₂N^{-1/2}, where c₂ > 0 is some constant independent of N.

Proof. It follows from the inequality

E|θ_N − θ| ≤ N^{-1/2} + vraisup|θ_N − θ| P(|θ_N − θ| ≥ N^{-1/2}),

resulting from the inequality proved in /13, p.169/, and from the central limit theorem for symmetrically dependent random quantities (see /14/), that E|θ_N − θ| ≤ c₃N^{-1/2}, where c₃ > 0 is some constant. From the chain of inequalities obtained when proving Lemma 1, we obtain the required inequality with c₂ = c₃c₁⁻²M.

Lemma 2. Suppose the operator 𝒦*, determined using (2), is strongly positive, λ is the maximum eigenvalue of the operator 𝒦, and P(dx) is the eigenmeasure, unique in the set Λ̃⁺, which corresponds to this eigenvalue. Then the operator U, which acts from Λ into Λ by the formula

Uν(dx) = ν(dx) + λ⁻¹P(dx) ∫ g(z) ν(dz) − λ⁻¹ ∫ ν(dy) g(y) Q(y, dx),    (11)

has a continuous inverse.

Proof. By virtue of the corollary from /15, p.454/, it is sufficient to prove that the equation Uν = 0 has no non-trivial solutions belonging to Λ. It follows from Fredholm's alternative /15, p.474/ that this is the same as requiring that the equation U*u = 0, i.e.

u(y) + λ⁻¹g(y) ∫ u(z) P(dz) − λ⁻¹g(y) ∫ u(z) Q(y, dz) = 0,    (12)

has no non-trivial solutions belonging to C(X). To prove this, we multiply (12) by P(dy) and integrate with respect to X; it follows that if u satisfies (12), then u satisfies the relations 𝒦*u = λu and ∫ u(y) P(dy) = 0. But these relations can only be satisfied by a function which is identically equal to zero, since, by virtue of what was stated above, a non-zero eigenfunction of the operator 𝒦* corresponding to the eigenvalue λ is either strictly positive or strictly negative. The lemma is proved.

Theorem 1. We shall assume that conditions a) − c) hold, and also

f) Q(y, dx) ≥ c₄ρ(dx) for ρ-almost all y ∈ X, where ρ ∈ Λ⁺, c₄ > 0.

Then: 1) for any N = 1, 2, ... the random elements σ_s = (x_1^{(s)}, ..., x_N^{(s)}), s = 0, 1, ..., determined in algorithm 4, form a homogeneous Markov chain which has a stationary distribution R_N(dx_1, ..., dx_N), and the random elements x_1, ..., x_N are symmetrically dependent with respect to this distribution; 2) for any ε > 0 there exists N₀ ≥ 1 such that when N ≥ N₀ the marginal distribution R_N(dx) = R_N(dx, X, ..., X) differs from P(dx) with respect to variation by no more than ε.

Proof. We shall consider algorithm 4 as an algorithm for modelling a homogeneous Markov chain in Y = X^N. We shall denote the elements of Y by σ = (x_1, ..., x_N), so that σ_s = (x_1^{(s)}, ..., x_N^{(s)}). The initial distribution of the chain is Q̄₀(dσ) = Q₀(dx_1) ... Q₀(dx_N), and its transfer probability will be denoted by Θ(σ, dσ̄).

We shall show that the method of successive approximations

Q̄_s(dσ̄) = ∫_Y Q̄_{s-1}(dσ) Θ(σ, dσ̄)    (13)

converges with respect to variation as s → ∞. Indeed, it follows from conditions a) − c) and f) that

Θ(σ, dσ̄) ≥ ∏_{i=1}^{N} [Σ_{l=1}^{N} c₁(c₁ + (N − 1)[max_x g(x) + d])⁻¹ Q(x_l, dx̄_i)] ≥ ∏_{i=1}^{N} Nc₁c₄(c₁ + (N − 1)[max_x g(x) + d])⁻¹ ρ(dx̄_i) = c₅ρ̄(dσ̄),

where ρ̄(dσ̄) = ρ̂(dx̄_1) ... ρ̂(dx̄_N) is a probability measure on ℬ_N (ρ̂ = ρ/ρ(X)) and

c₅ = {Nc₁c₄ρ(X)(c₁ + (N − 1)[max_x g(x) + d])⁻¹}^N

(obviously, 0 < c₅ < 1). Hence it follows (see /12, p.260/) that the exponential convergence criterion (see /13, p.387/) is applicable: by virtue of it, the distributions Q̄_s(dσ) converge with respect to variation as s → ∞ to the distribution R_N(dσ), which is the only positive solution of the equation

R_N(dσ̄) = ∫_Y R_N(dσ) Θ(σ, dσ̄),    (14)

and, moreover (following from /13, p.387/), we have

sup_{B∈ℬ_N} |Q̄_s(B) − R_N(B)| ≤ 2c₅⁻¹(1 − c₅)^s.    (15)

We shall write Eq.(14) in the form

R_N(dx_1, ..., dx_N) = ∫_{Y_N} Π(dΞ_N) ∏_{i=1}^{N} [α(Ξ_N) Σ_{j=1}^{N} A(y_j, e_j, dx_i)],

where Π(dΞ_N) is constructed from R_N as in Lemma 1. Since condition f) holds, condition d) also holds, and we can use Lemma 1. Using it successively, it transpires that the random elements σ_s with the distributions Q̄_s(dσ) are symmetrically dependent for all s = 0, 1, .... We shall show that the random elements with the distribution R_N(dσ) are also symmetrically dependent. For any B = B_1 × ... × B_N ∈ ℬ_N we shall set 𝒫(B) = {B': B' = B_{i_1} × ... × B_{i_N}}, where (i_1, i_2, ..., i_N) is an arbitrary permutation of (1, 2, ..., N). We choose any two sets B ∈ ℬ_N, B_1 ∈ 𝒫(B). By virtue of the fact that Q̄_s(B) = Q̄_s(B_1) holds for all s = 0, 1, ..., and that (15) holds, we have

|R_N(B) − R_N(B_1)| ≤ |R_N(B) − Q̄_s(B)| + |Q̄_s(B_1) − R_N(B_1)| ≤ 4c₅⁻¹(1 − c₅)^s.

The left-hand side of this inequality does not depend on s, and the right-hand side vanishes as s → ∞; therefore R_N(B) = R_N(B_1) for any B ∈ ℬ_N, B_1 ∈ 𝒫(B), which is equivalent to the symmetric dependence of the random elements with the probability distribution R_N(dσ).

We shall again use Lemma 1, now with P_N(dx_1, ..., dx_N) = R_N(dx_1, ..., dx_N). The statement formulated in para. 1) of Theorem 1 is thereby proved. It follows from Lemma 1, in addition, that R_N(dx) can be represented in the form

R_N(dx) = [∫ g(z) R_N(dz)]⁻¹ ∫ R_N(dy) g(y) Q(y, dx) + Δ_N(dx),    (16)

where Δ_N → 0 with respect to variation as N → ∞. We shall introduce into consideration an operator 𝒟 which maps Λ × Λ into Λ according to the formula

𝒟(Δ, R)(dx) = R(dx) − [∫ g(z) R(dz)]⁻¹ ∫ R(dy) g(y) Q(y, dx) − Δ(dx).

It follows from the definition of differentiability (see /15, p.637/) that the operator 𝒟 is Fréchet-differentiable with respect to the second argument at the point (0, P) (P is the eigen probability measure of the operator 𝒦), and this derivative equals 𝒟_R'(0, P) = U, where the operator U is determined by Eq.(11). By virtue of Lemma 2, this operator has a continuous inverse, and we can therefore apply the implicit function theorem (see /15, p.662/) to Eq.(16). The theorem is proved.

Note 1. When the conditions formulated in Theorem 1 hold, we can estimate the order of the rate of convergence of the distributions P_{s,N}(dx) to the distribution P(dx). Indeed, using (15), for all s = 0, 1, ... we obtain

sup_{A∈ℬ} |P_{s,N}(A) − R_N(A)| = sup_{A∈ℬ} |Q̄_s(A, X, ..., X) − R_N(A, X, ..., X)| ≤ 2c₅⁻¹(1 − c₅)^s,

i.e. the distributions P_{s,N}(dx) approach R_N(dx) as s → ∞ at the rate of a geometric progression, while R_N approaches P(dx) with respect to variation at the rate O(N^{-1/2}); we can therefore use (6) to estimate (5). When s₀ = S it is easy to write out an estimate for the average square of the error of estimate (6):

E[J − N⁻¹ Σ_{i=1}^{N} h(x_i^{(S)})]² ≤ [J − ∫ h(x) P_{S,N}(dx)]² + N⁻¹[σ₁ + Dh(x_1^{(S)})].

Naturally, there are both random and systematic components in the expression for the average square of the error. Note that the concept underlying the approach considered, and some steps of the proofs, have already been used by the authors in developing methods of searching for a global extremum (see /3, 16/); the results of the corresponding calculations are also presented in /3/.

3. Sequential algorithms.

The algorithm presented below differs from Algorithm 4 in that all the points obtained earlier are used in it to obtain the following points.

Algorithm 5.

Step 1. We specify the natural number N₀. By modelling the probability distribution P₀(dx) N₀ times independently, we obtain x_1, ..., x_{N₀}, and we set s = N₀.

Step 2. We set

Q_{s+1}(dz) = [Σ_{j=1}^{s} p_j ξ(x_j)]⁻¹ Σ_{i=1}^{s} p_i ξ(x_i) Q(x_i, dz),

where p_i = s⁻¹, i = 1, 2, ..., s (so that all the points obtained so far enter with equal weights), and if Σ_{j=1}^{s} ξ(x_j) = 0, then the modelling is repeated until this sum becomes non-zero.

Step 3. Modelling the distribution Q_{s+1}(dz), we obtain the point x_{s+1}; we then set s = s + 1 and proceed to Step 2.

To examine the behaviour of the distributions Q_s(dz), we set

P_s(dz) = s⁻¹ Σ_{i=1}^{s} R_s(X, ..., X, dz, X, ..., X),    (17)

where R_s(dx_1, ..., dx_s) is the joint distribution of the random elements x_1, ..., x_s, and in the i-th term of the sum the differential dz occupies the i-th position.

Theorem 2. Suppose conditions a) − c) hold and the operator 𝒦* is strongly positive. Then the distributions P_s(dx), determined using (17), weakly converge as s → ∞ to P(dx), which is the eigen probability measure of the operator 𝒦 corresponding to the eigenvalue λ.

Proof. The random elements x_s converge with respect to probability as s → ∞ to a random element with some probability distribution R(dx); indeed, for any m > 1 the following equation occurs:

Q_{s+m}(dz) = (1 − p_{s+m-1}) Q_{s+m-1}(dz) + p_{s+m-1} Q(x_{s+m-1}, dz) = ∏_{l=s}^{s+m-1} (1 − p_l) Q_s(dz) + ∏_{l=s+1}^{s+m-1} (1 − p_l) p_s Q(x_s, dz) + ...,

where p_l = ξ(x_l)[Σ_{j=1}^{l} ξ(x_j)]⁻¹ is the weight with which the point x_l enters the mixture; i.e. the sequence x_1, x_2, ... is fundamental with respect to probability. We shall show that R(dx) is identical with P(dx). For A ∈ ℬ we have

R(A) = lim_{s→∞} P_s(A) = lim_{s→∞} s⁻¹ ∫_{Y_s} Π(dΞ_s) [sα(Ξ_s)] Σ_{i=1}^{s} A(y_i, e_i, A).

The assertion of the theorem will be proved if we show that, for any i = 1, 2, ..., s, δ_{s,i} → 0 with respect to variation as s → ∞, where δ_{s,i}(dx) is determined using the formula

δ_{s,i}(dx) = ∫_{Y_s} Π(dΞ_s) A(y_i, e_i, dx) {sα(Ξ_s) − [∫ g(z) P_s(dz)]⁻¹}.

But this fact is proved by almost literally repeating the second part of the proof of Lemma 1, taking into account the fact that, for uniformly bounded sequences of random quantities, convergence with respect to probability is equivalent to convergence in the mean (see /13, p.170/). The theorem is proved.

It is obvious that Algorithm 5 can be modified in such a way that the number of points used at each step remains constant.

Algorithm 6. In Algorithm 5, when s > N (N is a fixed number), we set

Q_{s+1}(dz) = Σ_{i=s-N}^{s} ξ(x_i) [Σ_{l=s-N}^{s} ξ(x_l)]⁻¹ Q(x_i, dz),

i.e. only the last N + 1 points are used; otherwise we carry out the same operations as in Algorithm 5. The convergence of Algorithm 6 as s → ∞, N → ∞ is validated, in an obvious way, using Theorem 2. Note that in Algorithms 5 and 6 the distributions Q_{s+1}(dz) are expressed in terms of Q_s(dz) in a recurrent way, which makes the construction of algorithms for modelling these distributions easier.
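A sketch of the sequential scheme of Algorithm 5 on the two-point space X = {0, 1} with a hypothetical kernel, taking ξ noiseless, ξ(x) = g(x) (so condition a) holds with c₁ = min g). Each new point is one modelling of the current mixture Q_{s+1}(dz); maintaining only the counts of the two states keeps the sketch linear in the number of steps. The empirical distribution of x_1, ..., x_s should approach the eigenmeasure P:

```python
import random

K = [[1.0, 0.5],
     [0.5, 2.0]]                     # hypothetical kernel on X = {0, 1}
g = [sum(row) for row in K]          # g(x) = K(x, X): here g = [1.5, 2.5]
Q = [[Kxz / g[x] for Kxz in row] for x, row in enumerate(K)]

def algorithm5(steps, N0, rng):
    # counts[x] = number of points equal to x among x_1, ..., x_s
    counts = [0, 0]
    for _ in range(N0):              # Step 1: P0 uniform on {0, 1}
        counts[rng.randrange(2)] += 1
    for _ in range(steps):
        # Step 2-3: the mixture Q_{s+1} picks a previous point with
        # probability proportional to xi(x_i) = g(x_i), then moves it by Q
        w = [g[0] * counts[0], g[1] * counts[1]]
        x = rng.choices([0, 1], weights=w, k=1)[0]
        z = rng.choices([0, 1], weights=Q[x], k=1)[0]
        counts[z] += 1
    return counts

rng = random.Random(3)
counts = algorithm5(steps=20000, N0=10, rng=rng)
freq1 = counts[1] / sum(counts)      # empirical weight of state 1, approx P({1})
```

For this kernel the eigenmeasure is P = (1 − √2/2, √2/2), so freq1 should settle near √2/2 ≈ 0.707.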

4. Other approaches.

If the analytic form of the distributions F(x, dε) of the random quantities ξ(x) is known for all x ∈ X, if easily-modelled probability distributions T(x, dε, dy) and T₀(x, dy) can be constructed, and if the Radon-Nikodym derivatives

dF(x, ·)Q(x, ·)/dT(x, ·, ·),    dF(x, ·)/dT₀(x, ·)

are everywhere positive on X and easily calculated, then the algorithms presented above can obviously be modified using the importance (essential) sampling method. The splitting method (see /10, p.278/) can also be used when such data are not available; the following algorithm, which is a modification of Algorithm 1, is based on this method.

Algorithm 7.

Step 1. We model the distribution P₀(dx) N times, obtain x_1^{(0)}, ..., x_N^{(0)}, and set s = 0, p_i^{(0)} = 1, i = 1, 2, ..., N.

Step 2. We set i = 1.

Step 3. We model the random quantity ξ(x_i^{(s)}) and obtain the sample value k_i^{(s)}.

Step 4. We model the distribution Q(x_i^{(s)}, dz) and obtain the point x_i^{(s+1)}.

Step 5. We set p_i^{(s+1)} = p_i^{(s)} k_i^{(s)}.

Step 6. If i < N, we set i = i + 1 and proceed to Step 3.

Step 7. If s < S, we set s = s + 1 and proceed to Step 2.

Step 8. We estimate functional (5) using the formula

Ĵ(S) = [Σ_{i=1}^{N} p_i^{(S)}]⁻¹ Σ_{i=1}^{N} p_i^{(S)} h(x_i^{(S)}).

The convergence with respect to probability of Ĵ(S) to J as S → ∞, N → ∞ when condition a) holds (condition a) is the basis of Algorithm 7) follows from the fact that (as s → ∞)

E p_i^{(s)} h(x_i^{(s)}) = ∫∫ P₀(dx) K^{(s)}(x, dy) h(y) ∼ cλ^s J(h, P),

where K^{(s)} is the kernel of the operator 𝒦^s and c > 0 is a constant.
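A sketch of the weighting scheme behind Algorithm 7, again for the illustrative kernel K(x, dz) = 1.5·dz on X = [0, 1] (Q(x, dz) uniform, ξ(x) Poisson with mean 1.5). Each particle survives through all S generations and accumulates the multiplicative weight p_i^{(S)} = k_i^{(0)}···k_i^{(S-1)}; the ratio form of the final estimate and all names are assumptions of this sketch:

```python
import math, random

def poisson(lam, rng):
    limit, k, p = math.exp(-lam), 0, 1.0
    while p > limit:
        k += 1
        p *= rng.random()
    return k - 1

def algorithm7(g, N, S, rng):
    """Weighted random walk: each particle carries a multiplicative weight."""
    xs = [rng.random() for _ in range(N)]    # Step 1: P0 uniform, weights p_i = 1
    ws = [1.0] * N
    for s in range(S):
        for i in range(N):
            k = poisson(g, rng)              # sample of xi(x_i^{(s)})
            ws[i] *= k                       # Step 5: p_i^{(s+1)} = p_i^{(s)} k_i^{(s)}
            xs[i] = rng.random()             # Step 4: Q(x, dz) is uniform here
    total = sum(ws)
    J_hat = sum(w * x for w, x in zip(ws, xs)) / total   # ratio estimate for h(x) = x
    return J_hat, total

rng = random.Random(4)
J_hat, total_weight = algorithm7(1.5, N=4000, S=5, rng=rng)
```

Since P is uniform here, ∫h dP = 0.5; note that many weights vanish (a particle whose ξ-sample hits zero keeps zero weight), which is the practical cost of this scheme compared with the generation methods.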

The other group of algorithms (for the case g(x) ≡ 1, examined in /17/) is based on reducing the problem considered to that of estimating a functional of the solution of a linear integral equation of the second kind, which is effected by means of the following lemma.

Lemma 3. We shall assume that the operator 𝒦* is strongly positive, that the maximum eigenvalue λ of the operator 𝒦 is known, and that

K(y, dx) ≥ ψ(y) μ(dx),    (18)

where ψ ∈ C⁺(X), μ ∈ Λ⁺, ∫ ψ(y) P(dy) > 0. Then the principal solution of the equation

Q(dx) = ∫ Q(dy) K₁(y, dx) + μ(dx),    (19)

obtained using the method of successive approximations,

Q = Σ_{n=0}^{∞} 𝒦₁ⁿ μ,

is an eigenmeasure of the operator 𝒦 corresponding to the eigenvalue λ. Here 𝒦₁ is the integral operator with the kernel K₁(y, dx) = λ⁻¹[K(y, dx) − ψ(y)μ(dx)].

The proof is obvious (we need to substitute the series for Q into Eq.(3) for the eigenmeasure).

The theory of estimating functionals of the solutions of linear integral equations of the second kind using the Monte-Carlo method has been carefully developed and has long been in use (see, e.g., /10/). We can use this theory to construct estimates for functionals (h, Q) of Q(dx) = cP(dx) (here Q is the principal solution of (19), P(dx) is the eigenmeasure of the operator 𝒦 corresponding to the eigenvalue λ, and c is some constant which is positive but possibly unknown). To estimate (5), it is sufficient to estimate the functional c = (1, Q) = ∫ Q(dx) in tandem with the estimate of the functional (h, Q) by the Monte-Carlo method, after which we divide the estimate for (h, Q) by the estimate for c. The main difficulty in using this approach is obtaining ψ ∈ C⁺(X) (except for the constant) and μ ∈ Λ⁺ such that (18) holds and the convergence of the method of successive approximations for (19) is not too slow. Note, finally, that we can dispense with the first of these requirements: when (18) does not hold, the kernel of the operator 𝒦₁ is alternating in sign, but the method indicated can nevertheless be used.

REFERENCES

1. DUNFORD N. and SCHWARTZ J.T., Linear operators. A general theory. Moscow: Izd-vo inostr. lit., 1962.
2. KREIN S.G. (Ed.), Functional analysis. Moscow: Nauka, 1972.
3. ERMAKOV S.M. and ZHIGLYAVSKII A.A., Numerical methods of searching for a global extremum (new approaches and results). In: Vychisl. algoritmy v zadachakh matem. fiziki. Novosibirsk: VTs SO AN SSSR, pp.46-55, 1983.
4. LIEBEROTH J., A Monte-Carlo technique to solve the static eigenvalue problem of the Boltzmann transport equation. Nukleonik, Vol.11, No.5, pp.213-219, 1968.
5. MIKHAILOV G.A., Calculations of critical systems using the Monte-Carlo method. Zh. vychisl. Mat. mat. Fiz., Vol.6, No.1, pp.71-80, 1966.
6. KHAIRULLIN R.KH., One algorithm of the Monte-Carlo method for calculations of critical systems. Izv. vuzov. Matematika, No.10, pp.138-149, 1977.
7. KHAIRULLIN R.KH., Estimating the critical parameter of one class of branching processes. Izv. vuzov. Matematika, No.0, pp.77-84, 1980.
8. ZOLOTUKHIN V.G. and MAIOROV L.A., An estimate of the systematic errors when estimating criticality using the Monte-Carlo method. Atomnaya energiya, Vol.55, No.3, pp.173-175, 1983.
9. KANEVSKII V.A., The justifiability of the generation method with a constant number of particles. In: Metody Monte-Carlo v vychisl. matem. i matem. fiz. Novosibirsk: VTs SO AN SSSR, pp.166-176, 1974.
10. ERMAKOV S.M. and MIKHAILOV G.A., Statistical modelling. Moscow: Nauka, 1982.
11. HARRIS T., Theory of branching random processes. Moscow: Mir, 1966.
12. PROKHOROV YU.V. and ROZANOV YU.A., Probability theory. Moscow: Nauka, 1973.
13. LOEVE M., Probability theory. Moscow: Izd-vo inostr. lit., 1962.
14. BLUM J.R. et al., Central limit theorems for interchangeable processes. Canadian J. Math., Vol.10, pp.222-229, 1958.
15. KANTOROVICH L.V. and AKILOV G.P., Functional analysis. Moscow: Nauka, 1977.
16. ERMAKOV S.M. and ZHIGLYAVSKII A.A., Random search for a global extremum. Teoriya veroyatnostei i ee primeneniya, Vol.28, No.1, pp.129-134, 1983.
17. LEONOV N.N., Integral equations for invariant measures of Markov chains and the essential-sampling method. Dep. v VINITI, No.280-83 Dep., 1983.

Translated by H.Z.