Physica A 192 (1993) 323-341 North-Holland
Kinetic method for waves in random media

V.E. Shapiro

Departament de Fisica, Universitat de les Illes Balears, E-07071 Palma de Mallorca, Spain
and L.V. Kirensky Physics Institute, Academy of Sciences of Russia, Krasnoyarsk 660036, Russian Federation

Received 5 May 1992

New tools of statistical analysis are presented for systems obeying differential equations in partial (and ordinary) derivatives with randomly alternating parameters of Markovian type and boundary conditions given at the edges. While the conventional master equation methods are valid only for stochastic equations with given initial conditions, the method presented is more general and involves no embedding into an initial value problem. We develop a simple formalism based on the introduction of special double-sided conditional averages of the dynamic variables. Its main advantage is that the averaging procedure for linear systems with multiplicative random influence is linear, which yields the possibility of obtaining exact analytical solutions. The method is applied to a typical model of the theory of wave propagation in random media, and exact results of its analysis are presented.
1. Introduction

The present work aims to draw attention to possibilities of exact statistical analysis of systems of random structure and interaction, modeled by differential equations in partial derivatives with given boundary conditions and depending on random functions of given statistics. Such modeling is widely exploited in various random media problems concerning propagation of waves, scattering of particles, susceptibility, density of states, etc. Their investigation, being concerned with the multiplicative character of the random influence and with back scattering effects, meets as a rule with difficulties. The conventional analytical methods use a perturbation theory approach, and exact results are of particular value.

To find a probability function of the stochastic system, it is often best to write down a linear stochastic equation for the corresponding nonaveraged quantity and then perform the averaging. It is, in fact, the only way in which one can expect to obtain a closed deterministic equation for the statistical mean exactly. Let ⟨u(x)⟩ denote the mean of u(x) obeying a linear
differential equation of the form

M(α(x), ∂ₓ) u = f(x) ,   (1)

where M(α, ∂ₓ) is a polynomial in ∂ₓ = ∂/∂x with coefficients that are functions of α; α = α(x) is a random function of given statistics; x, α(x), u(x) may be multi-component quantities, in which case M is correspondingly a matrix polynomial and f is a vector function, which may be random as well. Obviously, the solution of (1) for a given realization of α(x) is a function of x and a nonlinear functional of α(x'),

u = u(x, [α(x')]) ,   (2)
for x' ranging in the area determined by the imposed boundary conditions. By the definition of the statistical average ⟨Φ⟩, where Φ is a function (or a functional) of u,

⟨Φ⟩ = ∫_{[α(x')]} Φ(x, [α(x')]) DP([α(x')]) .   (3)

Here and throughout, ∫ with a symbol under it denotes summation (integration) over all possible values that this symbol can assume, so the sum in (3) is over all realizations of the process. The function DP characterizes their probability measure; in terms of random states it is a multi-point (infinite-point) probability function related to the corresponding probability density function P by

DP(α₁, x₁; … ; αₙ, xₙ) = P(α₁, x₁; … ; αₙ, xₙ) dα₁ … dαₙ ,
where αᵢ ≡ α(xᵢ). When f(x) in (1) includes randomness statistically independent of the process α(x), additional averaging over its statistics is to be implied. For Φ = u this procedure, since u is a linear functional of f, incorporates summation over the one-point probability density function of f(x), so it is much simpler than the multi-point summation given by (3). A rigorous body of the mathematics concerned is the approach based on the master equations for Markov-type processes and the associated Ito–Stratonovich calculus, with adjoined specific methods for step-wise random functions α(x) (e.g. refs. [1–13]). But these methods are effective, in fact, for stochastic differential equations of the evolution type, i.e. with given initial (final) conditions and random α(x) in the corresponding one dimension x.
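Throughout what follows, the step-wise case can be kept concrete by thinking of the simplest example, the symmetric two-state (telegraph) Markov process. The sketch below is purely illustrative and not part of the formalism; the state values ±1, the unit jump rate and the sample points are arbitrary choices. It samples realizations from the jump statistics and checks two standard properties: equal occupation of the two states and the exponential decay e^{−2ν|Δx|} of the correlation.

```python
import random

def sample_telegraph(length, rate, rng, a=1.0):
    """One realization of a symmetric dichotomic Markov process alpha(x) on
    [0, length]: states +/-a, exponential waiting times with mean 1/rate."""
    state = a if rng.random() < 0.5 else -a      # stationary initial state
    jumps, x = [], rng.expovariate(rate)
    while x < length:
        jumps.append(x)
        x += rng.expovariate(rate)
    return state, jumps

def value_at(state0, jumps, x):
    """Value of the realization at x: the sign flips at each jump point."""
    flips = sum(1 for j in jumps if j <= x)
    return state0 if flips % 2 == 0 else -state0

rng = random.Random(1)
L, rate, N = 5.0, 1.0, 20000
occ = 0
corr = 0.0
for _ in range(N):
    s0, js = sample_telegraph(L, rate, rng)
    occ += value_at(s0, js, 2.5) > 0
    corr += value_at(s0, js, 2.5) * value_at(s0, js, 3.0)
occ /= N
corr /= N      # should approach exp(-2 * rate * 0.5) ~ 0.368
```

With a fixed seed the two estimates land statistically close to ½ and to e^{−1} ≈ 0.37.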
Modeling α(x) by a Markov process (or a component of a multi-component Markov process) and using, e.g., the standard trick of dealing with the joint function (α, Φ, ∂ₓΦ, …, ∂ₓ^{N−1}Φ), where N is the degree of the stochastic differential equation for Φ, one obtains a linear deterministic master equation in this extended phase space, which is an effective means to find ⟨Φ⟩. To apply the approach to nonevolution type systems, like, e.g., systems describing waves and density of states in random media, the present-day methods involve reducing them to evolution stochastic equations. But both the specific methods of reduction applicable to particular equations (including the method of "transfer matrices" (ref. [7]) and the device associated with polar coordinates (e.g. refs. [6, 14])) and the general method, the "embedding" (e.g. refs. [15–18]), complicate the problem critically, as they reduce the linear stochastic equations to essentially nonlinear ones that admit analytical treatment only in the limit of weak (in some sense) random influence. There can even arise problems of ambiguity of the reduction.

The present work involves a new idea. It does not incorporate any reduction to evolution equations; instead it introduces a new kind of extension of the system's phase space, by means of dealing with special conditional averages which may be called "double-sided". In application to the systems (1) in one dimension x, these quantities are defined as follows:

Φ_{α'α''}(x) = ∫_{[α(x⁻)],[α(x⁺)]} Φ(x; [α(x')]) DP([α(x⁻)] | α(x) = α') DP([α(x⁺)] | α(x) = α'') .   (4)

Here P([α(x⁻)] | α(x) = α') is the conditional probability function that at x' < x the process takes on the values [α(x⁻)], which is the part of [α(x')] in Φ for x' < x, under the condition that α(x) = α'; similarly, P([α(x⁺)] | α(x) = α'') is the probability function of the states [α(x⁺)], which is the part of [α(x')] for x' > x, under the condition α(x) = α''. The summation in (4) is over all realizations [α(x')] excluding the state at x' = x. It is essential that [α(x')] in the argument of Φ in (4), as well as of u in (2), is a single-piece realization, so its parts [α(x⁻)], [α(x⁺)] are related, while α' and α'' in (4) are arbitrary, i.e. independent, quantities from the set of possible values of α(x). As agreed upon, we consider only such stochastic equations as have well-defined solutions for each realization [α(x')], and we assume throughout that any required regularity condition is satisfied, e.g. differentiability, integrability, existence of limits, etc. Since this implies single-valued realizations, the quantities (4) are rather unusual, resembling the density matrix description in quantum mechanics. Though the Φ_{α'α''} for α' ≠ α'' seem unphysical, one can specify them by (4) just as for α' = α''. The statistics of α(x) is assumed to be specified by the kinetic operator determining the kinetics of the random process. The description of the kinetics of system (1) developed below in terms of the double-sided quantities looks as if the kinetic operator of the random subsystem were split into two operators, associated with delayed and advanced influence, acting separately. It is this feature that makes the approach constructive, leading to a linear (unlike the embedding) procedure for finding ⟨u(x)⟩. The procedure relies on the formulae of differentiation deduced below for the double-sided quantities (4). For ease of presentation, α(x) is assumed to be a Markov scalar function of x in one dimension. Multi-dimensional cases do not meet principal difficulties; this was touched upon in the first work on the topic [19]. Below, another derivation of the formulae of differentiation is given, the necessity of their additional specification is shown, and more details concerning the application of the method are presented.
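The double-sided averages (4) can also be sampled numerically, which may help intuition: by the Markov property, conditioned on the value at x, the left and right parts of a realization are independent, and for a stationary reversible process both branches can be grown away from x by the same rule. The sketch below (our own toy illustration; the functional Φ, the telegraph model and all parameters are arbitrary choices) estimates Φ_{α'α''} for Φ = α(x−Δ) α(x+Δ) with α' ≠ α'' and compares it with the factorized product of the two one-sided conditional correlations.

```python
import math, random

def branch_value(start, delta, rate, rng):
    """Evolve a telegraph state (+/-1) over a distance delta away from the
    conditioning point x; by stationarity and reversibility of the process
    the left and right branches can be sampled by the same forward rule."""
    state, pos = start, rng.expovariate(rate)
    while pos < delta:
        state = -state
        pos += rng.expovariate(rate)
    return state

def double_sided_average(ap, app, delta, rate, rng, n):
    """Monte Carlo estimate of Phi_{a'a''} for the toy functional
    Phi = alpha(x - delta) * alpha(x + delta), with the left branch
    conditioned on alpha(x) = a' and the right on alpha(x) = a''
    (the structure of definition (4))."""
    total = 0.0
    for _ in range(n):
        total += branch_value(ap, delta, rate, rng) * \
                 branch_value(app, delta, rate, rng)
    return total / n

rng = random.Random(2)
est = double_sided_average(+1, -1, 0.25, 1.0, rng, 20000)
# factorized conditional correlations: a' e^{-2 nu d} * a'' e^{-2 nu d}
exact = -math.exp(-4 * 1.0 * 0.25)
```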
2. Formulae of differentiation
Let us differentiate both sides of (4) with respect to x, assuming α(x) to be a Markov process and x a scalar variable. For this purpose let us expand (2) into the functional Taylor series

Φ(x, [α(x')]) = Φ(x, 0) + Σ_{n=1}^{∞} (1/n!) ∫_a^b dx₁ … ∫_a^b dxₙ α(x₁) … α(xₙ) V(x, x₁, …, xₙ) .   (5)

Here

V(x, x₁, …, xₙ) = δⁿΦ(x, [α(x')]) / δα(x₁) δα(x₂) … δα(xₙ) |_{[α(x')]=0} ,

and a, b are the edge points determined by the given boundary conditions for (1). Since the function V is symmetric in the xᵢ and in view of the identity

∫_{[α(x')]} DP([α(x')] | α(x) = α) = 1   (6)

(which is true for an arbitrary set of x' not involving x, e.g. either for x' < x or
x' > x), it follows from (4) and (5) that

Φ_{α'α''}(x) = Φ(x, 0) + Σ_{n=1}^{∞} Σ_{m=0}^{n} 1/(m!(n−m)!) ∫_a^x dx₁ … ∫_a^x dx_m ⟨α₁ … α_m⟩_{α(x)=α'} ∫_x^b dx_{m+1} … ∫_x^b dxₙ ⟨α_{m+1} … αₙ⟩_{α(x)=α''} V(x, x₁, …, xₙ) .   (7)

Here, for m = 0 (for m = n), the empty group of integrals over x₁, …, x_m (over x_{m+1}, …, xₙ) is to be replaced by 1; ⟨α₁ … α_m⟩_{α(x)=α'} is the mean of the product α(x₁) … α(x_m) under the condition that α(x) = α', and similarly ⟨α_{m+1} … αₙ⟩_{α(x)=α''} is the mean of the product α_{m+1} … αₙ under the condition α(x) = α''. Performing the differentiation of (7) gives
(∂/∂x − Λ̂) Φ_{α'α''} = (∂Φ/∂x)_{α'α''} ,   (8)

where (∂Φ/∂x)_{α'α''} is equal to (4) with Φ replaced by ∂Φ/∂x, and Λ̂Φ_{α'α''} incorporates all the summations with the derivatives of the conditional moments. Λ̂ is a linear operator in α', α''; it does not depend on any of the variables entering [α(x')]. Such a form of Λ̂ follows from the fact that all the probability functions concerning future, as well as past, random states are determined as soon as the random state of the Markov process at the point x is specified. For a step-wise Markov α(x), the operator Λ̂ obviously acts as a matrix, i.e.

Λ̂Φ_{α'α''} = Σ_{β'β''} Λ_{α'α'', β'β''} Φ_{β'β''} .   (9)
Evaluating the derivatives of the conditional moments in the rhs of (7) we find

Λ_{α'α'', β'β''} = δ(α'' − β'') Λ⁻_{α'β'} + δ(α' − β') Λ⁺_{α''β''} ,   (10)

where δ(α) is the delta-function and

Λ^∓_{αβ} = lim_{z→x∓0} ∂ₓ P(α(z) = β | α(x) = α) ;   (11)

here ∓0 means that the limit is to be taken from the left- or the right-hand side, respectively. Indeed, the differentiation of ⟨α₁ … α_m⟩_{α(x)=α'} in (7) produces a matrix operator in α'. Its form is the same for any m and coincides with the one determined by differentiating the conditional probability density P(α(x₁) = α₁ | α(x) = α) for arbitrary x₁ < x. Performing the differentiation of the latter P-function, letting x₁ → x − 0 and noticing that this P-function tends to δ(α₁ − α), one readily comes to the form given by the matrix Λ⁻_{αβ} in (11). Similarly, the differentiation of ⟨α_{m+1} … αₙ⟩_{α(x)=α''} produces the matrix operator in α'' given by Λ⁺_{α''β''}. The matrices Λ⁻_{α'β'} and Λ⁺_{α''β''}, since they act on different variables, commute, and we come to (10). Since the replacement of ∂ₓ by ∂_z in (11) results in changing Λ^∓_{αβ} respectively into −Λ^∓_{αβ}, the quantities −Λ^∓_{αβ} have the sense of mean rates of transition from the state α into β, and we may write

Λ⁺_{αβ} = ν(α, x) δ(α − β) − ν(α, x) q(β | α, x) ,   (12)

where ν(α, x) is the mean rate of jumps from the state α and q(β | α, x) is the probability density of transition from α into β for one jump at the point x. The matrix of the elements (12) is nothing but the operator of the well-known backward Kolmogorov–Feller equation for step-wise α(x); the transposed matrix, taken with the opposite sign, is the operator of the corresponding forward equation. The coefficients Λ⁻_{αβ} can be expressed through the same ν and q by means of the identity

P(α, x) P(α(z) = β | α(x) = α) = P(β, z) P(α(x) = α | α(z) = β) ,   (13)

where P(α, x) is the one-point probability density that α(x) = α. Differentiating the identity with respect to x and taking the limit z → x − 0 gives the desired relation

P(α, x) Λ⁻_{αβ} + P(β, x) Λ⁺_{βα} = −δ(α − β) ∂ₓP(α, x) .   (14)
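As a minimal concrete instance of (12)–(14) (our own illustration, under the sign conventions of this section): for a symmetric two-state process with jump rate ν and swap transitions, the matrices are 2 × 2, the stationary one-point density is P = (½, ½) with ∂ₓP = 0, and the identity (14) can be checked elementwise.

```python
# Two-state step-wise Markov process: states indexed 0 and 1, jump rate nu,
# q(beta|alpha) = 1 for beta != alpha (each jump goes to the other state).
nu = 0.7
delta = [[1.0, 0.0], [0.0, 1.0]]     # Kronecker delta
q = [[0.0, 1.0], [1.0, 0.0]]         # transition density per jump
P = [0.5, 0.5]                       # stationary one-point probabilities

# Backward-side matrix (12): Lam_plus[a][b] = nu*delta - nu*q
lam_plus = [[nu * delta[a][b] - nu * q[a][b] for b in range(2)]
            for a in range(2)]

# Lam_minus extracted from identity (14) with d_x P = 0 (stationarity):
# P[a]*lam_minus[a][b] = -P[b]*lam_plus[b][a]
lam_minus = [[-P[b] * lam_plus[b][a] / P[a] for b in range(2)]
             for a in range(2)]

# Residual of (14): should vanish identically in this symmetric case
residual = [[P[a] * lam_minus[a][b] + P[b] * lam_plus[b][a]
             for b in range(2)] for a in range(2)]
```

Note that here Λ⁻ comes out as simply −Λ⁺, reflecting the reversibility of the symmetric stationary two-state process.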
We now turn to the form of Λ̂ for a continuous, as opposed to step-wise, Markov α(x). Obviously, when the steps of the latter tend to zero we approach the former. Then q(β | α, x) in (12) is close to δ(β − α), and the convolution of the matrix Λ⁺_{αβ} with a function Φ smoothly varying in β is represented by the rapidly convergent expansion

Σ_β Λ⁺_{αβ} Φ_β ≈ −( a₁ ∂/∂α + (a₂/2) ∂²/∂α² + … ) Φ_α .

Here aₙ = aₙ(α, x) is given by

aₙ(α, x) = ν(α, x) ∫_β (β − α)ⁿ q(β | α, x) ,

which is nothing but the limit

aₙ(α, x) = lim_{Δ→+0} (1/Δ) ⟨(α(x + Δ) − α(x))ⁿ⟩_{α(x)=α} .   (15)
The limit of continuous processes corresponds to neglecting all the aₙ for n ≥ 3. Similarly, the matrix operator Λ⁻_{αβ} turns into a differential one, differing from Λ⁺_{αβ} in that the limit Δ → +0 in (15) is to be replaced by Δ → −0. As a result we obtain the following form of Λ̂ in (8):

Λ̂ = Λ̂⁻ + Λ̂⁺ ,

with

Λ̂⁺ = −a₁ ∂/∂α − (a₂/2) ∂²/∂α² ,   (16)

where a₁ and a₂, given by (15), have the sense of the drift and diffusion coefficients of the process α(x). The operator adjoint to (16), −(Λ̂⁺)†, is the operator of the Fokker–Planck equation. The operator Λ̂⁻ can be expressed through the same a₁, a₂ by means of the identity (13); the desired relation is a continuous analog of (14) and reads

P(α, x) Λ̂⁻Φ + (Λ̂⁺)†(PΦ) = −Φ ∂ₓP(α, x) ,

so that

Λ̂⁻Φ = −(1/P(α, x)) [ Φ ∂ₓP(α, x) + ∂/∂α (a₁ P Φ) − ½ ∂²/∂α² (a₂ P Φ) ] .
Parallel with (8), the following formulae of differentiation hold for positive integer k:

(∂/∂x − Λ̂)^k Φ_{α'α''} = (∂^kΦ/∂x^k)_{α'α''} .   (17)

The formulae (8), (17) can be derived (see below) directly, without relying on the expansion (5), by taking into account, in accordance with the agreed-upon notion of the differential equations, that the derivatives ∂^kΦ/∂x^k include all the variations of Φ versus x. Note also that the derivatives ∂^kΦ/∂x^k of the randomly varying Φ in (8), (17) must be defined in the same way as in the original stochastic differential equation.
3. The method of averaging

Applying the procedure given by (4) to a linear stochastic equation, i.e. multiplying both its sides by the DP functions entering (4) and performing the summation, one obtains a linear relation between its derivatives (∂^kΦ/∂x^k)_{α'α''}. Replacing them by means of (17) makes this relation a closed set of linear deterministic equations for Φ_{α'α''}(x). The average ⟨Φ(x)⟩ is expressed through the solutions Φ_{α'α''}(x) according to the definitions (3), (4) as follows:

⟨Φ(x)⟩ = ∫ Φ_{αα}(x) P(α, x) dα .   (18)

Let us consider the problems arising in this way for Φ = u with u obeying (1). Multiplying both sides of (1) by the DP functions, performing the summation according to (4) and replacing the derivatives by means of (17), we obtain

M(α, ∂ₓ − Λ̂) u_{α'α''} = f_{α'α''} ,   (19)

where f_{α'α''} is determined by (4) with Φ = f. It should be emphasized that α, α', α'' in (19), unlike α(x) in (1), are not functions of x but variables independent of x, while f_{α'α''} is a deterministic function and Λ̂ a deterministic operator in α', α''. This set of deterministic relations becomes a closed set of equations for finding ⟨u(x)⟩ once the boundary conditions and the relation between α and α', α'' are determined. Concerning the relation between α and α', α'', our first thought was that specifying it is not essential, i.e. does not influence the mean solution ⟨u(x)⟩. But this is not so, since taking α = α' or α = α'' in (19) can result in different characteristic equations and, consequently, in different solutions ⟨u(x)⟩. What is the origin of this ambiguity? The whole derivation of (19) consists in the direct application of the formulae of differentiation to the stochastic equation (1) with Markov random α(x) admitting a master equation description. For this, the probability functions involved must be well-defined, integrable and differentiable to the extent required for the deduction of the formulae of differentiation. The deduction of (8) from (5) is easily verified, e.g., for step-wise α(x), provided that its states α_k and transition rate parameters ν, q are finite.
Let us present another derivation of the formulae of differentiation, not using the Taylor expansion but proceeding directly from the master equations

∂ₓ P([α(x⁻)] | α(x) = α') = Λ̂⁻_{α'} P([α(x⁻)] | α(x) = α') ,
∂ₓ P([α(x⁺)] | α(x) = α'') = Λ̂⁺_{α''} P([α(x⁺)] | α(x) = α'') .

Multiplying these equations by Φ(x, [α(x')]) and by the corresponding P functions and summing, we readily obtain

∫_{[α(x⁻)],[α(x⁺)]} Φ (∂ₓ − Λ̂⁻_{α'}) DP([α(x⁻)] | α(x) = α') DP([α(x⁺)] | α(x) = α'') = 0 ,

∫_{[α(x⁻)],[α(x⁺)]} Φ (∂ₓ − Λ̂⁺_{α''}) DP([α(x⁺)] | α(x) = α'') DP([α(x⁻)] | α(x) = α') = 0 .

At first glance, adding these relations and taking into account that Λ̂ in (8) acts only on the indices α', α'' of Φ_{α'α''}, we at once arrive at (8). But ∂ₓ in the upper relation is the derivative from the side x + 0 and in the lower one from the side x − 0. To remove this obstacle, let us modify the definition (4) by excluding a small area of [α(x')], centered at x' = x, from the sets [α(x∓)] entering the P functions and the summation. Then the derivatives may be taken the same, either from x − 0 or from x + 0, so one rigorously arrives at (8). Reduction of this small area to zero makes no change in (8), while in the limit we come to (18) and consequently to the algorithm presented. But it is out of the scope of this reasoning that, while the Φ used in our deduction is a well-defined functional, the object of application, the solution of the stochastic equation, is not: the derivatives in (1), being derivatives of random functions not differentiable in the routine sense, require an additional definition. The only place in our device allowing us to account for different definitions of the stochastic calculus is to specify α as a function of α', α'' in the formulae of differentiation. Specifying it we specify the random system model, and accordingly obtain different results. So the ambiguity does not imply incorrectness of the deduction, but rather prompts us to specify the original stochastic model. Similar to the device used by Stratonovich in ref. [20] for specifying the stochastic differential equations with initial conditions, let us proceed from the relation
α = εα' + (1 − ε)α'' ,   (20)

specified by choosing a constant value of ε, 0 ≤ ε ≤ 1. The choice ε = 1 corresponds to the notion of derivatives conventionally used in the physical literature when dealing with stochastic equations with given initial (or final) conditions. In terms of the Ito calculus this notion corresponds to the Stratonovich rule of treating stochastic integrals. We come to this statement by comparing the algorithm given by (19), (20) with ε = 1 (and (18) for Φ = u) with the results of averaging (1) with given initial (or final) conditions obtained by the routine technique used for that case. In the particular case of given initial conditions, the solution of (1) is a functional of α(x') for x' < x, i.e. u = u(x, [α(x⁻)]). Taking account of the identity (6), it follows that

u_{α'α''}(x) = ∫_{[α(x⁻)],[α(x⁺)]} u DP([α(x⁻)] | α(x) = α') DP([α(x⁺)] | α(x) = α'')
 = ∫_{[α(x⁻)]} u DP([α(x⁻)] | α(x) = α') ,

which is independent of α'', i.e. is an ordinary one-sided conditional mean. The operator Λ̂ in (19) acting on such u_{α'α''} reduces to its part Λ̂⁻ = Λ̂⁻_{α'}, and the set (19) with α = α' for finding ⟨u(x)⟩ reduces to

M(α, ∂ₓ − Λ̂⁻) u_α = f_α ,   ⟨u(x)⟩ = ∫ u_α(x) P(α, x) dα ,   (21)

with u_α ≡ u_{αα''}. Applying to the stochastic equation (1) with initial conditions a familiar averaging technique, e.g. the trick of dealing with the joint function (α, u, ∂ₓu, …, ∂ₓ^{N−1}u), we arrive at equations which are equivalent to (21). Similarly, finding ⟨u(x)⟩ for (1) with final conditions by means of solving (19) with α = α'', we obtain u_{α'α''} reducing to a one-sided conditional mean independent of α', while the operator Λ̂ in (19) reduces to its part Λ̂⁺. So the set (19) reduces to (21) with Λ̂⁻ replaced by Λ̂⁺. Again the results coincide with those obtained by the conventional methods used for the case. But for evolution-type systems with the inverted positive direction of x, the choice ε = 1 in (20), (19) gives results differing from those presented; now the adequate modeling corresponds to the formulae of differentiation specified by the choice ε = 0. Both choices, ε = 1 and ε = 0, cause an asymmetry between the backward and forward directions of x. Obviously, for the boundary-value problems in which the choice
of the positive direction of x is insignificant, there is no reason for the asymmetry, and it is natural to adopt ε = ½.

Up to this point, we have not specified the boundary conditions. Obviously, the rule (18) and the formulae of differentiation, properly specified by the choice of ε, hold true for arbitrary boundary-value problems. To consider a concrete problem one needs to determine the corresponding boundary conditions for (19). They should be found by means of the definition (4) and the (specified) formulae (17) from the boundary conditions given for the original stochastic equation. Let {xᵢ} be a set of boundary points and let the boundary conditions for (1) at each xᵢ have the form

[h + h₀u + h₁ ∂u/∂x + … + h_m (∂ₓ)^m u]_{x=xᵢ} = 0 ,   (22)

where m ≤ N − 1, N is the degree of the polynomial M, and the coefficients h, h_k may depend on x and α(x). The requirement of continuity of u(x) and its derivatives at the boundaries and a number of other "local" restrictions reduce to the form (22). Applying the operation given by (4) to (22) and using (17) we readily obtain the corresponding boundary conditions for (19):

[h + h₀U + h₁(∂ₓ − Λ̂)U + … + h_m(∂ₓ − Λ̂)^m U]_{x=xᵢ} = 0 ,   (23)

with U = {u_{α'α''}(x)} and with α(xᵢ) entering h, h_k replaced by the α given by (20) with the specified value of ε. In particular, zero boundary conditions for (1) correspond to zero boundary conditions for (19). Note also that the trick of replacing u(x) by v(x) + g(x), where v is a new unknown and g an appropriate function, allows one to reformulate the original boundary-value problem into one more convenient for finding the boundary conditions for U. Thus, solving (19) and taking the weighted trace

⟨u(x)⟩ = ∫ u_{αα}(x) P(α, x) dα ,   (24)

one can find the desired average.
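To see the linear closure behind (21) at work in the simplest evolution-type setting (a sketch of ours, not an example from the paper): for du/dx = iα(x)u with telegraph α(x) = ±a and jump rate ν, the one-sided conditional means m±(x) obey a closed 2 × 2 linear system, which can be integrated deterministically and compared with brute-force averaging over realizations.

```python
import cmath, random

a, nu, L = 1.0, 1.0, 2.0

def rhs(m):
    """Closed equations for the conditional means m = (m_plus, m_minus):
    the multiplicative term i*alpha acts diagonally, while the kinetic
    (jump) operator couples the components linearly, as in (21)."""
    mp, mm = m
    return (1j * a * mp + nu * (mm - mp), -1j * a * mm + nu * (mp - mm))

def rk4(m, h, steps):
    for _ in range(steps):
        k1 = rhs(m)
        k2 = rhs((m[0] + 0.5*h*k1[0], m[1] + 0.5*h*k1[1]))
        k3 = rhs((m[0] + 0.5*h*k2[0], m[1] + 0.5*h*k2[1]))
        k4 = rhs((m[0] + h*k3[0], m[1] + h*k3[1]))
        m = (m[0] + h*(k1[0] + 2*k2[0] + 2*k3[0] + k4[0])/6,
             m[1] + h*(k1[1] + 2*k2[1] + 2*k3[1] + k4[1])/6)
    return m

def mc_mean(n, rng):
    """Brute-force average of u(L) = exp(i * integral of alpha dx) over
    telegraph realizations."""
    total = 0j
    for _ in range(n):
        s = a if rng.random() < 0.5 else -a
        x = phase = 0.0
        while True:
            step = rng.expovariate(nu)
            if x + step >= L:
                phase += s * (L - x)
                break
            phase += s * step
            x += step
            s = -s
        total += cmath.exp(1j * phase)
    return total / n

mp, mm = rk4((0.5 + 0j, 0.5 + 0j), 1e-3, 2000)
kinetic = mp + mm                       # <u(L)> from the closed linear system
monte = mc_mean(20000, random.Random(3))
```

For a = ν the closed system has a degenerate eigenvalue and gives ⟨u(L)⟩ = e^{−νL}(1 + νL), a quantity the Monte Carlo estimate reproduces only statistically.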
4. Solving the stochastic Helmholtz equation

Let us consider a typical object of the theory of waves in random media, the Helmholtz equation

∂²ϕ/∂t² − c² ∂²ϕ/∂x² = 0 ,   (25)

with

c² = c²(α(x)) for 0 ≤ x ≤ l ,   c² = c₀² = const otherwise,

where c²(α) is a function of α = α(x), a given Markov process. Let a wave, harmonic in t, of unit amplitude be incident on the layer from the left. Continuity of ϕ(x, t) at the boundaries completes the formulation of the problem of finding arbitrary averaged characteristics of the wave field. Putting ϕ(x, t) = u(x) exp(−iωt), one has the one-dimensional stochastic equation

d²u/dx² + k²u = 0 ,   (26)

with k² = ω²/c², and the boundary conditions

u(0) = 1 + R ,   u̇(0) = ik₀(1 − R) ,
u(l) = T ,   u̇(l) = ik₀T ,   (27)

where u̇ = du/dx, k₀ = ω/c₀, and T and R are the coefficients of transmission through and reflection from the layer. Excluding the unknown T and R, we have conditions of the form (22):

u̇(0) + ik₀u(0) − 2ik₀ = 0 ,   u̇(l) − ik₀u(l) = 0 .   (28)
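A deterministic limit gives a useful sanity check of the boundary formulation (26)–(28) (our sketch; all numerical values are arbitrary): for constant k inside the layer, matching the fields at x = 0 and x = l yields a 4 × 4 linear system for A, B, R, T, and the solution must satisfy both conditions (28) and, for real k and k₀, energy conservation |R|² + |T|² = 1.

```python
import cmath

def solve(Amat, b):
    """Plain Gaussian elimination with partial pivoting (complex)."""
    n = len(b)
    M = [row[:] + [b[i]] for i, row in enumerate(Amat)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0j] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

k0, k, l = 1.0, 1.7, 2.3                 # arbitrary test values, real k inside
e = cmath.exp(1j * k * l)
# unknowns A, B, R, T; inside field A e^{ikx} + B e^{-ikx}, convention (27)
rows = [[1, 1, -1, 0],                   # u continuous at 0:  A + B = 1 + R
        [1j*k, -1j*k, 1j*k0, 0],         # u' continuous at 0: ik(A-B) = ik0(1-R)
        [e, 1/e, 0, -1],                 # u(l) = T
        [1j*k*e, -1j*k/e, 0, -1j*k0]]    # u'(l) = ik0 T
rhs = [1, 1j*k0, 0, 0]
A, B, R, T = solve(rows, rhs)

u0, du0 = A + B, 1j*k*(A - B)            # u(0), u'(0) from the inside field
ul, dul = A*e + B/e, 1j*k*(A*e - B/e)
bc_left = du0 + 1j*k0*u0 - 2j*k0         # condition (28) at x = 0
bc_right = dul - 1j*k0*ul                # condition (28) at x = l
energy = abs(R)**2 + abs(T)**2
```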
Let us first dwell on the technique of finding ⟨u(x)⟩. This problem is reduced to solving the closed system of deterministic equations

[(d/dx − Λ̂)² + k²(α)] u_{α'α''} = 0 ,   (29)

with α = ½(α' + α''), for the quantities u_{α'α''}(x) under the boundary conditions

u̇_{α'α''}(0) + ik₀ u_{α'α''}(0) = 2ik₀ ,   u̇_{α'α''}(l) − ik₀ u_{α'α''}(l) = 0 ,   (30)

where u̇_{α'α''} = (∂ₓ − Λ̂) u_{α'α''} and Λ̂ is given in section 3. The desired ⟨u(x)⟩ is expressed through the solution of (29), (30) as the weighted trace (24).
Let, e.g., α(x) be a stationary dichotomic Markov process, i.e. let α(x) take on only two values α₁, α₂ with equal probabilities,

P(α, x) = ½ [δ(α − α₁) + δ(α − α₂)] ,

and let

ν(α, x) = ν ,   q(β | α, x) = δ(β − ᾱ) ,

where ᾱ denotes the state complementary to α, i.e. the mean frequency ν of jumps from one state to the other is independent of α and x. In this case (29) turns into a system of four linear ordinary differential equations with constant coefficients,

[(d/dx − Λ)² + K²] U = 0 ,   (31)

for the four quantities U = (u_{α₁α₁}, u_{α₁α₂}, u_{α₂α₁}, u_{α₂α₂}), where Λ is the 4 × 4 matrix representing the operator Λ̂ = Λ̂⁻ + Λ̂⁺ of section 2 in this basis of pairs (α', α''), its elements following from (10) and (12) with the above ν and q, and

K = diag(k₁, k₂, k₃, k₄) ,   kᵢ² = ω²/cᵢ² ,

with c₁ = c(α₁), c₄ = c(α₂), c₂ = c₃ = c(½(α₁ + α₂)) ≡ c̄₀. The general solution of (31) has the form

U = Σ_{i=1}^{8} Cᵢ e^{qᵢx} ,   (32)
with vector coefficients Cᵢ and the qᵢ obeying the characteristic equation det[(qE − Λ)² + K²] = 0 (E is the 4 × 4 identity matrix), which in terms of

κ² = q² + k² ,   k² = (k₁² + k₄²)/2 ,   δ² = (k₁² − k₄²)/2

reads (for c̄₀ = c₀)

κ² [κ⁶ + 2ν²κ⁴ + (ν⁴ − δ⁴ − 4ν²k²)κ² − ν²δ⁴] = 0 .
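Closed-form results of this kind can be cross-checked against brute-force simulation: each realization of the dichotomic k(x) is piecewise constant, so the boundary-value problem (26)–(28) is solved exactly with 2 × 2 transfer matrices, and ⟨u(x)⟩ follows by averaging. The sketch below (ours; all parameters arbitrary) assembles this machinery and, as a built-in test, runs it at zero contrast (α₁ = α₂), where every realization must reproduce the free solution u(x) = e^{ik₀x}.

```python
import math, random

def tmat(k, d):
    """Transfer matrix of u'' + k^2 u = 0 over an interval of length d,
    acting on the column (u, du/dx)."""
    c, s = math.cos(k * d), math.sin(k * d)
    return [[c, s / k], [-k * s, c]]

def mmul(A, B):
    return [[A[0][0]*B[0][0] + A[0][1]*B[1][0], A[0][0]*B[0][1] + A[0][1]*B[1][1]],
            [A[1][0]*B[0][0] + A[1][1]*B[1][0], A[1][0]*B[0][1] + A[1][1]*B[1][1]]]

def intervals(l, rate, ka, kb, rng):
    """One dichotomic realization: list of (k, length) pieces covering [0, l]."""
    k = ka if rng.random() < 0.5 else kb
    out, x = [], 0.0
    while x < l:
        d = min(rng.expovariate(rate), l - x)
        out.append((k, d))
        x += d
        k = kb if k == ka else ka
    return out

def field_at(pieces, x, k0):
    """Solve the boundary-value problem (28) for one realization and return
    u(x); transfer matrices give both the full map and the map up to x."""
    M, Mx, pos = [[1.0, 0.0], [0.0, 1.0]], None, 0.0
    for k, d in pieces:
        if Mx is None and pos + d >= x:
            Mx = mmul(tmat(k, x - pos), M)
        M = mmul(tmat(k, d), M)
        pos += d
    denom = M[1][0] - 1j*k0*(M[0][0] + M[1][1]) - k0*k0*M[0][1]
    u0 = -2j*k0*(M[1][1] - 1j*k0*M[0][1]) / denom
    du0 = 2j*k0 - 1j*k0*u0               # left-hand condition (28)
    return Mx[0][0]*u0 + Mx[0][1]*du0

k0, l, rate, x = 1.0, 4.0, 1.5, 1.3
rng = random.Random(4)
# zero-contrast check: both states carry k = k0, so u(x) = exp(i k0 x) exactly
vals = [field_at(intervals(l, rate, k0, k0, rng), x, k0) for _ in range(50)]
mean_u = sum(vals) / len(vals)
```

For nonzero contrast the same loop, run over many realizations, furnishes the statistical estimate of ⟨u(x)⟩ against which the exponents qᵢ above may be compared.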
The desired mean is ⟨u(x)⟩ = ½(U₁ + U₄). Substituting (32) into (30) one determines the coefficients Cᵢ and thus completes the problem of finding ⟨u(x)⟩. Interestingly, in the limit δ → 0, besides the roots close to the familiar values q = ±ik, one finds roots close to q = ±ik − ½ν. The contribution of the corresponding extra exponents to ⟨u(x)⟩ is essential unless l is small compared with 1/ν, the mean period of alternations in α(x). Note that a typical random media problem employing the statistical modeling corresponds to the case l ≫ 1/ν and not l ≪ 1/ν. As a matter of fact, at δ = 0 no extra exponents appear in ⟨u(x)⟩, nor in U(x); one can verify this from (31), (32) by means of the substitution U(x) = exp(Λx) V(x). If instead of the symmetrical formulae of differentiation, corresponding to ε = ½ in (20), (19), we take e.g. ε = 1, we obtain essentially different and unphysical results. For instance, the solution corresponding to the wave incident either from the left or from the right, and for any boundary conditions of the local type

[h(x) + h₀(x) U(x) + h₁ U̇(x)]_{x=0, x=l} = 0 ,

with h, h₀, h₁ independent of the random process α(x), consists of only 4, not 8, exponents exp(qᵢx), with the qᵢ taking the values q
= −½ν ± [¼ν² − k² ± (ν²k² − δ⁴)^{1/2}]^{1/2} .
The appearance of this asymmetry with respect to the ±x directions in the solution evidently shows its unphysical nature.

Let us briefly dwell on the technique of finding the mean intensity characteristic ⟨u²(x)⟩ and the probability density function ⟨p(u, u̇, x)⟩. It follows from (27), (28) that the intensity obeys the stochastic equation

dZ/dx + A(α(x)) Z = 0 ,   Z = (u², u̇², uu̇)ᵀ ,   (33)

where

A(α) = ( 0, 0, −2 ; 0, 0, 2k²(α) ; k²(α), −1, 0 )

(rows separated by semicolons), and the boundary conditions

φ Z(0) + ψ Z(l) + η = 0 ,
where

φ = ( −k₀², 1, 2ik₀ ; 0, 0, 0 ; 0, 0, 0 ) ,   ψ = ( 0, 0, 0 ; −ik₀, 0, 1 ; k₀², 1, 0 ) ,   η = ( 4k₀², 0, 0 )ᵀ ;

the first row of φ, together with η, is the square of the left-hand condition (28), and the rows of ψ express the right-hand condition u̇(l) = ik₀u(l) in terms of Z. So the procedure of finding ⟨u²(x)⟩ is reduced to solving the deterministic equations

[∂ₓ − Λ̂ + A(½(α' + α''))] Z_{α'α''} = 0 .   (34)

Since φᵢₖψᵢₖ = 0 for any components i, k of the matrices φ and ψ, the boundary conditions for (34) take the form

φ Z_{α'α''}(0) + ψ Z_{α'α''}(l) + η = 0 .   (35)
The desired ⟨u²(x)⟩ is the weighted trace

⟨u²(x)⟩ = ∫ Z₁|_{αα}(x) P(α, x) dα ,

where Z₁|_{αα} = (Z₁)_{αα} and Z₁ is the first component of the vector Z. In the particular case of dichotomic α(x), the closed set of equations (34) consists of 12 linear ordinary differential equations with constant coefficients. Its characteristic equation takes the form

det[(qE − Λ)² + 4K²] = 0 ,   det(qE − Λ) = 0 ,

with the matrices Λ, E, K determined above. The roots of the first determinant equation differ from the characteristic roots for (31) in that the values of k and δ are to be multiplied by 2. The roots of det(qE − Λ) = 0 are associated with the zero root of the characteristic equation for (33) with α constant; they read q = ½ν, q = 0 (the zero root being of multiplicity 2). The same conclusions hold for any other specification of the formulae of differentiation. So, as when dealing with ⟨u(x)⟩, one can find the solution of (34), (35) explicitly and carry the problem of finding ⟨u²(x)⟩ to completion. Details, as well as a comparison with results known from the literature, will be considered elsewhere. Note that the solution of (34), (35), and consequently ⟨u²(x)⟩, can be presented in a compact matrix form by means of the evolution matrix for eq. (34),

G(x) = exp[(Λ̂ − A)x] .
Then the solution reads

Z(x) = −G(x) [φ + ψ G(l)]⁻¹ η .

All of Λ̂, A, φ, ψ are constant deterministic matrices, and so the problem reduces to elementary algebra. In a similar form one can write the solution for ⟨u(x)⟩ and for the higher moments.

Let us turn to ⟨p(u, u̇, x)⟩. The nonaveraged density function

p(u, u̇, x) = δ(u − u(x)) δ(u̇ − u̇(x))
obeys the continuity equation

[∂ₓ + u̇ ∂ᵤ − k²(x) u ∂_{u̇}] p = 0 ,   (36)

and the boundary conditions following from (28),

p(u, u̇, 0) = p(u, 0) δ(u̇ + ik₀u − 2ik₀) ,
p(u, u̇, l) = p(u, l) δ(u̇ − ik₀u) ,   (37)

where p(u, x) is the density function in u-space; p(u, 0) and p(u, l) are to be determined self-consistently. According to our device, the problem of finding ⟨p⟩ is reduced to dealing with the extended master equations, which have the form of (36), (37) with the replacements

p(u, u̇, x) → p_{α'α''}(u, u̇, x) ,   ∂ₓ → ∂ₓ − Λ̂ ,

and with α(x) in the function k² in (36) replaced by (20), depending on the adopted stochastic model. Note that now α', α'' are independent variables, as are u, u̇. The desired ⟨p⟩ is to be found as
⟨p(u, u̇, x)⟩ = ∫ p_{αα}(u, u̇, x) P(α, x) dα .
Finally, let us juxtapose the method presented with that given by the embedding, applied intensively to this case (e.g. refs. [16–18]). That device consists in treating the solution u(x) of (26), (27) as a function u(x, l) of x and of the layer thickness l, and in the deduction of the relation

∂u(x, l)/∂l = ik₀ u(x, l) + iε(l) W(l) u(x, l) ,   (38)

where W(l) = u(l, l) and ε(l) = [k²(l) − k₀²]/2k₀. This relation is a stochastic equation with respect to l with the initial condition u(x, l)|_{l=x} = W(x). The unknown W(x) obeys the stochastic Riccati equation

dW/dx = 2ik₀(1 − W) + iε(x) W²   (39)

with the initial condition W(0) = 1. Modeling k²(x) by a Markov process, one applies to these stochastic equations with initial conditions the conventional methods, e.g. the trick of dealing with the joint probability density function (in the phase space of the system and of the random process). The corresponding kinetic equations will not be cited here; they are complicated. The nonlinearity of (38), (39) does not allow analyzing the system exactly; numerical methods are used, or a perturbation theory when there exists a small parameter. In this respect our method, as demonstrated, differs in that no nonlinearities arise and the problem admits exact treatment.
5. Conclusion
The method developed treats evolution and nonevolution type systems on an equal footing, and in such a way that the random influences act as if they cause two separate kinds of scattering processes, delayed and advanced. A characteristic feature, as for all kinetic methods, is that in the averaged equations the values of α figure, while in the original stochastic equations we have to deal with the functions α(x). Of course, the price of this result is the enlargement of the space of variables, similar to the case of deterministic systems. But for stochastic systems the important circumstance appears that dealing directly with the stochastic equation is a much harder problem: one needs to find explicit solutions for an arbitrary, to a certain extent irregular, α(x); besides, the problem involves summation over multi-point distributions of the random process. Note that even for a kinetic operator Λ̂ of simple form the multi-point probability functions can be far from simple. Another advantage, besides compactness, is that the kinetic operators directly characterize the kinetics of the random process. In a sense, this is like looking at an oscillator equation: we recognize at once, without considering the solutions, the natural frequency, the damping, the anharmonicity.
The method presented, unlike the conventional ones, performs the averaging of linear stochastic systems with nonadditive random influence without any nonlinear reduction, and the averaged equations are linear. The advantages are particularly evident for random influences modeled by Markov step-wise functions. Using the formulae of differentiation and specifying the kinetics of the random process, i.e. the transition rates between random states, allows one, as shown, to write down immediately the closed equations for the averages. In particular, as demonstrated for a typical model of the theory of wave propagation in random media, the problem of finding the average solution can be carried out to completion, to elementary algebra. For example, one can readily determine the dispersion relations associated with the "effective media". Such exact results are obtained for the first time, to my knowledge. It follows from the linearity of the method that the results can easily be extended to a much wider class of stochastic equations, e.g. like (1) but with multi-component u(x) and multi-component random function α(x) of several variables x. The analysis of the present work has revealed that a problem of adequate modeling by stochastic differential equations arises when we go from initial-value problems to boundary-value ones. This problem is associated with that of how to specify derivatives in the stochastic equations. There exists an extensive literature on the subject (see refs. [1,20-22]), but it concerns only equations with initial conditions. The case of boundary conditions presents, as shown, new problems. This interesting question needs a detailed investigation which is beyond the scope of the present paper.
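The closure mechanism afforded by the formulae of differentiation can be made concrete in the simplest case. For a stationary dichotomous Markov process α(x) with values ±α₀ and correlation decay rate ν, the differentiation formula of ref. [12] reads (the notation A, B, ν, α₀ below is illustrative, for a generic linear system in the initial-value setting):

```latex
% Formula of differentiation for a dichotomous Markov \alpha(x)
% with correlation rate \nu (cf. ref. [12]):
\frac{d}{dx}\langle \alpha u \rangle
   = \Big\langle \alpha \frac{du}{dx} \Big\rangle
   - \nu \,\langle \alpha u \rangle .
% Applied to a linear equation du/dx = (A + \alpha(x) B)\,u, and
% using \alpha^2 = \alpha_0^2, it closes the chain of moments:
\frac{d}{dx}\langle u \rangle
   = A \langle u \rangle + B \langle \alpha u \rangle ,
\qquad
\frac{d}{dx}\langle \alpha u \rangle
   = (A - \nu)\,\langle \alpha u \rangle + \alpha_0^2 B \langle u \rangle .
```

Specifying the transition rates thus converts the stochastic equation into a finite closed linear system for the averages; the double-sided conditional averages of the present method obey analogous linear closed equations in the boundary-value setting.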
Acknowledgements

I am very thankful to Professor M. San Miguel for his interest and support, and to the Physics Department of the Universitat de les Illes Balears for its hospitality. I would also like to thank Professors J.M. Sancho and F.J. De La Rubia for the interest and support from the Universities of Barcelona and UNED in Madrid.
References

[1] I.I. Gikhman and A.V. Skorokhod, Stochasticheskie Differentsialnie Uravneniya (Naukova Dumka, Kiev, 1968); Teoriya Sluchainikh Processov, vol. 3 (Nauka, Moscow, 1975).
[2] C.W. Gardiner, Handbook of Stochastic Methods (Springer, Berlin, 1983).
[3] W. Horsthemke and R. Lefever, Noise-Induced Transitions (Springer, Berlin, 1984).
[4] N.G. van Kampen, Stochastic Processes in Physics and Chemistry (North-Holland, Amsterdam, 1981).
[5] S.M. Rytov, Yu.A. Kravtsov and V.I. Tatarsky, Principles of Statistical Radiophysics 4. Wave Propagation through Random Media (Springer, Berlin, 1989).
[6] I.M. Lifshitz, S.A. Gredeskul and L.A. Pastur, Vvedenie v Teoriyu Neuporyadochenikh Sistem (Nauka, Moscow, 1982).
[7] W. Kohler and G. Papanicolaou, J. Math. Phys. 14 (1973) 1733; 15 (1974) 2186.
    P. Sheng, R. White, Z.Q. Zhang and G. Papanicolaou, in: Scattering and Localization of Classical Waves (World Scientific, Singapore, 1989).
[8] F. Moss and P.V.E. McClintock, eds., Noise in Nonlinear Dynamical Systems (Cambridge Univ. Press, Cambridge, 1989).
[9] E.W. Montroll and G.H. Weiss, J. Math. Phys. 6 (1965) 167.
[10] A. Brissaud and U. Frisch, J. Math. Phys. 15 (1974) 524.
[11] R.C. Bourret, U. Frisch and A. Pouquet, Physica A 65 (1973) 303.
[12] V.E. Shapiro and V.M. Loginov, Physica A 91 (1978) 563; Dynamicheskie Systemy pri Sluchainykh Vozdeystviyakh (Nauka, Novosibirsk, 1983).
[13] A.G. Kofman, R. Zaibel, A.M. Levine and Y. Prior, Phys. Rev. A 41 (1990) 6434.
[14] A.I. Saichev, Izv. VUZ Radiofiz. (USSR) 23 (1980) 183.
[15] R. Bellman and G.M. Wing, An Introduction to Invariant Imbedding (Wiley, New York, 1975).
[16] C.I. Babkin and V.I. Klyatskin, Wave Motion 4 (1982) 195.
[17] V.I. Klyatskin, Metod Pogruzeniya v Teorii Rasprostraneniya Voln (Nauka, Moscow, 1986).
[18] V.I. Goland and V.I. Klyatskin, Akust. Zh. (USSR) 33 (1988) 828.
[19] V.E. Shapiro, Phys. Lett. A 162 (1992) 309.
[20] R.L. Stratonovich, in: Noise in Nonlinear Dynamical Systems, F. Moss and P.V.E. McClintock, eds. (Cambridge Univ. Press, Cambridge, 1989), pp. 16-71.
[21] P. Bedeaux, Phys. Lett. A 62 (1977) 10.
[22] B.J. West, A.R. Bulsara, K. Lindenberg, V. Seshadri and K.E. Shuler, Physica A 97 (1979) 211.