Mathematics and Computers in Simulation 82 (2011) 679–690
On the linear advection equation subject to random velocity fields

F.A. Dorini a,∗, M.C.C. Cunha b

a Department of Mathematics, Federal University of Technology – Paraná, 80230-901, Curitiba, PR, Brazil
b Department of Applied Mathematics, State University of Campinas, 13083-970, Campinas, SP, Brazil

Received 13 November 2010; received in revised form 1 October 2011; accepted 13 October 2011
Available online 11 November 2011
Abstract

This paper deals with the random linear advection equation for which the time-dependent velocity and the initial condition are independent random functions. Expressions for the density and joint density functions of the solution are given. We also verify that in the Gaussian time-dependent velocity case the probability density function of the solution satisfies a convection–diffusion equation with a time-dependent diffusion coefficient. Some exact examples are presented.
© 2011 IMACS. Published by Elsevier B.V. All rights reserved.

MSC: 35R60; 60H15

Keywords: Advection; Random; Velocity; Gaussian processes
1. Introduction

Uncertainties in data are natural in many models of real-world problems that use differential equations, especially in the physical sciences. Nevertheless, deterministic formulations have traditionally been the convenient choice. Parameters of differential equations are, in general, viewed as well-defined local quantities that can be assigned at each point. In practice, such parameters can, at best, be measured at selected locations, and interpretative procedures are used at points where measurements are not available. Quite often the support of the measurements is uncertain and the data are corrupted by experimental and interpretative errors. These errors and uncertainties render the parameters random and turn the governing equations into stochastic differential equations. Once the statistical properties of the relevant random parameters have been inferred from data, the next nontrivial step is to solve the stochastic differential equations. Several techniques have been used to solve this kind of problem (see [1,2,5,7–9,16,19,21,23,25,30,33], for example).

In this paper we deal with the random one-dimensional advection problem

∂Q(x, t)/∂t + V(t) ∂Q(x, t)/∂x = 0,  t > 0, x ∈ R,
Q(x, 0) = Q0(x),    (1)

where V(t) is the random velocity and Q0(x) is the random initial condition. We suppose independence between the time-dependent velocity and the initial condition. Advection equations arise in the modeling of a wide variety of
∗ Corresponding author. Tel.: +55 41 3342 7369.
E-mail addresses: [email protected] (F.A. Dorini), [email protected] (M.C.C. Cunha).
0378-4754/$36.00 © 2011 IMACS. Published by Elsevier B.V. All rights reserved. doi:10.1016/j.matcom.2011.10.008
phenomena that involve advective transport of substances or wave motions (for example, see [10,23,31]). In the case of uncertainties in the transport velocity and/or in the initial condition, the random advection equation provides a better description of the process.

Several authors have studied problems related to (1). Most approaches include methods by which one seeks the statistical moments of the solution (for example, see [10,11,23,28–31,35], and the references therein). The main effort is usually concentrated on the derivation of appropriate differential equations for average quantities using, in general, small perturbations with some kind of closure. Another approach is to solve numerically appropriate equations for representative sets of realizations of the random fields and to average the computed functions. This is the so-called Monte Carlo method (see [12,35], for example), which has the advantage of applying to a very broad range of both linear and nonlinear problems. The large volume of computation, the errors in solving the deterministic equations, and the difficulty of generalizing the results may limit the significance of this approach. In recent years polynomial chaos has been used as a complete basis to represent random processes in stochastic Galerkin projections, transforming the stochastic differential equation into a set of deterministic equations (for example, see [13–15,17,21,33] and the references therein).

Although the full solution of a stochastic differential equation would be the (joint) probability density function (pdf), it is difficult to construct differential equations for this function. In a pdf-method the density function at one point and one time is modeled by evolution equations. As far as we know, pdf-methods have been developed in only a few fields (see [3,6,18,24–26,32], for example).

The purpose of this paper is to present some results on the random solution of (1).
In Section 2 we couple the total probability theorem with the characteristic method to find an expression for the pdf of the solution of (1). In Section 3 we present results and examples in which the velocity in (1) is a Gaussian time-dependent process. We straightforwardly verify that the pdf of the solution satisfies a convection–diffusion equation with a time-dependent diffusion coefficient. Although differential equations for moments and density functions have been studied in the literature for some particular cases [15,17,29,30], we believe that our methodology gives a new insight into the subject and is more direct. Finally, in Section 4 we obtain the two-point joint density function of the solution of (1) and present some examples.

2. The pdf of Q(x, t)

For each realization V(t, ω1) of the random velocity and Q0(x, ω0) of the initial condition (for example, see [19,22] for more details on probability spaces and stochastic processes), (1) becomes a deterministic advection equation whose solution at a fixed (x, t), Q(x, t, ω1, ω0), is constant along the characteristics [20], i.e., Q(x, t, ω1, ω0) = Q0(x0, ω0), where

x0(x, t, ω1) = x − ∫_0^t V(τ, ω1) dτ.

Thus, the solution of (1) can be expressed as Q(x, t) = Q0(x − A(t)), where

A(t) = ∫_0^t V(τ) dτ.    (2)
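Relation (2) reduces each realization to a shift of the initial data, which suggests a direct sampling scheme: draw a velocity path, integrate it to approximate A(t), and evaluate Q0 at the foot of the characteristic. A minimal sketch in Python (the piecewise-independent velocity discretization, its parameters, and the step initial condition are illustrative assumptions, not the paper's prescription):

```python
import random

def sample_A(t, n_steps=100, mean_v=0.5, sd_v=0.5):
    """Riemann-sum approximation of A(t) = int_0^t V(tau) dtau.
    V is drawn independently on each sub-interval, mimicking a velocity
    that decorrelates quickly relative to the step size (an assumption)."""
    dt = t / n_steps
    return sum(random.gauss(mean_v, sd_v) * dt for _ in range(n_steps))

def sample_Q(x, t, q0):
    """One realization of Q(x, t) = Q0(x - A(t)), constant along characteristics."""
    return q0(x - sample_A(t))

# Step initial condition (8): 1 for x <= 0, 0 otherwise.
step = lambda x: 1.0 if x <= 0.0 else 0.0

random.seed(1)
n = 5000
est = sum(sample_Q(0.5, 2.0, step) for _ in range(n)) / n  # estimates P(Q(0.5, 2) = 1)
```

Averaging such realizations is exactly the Monte Carlo approach mentioned in the introduction; the density formulas derived below avoid this sampling entirely.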
The concept of conditional probability will play a role in calculating the cumulative function of Q(x, t), FQ(q; x, t), for a fixed (x, t). In fact, by the Law of Total Probability (see [22, p. 103] or [27, p. 111], for example) we can write

FQ(q; x, t) = P(Q(x, t) ≤ q) = E_{X0}[P(Q(x, t) ≤ q | X0)],    (3)

where X0 is given by X0(x, t) = x − A(t), E_{X0} denotes the expected value relative to the random variable X0, P denotes the probability measure, and P(U | V) denotes the conditional probability of U given V. On the other hand, by the characteristic method and the independence between V(t) and Q0(x), it follows that Q(x, t) ≤ q given that X0 = x0 is equivalent to Q0(x0) ≤ q. Thus, from (3),

FQ(q; x, t) = ∫_{−∞}^{+∞} P(Q0(x0) ≤ q) f_{X0}(x0) dx0 = ∫_{−∞}^{+∞} F_{Q0}(q; x0) f_{X0}(x0) dx0,
where F_{Q0} is the cumulative function of Q0(x0) and f_{X0} is the pdf of X0. Taking the derivative with respect to q we obtain

fQ(q; x, t) = ∫_{−∞}^{+∞} f_{Q0}(q; x0) f_{X0}(x0) dx0.    (4)

Recalling that A(t) is a random variable for fixed t, we have

F_{X0}(x0) = P(x − A(t) ≤ x0) = P(A(t) ≥ x − x0) = 1 − F_{A(t)}(x − x0),

and by differentiation with respect to x0 we arrive at f_{X0}(x0) = f_{A(t)}(x − x0). Then, substituting f_{X0}(x0) in (4), we obtain

fQ(q; x, t) = ∫_{−∞}^{+∞} f_{A(t)}(x − x0) f_{Q0}(q; x0) dx0.
The arguments so far summarized prove the following results:

Proposition 2.1. The pdf of the solution of (1) at a fixed (x, t), fQ(q; x, t), is given by

fQ(q; x, t) = ∫_{−∞}^{+∞} f_{A(t)}(x − x0) f_{Q0}(q; x0) dx0.    (5)
Corollary 2.1. The mth moment, μ^m(x, t) = E[(Q(x, t))^m], m ∈ Z, m ≥ 1, of the solution of (1) is given by

μ^m(x, t) = ∫_{−∞}^{+∞} q^m fQ(q; x, t) dq = ∫_{−∞}^{+∞} f_{A(t)}(x − x0) μ0^m(x0) dx0,

where μ0^m(x) is the mth moment of Q0(x).

Remark 2.1. Note that the expressions above require knowledge of the probability function of A(t), defined by expression (2). This can become difficult, depending on the time-dependent velocity, since, to the best of our knowledge, there is no general result about the pdf of the integral of a stochastic process.

Remark 2.2. In case V(t) = V is a random variable, we obtain A(t) = Vt, f_{A(t)}(x) = (1/t) f_V(x/t), and

fQ(q; x, t) = ∫_{−∞}^{+∞} f_V(v) f_{Q0}(q; x − vt) dv = E_V[f_{Q0}(q; x − Vt)].
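For the random-variable velocity of Remark 2.2, the formula fQ(q; x, t) = E_V[f_{Q0}(q; x − Vt)] can be checked numerically. In the sketch below, the choices V ∼ N(0.5, 0.2²) and Q0(x) = x + W with W ∼ N(0, 1) are our own illustrative assumptions; they make Q(x, t) = x − Vt + W exactly Gaussian, so the expectation has a closed form to compare against:

```python
import math
import random

def phi(z, mu=0.0, sd=1.0):
    """Gaussian density with mean mu and standard deviation sd."""
    return math.exp(-0.5 * ((z - mu) / sd) ** 2) / (sd * math.sqrt(2.0 * math.pi))

# Assumed illustrative fields: V ~ N(0.5, 0.2^2), constant in time, and a
# random initial condition Q0(x) = x + W with W ~ N(0, 1), so f_{Q0}(q; x) = phi(q - x).
mu_v, sd_v, t, x, q = 0.5, 0.2, 2.0, 1.0, 0.3

random.seed(7)
n = 200000
# Remark 2.2: f_Q(q; x, t) = E_V[ f_{Q0}(q; x - V t) ], estimated by averaging over V.
mc = sum(phi(q - (x - random.gauss(mu_v, sd_v) * t)) for _ in range(n)) / n

# Exact check: Q = x - V t + W is Gaussian, Q ~ N(x - mu_v t, sd_v^2 t^2 + 1).
exact = phi(q, mu=x - mu_v * t, sd=math.sqrt(sd_v**2 * t**2 + 1.0))
```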
Remark 2.3. In case Q(x, 0) = g(x) is a deterministic function, we can express (5) as

fQ(q; x, t) = ∫_{−∞}^{+∞} f_{A(t)}(x − x0) δ(g(x0) − q) dx0,    (6)

where δ is the Dirac (delta) distribution. Furthermore, if g(x) is a smooth function, if the equation g(x) − q = 0 has n isolated zeros, x_{j,q} (j = 1, 2, . . ., n), and if g′(x) does not vanish at any of the zeros, we have (for example, see [34])

δ(q − g(x)) = Σ_{j=1}^{n} δ(x − x_{j,q}) / |g′(x_{j,q})|,

and

fQ(q; x, t) = ∫_{−∞}^{+∞} f_{A(t)}(x − x0) Σ_{j=1}^{n} [δ(x0 − x_{j,q}) / |g′(x_{j,q})|] dx0
 = Σ_{j=1}^{n} (1 / |g′(x_{j,q})|) f_{A(t)}(x − x_{j,q}).    (7)
Example 2.1. Consider problem (1) with the deterministic initial condition

Q(x, 0) = 1 if x ≤ 0, and 0 if x > 0.    (8)

This initial condition has been used by several authors to study mixing zones of substance concentrations (see [4], for example). In this case, from (6):

fQ(q; x, t) = δ(q − 1) ∫_{−∞}^{0} f_{A(t)}(x − x0) dx0 + δ(q) ∫_{0}^{+∞} f_{A(t)}(x − x0) dx0
 = δ(q − 1) ∫_{x}^{+∞} f_{A(t)}(x0) dx0 + δ(q) ∫_{−∞}^{x} f_{A(t)}(x0) dx0    (9)
 = δ(q − 1)[1 − F_{A(t)}(x)] + δ(q) F_{A(t)}(x),

that is, Q(x, t) is a Bernoulli discrete random variable with

P(Q(x, t) = 1) = ∫_{x}^{+∞} f_{A(t)}(x0) dx0 = 1 − F_{A(t)}(x).
Furthermore, it is important to note that

μ^m(x, t) = 1 · P(Q(x, t) = 1) + 0 · P(Q(x, t) = 0) = 1 − F_{A(t)}(x), for all m ∈ Z, m ≥ 1.

Example 2.2. Consider (1) with initial condition

Q0(x; ξ) = 1 if x ≤ ξ, and 0 if x > ξ,

where ξ is a random variable. The pdf of this random function is given by

f_{Q0,ξ}(q; x) = δ(q) P(Q0(x; ξ) = 0) + δ(q − 1) P(Q0(x; ξ) = 1)
 = δ(q) P(x > ξ) + δ(q − 1) P(x ≤ ξ)
 = δ(q) Fξ(x) + δ(q − 1)[1 − Fξ(x)],

with Fξ being the cumulative function of ξ. From (5),

fQ(q; x, t) = ∫_{−∞}^{+∞} f_{A(t)}(x − x0) f_{Q0,ξ}(q; x0) dx0
 = δ(q) ∫_{−∞}^{+∞} f_{A(t)}(x − x0) Fξ(x0) dx0 + δ(q − 1) [1 − ∫_{−∞}^{+∞} f_{A(t)}(x − x0) Fξ(x0) dx0],

that is, Q(x, t) is a Bernoulli discrete random variable with

P(Q(x, t) = 1) = 1 − ∫_{−∞}^{+∞} f_{A(t)}(x − x0) Fξ(x0) dx0.
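The probability P(Q(x, t) = 1) from Example 2.2 can be verified against a closed form when both A(t) and ξ are Gaussian: in that case Q(x, t) = 1 iff A(t) + ξ ≥ x, and A(t) + ξ is again Gaussian. All distribution parameters below are illustrative assumptions:

```python
import math

def f_gauss(z, mu, sd):
    """Gaussian density."""
    return math.exp(-0.5 * ((z - mu) / sd) ** 2) / (sd * math.sqrt(2.0 * math.pi))

def F_gauss(z, mu, sd):
    """Gaussian cumulative distribution function."""
    return 0.5 * math.erfc((mu - z) / (sd * math.sqrt(2.0)))

# Assumed illustrative data: A(t) ~ N(1, 0.3^2) at the fixed t, xi ~ N(0, 0.5^2).
mu_a, sd_a, mu_xi, sd_xi, x = 1.0, 0.3, 0.0, 0.5, 0.8

# P(Q(x,t) = 1) = 1 - int f_{A(t)}(x - x0) F_xi(x0) dx0, by the trapezoid rule.
lo, hi, n = -10.0, 10.0, 4000
h = (hi - lo) / n
xs = [lo + i * h for i in range(n + 1)]
vals = [f_gauss(x - x0, mu_a, sd_a) * F_gauss(x0, mu_xi, sd_xi) for x0 in xs]
integral = h * (sum(vals) - 0.5 * (vals[0] + vals[-1]))
p1 = 1.0 - integral

# Closed-form check: Q = 1 iff x - A <= xi iff A + xi >= x, and A + xi is Gaussian.
p1_exact = 0.5 * math.erfc((x - mu_a - mu_xi) / math.sqrt(2.0 * (sd_a**2 + sd_xi**2)))
```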
Now we present some expressions that will be useful in the next section, when we treat the Gaussian time-dependent velocity case. From (2) we can write (for example, see [22])

μ(t) = E[A(t)] = ∫_0^t E[V(τ)] dτ    (10)

and

σ²(t) = Var[A(t)] = ∫_0^t ∫_0^t Cov_V(s, τ) ds dτ,    (11)
where Cov_V(t, τ) is the covariance function of V(t). Also, if E[V(t)] and Cov_V(t, τ) are continuous functions, it follows that

dμ(t)/dt = E[V(t)]    (12)

and

dσ²(t)/dt = 2 ∫_0^t Cov_V(t, τ) dτ.    (13)
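Expression (11) is easy to exercise numerically. The sketch below approximates the double integral by a midpoint rule for the exponentially decaying covariance Cov_V(s, τ) = σ_V² exp(−|s − τ|/λ) used in the examples of Section 3, and compares it with the closed form 2λ²σ_V²(e^{−t/λ} + t/λ − 1), our own evaluation of the integral for this covariance:

```python
import math

VAR_V, LAM = 0.25, 0.4  # sigma_V^2 and correlation length lambda (illustrative values)

def cov_v(s, tau):
    """Exponentially decaying covariance Cov_V(s, tau) = sigma_V^2 exp(-|s - tau| / lambda)."""
    return VAR_V * math.exp(-abs(s - tau) / LAM)

def var_A(t, n=400):
    """Midpoint-rule approximation of (11): Var[A(t)] = int_0^t int_0^t Cov_V(s, tau) ds dtau."""
    h = t / n
    pts = [(i + 0.5) * h for i in range(n)]
    return sum(cov_v(s, tau) for s in pts for tau in pts) * h * h

def var_A_exact(t):
    """Closed form of the double integral for this covariance (our own evaluation)."""
    return 2.0 * LAM**2 * VAR_V * (math.exp(-t / LAM) + t / LAM - 1.0)

t = 2.0
approx, exact = var_A(t), var_A_exact(t)
```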
3. The Gaussian time-dependent velocity case

If V(t) is Gaussian then A(t) in (2) is also a Gaussian random variable for each t [22]; its mean and variance are given by (10) and (11), respectively. In this case we have the following result:

Proposition 3.1. The pdf of Q(x, t), fQ(q; x, t), satisfies the convection–diffusion equation

∂fQ(q; x, t)/∂t + E[V(t)] ∂fQ(q; x, t)/∂x = (∫_0^t Cov_V(t, τ) dτ) ∂²fQ(q; x, t)/∂x².    (14)

Proof. Since A(t) ∼ N(μ(t), σ²(t)), with μ(t) and σ(t) given in (10) and (11), its density function is

f_{A(t)}(x) = (1/(√(2π) σ(t))) exp(−(x − μ(t))²/(2σ²(t))).    (15)
Differentiation of f_{A(t)}(x) yields

∂f_{A(t)}(x)/∂x = −f_{A(t)}(x) (x − μ(t))/σ²(t),

∂²f_{A(t)}(x)/∂x² = (f_{A(t)}(x)/σ²(t)) [−1 + (x − μ(t))²/σ²(t)],

and

∂f_{A(t)}(x)/∂t = f_{A(t)}(x) [−σ′(t)/σ(t) + ((x − μ(t))/σ²(t)) μ′(t) + ((x − μ(t))²/σ³(t)) σ′(t)].

Therefore, using (12) and (13),

∂f_{A(t)}(x)/∂t + E[V(t)] ∂f_{A(t)}(x)/∂x = ∂f_{A(t)}(x)/∂t + μ′(t) ∂f_{A(t)}(x)/∂x
 = f_{A(t)}(x) [−σ′(t)/σ(t) + ((x − μ(t))²/σ³(t)) σ′(t)]
 = f_{A(t)}(x) (σ′(t)/σ(t)) [−1 + (x − μ(t))²/σ²(t)]
 = σ(t)σ′(t) ∂²f_{A(t)}(x)/∂x² = (1/2)(dσ²(t)/dt) ∂²f_{A(t)}(x)/∂x²
 = (∫_0^t Cov_V(t, τ) dτ) ∂²f_{A(t)}(x)/∂x².

On the other hand, differentiating (5) conveniently and using the differential equation for f_{A(t)} above, we obtain (14).
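The proof shows that f_{A(t)} itself satisfies the convection–diffusion equation (14), which can be sanity-checked with finite differences. In the sketch below, the mean μ(t) = 0.5t and variance σ²(t) = ηt are illustrative assumptions (the constant-diffusion situation of Example 3.3, with a nonzero mean velocity added for illustration), so that ∫_0^t Cov_V(t, τ) dτ = η/2:

```python
import math

ETA, V_MEAN = 0.3, 0.5  # illustrative: sigma^2(t) = ETA * t, mu(t) = V_MEAN * t

def f_A(x, t):
    """Gaussian density (15) with mu(t) = V_MEAN * t and sigma^2(t) = ETA * t."""
    var = ETA * t
    return math.exp(-0.5 * (x - V_MEAN * t) ** 2 / var) / math.sqrt(2.0 * math.pi * var)

def residual(x, t, h=1e-4):
    """Central-difference residual of (14) applied to f_A:
    df/dt + E[V(t)] df/dx - (eta/2) d^2 f/dx^2, which should be ~0
    because int_0^t Cov_V(t, tau) dtau = eta/2 in this setting."""
    ft = (f_A(x, t + h) - f_A(x, t - h)) / (2.0 * h)
    fx = (f_A(x + h, t) - f_A(x - h, t)) / (2.0 * h)
    fxx = (f_A(x + h, t) - 2.0 * f_A(x, t) + f_A(x - h, t)) / (h * h)
    return ft + V_MEAN * fx - 0.5 * ETA * fxx

# Largest residual over a few sample points at t = 1; only discretization error remains.
r = max(abs(residual(x, 1.0)) for x in (-1.0, 0.0, 0.5, 1.5))
```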
Fig. 1. Illustration of the mean (a), the variance (b), and the third central moment (c) of the solution to (1), with Gaussian V(t) and deterministic initial condition (8), computed using expression (18) (solid line) and the Monte Carlo method (squares); t = 2; −5 ≤ x ≤ 5.
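The solid curves in Fig. 1 come from the erfc closed form (18) of Example 3.1. A minimal cross-check of that formula against direct sampling of the Gaussian A(t); here μ(t) = 1 and σ(t) ≈ 0.566 are our own evaluations of (10)–(11) for the stated parameters E[V(t)] = 0.5, σ_V² = 0.25, λ = 0.4 at t = 2:

```python
import math
import random

MU_T, SD_T = 1.0, 0.566  # mu(2) = E[V] * t; sigma(2) from (11), rounded (assumption)

def mean_Q(x):
    """Closed form (18): E[Q(x, t)] = P(Q = 1) = (1/2) erfc((x - mu(t)) / (sqrt(2) sigma(t)))."""
    return 0.5 * math.erfc((x - MU_T) / (math.sqrt(2.0) * SD_T))

random.seed(11)
x, n = 0.5, 100000
# Q(x, t) = Q0(x - A(t)) with step initial data (8): Q = 1 iff A(t) >= x.
mc = sum(1.0 for _ in range(n) if random.gauss(MU_T, SD_T) >= x) / n
exact = mean_Q(x)
```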
Corollary 3.1. The mth moment, μ^m(x, t), m ∈ Z, m ≥ 1, of the solution of (1) also satisfies (14), that is,

∂μ^m(x, t)/∂t + E[V(t)] ∂μ^m(x, t)/∂x = (∫_0^t Cov_V(t, τ) dτ) ∂²μ^m(x, t)/∂x².    (16)
Example 3.1. Consider problem (1) with Gaussian V(t) and the deterministic initial condition (8). From (9) it follows that Q(x, t) is the Bernoulli discrete random variable defined by

P(Q(x, t) = 1) = ∫_x^{+∞} f_{A(t)}(x0) dx0
 = (1/(√(2π) σ(t))) ∫_x^{+∞} exp(−(x0 − μ(t))²/(2σ²(t))) dx0    (17)
 = (1/√π) ∫_{(x−μ(t))/(√2 σ(t))}^{+∞} e^{−u²} du = (1/2) erfc((x − μ(t))/(√2 σ(t))),

where erfc(x) is the complementary error function. From (17), observing that P(Q(x, t) = 0) = 1 − P(Q(x, t) = 1), we conclude that

μ^m(x, t) = (1/2) erfc((x − μ(t))/(√2 σ(t))),    (18)

which is the solution of (16) with initial data (8). Moreover, the variance of the solution is given by

Var[Q(x, t)] = μ^2(x, t) − [μ^1(x, t)]² = μ^1(x, t)[1 − μ^1(x, t)]
 = (1/2) erfc((x − μ(t))/(√2 σ(t))) [1 − (1/2) erfc((x − μ(t))/(√2 σ(t)))].

Fig. 1 illustrates the mean, the variance, and the third central moment of the solution to (1), with Gaussian V(t) and deterministic initial condition (8), computed using expression (18). We also present Monte Carlo results for 5000 velocity realizations. Here, the velocity is defined by its mean, E[V(t)], and the exponentially decaying covariance function Cov_V(t, τ) = σ_V² exp(−|t − τ|/λ). This covariance function is parameterized by the variance, Var[V(t)] = σ_V², and by the correlation length, λ > 0, which governs the decay rate of the time correlation. The plots correspond to the following data: E[V(t)] = 0.5, σ_V² = 0.25, λ = 0.4, and t = 2. The experiments were performed in MATLAB (MathWorks, Inc.).

Example 3.2. In this example the velocity is also Gaussian (defined by its mean and exponentially decaying covariance function, as in the previous example). Let g(x) = α e^{−βx²} (α, β > 0) be the initial condition. We may find fQ(q; x, t) by
Fig. 2. Illustration of the mean (a), the second moment (b), and the variance (c) computed using expression (19) (solid line) and the Monte Carlo method (squares); t = 1; −5 ≤ x ≤ 5.
using (7). For 0 < q < α the equation g(x) = q has two roots, x_{1,q} = [(1/β) ln(α/q)]^{1/2} and x_{2,q} = −[(1/β) ln(α/q)]^{1/2}. For the function g(x), we find

|g′(x_{j,q})| = 2βq [(1/β) ln(α/q)]^{1/2}, (j = 1, 2).

Moreover, from (7) we can write

fQ(q; x, t) = (1/(2βρ(q)q)) ∫_{−∞}^{+∞} f_{A(t)}(x − x0) [δ(x0 − x_{1,q}) + δ(x0 − x_{2,q})] dx0
 = (1/(2βρ(q)q)) [f_{A(t)}(x − x_{1,q}) + f_{A(t)}(x − x_{2,q})],

where ρ(q) = [(1/β) ln(α/q)]^{1/2}. Thus, using (15), we arrive at the density function

fQ(q; x, t) = (1/(√(8π) β σ(t) ρ(q) q)) [exp(−(x − ρ(q) − μ(t))²/(2σ²(t))) + exp(−(x + ρ(q) − μ(t))²/(2σ²(t)))].    (19)

Evidently, fQ(q; x, t) = 0 for q ∉ (0, α). Note that from (11) we obtain

σ(t) = σ_V [∫_0^t ∫_0^t exp(−|s − τ|/λ) ds dτ]^{1/2} = √2 λ σ_V [exp(−t/λ) + t/λ − 1]^{1/2}.

Fig. 2 illustrates the mean, the second moment, and the variance of the solution to (1), with Gaussian V(t) and g(x) = (1/2)e^{−x²}, computed using expression (19). We also present the Monte Carlo results for 5000 velocity realizations. The plots correspond to the following data: E[V(t)] = 0.5, σ_V² = 0.25, λ = 0.4, and t = 1. In Fig. 3 we present fQ(q; x, 1), given by (19), at different values of x.

Example 3.3. Let V(t) be Gaussian white noise with zero mean and power spectrum η. In this case, A(t) is also Gaussian, with zero mean and variance ηt, and

f_{A(t)}(ε) = (1/√(2πηt)) exp(−ε²/(2ηt)).

By (13) we arrive at

∫_0^t Cov_V(t, τ) dτ = η/2,
Fig. 3. Plots of fQ(q; x, 1), given by (19), at x = −1, −0.5, 0, 0.5, 1, 1.5, 2, 2.5, and 3.
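Density (19), plotted in Fig. 3, can be validated without sampling: since Q(x, t) = g(x − A(t)) with g(x) = α e^{−βx²}, the event Q ≤ q is exactly |x − A(t)| ≥ ρ(q), which gives a closed-form cdf in terms of the Gaussian cdf of A(t), and integrating (19) over an interval must reproduce the cdf increment. The parameters below (α = 1/2, β = 1, and the values of μ(t), σ(t)) are illustrative assumptions:

```python
import math

ALPHA, BETA = 0.5, 1.0   # initial condition g(x) = ALPHA * exp(-BETA x^2), as in Fig. 2/3
MU_T, SD_T = 0.5, 0.566  # assumed mean and standard deviation of A(t) at the fixed time

def f_gauss(z):
    """Gaussian density of A(t)."""
    return math.exp(-0.5 * ((z - MU_T) / SD_T) ** 2) / (SD_T * math.sqrt(2.0 * math.pi))

def Phi(z):
    """Gaussian cdf of A(t)."""
    return 0.5 * math.erfc((MU_T - z) / (SD_T * math.sqrt(2.0)))

def rho(q):
    return math.sqrt(math.log(ALPHA / q) / BETA)

def f_Q(q, x):
    """Density (19) of Q(x, t)."""
    r = rho(q)
    return (f_gauss(x - r) + f_gauss(x + r)) / (2.0 * BETA * r * q)

def F_Q(q, x):
    """Exact cdf: Q <= q iff |x - A(t)| >= rho(q)."""
    r = rho(q)
    return Phi(x - r) + 1.0 - Phi(x + r)

# Trapezoid integration of (19) over [q1, q2] must match the cdf increment.
x, q1, q2, n = 1.0, 0.05, 0.45, 2000
h = (q2 - q1) / n
vals = [f_Q(q1 + i * h, x) for i in range(n + 1)]
integral = h * (sum(vals) - 0.5 * (vals[0] + vals[-1]))
diff = abs(integral - (F_Q(q2, x) - F_Q(q1, x)))
```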
and (14) becomes

∂fQ(q; x, t)/∂t = (η/2) ∂²fQ(q; x, t)/∂x².

This diffusion equation with fQ(q; x, 0) = f_{Q0}(q; x) can be solved by the Green's function approach:

fQ(q; x, t) = (1/√(2πηt)) ∫_{−∞}^{+∞} exp(−(x − x0)²/(2ηt)) f_{Q0}(q; x0) dx0.

This solution agrees with (5), with f_{A(t)} being the Green's function.

4. The joint pdf

Let Q1 = Q(x1, t) and Q2 = Q(x2, t) be the random solutions of (1) at (x1, t) and (x2, t), t > 0, respectively. As is known, second-order properties of a random process can give significant information about the process, such as the correlation of Q1 and Q2, which demands the joint density function of these random variables. The joint cumulative function of Q1 and Q2 is given by

FQ(q1, q2; x1, x2, t) = P(Q1 ≤ q1, Q2 ≤ q2) = E_{X0,Y0}[P(Q1 ≤ q1, Q2 ≤ q2 | X0, Y0)],    (20)

where, again, we have used the Law of Total Probability [22]. Here, X0 and Y0 are the random functions X0(x1, t) = x1 − A(t) and Y0(x2, t) = x2 − A(t), with A(t) given in (2). By the characteristic method and the independence between V(t) and Q0(x), we have that Q1 ≤ q1 and Q2 ≤ q2, given that X0 = x0 and Y0 = y0, is equivalent to Q0(x0) ≤ q1 and Q0(y0) ≤ q2. Therefore, from (20),

FQ(q1, q2; x1, x2, t) = ∫_{−∞}^{+∞} ∫_{−∞}^{+∞} F_{Q0}(q1, q2; x0, y0) f_{X0,Y0}(x0, y0) dx0 dy0,    (21)

where f_{X0,Y0} is the joint density function of X0 and Y0, and F_{Q0}(q1, q2; x0, y0) is the joint cumulative function of Q0(x0) and Q0(y0). The second-order mixed derivative of (21) gives

fQ(q1, q2; x1, x2, t) = ∫_{−∞}^{+∞} ∫_{−∞}^{+∞} f_{Q0}(q1, q2; x0, y0) f_{X0,Y0}(x0, y0) dx0 dy0.    (22)
To calculate the joint density function f_{X0,Y0} we start with

F_{X0,Y0}(x0, y0) = P(X0 ≤ x0, Y0 ≤ y0) = P(x1 − A(t) ≤ x0, x2 − A(t) ≤ y0)
 = P(A(t) ≥ x1 − x0, A(t) ≥ x2 − y0) = 1 − F_{A(t)}(ϕ(u(x0), v(y0))),    (23)

where ϕ(u, v) = max{u, v}, u(x0) = x1 − x0, and v(y0) = x2 − y0. We can write the ϕ-function as

ϕ(u, v) = max{u, v} = v + (u − v) H(u − v),

where H is the Heaviside function [34]. Also, since H′(α) = δ(α), the Dirac (delta) distribution, and α δ(α) ≡ 0, the derivatives of ϕ in the sense of distributions [34] are

∂ϕ/∂u = H(u − v) and ∂ϕ/∂v = 1 − H(u − v).

Moreover,

∂²ϕ/(∂u∂v) = ∂²ϕ/(∂v∂u) = −δ(u − v) and (∂ϕ/∂u)(∂ϕ/∂v) = H(u − v)(1 − H(u − v)) = 0.
Now we use these expressions to obtain the second-order mixed derivative of (23):

∂F_{X0,Y0}(x0, y0)/∂x0 = f_{A(t)}(ϕ(u(x0), v(y0))) (∂ϕ/∂u)(u(x0), v(y0)),

and

∂²F_{X0,Y0}(x0, y0)/(∂y0 ∂x0) = −f′_{A(t)}(ϕ(u(x0), v(y0))) (∂ϕ/∂v)(∂ϕ/∂u)(u(x0), v(y0)) − f_{A(t)}(ϕ(u(x0), v(y0))) (∂²ϕ/(∂v∂u))(u(x0), v(y0))
 = f_{A(t)}(ϕ(u(x0), v(y0))) δ(u(x0) − v(y0)) = f_{A(t)}(u(x0)) δ(u(x0) − v(y0)),

since f_{A(t)}(max{u, v}) δ(u − v) = f_{A(t)}(u) δ(u − v) = f_{A(t)}(v) δ(u − v) [34]. Thus we arrive at

f_{X0,Y0}(x0, y0) = ∂²F_{X0,Y0}(x0, y0)/(∂y0 ∂x0) = f_{A(t)}(u(x0)) δ(u(x0) − v(y0)).
Substituting this expression in (22), we find that

fQ(q1, q2; x1, x2, t) = ∫_{−∞}^{+∞} ∫_{−∞}^{+∞} f_{Q0}(q1, q2; x0, y0) f_{A(t)}(u(x0)) δ(u(x0) − v(y0)) dx0 dy0
 = ∫_{−∞}^{+∞} ∫_{−∞}^{+∞} f_{Q0}(q1, q2; x1 − u, x2 − v) f_{A(t)}(u) δ(u − v) du dv
 = ∫_{−∞}^{+∞} f_{Q0}(q1, q2; x1 − a, x2 − a) f_{A(t)}(a) da.

With these arguments we have:

Proposition 4.1. Let Q1 = Q(x1, t) and Q2 = Q(x2, t) be the random solutions of (1) at (x1, t) and (x2, t), t > 0, respectively. The joint density function of these random variables is given by

fQ(q1, q2; x1, x2, t) = ∫_{−∞}^{+∞} f_{Q0}(q1, q2; x1 − a, x2 − a) f_{A(t)}(a) da.    (24)

Furthermore, the covariance of the solution of (1) at (x1, t) and (x2, t), t > 0, is given by

Cov(Q1, Q2) = ∫_{−∞}^{+∞} Cov(Q0(x1 − a), Q0(x2 − a)) f_{A(t)}(a) da.
Remark 4.1. In case Q(x, 0) = g(x) is a deterministic function, we can write (24) as

fQ(q1, q2; x1, x2, t) = ∫_{−∞}^{+∞} δ(q1 − g(x1 − a)) δ(q2 − g(x2 − a)) f_{A(t)}(a) da.    (25)

Assuming g given in (8) and x1 > x2, and observing that

δ(qj − g(xj − a)) = δ(qj − 1) if xj < a, and δ(qj) if xj > a, (j = 1, 2),

we can rewrite (25) as

fQ(q1, q2; x1, x2, t) = δ(q1)δ(q2) ∫_{−∞}^{x2} f_{A(t)}(a) da + δ(q1)δ(q2 − 1) ∫_{x2}^{x1} f_{A(t)}(a) da + δ(q1 − 1)δ(q2 − 1) ∫_{x1}^{+∞} f_{A(t)}(a) da
 = δ(q1)δ(q2) F_{A(t)}(x2) + δ(q1)δ(q2 − 1)[F_{A(t)}(x1) − F_{A(t)}(x2)] + δ(q1 − 1)δ(q2 − 1)[1 − F_{A(t)}(x1)].
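The three point masses in the expression above (for the step initial condition (8) with x1 > x2) can be cross-checked by sampling the Gaussian A(t) directly; note in particular that the event {Q1 = 1, Q2 = 0} is impossible, since A(t) ≥ x1 implies A(t) ≥ x2. The values of μ(t), σ(t), x1, x2 below are illustrative assumptions:

```python
import math
import random

MU_T, SD_T = 1.0, 0.566  # assumed Gaussian parameters of A(t) at the fixed time
X1, X2 = 1.2, 0.6        # evaluation points with x1 > x2

def F_A(z):
    """Gaussian cdf of A(t)."""
    return 0.5 * math.erfc((MU_T - z) / (SD_T * math.sqrt(2.0)))

# Point masses read off the two-point density for the step initial condition (8):
p00 = F_A(X2)
p01 = F_A(X1) - F_A(X2)
p11 = 1.0 - F_A(X1)

# Monte Carlo over A(t): Q_i = 1 iff x_i - A <= 0, i.e. A >= x_i.
random.seed(3)
n = 100000
counts = {(0, 0): 0, (0, 1): 0, (1, 0): 0, (1, 1): 0}
for _ in range(n):
    a = random.gauss(MU_T, SD_T)
    counts[(int(a >= X1), int(a >= X2))] += 1
```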
Consequently, we arrive at the following probabilities:

P(Q1 = 0, Q2 = 0) = F_{A(t)}(x2),
P(Q1 = 0, Q2 = 1) = F_{A(t)}(x1) − F_{A(t)}(x2),

and

P(Q1 = 1, Q2 = 1) = 1 − F_{A(t)}(x1).

From this result, we can express the covariance function as

Cov(Q1, Q2) = E[Q1 Q2] − E[Q1]E[Q2]
 = ∫_{x1}^{+∞} f_{A(t)}(a) da − (∫_{x1}^{+∞} f_{A(t)}(a) da)(∫_{x2}^{+∞} f_{A(t)}(a) da)
 = (∫_{x1}^{+∞} f_{A(t)}(a) da)(∫_{−∞}^{x2} f_{A(t)}(a) da)    (26)
 = [1 − F_{A(t)}(x1)] F_{A(t)}(x2),

and the correlation coefficient, ρ_{Q1Q2}, as

ρ²_{Q1Q2} = [1 − F_{A(t)}(x1)]² [F_{A(t)}(x2)]² / (F_{A(t)}(x1)[1 − F_{A(t)}(x1)] F_{A(t)}(x2)[1 − F_{A(t)}(x2)])
 = [1 − F_{A(t)}(x1)] F_{A(t)}(x2) / ([1 − F_{A(t)}(x2)] F_{A(t)}(x1)).

Remark 4.2. The covariance function of the solution to (1), with Gaussian V(t) and deterministic initial condition (8), computed using expression (26), is given by

Cov(Q(x1, t), Q(x2, t)) = (1/2) erfc((x1 − μ(t))/(√2 σ(t))) [1 − (1/2) erfc((x2 − μ(t))/(√2 σ(t)))].

5. Conclusions

In this paper we studied the solution of the one-dimensional linear advection equation subject to random time-dependent velocity fields and random initial conditions. We coupled the total probability theorem with the characteristic method to find expressions for its density and joint density functions. Also, we verified that in the Gaussian time-dependent velocity case the density function of the solution satisfies a convection–diffusion equation with a time-dependent diffusion coefficient. Several illustrative examples were presented.

Acknowledgements

The authors thank the Brazilian Council for Development of Science and Technology (CNPq) for support through grant 303701/2010-2. We also thank the anonymous reviewers for their constructive comments and suggestions.

References

[1] M. Al-Tawil, W. El-Tahan, A. Hussein, A proposed technique of SFEM on solving ordinary random differential equation, Appl. Math. Comput. 161 (2005) 35–47.
[2] M. Al-Tawil, W. El-Tahan, A. Hussein, Using FEM-RVT technique for solving a randomly excited ordinary differential equation with a random operator, Appl. Math. Comput. 187 (2007) 856–867.
[3] M.A. Al-Tawil, The approximate solutions of some stochastic differential equations using transformations, Appl. Math. Comput. 164 (2005) 167–178.
[4] M.R. Borges, F. Furtado, F. Pereira, H.P.A. Souto, Scaling analysis for the tracer flow problem in self-similar permeability fields, Multiscale Model. Simul. 7 (2008) 1130–1147.
[5] G. Calbo, J.C. Cortés, L. Jódar, Random analytic solution of coupled differential models with uncertain initial condition and source term, Comput. Math. Appl. 56 (2008) 785–798.
[6] P.J. Colucci, F.A. Jaberi, P. Givi, S.B. Pope, Filtered density function for large eddy simulation of turbulent reacting flows, Phys. Fluids 10 (1998) 499–515.
[7] J.C. Cortés, L. Jódar, L. Villafuerte, Random linear–quadratic mathematical models: computing explicit solutions and applications, Math. Comput. Simul. 79 (2009) 2076–2090.
[8] J.C. Cortés, P. Sevilla-Peris, L. Jódar, Analytic–numerical approximating processes of diffusion equation with data uncertainty, Comput. Math. Appl. 49 (2005) 1255–1266.
[9] M.C.C. Cunha, F.A. Dorini, Statistical moments of the solution of the random Burgers–Riemann problem, Math. Comput. Simul. 79 (2009) 1440–1451.
[10] G. Dagan, Flow and Transport in Porous Formations, Springer, Heidelberg, 1989.
[11] F.A. Dorini, F. Furtado, M.C.C. Cunha, On the evaluation of moments for solute transport by random velocity fields, Appl. Numer. Math. 79 (2009) 2994–2998.
[12] G.S. Fishman, Monte Carlo: Concepts, Algorithms and Applications, Springer-Verlag, New York, 1996.
[13] R.G. Ghanem, Ingredients for a general purpose stochastic finite element formulation, Comput. Methods Appl. Mech. Eng. 168 (1999) 19–34.
[14] R.G. Ghanem, P.D. Spanos, Stochastic Finite Elements: A Spectral Approach, Courier Dover Publications, 2003.
[15] D. Gottlieb, D. Xiu, Galerkin method for wave equations with uncertain coefficients, Commun. Comput. Phys. 3 (2008) 505–518.
[16] D. Henderson, P. Plaschko, Stochastic Differential Equations in Science and Engineering, World Scientific, Singapore, 2006.
[17] M. Jardak, C.H. Su, G.E. Karniadakis, Spectral polynomial chaos solutions of the stochastic advection equation, J. Sci. Comput. 17 (2002) 319–338.
[18] S. Kadry, On the generalization of probabilistic transformation method, Appl. Math. Comput. 190 (2007) 1284–1289.
[19] P.E. Kloeden, E. Platen, Numerical Solution of Stochastic Differential Equations, Springer, New York, 1999.
[20] R.J. LeVeque, Numerical Methods for Conservation Laws, Birkhäuser, Berlin, 1992.
[21] O.P.L. Maître, O.M. Knio, Spectral Methods for Uncertainty Quantification: With Applications to Computational Fluid Dynamics, Springer Science, Dordrecht, 2010.
[22] A. Papoulis, Probability, Random Variables, and Stochastic Processes, 3rd ed., McGraw-Hill Inc., New York, 1991.
[23] L.I. Piterbarg, A.G. Ostrovskii, Advection and Diffusion in Random Media, Kluwer Academic Publishers, Dordrecht, 1997.
[24] S.B. Pope, Lagrangian PDF methods for turbulent flows, Annu. Rev. Fluid Mech. 26 (1994) 23–63.
[25] S.B. Pope, Turbulent Flows, Cambridge University Press, Cambridge, 2000.
[26] S.B. Pope, Advances in PDF methods for turbulent reactive flows, in: H.I. Andersson, P.A. Krogstad (Eds.), Advances in Turbulence X, CIMNE, 2004, pp. 529–536.
[27] S.M. Ross, Introduction to Probability Models, 6th ed., Academic Press, San Diego, 1997.
[28] L.T. Santos, F.A. Dorini, M.C.C. Cunha, The probability density function to the random linear transport equation, Appl. Math. Comput. 216 (2010) 1524–1530.
[29] M. Shvidler, K. Karasaki, Exact averaging of stochastic equations for transport in random velocity field, Transp. Porous Media 50 (2003) 223–241.
[30] M. Shvidler, K. Karasaki, Probability density functions for solute transport in random fields, Transp. Porous Media 50 (2003) 243–266.
[31] N. Suciu, Spatially inhomogeneous transition probabilities as memory effects for diffusion in statistically homogeneous random velocity fields, Phys. Rev. E 81 (2010) 056301.
[32] H. Wang, P.P. Popov, S.B. Pope, Weak second-order splitting schemes for Lagrangian Monte Carlo particle methods for the composition PDF/FDF transport equations, J. Comput. Phys. 229 (2010) 1852–1878.
[33] D. Xiu, Numerical Methods for Stochastic Computations: A Spectral Method Approach, Princeton University Press, Princeton, 2010.
[34] E. Zauderer, Partial Differential Equations of Applied Mathematics, Pure and Applied Mathematics, John Wiley & Sons, New York, 1983.
[35] D. Zhang, Stochastic Methods for Flow in Porous Media—Coping with Uncertainties, Academic Press, San Diego, 2002.