Numerical inversion of the Laplace transform via fractional moments

Aldo Tagliani (a,*), Yurayh Velásquez (b)

(a) Faculty of Economics, Trento University, 38100 Trento, Italy
(b) Escuela de Ingeniería de Sistemas, Universidad Metropolitana, 52120 Caracas 1050-A, Venezuela

Applied Mathematics and Computation 143 (2003) 99–107

Abstract

A method for the numerical inversion of the Laplace transform on the real line of heavy-tailed (probability) density functions is presented. The method assumes as known a finite set of real values of the Laplace transform and chooses the analytical form of the approximant maximizing Shannon entropy, so that positivity of the approximant itself is guaranteed. The problem resorts to a finite fractional Hausdorff moment problem, and some results on convergence are provided. Some numerical results are illustrated. © 2002 Elsevier Science Inc. All rights reserved.

Keywords: Fractional moments; Generalized Hausdorff moment problem; Hankel matrix; Laplace transform inversion; Maximum entropy

1. Introduction

The literature on numerical inversion of the Laplace transform is very extensive [1]. Such a long list of papers is probably due to the fact that a single method which works equally well for all the problems proposed by the applications is, at the moment, lacking. It therefore seems worthwhile to attempt to improve the existing methods, which are already quite accurate. In this paper we draw attention to the inversion of the Laplace transform

F(s) = \int_0^\infty e^{-st} f(t)\, dt   (1.1)

* Corresponding author. E-mail address: [email protected] (A. Tagliani).

0096-3003/02/$ - see front matter © 2002 Elsevier Science Inc. All rights reserved.
doi:10.1016/S0096-3003(02)00349-1


of heavy-tailed (probability) density functions f(t) which have finite mean value μ_1(f) = \int_0^\infty t f(t)\, dt. The corresponding F(s) then has abscissa of convergence s_0 equal to zero and F'(0) finite. In particular we assume F(s_j), j = 0, ..., M, known for some real and positive values s_j, as required in some physical problems. The Laplace transform inversion then leads to a generalized finite Hausdorff moment problem F(s_j) = \int_0^1 x^{\alpha_j} w(x)\, dx, j = 1, ..., M, in a new variable x ∈ [0, 1]. If the given values F(s_j) are interpreted as a priori information, we can appeal to the maximum entropy (ME) principle [2] in the choice of the approximant w_M(x) of w(x). It will be proved that w_M(x) converges to w(x) in L^1-norm and in directed divergence. As a consequence, in the original domain t ∈ [0, ∞) the approximating density f_M(t) converges to f(t) in L^1-norm and in directed divergence too. Using Shannon entropy and linear constraints, the ME principle guarantees a positive approximant [3]. Probabilistically interesting quantities, such as the entropy and the hazard rate, which require a positive approximating density, may then be calculated.

2. Formulation of the problem

The main step consists in transforming problem (1.1) into an equivalent generalized Hausdorff moment problem. By the change of variable t = -(1/r) ln x, where r is an arbitrary positive parameter, and considering a finite sequence of values F(s_j), j = 0, ..., M, where the s_j are defined in the sequel, from (1.1) we have

F(s_j) = \int_0^1 x^{s_j/r} \frac{f(-(1/r)\ln x)}{r x}\, dx = \int_0^1 x^{\alpha_j} w(x)\, dx,   j = 0, ..., M   (2.1)

with w(x) = f(-(1/r) ln x)/(rx), a density function on [0, 1], and α_j = s_j/r. Eq. (2.1) is equivalent to a generalized finite Hausdorff moment problem or, more properly, a finite fractional Hausdorff moment problem. If the F(s_j) are considered as given expected values F(s_j) = F(r α_j) =: E(X^{α_j}), then the ME technique, consisting in maximizing the Shannon entropy

H[w] = -\int_0^1 w(x) \ln w(x)\, dx   (2.2)

constrained by (2.1), leads to the approximant [3]

w_M(x) = \exp\left( -\sum_{j=0}^{M} \lambda_j x^{\alpha_j} \right).   (2.3)
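As a quick numerical illustration of the change of variable behind (2.1), the following Python sketch (an editorial addition, not part of the original paper) checks that F(s_j) computed directly from (1.1) agrees with the corresponding fractional moment of w(x) on [0, 1]; the test density f(t) = e^{-t} and the values of r and s_j are arbitrary choices made for the example.

```python
# Sketch: verify the change of variable (2.1) numerically.
# Assumptions: f is any density on [0, inf) with finite mean; here f(t) = exp(-t).
import numpy as np
from scipy.integrate import quad

r = 1.0                      # arbitrary positive parameter of Section 2
f = lambda t: np.exp(-t)     # toy density, not from the paper

def F(s):
    """Laplace transform (1.1) by direct quadrature."""
    return quad(lambda t: np.exp(-s * t) * f(t), 0, np.inf)[0]

def w(x):
    """Transformed density w(x) = f(-(1/r) ln x) / (r x) on (0, 1]."""
    return f(-np.log(x) / r) / (r * x)

for s in [0.5, 1.0, 2.0]:
    alpha = s / r
    frac_moment = quad(lambda x: x**alpha * w(x), 0, 1)[0]
    print(s, F(s), frac_moment)   # the two columns should agree
```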


Here (λ_0, ..., λ_M) are Lagrange multipliers, determined by the condition that the first M fractional moments of w_M(x) coincide with F(r α_j), i.e.,

F(r \alpha_j) = \int_0^1 x^{\alpha_j} w_M(x)\, dx,   j = 0, ..., M.   (2.4)

w_M(x) has Shannon entropy

H[w_M] = -\int_0^1 w_M(x) \ln w_M(x)\, dx = \sum_{j=0}^{M} \lambda_j F(r \alpha_j).   (2.5)

Given two probability densities w(x) and w_M(x), there are two well-known measures of the distance between w(x) and w_M(x), namely the divergence measure I(w, w_M) = \int_0^1 w(x) \ln(w(x)/w_M(x))\, dx and the variation measure V(w, w_M) = \int_0^1 |w_M(x) - w(x)|\, dx. If w(x) and w_M(x) have the same fractional moments F(r α_j) =: E(X^{α_j}), j = 1, ..., M, then

I(w, w_M) = H[w_M] - H[w]   (2.6)

holds. In fact, I(w, w_M) = \int_0^1 w(x) \ln(w(x)/w_M(x))\, dx = -H[w] + \sum_{j=0}^{M} \lambda_j \int_0^1 x^{\alpha_j} w(x)\, dx = -H[w] + \sum_{j=0}^{M} \lambda_j F(r \alpha_j) = H[w_M] - H[w]. In the literature several lower bounds for the divergence measure I in terms of V are available. We shall, however, use the following bound [4]:

I \geq \frac{V^2}{2}.   (2.7)
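To make the bound (2.7) concrete, here is a small sketch (an illustration added in editing, with two arbitrary densities on [0, 1] standing in for w and w_M) that evaluates both I(w, w_M) and V(w, w_M) by quadrature and confirms I ≥ V²/2.

```python
# Sketch: check the Kullback bound I >= V^2 / 2 (Eq. 2.7) for two sample densities on [0, 1].
import numpy as np
from scipy.integrate import quad

w  = lambda x: 2.0 * x               # "true" density (arbitrary example)
wM = lambda x: 1.0 + 0.0 * x         # "approximant"  (arbitrary example)

I = quad(lambda x: w(x) * np.log(w(x) / wM(x)), 0, 1)[0]   # directed divergence
V = quad(lambda x: abs(wM(x) - w(x)), 0, 1)[0]             # variation (L1) distance

print(I, 0.5 * V**2, I >= 0.5 * V**2)   # expect True
```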

In Appendix A we prove the following theorem.

Theorem 2.1. If {α_j}_{j=0}^{M} are equispaced, i.e., α_j = jΔα, j = 0, ..., M, for some Δα > 0, then the ME approximant w_M(x) converges in entropy to w(x), i.e.,

\lim_{M \to \infty} H[w_M] = H[w].   (2.8)

Taking into account (2.6) and (2.7), (2.8) entails convergence in directed divergence and in L^1-norm. Once w(x), and hence f(t), has been reconstructed, in a probabilistic framework one usually calculates expected values. If g(t) denotes a bounded function such that |g(t)| ≤ K, K > 0, then taking into account (2.7) and (2.8) and setting t = -(1/r) ln x, we have


|E_f(g) - E_{f_M}(g)| = \left| \int_0^\infty g(t)\,(f(t) - f_M(t))\, dt \right| \le K \int_0^\infty |f(t) - f_M(t)|\, dt = K \int_0^1 |w(x) - w_M(x)|\, dx \le K \sqrt{2\,(H[w_M] - H[w])}.   (2.9)

Eqs. (2.8) and (2.9) suggest how the fractional moments, i.e., the exponents {α_j}_{j=1}^{M}, have to be chosen:

H[w_M] = minimum.   (2.10)

As a consequence, if the choice of equispaced α_j = jΔα guarantees entropy convergence, then the choice (2.10) guarantees entropy convergence too. From a computational point of view, the calculation of the λ_j amounts to minimizing a proper potential function Γ(λ_1, ..., λ_M) [3], with

\min_{\lambda_1, ..., \lambda_M} \Gamma(\lambda_1, ..., \lambda_M) = \min_{\lambda_1, ..., \lambda_M} \left[ \ln\left( \int_0^1 \exp\left( -\sum_{j=1}^{M} \lambda_j x^{\alpha_j} \right) dx \right) + \sum_{j=1}^{M} \lambda_j F(r \alpha_j) \right].   (2.11)
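For a fixed set of exponents, the inner problem (2.11) is a smooth convex minimization and can be sketched in a few lines. The snippet below is an editorial illustration under stated assumptions, not the authors' code: the exponents, the target moment values, the choice r = 1 and the use of scipy.optimize.minimize with BFGS are all assumptions made for the example; λ_0 is recovered from normalization.

```python
# Sketch: minimize the dual potential Gamma(lambda_1..lambda_M) of Eq. (2.11)
# for fixed exponents alpha_j and given data mu_j = F(r*alpha_j); r = 1 assumed.
import numpy as np
from scipy.integrate import quad
from scipy.optimize import minimize

def gamma(lam, alpha, mu):
    """Dual potential: ln( int_0^1 exp(-sum_j lam_j x^alpha_j) dx ) + sum_j lam_j mu_j."""
    z = quad(lambda x: np.exp(-np.sum(lam * x**alpha)), 0, 1)[0]
    return np.log(z) + np.dot(lam, mu)

def fit_multipliers(alpha, mu):
    """Return (lambda_0, lambda_1..lambda_M); lambda_0 is fixed by normalization."""
    res = minimize(gamma, x0=np.zeros(len(alpha)), args=(alpha, mu), method="BFGS")
    lam = res.x
    lam0 = np.log(quad(lambda x: np.exp(-np.sum(lam * x**alpha)), 0, 1)[0])
    return lam0, lam

# toy data: moments of the uniform density w(x) = 1, so mu_j = 1/(alpha_j + 1)
alpha = np.array([0.5, 1.3, 2.7])          # assumed exponents
mu = 1.0 / (alpha + 1.0)
lam0, lam = fit_multipliers(alpha, mu)
print(lam0, lam)                           # lam should be close to zero for this toy case
```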

From (2.10) and (2.11) the approximating density w_M(x) is obtained through two nested minimization procedures:

w_M(x):  \min_{\alpha_1, ..., \alpha_M} \; \min_{\lambda_1, ..., \lambda_M} \left[ \ln\left( \int_0^1 \exp\left( -\sum_{j=1}^{M} \lambda_j x^{\alpha_j} \right) dx \right) + \sum_{j=1}^{M} \lambda_j F(r \alpha_j) \right].   (2.12)

Eq. (2.12) suggests the choice r = 1 in (2.1) without loss of generality.
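The outer minimization in (2.12), i.e., the entropy-based exponent selection (2.10), can be wrapped around the inner dual solve. The sketch below is again an editorial illustration: it treats only M = 1, uses a coarse grid search over the single exponent, and takes w(x) = 2x as a toy target; all of these are assumptions made to keep the idea visible. It relies on the fact that, at the inner optimum, the dual potential equals H[w_M] (Eq. (2.5) with μ_0 = 1), so minimizing it over the exponent implements (2.10).

```python
# Sketch: the nested minimization of Eq. (2.12) for a single exponent (M = 1).
# Toy target: w(x) = 2x on [0, 1], whose fractional moments are E[X^a] = 2/(a + 2).
import numpy as np
from scipy.integrate import quad
from scipy.optimize import minimize_scalar

def inner_potential(alpha, mu):
    """min over lambda_1 of ln(int_0^1 exp(-lambda_1 x^alpha) dx) + lambda_1 * mu."""
    g = lambda lam: np.log(quad(lambda x: np.exp(-lam * x**alpha), 0, 1)[0]) + lam * mu
    return minimize_scalar(g, bounds=(-20.0, 20.0), method="bounded").fun  # = H[w_M]

moment = lambda a: 2.0 / (a + 2.0)         # fractional moments of w(x) = 2x
grid = np.linspace(0.2, 4.0, 20)           # assumed search grid for alpha_1
H = [inner_potential(a, moment(a)) for a in grid]
print(grid[int(np.argmin(H))], min(H))     # exponent selected by rule (2.10)
```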

3. Convergence in the original domain t ∈ [0, ∞)

In the previous section we proved that the choice of {α_j}_{j=1}^{M} according to Eq. (2.10) guarantees that w_M(x) converges to w(x) in entropy, in directed divergence and in L^1-norm. Returning to the original variable t = -(1/r) ln x, the above kinds of convergence entail the following.

(a) From \lim_{M \to \infty} H[w_M] = H[w], and calling μ_1(f) = \int_0^\infty t f(t)\, dt and analogously μ_1(f_M), we have

H[w_M] - H[w] = [H[f_M] + \ln r - r \mu_1(f_M)] - [H[f] + \ln r - r \mu_1(f)] = H[f_M] - H[f] - r\,[\mu_1(f_M) - \mu_1(f)].   (3.1)


Thus entropy convergence in [0, 1] does not imply entropy convergence in [0, ∞). Nevertheless, adding to the constraints F(r α_j), j = 0, ..., M, in (2.4) the further constraint F'(0) (which enforces μ_1(f_M) = μ_1(f)), we obtain the new ME density

w_M^*(x) = \exp\left( -\sum_{j=0}^{M} \lambda_j x^{\alpha_j} - \lambda_{M+1} \ln x \right)

where the exponents {α_j}_{j=1}^{M} are chosen according to (2.10), while the Lagrange multipliers are obtained by extending (2.12) in an obvious way. Then w_M^*(x) converges in entropy to w(x), since the corresponding f_M(t) satisfies μ_1(f_M) = μ_1(f) = -F'(0). From (2.8) and (3.1), f_M(t) converges in entropy to f(t).

(b) From I(w, w_M) = \int_0^1 w(x) \ln(w(x)/w_M(x))\, dx = H[w_M] - H[w] \to 0 as M \to \infty, we have

\int_0^1 w(x) \ln\frac{w(x)}{w_M(x)}\, dx = \int_0^\infty f(t) \ln\frac{f(t)}{f_M(t)}\, dt = I(f, f_M) \to 0,   (3.2)

i.e., f_M(t) converges to f(t) in directed divergence.

(c) From \int_0^1 |w_M(x) - w(x)|\, dx \to 0 as M \to \infty, we have

\int_0^1 |w_M(x) - w(x)|\, dx = \int_0^\infty |f_M(t) - f(t)|\, dt \to 0,   (3.3)

i.e., f_M(t) converges to f(t) in L^1-norm.
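A reconstructed w_M on [0, 1] is mapped back to the t-domain by f_M(t) = r e^{-rt} w_M(e^{-rt}). The sketch below (editorial, with an arbitrary pair f, w_M standing in for the true density and the reconstructed approximant, and r = 1 assumed) evaluates the directed divergence (3.2) and the L^1 distance (3.3) by quadrature.

```python
# Sketch: map an approximant w_M on [0, 1] back to the t-domain and evaluate
# the convergence criteria (3.2) and (3.3).  The toy densities are assumptions.
import numpy as np
from scipy.integrate import quad

r = 1.0
f  = lambda t: np.exp(-t)                                # "true" density on [0, inf)
wM = lambda x: 0.9 + 0.2 * x                             # toy approximant on [0, 1] (integrates to 1)
fM = lambda t: r * np.exp(-r * t) * wM(np.exp(-r * t))   # back-transformed approximant

I_div = quad(lambda t: f(t) * np.log(f(t) / fM(t)), 0, np.inf)[0]   # Eq. (3.2)
L1    = quad(lambda t: abs(fM(t) - f(t)), 0, np.inf)[0]             # Eq. (3.3)
print(I_div, L1)
```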

4. Numerical results

In all the examples shown we consider heavy-tailed distributions having finite mean value; r = 1 is set. For each distribution we report (a) the directed divergence and the L^1-norm between f_M(t) and f(t), according to (3.2) and (3.3), and (b) a graphical comparison. From the examples shown we can conclude that a few fractional moments, chosen according to (2.10), are enough for an accurate reconstruction of f(t). According to (2.9) and (3.1), the approximating density f_M(t) is suitable for an accurate calculation of expected values.

Example 1. We consider the Laplace transform F(s) = \int_0^\infty e^{-st} f(t)\, dt of the probability density

f(t) = \frac{n}{\pi} \sin\frac{\pi}{n}\, \frac{1}{1 + t^n}.

The values F(s_j) are obtained numerically. With n = 4 we have H[f] ≈ 0.61365433 and H[w] ≈ -0.093452444. In Table 1 the directed divergence, the L^1-norm for an increasing number of fractional moments M, and the fractional exponents {α_j}_{j=1}^{M} (reported only for M = 4) are given. Fig. 1 shows a graphical comparison of f(t) and f_M(t) when M = 4.

Example 2. We consider the Laplace transform F(s) = \int_0^\infty e^{-st} f(t)\, dt of a lognormal density f(t) = (1/(\sqrt{2\pi}\, t)) \exp(-(1/2) \ln^2 t), whose real Laplace transform values are obtained numerically from (1.1), with H[f] ≈ 1.418938533 and H[w] ≈ -0.229782738. In Table 2 the directed divergence, the L^1-norm for an increasing number of fractional moments M, and the fractional exponents {α_j}_{j=1}^{M} (reported only for M = 4) are given. Fig. 2 shows a graphical comparison of f(t) and f_M(t) when M = 4.
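Since F(s) has no convenient closed form for Example 1, the transform values are obtained by quadrature. A possible setup is shown below; this is an editorial sketch, not the authors' code, the specific s_j grid is an assumption, and the relation H[w] = H[f] - μ_1(f) used for the check follows from the change of variable x = e^{-t} with r = 1 (cf. Eq. (3.1)).

```python
# Sketch: numerical data for Example 1, f(t) = (n/pi) sin(pi/n) / (1 + t^n), n = 4, r = 1.
import numpy as np
from scipy.integrate import quad

n = 4
f = lambda t: (n / np.pi) * np.sin(np.pi / n) / (1.0 + t**n)

F   = lambda s: quad(lambda t: np.exp(-s * t) * f(t), 0, np.inf)[0]   # transform values
mu1 = quad(lambda t: t * f(t), 0, np.inf)[0]                          # mean, ~ sqrt(2)/2
Hf  = quad(lambda t: -f(t) * np.log(f(t)), 0, np.inf)[0]              # ~ 0.6137
Hw  = Hf - mu1                                                        # H[w] for r = 1, ~ -0.0935

s_grid = [0.5, 1.0, 2.0, 4.0]          # assumed sample abscissas s_j
print([F(s) for s in s_grid], mu1, Hf, Hw)
```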

Table 1
Directed divergence, L^1-norm and fractional exponents of distributions having an increasing number of common fractional moments

M    {α_j}_{j=1}^{M}    I(f(t), f_M(t))    ||f_M(t) - f(t)||_1
1    2.1084             0.3977E-1          0.2329E-0
2    2.4071             0.4657E-2          0.5980E-1
3    3.1700             0.2085E-2          0.1490E-1
4    4.0019             0.1911E-2          0.5508E-2

Fig. 1. Difference f_M(t) - f(t), with M = 4.


Table 2
Directed divergence, L^1-norm and fractional exponents of distributions having an increasing number of common fractional moments

M    {α_j}_{j=1}^{M}    I(f(t), f_M(t))    ||f_M(t) - f(t)||_1
1    0.1389             0.9240E-1          0.2515E-0
2    0.1465             0.5035E-1          0.1656E-0
3    1.0949             0.2508E-1          0.1129E-0
4    14.4993            0.9382E-2          0.3383E-1

Fig. 2. Difference f_M(t) - f(t), with M = 4.

Appendix A. Entropy convergence

A.1. Some background

Let us consider a sequence of equispaced points α_j = jΔα, j = 0, ..., M, and

\mu_j =: E(X^{\alpha_j}) = \int_0^1 t^{\alpha_j} f_M(t)\, dt,   j = 0, ..., M   (A.1)

with f_M(t) = \exp(-\sum_{j=0}^{M} \lambda_j t^{\alpha_j}). With the simple change of variable x = t^{\Delta\alpha}, from (A.1) we have

\mu_j = E(X^j) = \int_0^1 x^j \exp\left[ -\left( \lambda_0 - \ln\frac{1}{\Delta\alpha} \right) - \sum_{k=1}^{M} \lambda_k x^k - \left( 1 - \frac{1}{\Delta\alpha} \right) \ln x \right] dx,   j = 0, ..., M,   (A.2)

which is a reduced Hausdorff moment problem for each fixed value of M and a determinate Hausdorff moment problem when M → ∞. Referring to (A.2), the following symmetric positive definite Hankel matrices are considered:

D_0 = \mu_0, \quad D_2 = \begin{pmatrix} \mu_0 & \mu_1 \\ \mu_1 & \mu_2 \end{pmatrix}, \quad \ldots, \quad D_{2M} = \begin{pmatrix} \mu_0 & \cdots & \mu_M \\ \vdots & & \vdots \\ \mu_M & \cdots & \mu_{2M} \end{pmatrix}   (A.3)

whose (i, j)th entry, i, j = 0, 1, ..., is

\mu_{i+j} = \int_0^1 x^{i+j} f_M(x)\, dx,

where f_M(x) = \exp[ -(\lambda_0 - \ln(1/\Delta\alpha)) - \sum_{j=1}^{M} \lambda_j x^j - (1 - (1/\Delta\alpha)) \ln x ]. The Hausdorff moment problem is determinate and the underlying distribution has a continuous distribution function F(x), with density f(x). Then the maximum mass q(x) which can be concentrated at any real point x is equal to zero [5, Corollary (2.8)]. In particular, at x = 0 we have

0 = q(0) = \lim_{i \to \infty} q_i^{(0)} = \lim_{i \to \infty} \left( \mu_0 - \mu_0^{(i)} \right) = \lim_{i \to \infty} \frac{|D_{2i}|}{\begin{vmatrix} \mu_2 & \cdots & \mu_{i+1} \\ \vdots & & \vdots \\ \mu_{i+1} & \cdots & \mu_{2i} \end{vmatrix}}   (A.4)

where q_i^{(0)} indicates the largest mass which can be concentrated at the point x = 0 by any solution of a reduced moment problem of order ≥ i, and \mu_0^{(i)} indicates the minimum value of \mu_0 once the first 2i moments are assigned. Let us fix {\mu_0, ..., \mu_{i-1}, \mu_{i+1}, ..., \mu_M} while only \mu_i, i = 0, ..., M, varies continuously. From (A.2) we have

D_{2M} \begin{pmatrix} d\lambda_0/d\mu_i \\ \vdots \\ d\lambda_M/d\mu_i \end{pmatrix} = -e_{i+1}   (A.5)

where e_{i+1} is the canonical unit vector in R^{M+1}, from which

0 < \left( \frac{d\lambda_0}{d\mu_i}, \ldots, \frac{d\lambda_M}{d\mu_i} \right) D_{2M} \begin{pmatrix} d\lambda_0/d\mu_i \\ \vdots \\ d\lambda_M/d\mu_i \end{pmatrix} = -\left( \frac{d\lambda_0}{d\mu_i}, \ldots, \frac{d\lambda_M}{d\mu_i} \right) e_{i+1} = -\frac{d\lambda_i}{d\mu_i} \quad \forall i,   (A.6)

so that dλ_i/dμ_i < 0 for every i.
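The quantities entering (A.3) and (A.4) are easy to tabulate numerically. The following sketch is an editorial illustration, with the uniform density on [0, 1] standing in for f_M so that μ_k = 1/(k+1); it builds the Hankel matrices D_{2i} and the ratio of determinants in (A.4), whose decrease toward zero reflects the vanishing maximal mass at x = 0.

```python
# Sketch: Hankel matrices (A.3) and the maximal mass q_i^{(0)} of (A.4)
# for the uniform density on [0, 1], whose moments are mu_k = 1/(k + 1).
import numpy as np

def hankel(mu, i0, size):
    """Hankel matrix with (j, k) entry mu[i0 + j + k]."""
    return np.array([[mu[i0 + j + k] for k in range(size)] for j in range(size)])

mu = np.array([1.0 / (k + 1) for k in range(15)])   # moments of the uniform density

for i in range(1, 7):
    D2i   = hankel(mu, 0, i + 1)        # numerator of (A.4): |D_{2i}|
    minor = hankel(mu, 2, i)            # denominator of (A.4): entries mu_2 ... mu_{2i}
    q_i = np.linalg.det(D2i) / np.linalg.det(minor)
    print(i, q_i)                        # should decrease toward zero with i
```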

A.2. Entropy convergence

The following theorem holds.

Theorem A.1. If α_j = jΔα, j = 0, ..., M, and f_M(x) = \exp(-\sum_{j=0}^{M} \lambda_j x^{\alpha_j}), then

\lim_{M \to \infty} H[f_M] =: \lim_{M \to \infty} \left( -\int_0^1 f_M(x) \ln f_M(x)\, dx \right) = H[f] =: -\int_0^1 f(x) \ln f(x)\, dx.   (A.7)

Proof. From (A.1) and (A.7) we have

H[f_M] = \sum_{j=0}^{M} \lambda_j \mu_j.   (A.8)

Let us consider (A.8). When only μ_0 varies continuously, taking into account (A.3)–(A.6) and (A.8) we have

\frac{d}{d\mu_0} H[f_M] = \sum_{j=0}^{M} \mu_j \frac{d\lambda_j}{d\mu_0} + \lambda_0 = \lambda_0 - 1,

\frac{d^2}{d\mu_0^2} H[f_M] = \frac{d\lambda_0}{d\mu_0} = -\frac{1}{|D_{2M}|} \begin{vmatrix} \mu_2 & \cdots & \mu_{M+1} \\ \vdots & & \vdots \\ \mu_{M+1} & \cdots & \mu_{2M} \end{vmatrix} = -\frac{1}{\mu_0 - \mu_0^{(M)}} < 0.

Thus H[f_M] is a concave differentiable function of μ_0. When μ_0 → μ_0^{(M)}, then H[f_M] → -∞, whilst at μ_0 it holds that H[f_M] > H[f], f_M(x) being the ME density once (μ_0, ..., μ_M) are assigned. Besides, when M → ∞, then μ_0^{(M)} → μ_0. So the theorem is proved. □
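As a closing numerical check of the convergence mechanism behind Theorem A.1 (an editorial sketch, not from the paper; the target density f(x) = 2x on [0, 1], the number of moments and the optimizer choice are assumptions), the entropy of the ME density constrained by the first M integer moments can be tabulated and watched approaching H[f] from above.

```python
# Sketch: entropy of the ME density on [0, 1] constrained by the first M power
# moments of f(x) = 2x, whose moments are mu_j = 2/(j + 2); H[f] = 1/2 - ln 2.
import numpy as np
from scipy.integrate import quad
from scipy.optimize import minimize

def me_entropy(M, mu):
    """H[f_M] = min over lambda of ln(int_0^1 exp(-sum_j lam_j x^j) dx) + sum_j lam_j mu_j."""
    def potential(lam):
        z = quad(lambda x: np.exp(-sum(l * x**(j + 1) for j, l in enumerate(lam))), 0, 1)[0]
        return np.log(z) + np.dot(lam, mu[:M])
    return minimize(potential, np.zeros(M), method="Nelder-Mead").fun

mu = np.array([2.0 / (j + 2) for j in range(1, 7)])        # mu_1, mu_2, ...
H_true = quad(lambda x: -2 * x * np.log(2 * x), 0, 1)[0]   # = 0.5 - ln 2
for M in (1, 2, 3, 4):
    print(M, me_entropy(M, mu), H_true)    # H[f_M] should decrease toward H_true
```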

References

[1] B. Davies, B. Martin, Numerical inversion of the Laplace transform: a survey and comparison of methods, J. Comput. Phys. 33 (1979) 1–32.
[2] E.T. Jaynes, Where do we stand on maximum entropy?, in: R.D. Levine, M. Tribus (Eds.), The Maximum Entropy Formalism, MIT Press, Cambridge, MA, 1978, pp. 15–118.
[3] H.K. Kesavan, J.N. Kapur, Entropy Optimization Principles with Applications, Academic Press, New York, 1992.
[4] S. Kullback, A lower bound for discrimination information in terms of variation, IEEE Trans. Inform. Theory IT-13 (1967) 126–127.
[5] J.A. Shohat, J.D. Tamarkin, The Problem of Moments, AMS Mathematical Surveys, vol. 1, American Mathematical Society, Providence, RI, 1963.