Moment-recovered approximations of multivariate distributions: The Laplace transform inversion

Statistics and Probability Letters 81 (2011) 1–7


Robert M. Mnatsakanov, Department of Statistics, West Virginia University, PO Box 6330, Morgantown, WV 26506, USA

Article history: Received 14 April 2010; received in revised form 15 September 2010; accepted 15 September 2010; available online 26 September 2010.

Keywords: Hausdorff moment problem; Moment-recovered distribution; Uniform rate of approximation; Laplace transform inversion

Abstract

Moment-recovered approximations of multivariate distributions are suggested. This method is natural in certain incomplete models where the moments of the underlying distribution can be estimated from a sample from the observed distribution. The approach is applicable in situations where other methods cannot be used, e.g., when only the moments of the target distribution are available. Some properties of the proposed constructions are derived. In particular, procedures for recovering two types of convolutions, the copula and copula density functions, as well as the conditional density function, are suggested. Finally, an approximation of the inverse Laplace transform is obtained. The performance of the moment-recovered construction is illustrated via graphs of a simple density function.

1. Introduction

Consider the multiple sequence of real numbers ν = {µj, j ∈ N^p}, where j = (j1, j2, . . . , jp), p ⩾ 1, and N = {0, 1, . . .}. We say F is a solution of the p-dimensional Stieltjes moment problem if there exists a multivariate distribution F such that

µj = ∫_{R₊^p} t^j dF(t),  for j = (j1, j2, . . . , jp) ∈ N^p, R₊ = [0, ∞).

Here, for simplicity of notation, we write t^j = Π_{k=1}^{p} t_k^{j_k} for any t = (t1, t2, . . . , tp) ∈ R₊^p. We also say F is moment-determinate (M-determinate) if the solution is unique. When the support of the distribution F is compact in R₊^p, e.g., supp{F} = [0, T]^p with T < ∞, we say that F is the solution of the p-dimensional Hausdorff moment problem. It is known that the p-dimensional Hausdorff moment problem is always M-determinate; see Shohat and Tamarkin (1943), where necessary and sufficient conditions in terms of the sequence {µj, j ∈ N^p} are derived. Several sufficient conditions (similar to Carleman's, as well as integral criteria) are studied in De Jeu (2003) for the general Hamburger case, i.e., when supp{F} = R^p, p ⩾ 1. See also Fuglede (1983) and the references therein.

In the one-dimensional Hausdorff case, Mnatsakanov (2008a,b) constructed moment-recovered approximants of a cumulative distribution function (cdf) F and its probability density function (pdf) f that have direct analytical forms depending on the moment sequence of F. The rate of convergence and some other properties have been studied as well. There are many works in which the Maximum Entropy principle (see Kapur and Kesavan (1992)) is applied, but even in the univariate case this method is ill-conditioned when the number of assigned moments is large. To reduce this instability, Novi Inverardi et al. (2003) suggested the use of fractional moments instead of integer ones. In the framework of a generalized moment problem, Lasserre (2008) suggested a semidefinite programming approach. It should be mentioned that this approach does not work well when p > 2.

E-mail address: [email protected]. doi:10.1016/j.spl.2010.09.011


The current work can be viewed as a continuation of the results from Mnatsakanov (2008a,b). In what follows, for simplicity of notation, we shall assume p = 2, although all results remain valid for p ⩾ 3. The aim of the present article is to construct the stable approximants Fa,ν and fa,ν of the cdf F and its pdf f, introduced in (2) and (5), respectively, as well as to approximate the inverse Laplace transform via (15). The proposed approximations have closed forms depending on the moments of the underlying distribution and are easily calculated. A simulation study and graphical illustrations of the approximation fa,ν in several simple cases are conducted in Mnatsakanov and Li (2010), where it is shown that the approximant fa,ν is stable when the number of moments is large, and where the moment-recovered inversion of the Radon transform is obtained. Note that one can use the empirical counterparts of Fa,ν and fa,ν for estimating F and f, respectively; this question will be studied in a separate article. Note also that our constructions are applicable when other estimators cannot be used, e.g., in situations where only the (estimated) moments of F are available. To demonstrate the performance of our construction, we apply fa,ν to the approximation (and estimation) of the bivariate pdf f(x, y) = x + y, 0 ⩽ x, y ⩽ 1, given its assigned (empirical) moments.

The paper is organized as follows. We introduce some notation and present the constructions of the moment-recovered functions in Section 2. In Section 3 we show how to recover the distributions of two different types of convolutions, the distribution in some biased models, the copula and copula density functions, as well as the conditional density functions, via the transformed moment sequences of F; see Theorems 1 and 3 (with the proofs in the Appendix) and Corollaries 2 and 3. In Corollary 4 we construct the approximation of the Laplace transform inversion using the results from Theorems 1 and 3.
In Theorem 2 the uniform rate of convergence for the moment-recovered pdf is given (see also Mnatsakanov and Li (2010)). Finally, in Section 4 we consider an example: we compare the graph of the target pdf f with its moment-recovered counterpart fa,ν defined in (5); see Fig. 1.

2. Some notations and preliminaries

Suppose that the cdf F has finite support, supp{F} = [0, T]², T < ∞. Denote by f the corresponding density of F with respect to the Lebesgue measure on [0, T]². Our method of recovering the cdf F and the pdf f is based on the inverse operator K_a^{−1} that yields a solution of the Hausdorff moment problem. To simplify the notation we also assume T = 1. Let us denote the moments (geometric moments of order j + m) of F by

µj,m(F) = ∫_{[0,1]²} t^j s^m dF(t, s) := (KF)(j, m),  j, m ∈ N,  µ0,0(F) = 1.  (1)

In the two-dimensional case we construct the inverse of the operator K from (1) as follows

K_a^{−1}ν(x, y) = Σ_{k=0}^{[αx]} Σ_{l=0}^{[α′y]} Σ_{j=k}^{α} Σ_{m=l}^{α′} C(α, j) C(j, k) C(α′, m) C(m, l) (−1)^{j+m−k−l} µj,m(F),  (2)

where C(n, k) denotes the binomial coefficient, 0 ⩽ x, y ⩽ 1, and a = (α, α′) with α, α′ ∈ N. Let us use the notation a → ∞ to mean that α → ∞ and α′ → ∞. Later we will use the following result:

Bα(u, v) = Σ_{k=0}^{[αv]} C(α, k) u^k (1 − u)^{α−k} → { 1, if u < v; 0, if u > v },  as α → ∞.  (3)
Note that (3) follows from a suitable interpretation of the left-hand side as a sum of binomial probabilities. Moreover, it is worth noting that from (3) and (A.1) (see the proof of Theorem 1 in the Appendix) one can easily derive K_a^{−1}KF →w F as a → ∞ (cf. Mnatsakanov and Ruymgaart (2003), where p = 1 and T = 1). Here →w denotes the weak convergence of cdfs, i.e., convergence at each continuity point of the limiting cdf. In the following, uniform convergence will be denoted by →u.

For any moment sequence ν = {µj,m, j, m ∈ N}, let us denote by Fa,ν the moment-recovered function constructed by means of the operator (2), Fa,ν := K_a^{−1}ν, and by Fν its weak limit, Fν = lim_{a→∞} Fa,ν. Note that the asymptotic properties of Fa,ν studied in Section 3 are valid not only for cdfs F but also for any positive M-determinate functions with finite support. Denote the supremum norm of a function φ : [0, 1]² → R by ‖φ‖∞ = sup_{x,y∈[0,1]} |φ(x, y)|. Let

β(u, b, c) = [Γ(b + c) / (Γ(b)Γ(c))] u^{b−1} (1 − u)^{c−1},  0 < u < 1,  (4)

be the pdf of the Beta(b, c) distribution with shape parameters b, c > 0. To simplify the notation below, when b = [αx] + 1 and c = α − [αx] + 1 in (4) we write βα(·, x) := β(·, b, c), while for b = [α′y] + 1 and c = α′ − [α′y] + 1 we use βα′(·, y). Also denote by ∆(f, δ) = sup_{x,y∈[0,1]} sup_{(t,s)∈S(x,y;δ)} |f(t, s) − f(x, y)| the modulus of continuity of f, where S(x, y; δ) = {(t, s) : |t − x| ⩽ δ, |s − y| ⩽ δ} and 0 < δ < 1.


The moment-recovered pdf fa,ν := B_a^{−1}ν is constructed as follows:

fa,ν(x, y) := B_a^{−1}ν(x, y) = [Γ(α+2)Γ(α′+2) / (Γ([αx]+1)Γ([α′y]+1))] × Σ_{m=0}^{α−[αx]} Σ_{j=0}^{α′−[α′y]} (−1)^{m+j} µ_{m+[αx], j+[α′y]}(F) / (m! j! (α−[αx]−m)! (α′−[α′y]−j)!),  0 ⩽ x, y ⩽ 1.  (5)
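The sums in (5) can be transcribed almost verbatim. Below is a small Python sketch (my own illustration, not code from the paper; the names `pdf_approx` and `mu_xy` are assumptions), tested on the Section 4 example f(x, y) = x + y, whose moments are µj,m(F) = 1/((m+1)(j+2)) + 1/((j+1)(m+2)):

```python
from fractions import Fraction
from math import factorial

def pdf_approx(mu, x, y, alpha, alpha2):
    """Moment-recovered pdf approximant f_{a,nu}(x, y) of eq. (5)."""
    ax, ay = int(alpha * x), int(alpha2 * y)     # [alpha*x], [alpha'*y]
    # Gamma(alpha+2) = (alpha+1)!, Gamma([alpha*x]+1) = [alpha*x]!
    coef = Fraction(factorial(alpha + 1) * factorial(alpha2 + 1),
                    factorial(ax) * factorial(ay))
    s = Fraction(0)
    for m in range(alpha - ax + 1):
        for j in range(alpha2 - ay + 1):
            s += (Fraction((-1) ** (m + j),
                           factorial(m) * factorial(j)
                           * factorial(alpha - ax - m)
                           * factorial(alpha2 - ay - j))
                  * mu(m + ax, j + ay))
    return coef * s

# Moments of f(x, y) = x + y on [0,1]^2 (the example of Section 4):
mu_xy = lambda j, m: (Fraction(1, (m + 1) * (j + 2))
                      + Fraction(1, (j + 1) * (m + 2)))
```

With α = α′ = 15, `pdf_approx(mu_xy, 0.5, 0.5, 15, 15)` is about 0.94, against the true value f(1/2, 1/2) = 1; the discrepancy is within the uniform bound 6/(α + 2) derived in Section 4.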

The form of fa,ν can be explained by taking the ratio

∆Fa,ν(x, y) / (∆∆′),  with ∆ = 1/α and ∆′ = 1/α′,  (6)

and scaling (6) by (α+1)(α′+1)/(αα′). Here ∆Fa,ν(x, y) = Fa,ν(x, y) − Fa,ν(x, y − ∆′) − Fa,ν(x − ∆, y) + Fa,ν(x − ∆, y − ∆′).

Finally, let us introduce two different convolutions of cdfs F1 and F2:



F1 ⊗ F2 (x, y) = ∫_{[0,1]²} F1(x/t, y/s) dF2(t, s),  0 ⩽ x, y ⩽ 1,  (7)

and

F1 ⋆ F2 (x, y) = ∫_{[0,1]²} F1(x − t, y − s) dF2(t, s),  0 ⩽ x, y ⩽ 2,  (8)

and the corresponding convolutions between the pdfs f1 and f2:

f1 ⊗ f2 (x, y) = ∫_{[0,1]²} f1(x/τ, y/σ) f2(τ, σ) (1/(τσ)) dτ dσ,  0 ⩽ x, y ⩽ 1,

and

f1 ⋆ f2 (x, y) = ∫_{[0,1]²} f1(x − τ, y − σ) f2(τ, σ) dτ dσ,  0 ⩽ x, y ⩽ 2.

Note here that (7) and (8) represent the cdfs of Z = (X1Y1, X2Y2) and of X + Y, respectively, assuming X = (X1, X2) and Y = (Y1, Y2) to be independent random vectors distributed according to F1 and F2. For any moment sequences ν1 = {µj,m(F1), j, m ∈ N} and ν2 = {µj,m(F2), j, m ∈ N}, let us use the notations ν1 ⊙ ν2 = {µj,m(F1) × µj,m(F2), j, m ∈ N} and ν1 ⊛ ν2 = {ν̄j,m, j, m ∈ N}, where

ν̄j,m = Σ_{k=0}^{j} Σ_{l=0}^{m} C(j, j−k) C(m, m−l) µ_{j−k,m−l}(F1) × µ_{k,l}(F2).  (9)
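As a quick numerical sanity check of these definitions (my own sketch, not from the paper; the helper names are assumptions): the ⊙ product should reproduce the moments of (X1Y1, X2Y2), and (9) those of the sum X + Y. For X and Y independent and uniform on [0, 1]²:

```python
from fractions import Fraction
from math import comb

def odot(mu1, mu2):
    """(nu1 . nu2)_{j,m}: elementwise product, moments of (X1*Y1, X2*Y2)."""
    return lambda j, m: mu1(j, m) * mu2(j, m)

def boxstar(mu1, mu2):
    """(nu1 * nu2)_{j,m} of eq. (9): moments of the sum X + Y."""
    def nu_bar(j, m):
        return sum(comb(j, k) * comb(m, l)
                   * mu1(j - k, m - l) * mu2(k, l)
                   for k in range(j + 1) for l in range(m + 1))
    return nu_bar

mu_unif = lambda j, m: Fraction(1, (j + 1) * (m + 1))  # uniform on [0,1]^2
```

For instance, `boxstar(mu_unif, mu_unif)(2, 0)` returns E(U + U′)² = 1/3 + 2·(1/2)² + 1/3 = 7/6 for independent uniform U, U′, which a direct calculation confirms.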

Also denote µ_F^{⊙k} = {µj,m^k(F), j, m ∈ N} and F^{⊗k} = F ⊗ · · · ⊗ F for the corresponding k-fold convolution (cf. (7)).

In Section 3 we recover the copula and the copula density of a joint distribution of two random variables (r.v.'s) by means of the constructions Fa,ν and fa,ν introduced in (2) and (5), respectively. To be more specific, consider two r.v.'s X and Y with a joint cdf F(x, y) = P(X ⩽ x, Y ⩽ y) defined on [0, 1]². Denote the marginal cdfs of F by G1 and G2, and the corresponding densities by g1 = dG1/dx and g2 = dG2/dx. It is known (see Sklar (1959)) that, provided G1 and G2 are continuous, there exists a unique function (the copula) C : [0, 1]² → [0, 1] such that C(u, v) = F(G1^{−1}(u), G2^{−1}(v)), 0 ⩽ u, v ⩽ 1. Here G_k^{−1}(t) = inf{x : G_k(x) ⩾ t} is the generalized inverse of G_k, k = 1, 2. Note that the copula C of F links a joint distribution to its marginals G1 and G2, and is also known as a dependence function. If C has a density c (the copula density function) with respect to the Lebesgue measure on [0, 1]², we can write c(u, v) = f(G1^{−1}(u), G2^{−1}(v)) [g1(G1^{−1}(u)) g2(G2^{−1}(v))]^{−1}, where f is the pdf of F.
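To illustrate the copula recovery numerically (again my own sketch; the toy distribution and names are assumptions, not from the paper), take F(x, y) = x²y², whose marginals are G_k(t) = t² and whose copula is the independence copula C(u, v) = uv. The transformed moments of type (12) are then ν̄j,m = µ_{2j,2m}(F) = 1/((j+1)(m+1)), and applying the operator (2) to them approximates C:

```python
from fractions import Fraction
from math import comb

def cdf_approx(mu, x, y, alpha):
    """The operator K_a^{-1} of eq. (2), taken with alpha = alpha'."""
    total = Fraction(0)
    for k in range(int(alpha * x) + 1):
        for l in range(int(alpha * y) + 1):
            for j in range(k, alpha + 1):
                for m in range(l, alpha + 1):
                    total += ((-1) ** (j + m - k - l)
                              * comb(alpha, j) * comb(j, k)
                              * comb(alpha, m) * comb(m, l)
                              * mu(j, m))
    return total

# Transformed moments (12) for F(x,y) = x^2 y^2 with G_k(t) = t^2:
# nu_bar_{j,m} = mu_{2j,2m}(F) = 1/((j+1)(m+1))
nu_bar = lambda j, m: Fraction(1, (j + 1) * (m + 1))

C_hat = cdf_approx(nu_bar, 0.5, 0.5, 30)  # approximates C(1/2, 1/2) = 1/4
```

With α = 30 the value is already within a few hundredths of 1/4, in line with the weak convergence C_a →w C noted in Remark 1.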

3. Main results

In this section we present some asymptotic properties of the moment-recovered cdf Fa,ν = K_a^{−1}ν and the corresponding moment-recovered pdf fa,ν defined in (2) and (5), respectively. In Corollary 3(iii) we recover f(y | x), the conditional pdf of Y given X = x, while in Corollary 4 a new Laplace inversion formula is derived.


3.1. Asymptotic properties of Fa,ν and fa,ν

One can easily verify that the moment sequences of F1 ⊗ F2 and F1 ⋆ F2 are ν1 ⊙ ν2 and ν1 ⊛ ν2, respectively. The reverse statements are also true; see (i) and (ii) in Theorems 1 and 3. It is also worth noting that ν̄j,m in (9) represents the so-called two-dimensional (j, m)-point discrete convolution, a ⋆ b, of the sequences a = {a_{k,l}} and b = {b_{k,l}}, namely,

(ν1 ⊛ ν2)_{j,m} = Σ_{k=0}^{j} Σ_{l=0}^{m} a_{j−k,m−l} × b_{k,l} := (a ⋆ b)_{j,m},  where a_{k,l} = C(j, k) C(m, l) µ_{k,l}(F1) and b_{k,l} = µ_{k,l}(F2).

That is, the convolution (8) between two cdfs F1 and F2 is transformed into the discrete convolution of the form (9) between a = {a_{k,l}} and b = {b_{k,l}} defined by the moment sequences of F1 and F2, respectively. The representation (9), together with its k-fold discrete convolution analog, allows us to calculate high-order convolutions of moment sequences recursively or via fast computational techniques (see Chen (1990), among others).

Theorem 1.

(i) If ν = ν1 ⊙ ν2, then

Fa,ν →w Fν,  as a → ∞,  (10)

with Fν = F1 ⊗ F2.
(ii) If ν = ν1 ⊛ ν2, then (10) holds with Fν = F1 ⋆ F2.
(iii) If for some a, b > 0 and c, d ⩾ 0, ν = {ν̄j,m = µ_{aj+c,bm+d}(F)/µ_{c,d}(F), j, m ∈ N}, then (10) holds with

Fν(x, y) = [1/µ_{c,d}(F)] ∫_0^{x^{1/a}} ∫_0^{y^{1/b}} t^c s^d dF(t, s),  x, y ∈ [0, 1].  (11)

(iv) If ν = {ν̄j,m, j, m ∈ N}, with

ν̄j,m = ∫_{[0,1]²} [G1(t)]^j [G2(s)]^m dF(t, s),  (12)

for some continuous and increasing functions G_k : [0, 1] → [0, 1], k = 1, 2, then (10) holds with Fν(x, y) = F(G1^{−1}(x), G2^{−1}(y)), x, y ∈ [0, 1].
(v) If ν = Σ_{k=1}^{m} βk µ_F^{⊙k}, where Σ_{k=1}^{m} βk = 1, βk > 0, then (10) holds with Fν = Σ_{k=1}^{m} βk F^{⊗k}.

Corollary 1.

(i) If ν = {ν̄j,m, j, m ∈ N} with ν̄j,m defined according to (12), and F*a,ν(x, y) = Fa,ν(G1(x), G2(y)), x, y ∈ [0, 1], then

F*a,ν →w F,  as a → ∞.  (13)

(ii) If ν = {µ_{aj,bm}(F), j, m ∈ N} for some a, b > 0, then (13) holds with F*a,ν(x, y) = Fa,ν(x^a, y^b), x, y ∈ [0, 1].
(iii) If ν = {a^j b^m µj,m(F), j, m ∈ N} for some a, b > 0, then (13) holds with Fν(x, y) = F(x/a, y/b), x ∈ [0, a], y ∈ [0, b].

Proof of Corollary 1. The transformations x → G_k(x), k = 1, 2, and statement (iv) from Theorem 1 yield Corollary 1(i). Statement (ii) is a special case of (i), since

µ_{aj,bm}(F) = ∫_{[0,1]²} [G1(t)]^j [G2(s)]^m dF(t, s)

with G1(t) = t^a and G2(s) = s^b. Since the distribution with the moments µj,m = a^j b^m, j, m ∈ N, is degenerate at (a, b), Theorem 1(i) yields Corollary 1(iii). □

Remark 1. If a = 1, b = 1 in Theorem 1(iii), then the cdf Fν from (11) represents the biased sampling model with the weight function w(t, s) = t^c s^d. Also, assuming in (12) that G1 and G2 represent the marginal distributions of F, we conclude from Theorem 1(iv) that C_a := Fa,ν recovers the copula function C(·, ·) = F(G1^{−1}(·), G2^{−1}(·)).

Consider the functions f : [0, 1]² → R that are polynomials of order up to p + q:

f(t, s) = Σ_{m=0}^{p} Σ_{j=0}^{q} a_{mj} t^m s^j.

The class of all such functions with a_{00} = 0 and all a_{mj} finite will be denoted by P_{p+q}. In the next statement we use the symbol a^{(m)} = a(a + 1) · · · (a + m − 1). In Mnatsakanov and Li (2010) the following is proved:


Theorem 2. Let ν = {µj,m(F), j, m ∈ N}. If the pdf f is continuous on [0, 1]², then fa,ν →u f and, for some 0 < δ < 1,

‖fa,ν − f‖∞ ⩽ ∆(f, δ) + 2‖f‖∞ / (δ⁴(α+2)(α′+2)).

Corollary 2. If f ∈ P_{p+q}, then
(i) fa,ν(x, y) − f(x, y) = Σ_{m=0}^{p} Σ_{j=0}^{q} a_{mj} {b_{mj}(x, y) − x^m y^j}, where

b_{mj}(x, y) = [([αx]+1)^{(m)} · ([α′y]+1)^{(j)}] / [(α+2)^{(m)} · (α′+2)^{(j)}];

(ii) if α = α′ → ∞, then ‖fa,ν − f‖∞ ∼ C1/α, where C1 = Σ_{m=0}^{p} Σ_{j=0}^{q} |a_{mj}| c_{mj} and the constants c_{mj} depend only on m and j (their explicit form is given in Mnatsakanov and Li (2010)).

Several additional properties of fa,ν can be derived as well (cf. Mnatsakanov (2008b)). Let us consider the following conditions:

∫_{[0,1]²} f_k(t, s) (1/(ts)) dt ds < ∞,  for k = 1, 2.  (14)

Theorem 3. Assume the densities f, f_k, k = 1, 2, are continuous on [0, 1]².
(i) If ν = ν1 ⊙ ν2 and the conditions (14) are satisfied, then fa,ν →u f1 ⊗ f2;
(ii) If ν = ν1 ⊛ ν2, then fa,ν →u f1 ⋆ f2;
(iii) If ν = {µ̄j,m, j, m ∈ N} with µ̄j,m defined according to (12), where G_k : [0, 1] → [0, 1], k = 1, 2, are both increasing (or both decreasing) continuous functions with g_k = G′_k, then fa,ν →u fν, where

fν(x, y) = f(G1^{−1}(x), G2^{−1}(y)) [g1(G1^{−1}(x)) g2(G2^{−1}(y))]^{−1}.

Corollary 3.
(i) If f is continuous and for some a, b > 0 and c, d ⩾ 0, ν = {ν̄j,m = µ_{aj+c,bm+d}(F)/µ_{c,d}(F), j, m ∈ N}, then fa,ν →u fν with

fν(x, y) = [1/(ab µ_{c,d}(F))] f(x^{1/a}, y^{1/b}) x^{(c+1)/a − 1} y^{(d+1)/b − 1};

(ii) If f is continuous and ν = {a^j b^m µj,m(F), j, m ∈ N} for some a, b > 0, then fa,ν →u fν with fν(x, y) = a^{−1} b^{−1} f(x/a, y/b), x ∈ [0, a], y ∈ [0, b];
(iii) Let ν = {µ̄j,m, j, m ∈ N} with µ̄j,m defined in (12). If f is continuous, and G1 and G2 represent the marginal distributions of F, then fa,ν(G1(x), G2(y)) g2(y) →u f(x, y)/g1(x) := f(y | x).

Proof of Corollary 3. Statement (i) with c = d = 0 is a special case of Theorem 3(iii), where G1(t) = t^a and G2(t) = t^b. When c, d ≠ 0, the proof of (i) reduces to the case c = d = 0 by replacing the pdf f with f_{c,d}(x, y) = x^c y^d f(x, y)/µ_{c,d}(F). Case (ii) can be proved by replacing x and y by x/a and y/b on the right-hand side of (5), with an argument similar to the one used in the proof of Theorem 3(i). The transformations x → G_k(x), k = 1, 2, combined with the statement of Theorem 3(iii), yield Corollary 3(iii). □

Remark 2. Assuming in (12) that G1 and G2 represent the marginal distributions of F, we conclude from Theorem 3(iii) that c_a := fa,ν recovers the copula density function c(u, v) = f(G1^{−1}(u), G2^{−1}(v)) [g1(G1^{−1}(u)) g2(G2^{−1}(v))]^{−1}.

3.2. Inversion of the Laplace transform

Assuming that the distribution F is defined on R²₊ and the Laplace transform of F is finite at every t = (t, s) ∈ R²₊, let us denote

LF(t) = ∫_{R²₊} e^{−t·x} dF(x).

Here t · x denotes the scalar product of t and x. Note that, given the Laplace transform LF, one can calculate the moments of F as µj,m(F) = (−1)^{j+m} ∂^{j+m}LF(t)/∂t^j ∂s^m |_{t=0}, and then apply (2) and (5) to recover F and f, respectively. On the other hand, we can use properly chosen transformed moments (12), in combination with (2) and (5), to recover the cdf F and its density f directly from the values of LF.

Before discussing the details, recall several known inversions of the Laplace transform. In the univariate case, the well-known Laplace inversion formula proposed by Widder (1946) is

fα(x) = [(−1)^{α−1} / Γ(α)] (α/x)^α ψ^{(α−1)}(α/x),  x ∈ R₊, α ∈ N,

where ψ(t) := LF(t), t ∈ R₊. It is known that if f is bounded and continuous, then fα(x) converges to f uniformly on any bounded interval as α → ∞ (Feller, 1971, pp. 233 and 440). See also Widder (1934) for another approximation of the Laplace inversion. A regularized inversion of the noisy Laplace transform is constructed in Chauveau et al. (1994), where the L²-rate of convergence is obtained using the representation of the Laplace transform as a convolution with respect to the multiplication group operation on R₊ equipped with the Haar measure dt/t. Diaconis and Freedman (2004a,b) characterize the Laplace transforms of bounded densities and discuss the inversion formula for mixture models as well.

We suggest the following construction: consider LF(j, m) as the transformed moments of F (cf. (12), with G_k(x) = e^{−x}, k = 1, 2). As a consequence of the results from Theorems 1 and 3, we derive the following alternative constructions for recovering F and f given the Laplace transform LF.

Corollary 4. Let ν = {µ̄j,m = LF(j, m), j, m ∈ N}.
(i) If Fa,ν = K_a^{−1}ν, then (10) holds with

Fν(x, y) = ∫_{−ln x}^{∞} ∫_{−ln y}^{∞} dF(u, v),  x, y ∈ [0, 1].

(ii) If F̄a,ν(x, y) = Fa,ν(G1(x), G2(y)), x, y ∈ R₊, then

F̄a,ν →w F̄ν,  as a → ∞,

where F̄ν is the survival function of F: F̄ν(x, y) = ∫_x^∞ ∫_y^∞ dF(u, v).

(iii) If fa,ν = B_a^{−1}ν is defined by (5) and f*a,ν(x, y) = fa,ν(G1(x), G2(y)) e^{−(x+y)}, then f*a,ν →u f on any compact subset of R²₊.

Proof. The proof is left to the reader as an exercise. □

In other words, if 1 = (1, 1), ν = {µ̄j,m = LF(j, m), j, m ∈ N}, and the operator B_a^{−1} is defined according to (5), then the corresponding inversion of LF has the form

(L_a^{−1}ν)(x) = (B_a^{−1}ν)(G1(x), G2(y)) e^{−x·1},  x = (x, y) ∈ R²₊.  (15)

For the univariate case, taking

µ̄j(F) := LF(j) = ∫_{R₊} e^{−τj} dF(τ),  j = 1, . . . , α,

we derive the Laplace inversions for recovering F and f, respectively:

(L̄_α^{−1}ν)(x) = 1 − Σ_{k=0}^{[αe^{−x}]} Σ_{j=k}^{α} C(α, j) C(j, k) (−1)^{j−k} µ̄j(F)

and

(L_α^{−1}ν)(x) = [Γ(α+2) / Γ([αe^{−x}]+1)] e^{−x} Σ_{m=0}^{α−[αe^{−x}]} (−1)^m µ̄_{m+[αe^{−x}]}(F) / (m! (α−[αe^{−x}]−m)!),  x ∈ R₊
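These two univariate formulas are easy to check numerically. The sketch below (my own illustration; the helper names are assumptions, not from the paper) recovers the standard exponential distribution from its Laplace-transform values µ̄j = LF(j) = 1/(j + 1): since U = e^{−X} is then uniform on (0, 1), the pdf approximant should return essentially e^{−x}:

```python
from math import comb, exp, factorial

def laplace_inv_cdf(LF, x, alpha):
    """(bar L_alpha^{-1} nu)(x): recover F(x) from LF(0), ..., LF(alpha)."""
    u = int(alpha * exp(-x))          # [alpha * e^{-x}]
    s = sum(comb(alpha, j) * comb(j, k) * (-1) ** (j - k) * LF(j)
            for k in range(u + 1) for j in range(k, alpha + 1))
    return 1 - s

def laplace_inv_pdf(LF, x, alpha):
    """(L_alpha^{-1} nu)(x): recover f(x) from LF(0), ..., LF(alpha)."""
    u = int(alpha * exp(-x))
    s = sum((-1) ** m * LF(m + u)
            / (factorial(m) * factorial(alpha - u - m))
            for m in range(alpha - u + 1))
    return factorial(alpha + 1) / factorial(u) * exp(-x) * s

# Laplace transform of the standard exponential density f(x) = e^{-x}:
LF_exp = lambda j: 1.0 / (j + 1)
```

With α = 15, `laplace_inv_pdf(LF_exp, 0.5, 15)` reproduces e^{−0.5} essentially to machine precision, while the cdf approximant is within a couple of hundredths of 1 − e^{−0.5}.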

(cf. Mnatsakanov (2008a,b)).

4. Example

Let us recover the function f(x, y) = x + y, 0 ⩽ x, y ⩽ 1, via its moment sequence ν = {µj,m(F), j, m ∈ N}, where µj,m(F) = [(m+1)(j+2)]^{−1} + [(j+1)(m+2)]^{−1}. We computed the moment-recovered function fa,ν with α = α′ = 15; see Fig. 1(a) for the graph of fa,ν. The approximant fa,ν almost coincides with the corresponding function f. To specify the exact uniform rate of approximation, note that f ∈ P1 with only two nonzero coefficients, a10 = a01 = 1. Application of Corollary 2(i) yields fa,ν(x, y) − f(x, y) = ([αx]+1)/(α+2) + ([αy]+1)/(α+2) − x − y. Hence ‖fa,ν − f‖∞ ⩽ 6/(α+2).

Assume now that we are given n i.i.d. copies of (X, Y) from the cdf F(x, y) = (x²y + xy²)/2, 0 ⩽ x, y ⩽ 1. To estimate its pdf f(x, y) = x + y, we simulated n = 3000 copies (X1, Y1), . . . , (Xn, Yn) from F. Let ν̂ = {ν̂j,m, j, m ∈ N} be the empirical moments of F:

ν̂j,m = ∫_{[0,1]²} t^j s^m dF̂n(t, s),  j, m ∈ N.
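The estimation step just described can be sketched as follows (my own illustration, not the authors' code; the sampling trick and names are assumptions). The pdf f(x, y) = x + y is the half-half mixture of the densities 2x·1 and 1·2y, which is easy to simulate; plugging the empirical moments ν̂j,m into (5) gives the moment-density estimator:

```python
import random
from math import factorial, sqrt

def pdf_approx(mu, x, y, alpha):
    """Eq. (5) with alpha = alpha'; mu is a table of (empirical) moments."""
    ax, ay = int(alpha * x), int(alpha * y)
    coef = factorial(alpha + 1) ** 2 / (factorial(ax) * factorial(ay))
    s = 0.0
    for m in range(alpha - ax + 1):
        for j in range(alpha - ay + 1):
            s += ((-1) ** (m + j) * mu[m + ax][j + ay]
                  / (factorial(m) * factorial(j)
                     * factorial(alpha - ax - m)
                     * factorial(alpha - ay - j)))
    return coef * s

random.seed(1)
n, alpha = 3000, 15
# f(x,y) = x + y is a half-half mixture of 2x*1 and 1*2y; if U is
# uniform on (0,1), then sqrt(U) has density 2x on [0,1].
sample = [(sqrt(random.random()), random.random())
          if random.random() < 0.5
          else (random.random(), sqrt(random.random()))
          for _ in range(n)]

# empirical moments nu_hat_{j,m} for j, m = 0, ..., alpha
mu_hat = [[sum(x ** j * y ** m for x, y in sample) / n
           for m in range(alpha + 1)] for j in range(alpha + 1)]

f_hat = pdf_approx(mu_hat, 0.5, 0.5, alpha)  # estimates f(1/2, 1/2) = 1
```

For n = 3000 and α = 15 the estimate at (1/2, 1/2) sits near 1, with a deterministic bias of order 1/α plus sampling noise; choosing α optimally as a function of n is exactly the question deferred to the forthcoming paper.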


Here F̂n denotes the empirical cdf of (X1, Y1), . . . , (Xn, Yn). Plugging the values of ν̂ into (5) in place of ν, we calculated the moment-density estimator fa,ν̂ at the grid points (x, y) ∈ {(j/α, m/α′), j = 0, 1, . . . , α, m = 0, 1, . . . , α′} with α = α′ = 15. See Fig. 1(b), where we plotted the surface of fa,ν̂; it has a shape similar to that of f(x, y) = x + y, 0 ⩽ x, y ⩽ 1. In a forthcoming paper we will study the asymptotic properties of fa,ν̂ and Fa,ν̂, as well as the problem of choosing optimal α and α′ as functions of the sample size n that minimize the Integrated Mean Squared Error.


Fig. 1. (a) Approximation of f (x, y) = x + y by fa,ν ; (b) estimation of f by fa,ˆν .

Acknowledgements

The author is grateful to the Associate Editor and referee for helpful suggestions and comments. The research was supported by NSF grant DMS 0906639.

Appendix. Supplementary data

Supplementary material related to this article can be found online at doi:10.1016/j.spl.2010.09.011.

References

Chauveau, D.E., van Rooij, A.C.M., Ruymgaart, F.H., 1994. Regularized inversion of noisy Laplace transforms. Adv. in Appl. Math. 15, 186–201.
Chen, K., 1990. Efficient parallel algorithm for the computation of two-dimensional image moments. Pattern Recognit. 23, 109–119.
De Jeu, M., 2003. Determinate multidimensional measures, the extended Carleman theorem and quasi-analytic weights. Ann. Probab. 31, 1205–1227.
Diaconis, P., Freedman, D., 2004a. The Markov moment problem and de Finetti's theorem: part I. Math. Z. 247, 183–199.
Diaconis, P., Freedman, D., 2004b. The Markov moment problem and de Finetti's theorem: part II. Math. Z. 247, 201–212.
Feller, W., 1971. An Introduction to Probability Theory and its Applications, vol. II. Wiley, New York.
Fuglede, B., 1983. The multidimensional moment problem. Expo. Math. 1, 47–65.
Kapur, J.N., Kesavan, H.K., 1992. Entropy Optimization Principles with Applications. Academic Press, New York.
Lasserre, J., 2008. A semidefinite programming approach to the generalized problem of moments. Math. Program. Ser. B, 65–92.
Mnatsakanov, R.M., 2008a. Hausdorff moment problem: reconstruction of distributions. Statist. Probab. Lett. 78, 1612–1618.
Mnatsakanov, R.M., 2008b. Hausdorff moment problem: reconstruction of probability density functions. Statist. Probab. Lett. 78, 1869–1877.
Mnatsakanov, R.M., Li, S., 2010. The Radon transform inversion via moments. Manuscript.
Mnatsakanov, R.M., Ruymgaart, F.H., 2003. Some properties of moment-empirical CDF's with application to some inverse estimation problems. Math. Methods Statist. 12, 478–495.
Novi Inverardi, P.L., Petri, A., Pontuale, G., Tagliani, A., 2003. Hausdorff moment problem via fractional moments. Appl. Math. Comput. 144, 61–74.
Shohat, J.A., Tamarkin, J.D., 1943. The Problem of Moments. Amer. Math. Soc., New York.
Sklar, A., 1959. Fonctions de répartition à n dimensions et leurs marges. Publ. Inst. Statist. Univ. Paris 8, 229–231.
Widder, D.V., 1934. The inversion of the Laplace integral and the related moment problem. Trans. Amer. Math. Soc. 36, 107–200.
Widder, D.V., 1946. The Laplace Transform. Princeton University Press.
