Applied Mathematics and Computation 144 (2003) 61–74 www.elsevier.com/locate/amc
Hausdorff moment problem via fractional moments

Pierluigi Novi Inverardi (a), Giorgio Pontuale (b), Alberto Petri (b), Aldo Tagliani (a,*)

(a) Faculty of Economics, Trento University, 38100 Trento, Italy
(b) CNR, Istituto di Acustica "O.M. Corbino", 00133 Roma, Italy
Abstract

We outline an efficient method for reconstructing a probability density function from the knowledge of its infinite sequence of ordinary moments. The approximate density is obtained by the maximum entropy technique, under the constraint of a few fractional moments. The latter are obtained explicitly in terms of the given infinite sequence of ordinary moments. It is proved that the approximate density converges in entropy to the underlying density, so that it is useful for calculating expected values.

© 2002 Elsevier Science Inc. All rights reserved.

Keywords: Entropy; Fractional moments; Hankel matrix; Maximum entropy; Moments
1. Introduction

In the applied sciences a variety of problems, formulated in terms of linear boundary value or integral equations, lead to a Hausdorff moment problem. Such a problem arises when a given sequence of real numbers may be represented as the moments about the origin of a non-negative measure defined on a finite interval, typically [0, 1]. The underlying density f(x) is unknown, while its moments

  $\mu_j = \int_0^1 x^j f(x)\,dx, \quad j = 0, 1, 2, \ldots, \quad \mu_0 = 1,$

are known. Then, through a variety of techniques, for practical purposes f(x) is recovered by taking into
* Corresponding author. E-mail address: [email protected] (A. Tagliani).

0096-3003/02/$ - see front matter © 2002 Elsevier Science Inc. All rights reserved.
doi:10.1016/S0096-3003(02)00391-0
account only a finite sequence $\{\mu_j\}_{j=0}^M$. Such a procedure implicitly assumes that f(x) is well characterized by its first few moments. On the other hand, it is well known that the moment problem becomes ill-conditioned as the number of moments involved in the reconstruction increases [1,2]. In the Hausdorff case, once $(\mu_0, \ldots, \mu_{M-1})$ are fixed, the moment $\mu_M$ may assume values within an interval $[\mu_M^-, \mu_M^+]$, where [3]

  $\mu_M^+ - \mu_M^- \le 2^{-2(M-1)}.$  (1.1)

If one considers the approximating density $f_M(x) = \exp(-\sum_{j=0}^M \lambda_j x^j)$ obtained by entropy maximization, constrained by the first M moments [4], then its entropy $H[f_M] = -\int_0^1 f_M(x) \ln f_M(x)\,dx$ satisfies

  $\lim_{\mu_M \to \mu_M^{\pm}} H[f_M] = -\infty.$  (1.2)
Such a relationship is satisfied by any other distribution constrained by the same first M moments, since $f_M(x)$ has maximum entropy. On the other hand, f(x) and $f_M(x)$ have the same first M moments and, as a consequence (as we illustrate in Section 3), the following relationship holds:

  $I(f, f_M) := \int_0^1 f(x) \ln\frac{f(x)}{f_M(x)}\,dx = H[f_M] - H[f].$  (1.3)

Here H[f] is the entropy of f(x), while $I(f, f_M)$ is the Kullback–Leibler distance between f(x) and $f_M(x)$. Eqs. (1.1)–(1.3) underline once more the ill-conditioning of the moment problem. The ill-conditioning is further highlighted by considering the estimation of the parameters $\lambda_j$ of $f_M(x)$. The calculation of the $\lambda_j$ amounts to minimizing a suitable potential function $\Gamma(\lambda_1, \ldots, \lambda_M)$ [4], with

  $\min_{\lambda_1, \ldots, \lambda_M} \Gamma(\lambda_1, \ldots, \lambda_M) = \min_{\lambda_1, \ldots, \lambda_M} \left[ \ln\left( \int_0^1 \exp\left( -\sum_{j=1}^M \lambda_j x^j \right) dx \right) + \sum_{j=1}^M \lambda_j \mu_j \right].$  (1.4)

$f_M(x)$ satisfies the constraints

  $\mu_j = \int_0^1 x^j \exp\left( -\sum_{k=0}^M \lambda_k x^k \right) dx, \quad j = 0, \ldots, M.$  (1.5)

Letting $\mu = (\mu_0, \ldots, \mu_M)$ and $\lambda = (\lambda_0, \ldots, \lambda_M)$, (1.5) may be written as the map

  $\mu = \phi(\lambda).$  (1.6)

The corresponding Jacobian matrix is, up to sign, a Hankel matrix, whose condition number is $\simeq (1+\sqrt{2})^{4M}/\sqrt{M}$ [5]. All the previous remarks lead
to the conclusion that f(x) may be efficiently recovered from moments only if a few moments are required, i.e., if its information content is spread among its first few moments.

In this paper we look for a way to overcome the above-quoted difficulties in recovering f(x) from moments. First of all, we assume that the infinite sequence of moments $\{\mu_j\}_{j=0}^\infty$ is known. Then, from such a sequence we calculate fractional moments

  $E(X^{a_j}) := \int_0^1 x^{a_j} f(x)\,dx = \sum_{n=0}^\infty b_n(a_j)\,\mu_n, \quad a_j > 0,$  (1.7)

where the explicit analytic expression of $b_n(a_j)$ is given by (2.5). Finally, from a finite number of fractional moments $\{E(X^{a_j})\}_{j=1}^M$ we recover $f_M(x) = \exp(-\sum_{j=0}^M \lambda_j x^{a_j})$ by entropy maximization [4]. The exponents $\{a_j\}_{j=1}^M$ are chosen as follows:

  $\{a_j\}_{j=1}^M : \quad H[f_M] = \text{minimum}.$  (1.8)

The choice of $\{a_j\}_{j=1}^M$ according to (1.8) leads to a density $f_M(x)$ having minimum distance from f(x), as stressed by (1.3).

Remark. If the information content of f(x) is spread among its first moments, so that the ME approximant $f_M(x)$ represents an accurate approximation of f(x), then the fractional moments may be accurately calculated by replacing f(x) with $f_M(x)$. Then $f_M(x)$ converges in entropy, and hence in $L_1$-norm, to f(x) [6], and the error incurred by replacing f(x) with $f_M(x)$,

  $|E_f(X^{a_j}) - E_{f_M}(X^{a_j})| \le \int_0^1 x^{a_j} |f(x) - f_M(x)|\,dx \le \int_0^1 |f(x) - f_M(x)|\,dx \le \sqrt{2(H[f_M] - H[f])},$  (1.9)

may be rendered arbitrarily small by increasing M (the inequalities in (1.9) are proved in Section 3).
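The ill-conditioning described above is easy to observe numerically. The following sketch (our own illustration, not code from the paper) uses the uniform density on [0, 1], whose moments are $\mu_k = 1/(k+1)$: the resulting Hankel matrix of moments is exactly the Hilbert matrix, whose condition number grows explosively with M, in line with the estimate of [5].

```python
import numpy as np

def moment_hankel(M):
    """Hankel matrix of the moments mu_k = 1/(k+1) of the uniform density
    on [0, 1]; this is exactly the (M+1) x (M+1) Hilbert matrix."""
    return np.array([[1.0 / (i + j + 1) for j in range(M + 1)]
                     for i in range(M + 1)])

for M in (2, 4, 8):
    print(M, np.linalg.cond(moment_hankel(M)))
```

Already at M = 8 the condition number exceeds 10^11, so a moment vector perturbed in its last digits maps to wildly different Lagrange multipliers.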
2. Fractional moments from moments

Let X be a continuous random variable with density f(x) on the support [0, 1], with moments of order s, centered at c, $c \in \mathbb{R}$,

  $\mu_s(c) := E[(X - c)^s] = \int_0^1 (x - c)^s f(x)\,dx, \quad s \in \mathbb{N}^* = \mathbb{N} \cup \{0\},$  (2.1)
and moments about the origin $\mu_s := \mu_s(0)$, related to the moments centered at a generic c through the relationship

  $\mu_s = \sum_{h=0}^s \binom{s}{h} c^{s-h} \mu_h(c), \quad s \in \mathbb{N}^*.$  (2.2)

We now recall the relationship, similar to (2.2), which permits us to calculate the (fractional) moment of order $s \in \mathbb{R}^+$ (s replaces $a_j$ for notational convenience, as in (1.7) and (3.2)) involving all the central moments of a given distribution about the point c. Firstly, by definition of the non-central moment of order s, we can write $E(X^s) = \int_0^1 x^s f(x)\,dx$ and then, by Taylor expansion of $x^s$ around c, where $c \in (0, 1)$, we have

  $x^s = \sum_{n=0}^\infty \frac{(x-c)^n}{n!} \left[ x^s \right]^{(n)}_{x=c} = \sum_{n=0}^\infty \frac{(x-c)^n}{n!} \binom{s}{n} n!\, x^{s-n} \Big|_{x=c} = \sum_{n=0}^\infty \binom{s}{n} c^{s-n} (x-c)^n,$  (2.3)

where $[k(x)]^{(n)}_{x=c}$ indicates the nth derivative of the function k(x) w.r.t. x, evaluated at c. Taking the expectation on both sides of the last equation in (2.3), we get the required relationship

  $E(X^s) = \sum_{n=0}^\infty \binom{s}{n} c^{s-n} E[(X-c)^n] = \sum_{n=0}^\infty b_n \mu_n(c),$  (2.4)

where

  $b_n = \binom{s}{n} c^{s-n}, \quad n \in \mathbb{N}^*,$  (2.5)
represents the coefficient of the integral n-order moment of X centered at c.

The formulation of the s-order fractional moments as in (2.4) shows some numerical instabilities, which depend on the structure of the relationship between $\mu_n(c)$ and $E(X^s)$; these instabilities are related to the value of the center c and increase as the order of the central moments becomes high. In particular:

(a) The numerical error $\Delta E[(X-c)^n]$ due to the evaluation of $E[(X-c)^n]$ in terms of the non-central integral moments $E(X^h)$, $h \le n$, grows as c and n increase. In fact,

  $|\Delta E[(X-c)^n]| = \left| \sum_{h=0}^n \binom{n}{h} (-1)^{n-h} c^{n-h}\, \Delta E(X^h) \right| \le \sum_{h=0}^n \binom{n}{h} c^{n-h} |\Delta E(X^h)| = \|\Delta E(X^h)\|_\infty \sum_{h=0}^n \binom{n}{h} c^{n-h} = \|\Delta E(X^h)\|_\infty (1+c)^n \simeq \text{eps}\,(1+c)^n,$  (2.6)

where eps denotes the machine error.

(b) The numerical error $\Delta E(X^s)$ due to the evaluation of $E(X^s)$ involving the first $M_{\max}$ central moments $E[(X-c)^n]$ is given by

  $|\Delta E(X^s)| = \left| \sum_{n=0}^{M_{\max}} \binom{s}{n} c^{s-n}\, \Delta E[(X-c)^n] \right| \le \sum_{n=0}^{M_{\max}} \binom{s}{n} c^{s-n} |\Delta E[(X-c)^n]| \le \|\Delta E[(X-c)^n]\|_\infty\, c^s \max_n \binom{s}{n} \sum_{n=0}^{M_{\max}} \left( \frac{1}{c} \right)^n = \|\Delta E[(X-c)^n]\|_\infty\, c^s \max_n \binom{s}{n}\, \frac{(1/c)^{M_{\max}+1} - 1}{(1/c) - 1},$  (2.7)

with

  $\max_n \binom{s}{n} = \binom{s}{[s/2]}$ if [s] is even, and $\max_n \binom{s}{n} = \binom{s}{[s/2]+1}$ if [s] is odd,

where [x] represents the integer part of x. The product of the first two factors on the right-hand side of (2.7) is an increasing function of c, whilst the last factor is a decreasing function of c. Hence, taking into account both (a) and (b), a reasonable choice is c = 1/2. Further, rewriting the last inequality in (2.7) as

  $|\Delta E(X^s)| \le \|\Delta E[(X-c)^n]\|_\infty\, c^s \max_n \binom{s}{n}\, \frac{c^{-(M_{\max}+1)} - 1}{c^{-1} - 1},$

we can reconstruct the s-order fractional moment with a prefixed level of accuracy $\epsilon$, $\epsilon > 0$, by involving a number of central moments equal to the value $M_{\max}$.
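A minimal sketch of the truncated series (2.4)–(2.5) with c = 1/2 (our own illustration; the function names are ours, not from the paper). To sidestep the cancellation discussed in point (a) above, the central moments are obtained from the ordinary moments in exact rational arithmetic; the uniform density, with $\mu_j = 1/(j+1)$ and $E(X^s) = 1/(s+1)$, serves as a check.

```python
from fractions import Fraction
from math import comb

def gen_binom(s, n):
    """Generalized binomial coefficient C(s, n) for real s, as in eq. (2.5)."""
    out = 1.0
    for k in range(n):
        out *= (s - k) / (k + 1)
    return out

def central_moments(mu, c):
    """Central moments E[(X-c)^n] from ordinary moments mu[0..N], exactly
    (mu and c are Fractions, so the alternating binomial sums are exact)."""
    return [sum(comb(n, h) * (-c) ** (n - h) * mu[h] for h in range(n + 1))
            for n in range(len(mu))]

def fractional_moment(mu, s, c=Fraction(1, 2)):
    """Truncated series (2.4): E(X^s) ~ sum_n C(s, n) c^(s-n) E[(X-c)^n]."""
    mc = central_moments(mu, c)
    return sum(gen_binom(s, n) * float(c) ** (s - n) * float(mc[n])
               for n in range(len(mu)))

# Check on the uniform density: mu_j = 1/(j+1), so E(X^s) = 1/(s+1).
mu = [Fraction(1, j + 1) for j in range(81)]
print(fractional_moment(mu, 0.5))   # close to 2/3
```

With floating-point moments the same computation would fail for large n, exactly as estimate (2.6) predicts; exact rationals are a convenient way to isolate the truncation error of the series itself.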
3. Recovering f(x) from fractional moments

Let X be a positive r.v. on [0, 1] with density f(x), Shannon entropy $H[f] = -\int_0^1 f(x) \ln f(x)\,dx$ and moments $\{\mu_j\}_{j=0}^\infty$, from which the positive fractional moments $E(X^{a_j}) = \sum_{n=0}^\infty b_n(a_j)\,\mu_n$ may be obtained, as in (2.4) and (2.5). From [4], we know that the Shannon-entropy-maximizing density $f_M(x)$ having the same fractional moments $E(X^{a_j})$, $j = 0, \ldots, M$, as f(x) is

  $f_M(x) = \exp\left( -\sum_{j=0}^M \lambda_j x^{a_j} \right).$  (3.1)

Here $(\lambda_0, \ldots, \lambda_M)$ are Lagrange multipliers, which must be supplemented by the condition that the first M fractional moments of $f_M(x)$ coincide with $E(X^{a_j})$, i.e.,

  $E(X^{a_j}) = \int_0^1 x^{a_j} f_M(x)\,dx, \quad j = 0, \ldots, M, \quad a_0 = 0.$  (3.2)

The Shannon entropy $H[f_M]$ of $f_M(x)$ is given by

  $H[f_M] = -\int_0^1 f_M(x) \ln f_M(x)\,dx = \sum_{j=0}^M \lambda_j E(X^{a_j}).$  (3.3)

Given two probability densities f(x) and $f_M(x)$, there are two well-known measures of the distance between them, namely the divergence measure

  $I(f, f_M) = \int_0^1 f(x) \ln\frac{f(x)}{f_M(x)}\,dx$

and the variation measure $V(f, f_M) = \int_0^1 |f_M(x) - f(x)|\,dx$. If f(x) and $f_M(x)$ have the same fractional moments $E(X^{a_j})$, $j = 1, \ldots, M$, then

  $I(f, f_M) = H[f_M] - H[f]$  (3.4)

holds. In fact,

  $I(f, f_M) = \int_0^1 f(x) \ln\frac{f(x)}{f_M(x)}\,dx = -H[f] + \sum_{j=0}^M \lambda_j \int_0^1 x^{a_j} f(x)\,dx = -H[f] + \sum_{j=0}^M \lambda_j E(X^{a_j}) = H[f_M] - H[f].$
In the literature several lower bounds for the divergence measure I in terms of V are available. We shall use the following bound [7]:

  $I \ge \frac{V^2}{2}.$  (3.5)

If g(x) denotes a bounded function such that $|g(x)| \le K$, K > 0, then, taking into account (3.4) and (3.5), we have

  $|E_f(g) - E_{f_M}(g)| \le \int_0^1 |g(x)|\,|f(x) - f_M(x)|\,dx \le K \sqrt{2(H[f_M] - H[f])}.$  (3.6)

Eq. (3.6) suggests which fractional moments have to be chosen:

  $\{a_j\}_{j=1}^M : \quad H[f_M] = \text{minimum}.$  (3.7)
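Bound (3.5) can be checked numerically; the sketch below (our own illustration, not from the paper) compares the density f(x) = (π/2) sin(πx) against the uniform density on a midpoint-rule grid.

```python
import numpy as np

# Numerical check of Kullback's bound I(f, g) >= V(f, g)^2 / 2 (eq. (3.5))
# for f(x) = (pi/2) sin(pi x) and g uniform on [0, 1], midpoint rule.
x = (np.arange(100_000) + 0.5) / 100_000
f = np.pi / 2 * np.sin(np.pi * x)
g = np.ones_like(x)
I = np.mean(f * np.log(f / g))   # divergence measure I(f, g)
V = np.mean(np.abs(f - g))       # variation measure V(f, g)
print(I, V ** 2 / 2)
```

Here I evaluates to ln(π) − 1 ≈ 0.1447 (since g is uniform, I equals −H[f]), comfortably above V²/2.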
The use of fractional moments in the framework of ME relies on the following two theoretical results. The first is a theorem [8, Theorem 2] which guarantees that a probability density is characterized by the knowledge of a suitable infinite sequence of fractional moments.

Theorem 3.1 [8, Theorem 2]. If X is a r.v. assuming values in a bounded interval [0, 1] and $\{a_j\}_{j=0}^\infty$ is an infinite sequence of positive and distinct numbers satisfying $\lim_{j\to\infty} a_j = 0$ and $\sum_{j=0}^\infty a_j = +\infty$, then the sequence of moments $\{E(X^{a_j})\}_{j=0}^\infty$ characterizes X.

The second concerns the convergence in entropy of $f_M(x)$, where entropy convergence means $\lim_{M\to\infty} H[f_M] = H[f]$. More precisely:
Theorem 3.2. If the $\{a_j\}_{j=0}^M$ are equispaced within [0, 1), with $a_{M-j+1} = j/(M+1)$, $j = 0, \ldots, M$, then the ME approximant converges in entropy to f(x).

Proof. See Appendix A.

We confine ourselves to pointing out that the choice of equispaced points $a_{M-j+1} = j/(M+1)$, $j = 0, \ldots, M$, satisfies both conditions of Theorem 3.1, i.e.
  $\lim_{M\to\infty} a_M = 0 \quad\text{and}\quad \lim_{M\to\infty} \sum_{j=0}^M a_j = \lim_{M\to\infty} \frac{1}{M+1} \cdot \frac{M(M+1)}{2} = +\infty.$
As a consequence, if the choice of equispaced $a_{M-j+1}$ guarantees entropy convergence, then the choice (3.7) guarantees entropy convergence too. From a computational point of view, the Lagrange multipliers $(\lambda_1, \ldots, \lambda_M)$ are obtained from (1.4), after which the normalizing constant $\lambda_0$ follows by imposing that the density integrate to 1. Then the optimal exponents $\{a_j\}_{j=1}^M$ are obtained as

  $\{a_j\}_{j=1}^M : \quad \min_{a_1, \ldots, a_M} \left[ \min_{\lambda_1, \ldots, \lambda_M} \Gamma(\lambda_1, \ldots, \lambda_M) \right].$  (3.8)
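For fixed exponents, the inner minimization in (3.8) is a smooth convex problem: the gradient of $\Gamma$ is the mismatch between target and current moments, and its Hessian is the covariance matrix of the $x^{a_j}$ (the Hankel-like matrix of Section 1). The sketch below (our own, not code from the paper; quadrature is a simple midpoint rule) performs this inner minimization by Newton's method.

```python
import numpy as np

def me_lambdas(alphas, targets, iters=30, ngrid=4000):
    """Newton minimization of the dual potential Gamma(lambda) of eq. (1.4),
    with x**a_j in place of x**j: grad Gamma_j = mu_j - E_lam[x**a_j],
    Hess_jk = Cov_lam(x**a_j, x**a_k). Returns (lambda_0, other multipliers)."""
    x = (np.arange(ngrid) + 0.5) / ngrid            # midpoint grid on (0, 1)
    P = np.array([x ** a for a in alphas])          # (M, ngrid) table of x**a_j
    lam = np.zeros(len(alphas))
    for _ in range(iters):
        w = np.exp(-lam @ P)
        w /= w.mean()                               # normalized density values
        m = (P * w).mean(axis=1)                    # current moments E[x**a_j]
        g = np.asarray(targets) - m                 # gradient of Gamma
        H = (P[:, None, :] * P[None, :, :] * w).mean(axis=2) - np.outer(m, m)
        lam = lam - np.linalg.solve(H, g)           # Newton step
    lam0 = np.log(np.mean(np.exp(-lam @ P)))        # normalizing constant
    return lam0, lam
```

As a self-check: for the density $f(x) = e^{-\lambda_0 - x}$ on [0, 1], with $\lambda_0 = \ln(1 - e^{-1})$, the single target moment $E(X) = (1 - 2e^{-1})/(1 - e^{-1})$ should return $\lambda_1 \approx 1$.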
4. Numerical results

We compare fractional and ordinary moments by choosing some probability densities on [0, 1].

Example 1. Let

  $f(x) = \frac{\pi}{2} \sin(\pi x)$

with $H[f] \simeq -0.144729886$. The ordinary moments of f(x) satisfy the recursive relationship

  $\mu_n = \frac{1}{2} - \frac{n(n-1)}{\pi^2}\,\mu_{n-2}, \quad n = 2, 3, \ldots, \quad \mu_0 = 1, \quad \mu_1 = \frac{1}{2}.$

From $\{\mu_n\}_{n=0}^\infty$ we calculate $E(X^{a_j}) = \sum_{n=0}^\infty b_n(a_j)\,\mu_n$, as in (2.4) and (2.5). From $\{E(X^{a_j})\}_{j=0}^M$ we obtain the ME approximant $f_M(x)$ for increasing values of M, where the $\{a_j\}_{j=1}^M$ satisfy (3.7). In Table 1 we report:

(a) $H[f_M] - H[f] = I(f, f_M)$ and the exponents $\{a_j\}_{j=1}^M$ satisfying (3.7), where $H[f_M]$ is obtained using fractional moments;
(b) $H[f_M] - H[f] = I(f, f_M)$, where $H[f_M]$ is obtained using ordinary moments.

Table 1
Optimal fractional moments and entropy difference of distributions having an increasing number of common (panel a) fractional moments and (panel b) ordinary moments

  M    {a_j}_{j=1}^M                             H[f_M] - H[f]
  Panel a
  1    13.4181                                   0.8716E-1
  2    0.00289, 4.69275                          0.2938E-2
  3    0.04680, 1.84212, 13.2143                 0.3038E-3
  4    0.00220, 2.76784, 13.7293, 20.5183        0.3276E-4
  5    0.0024, 2.7000, 13.700, 20.500, 25.200    0.1016E-4
  Panel b
  2                                              0.9510E-2
  4                                              0.2098E-2
  6                                              0.7058E-3
  8                                              0.4442E-3
  10                                             0.3357E-3
  12                                             0.3288E-3

Inspection of Table 1 allows us to conclude that:

• the entropy difference decreases quickly, so that in practice 4–5 fractional moments determine f(x);
• conversely, a high number of ordinary moments is required for a satisfactory characterization of f(x);
• approximately 12 ordinary moments have an effect comparable to three fractional moments.

f(x) and $f_M(x)$, obtained from 4–5 fractional moments, are practically indistinguishable.

Example 2. This example is borrowed from [9]. There the authors attempt to recover a non-negative decreasing differentiable function f(x) from its frequency moments $\omega_n$, with

  $\omega_n = \int_0^1 [f(x)]^n\,dx, \quad n = 1, 2, \ldots$
The authors of [9] acknowledge that density reconstruction procedures alternative to ordinary moments would be desirable; we propose the fractional moments reconstruction procedure. Here

  $f(x) = \frac{1}{2}\left[ \frac{1}{10} \cdot \frac{1}{Ax + B} + 1 \right], \quad A^{-1} = \frac{1}{1+e^{-5}} - \frac{1}{1+e^{5}}, \quad B = \frac{1}{1+e^{5}},$

with $H[f] \simeq 0.06118227$ (f(x), compared to [9], contains the normalizing constant 2). The ordinary moments $\mu_n$ of f(x) are computed numerically. From $\{\mu_n\}_{n=0}^\infty$ we calculate $E(X^{a_j}) = \sum_{n=0}^\infty b_n(a_j)\,\mu_n$, as in (2.4) and (2.5). From $\{E(X^{a_j})\}_{j=0}^M$ we obtain the ME approximant $f_M(x)$ for increasing values of M, where the $\{a_j\}_{j=1}^M$ satisfy (3.7). In Table 2 we report:

(a) $H[f_M] - H[f] = I(f, f_M)$ and the exponents $\{a_j\}_{j=1}^M$ satisfying (3.7), where $H[f_M]$ is obtained using fractional moments;
(b) $H[f_M] - H[f] = I(f, f_M)$, where $H[f_M]$ is obtained using ordinary moments.

Table 2
Optimal fractional moments and entropy difference of distributions having an increasing number of common (panel a) fractional moments and (panel b) ordinary moments

  M    {a_j}_{j=1}^M                             H[f_M] - H[f]
  Panel a
  1    1.56280                                   0.6278E-2
  2    0.52500, 3.90000                          0.3152E-2
  3    1.05000, 3.00000, 7.87500                 0.1169E-2
  4    0.44062, 7.65470, 12.5262, 63.9093        0.1025E-3
  Panel b
  2                                              0.5718E-2
  4                                              0.1776E-2
  6                                              0.1320E-2
  8                                              0.6744E-3
  10                                             0.3509E-3
  12                                             0.2648E-3
  14                                             0.1914E-3

Inspection of Table 2 allows us to conclude that:

• the entropy difference decreases quickly, so that in practice four fractional moments determine f(x);
• a high number of ordinary moments is required for a satisfactory characterization of f(x);
• approximately 14 ordinary moments have an effect comparable to four fractional moments.

f(x) and $f_M(x)$, obtained from four fractional moments, are practically indistinguishable. As a consequence we argue that four fractional moments are equivalent to eight frequency moments (as in [9]); the latter, indeed, provide an approximant $f_M(x)$ practically indistinguishable from f(x) (see Fig. 1 of [9]).
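The moment recursion of Example 1, $\mu_n = 1/2 - n(n-1)\mu_{n-2}/\pi^2$, is easy to verify directly; a short sketch (ours, not from the paper), checking one value against numerical integration. Note that the forward recursion amplifies rounding errors by roughly $n(n-1)/\pi^2$ per step, so only moderate orders should be generated this way.

```python
import math

def sine_moments(N):
    """Moments mu_n of f(x) = (pi/2) sin(pi x) on [0, 1] via the recursion
    mu_n = 1/2 - n(n-1)/pi^2 * mu_{n-2} (Example 1). The forward recursion
    amplifies rounding errors, so keep N moderate."""
    mu = [1.0, 0.5]
    for n in range(2, N + 1):
        mu.append(0.5 - n * (n - 1) / math.pi ** 2 * mu[n - 2])
    return mu

# Cross-check mu_3 against a midpoint-rule quadrature of (pi/2) x^3 sin(pi x).
grid = [(i + 0.5) / 20000 for i in range(20000)]
mu3 = sum(x ** 3 * math.pi / 2 * math.sin(math.pi * x) for x in grid) / 20000
print(sine_moments(3)[3], mu3)
```

Both values agree with the closed form $\mu_3 = 1/2 - 3/\pi^2 \approx 0.19604$.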
5. Conclusions

In this paper we have addressed the Hausdorff moment problem and solved it using a low number of fractional moments, calculated explicitly in terms of the given ordinary moments. The approximating density, constrained by a few fractional moments, is obtained by the maximum entropy method. The fractional moments themselves are chosen by minimizing the entropy of the approximating density. The strategy of the paper in recovering a given density thus consists in accelerating convergence through a proper choice of fractional moments, so that the approximating density uses only moments of low order, as (1.1) suggests.
Appendix A. Entropy convergence

A.1. Some background

Let us consider a sequence of equispaced points $a_j = j/(M+1)$, $j = 0, \ldots, M$, and

  $\mu_j := E(X^{a_j}) = \int_0^1 t^{a_j} f_M(t)\,dt, \quad j = 0, \ldots, M,$  (A.1)

with $f_M(t) = \exp(-\sum_{j=0}^M \lambda_j t^{a_j})$. With the simple change of variable $x = t^{1/(M+1)}$, from (A.1) we have

  $\mu_j = E(X^{a_j}) = \int_0^1 x^j \exp\left[ -(\lambda_0 - \ln(M+1)) - \sum_{k=1}^M \lambda_k x^k + M \ln x \right] dx, \quad j = 0, \ldots, M,$  (A.2)
which is a reduced Hausdorff moment problem for each fixed M and a determinate Hausdorff moment problem as $M \to \infty$. Referring to (A.2), the following symmetric positive definite Hankel matrices are considered:

  $\Delta_0 = \mu_0, \quad \Delta_2 = \begin{pmatrix} \mu_0 & \mu_1 \\ \mu_1 & \mu_2 \end{pmatrix}, \quad \ldots, \quad \Delta_{2M} = \begin{pmatrix} \mu_0 & \cdots & \mu_M \\ \vdots & & \vdots \\ \mu_M & \cdots & \mu_{2M} \end{pmatrix},$  (A.3)

whose (i, j)th entry, $i, j = 0, 1, \ldots$, is

  $\mu_{i+j} = \int_0^1 x^{i+j} f_M(x)\,dx,$

where

  $f_M(x) = \exp\left[ -(\lambda_0 - \ln(M+1)) - \sum_{j=1}^M \lambda_j x^j + M \ln x \right].$
The Hausdorff moment problem is determinate, and the underlying distribution has a continuous distribution function F(x) with density f(x). Then the maximal mass q(x) which can be concentrated at any real point x is equal to zero [10, Corollary 2.8]. In particular, at x = 0 we have

  $0 = q(0) = \lim_{i\to\infty} q_i^{(0)} = \lim_{i\to\infty} \frac{|\Delta_{2i}|}{\begin{vmatrix} \mu_2 & \cdots & \mu_{i+1} \\ \vdots & & \vdots \\ \mu_{i+1} & \cdots & \mu_{2i} \end{vmatrix}} = \lim_{i\to\infty} \left( \mu_0 - \mu_0^{(i)} \right),$  (A.4)

where $q_i^{(0)}$ indicates the largest mass which can be concentrated at the point x = 0 by any solution of a reduced moment problem of order $\ge i$, and $\mu_0^{(i)}$ indicates the minimum value of $\mu_0$ once the first 2i moments are assigned.

Let us fix $\{\mu_0, \ldots, \mu_{i-1}, \mu_{i+1}, \ldots, \mu_M\}$ while only $\mu_i$, $i = 0, \ldots, M$, varies continuously. From (A.2) we have

  $\Delta_{2M} \begin{pmatrix} d\lambda_0/d\mu_i \\ \vdots \\ d\lambda_M/d\mu_i \end{pmatrix} = -e_{i+1},$  (A.5)

where $e_{i+1}$ is the canonical unit vector of $\mathbb{R}^{M+1}$, from which

  $0 < \left( \frac{d\lambda_0}{d\mu_i}, \ldots, \frac{d\lambda_M}{d\mu_i} \right) \Delta_{2M} \begin{pmatrix} d\lambda_0/d\mu_i \\ \vdots \\ d\lambda_M/d\mu_i \end{pmatrix} = -\left( \frac{d\lambda_0}{d\mu_i}, \ldots, \frac{d\lambda_M}{d\mu_i} \right) e_{i+1} = -\frac{d\lambda_i}{d\mu_i} \quad \forall i.$  (A.6)
A.2. Entropy convergence

The following theorem holds.

Theorem A.1. If $a_j = j/(M+1)$, $j = 0, \ldots, M$, and $f_M(x) = \exp(-\sum_{j=0}^M \lambda_j x^{a_j})$, then

  $\lim_{M\to\infty} H[f_M] = H[f], \quad\text{where}\quad H[f_M] := -\int_0^1 f_M(x) \ln f_M(x)\,dx, \quad H[f] := -\int_0^1 f(x) \ln f(x)\,dx.$  (A.7)

Proof. From (A.1) and (A.7) we have

  $H[f_M] = \sum_{j=0}^M \lambda_j \mu_j.$  (A.8)

Let us consider (A.8). When only $\mu_0$ varies continuously, taking into account (A.3)–(A.6) and (A.8), we have

  $\frac{d}{d\mu_0} H[f_M] = \sum_{j=0}^M \frac{d\lambda_j}{d\mu_0}\,\mu_j + \lambda_0 = \lambda_0 - 1,$

  $\frac{d^2}{d\mu_0^2} H[f_M] = \frac{d\lambda_0}{d\mu_0} = -\frac{\begin{vmatrix} \mu_2 & \cdots & \mu_{M+1} \\ \vdots & & \vdots \\ \mu_{M+1} & \cdots & \mu_{2M} \end{vmatrix}}{|\Delta_{2M}|} = -\frac{1}{\mu_0 - \mu_0^{(M)}} < 0.$

Thus $H[f_M]$ is a concave differentiable function of $\mu_0$. When $\mu_0 \to \mu_0^{(M)}$, then $H[f_M] \to -\infty$, whilst at $\mu_0$ it holds that $H[f_M] > H[f]$, $f_M(x)$ being the maximum entropy density once $(\mu_0, \ldots, \mu_M)$ are assigned. Besides, when $M \to \infty$, then $\mu_0^{(M)} \to \mu_0$. So the theorem is proved.
References

[1] D. Fasino, Spectral properties of Hankel matrices and numerical solutions of finite moment problems, J. Comput. Appl. Math. 65 (1995) 145–155.
[2] G. Talenti, Recovering a function from a finite number of moments, Inverse Problems 3 (1987) 501–517.
[3] S. Karlin, L.S. Shapley, Geometry of Moment Spaces, AMS Memoirs 12, Providence, RI, 1953.
[4] H.K. Kesavan, J.N. Kapur, Entropy Optimization Principles with Applications, Academic Press, London, 1992.
[5] B. Beckermann, The condition number of real Vandermonde, Krylov and positive definite Hankel matrices, Numer. Math. 85 (2000) 553–577.
[6] J.M. Borwein, A.S. Lewis, Convergence of best entropy estimates, SIAM J. Optim. 1 (1991) 191–205.
[7] S. Kullback, A lower bound for discrimination information in terms of variation, IEEE Trans. Inform. Theory 13 (1967) 126–127.
[8] G.D. Lin, Characterizations of distributions via moments, Sankhya Ser. A 54 (1992) 128–132.
[9] E. Romera, J.C. Angulo, J.S. Dehesa, The Hausdorff entropic moment problem, J. Math. Phys. 42 (2001) 2309–2314.
[10] J.A. Shohat, J.D. Tamarkin, The Problem of Moments, AMS Mathematical Surveys 1, Providence, RI, 1963.