Efficiency of finite state space Monte Carlo Markov chains


Statistics & Probability Letters 54 (2001) 405 – 411

Efficiency of finite state space Monte Carlo Markov chains

Antonietta Mira
Università degli Studi dell'Insubria, Via Ravasi 2, 21100 Varese, Italy

Received July 2000; received in revised form February 2001

Abstract

The class of finite state space Markov chains, stationary with respect to a common pre-specified distribution, is considered. An easy-to-check partial ordering is defined on this class. The ordering provides a sufficient condition for the dominating Markov chain to be more efficient. Efficiency is measured by the asymptotic variance of the estimator of the integral of a specific function with respect to the stationary distribution of the chains. A class of transformations that, when applied to a transition matrix, preserves its stationary distribution and improves its efficiency is defined and studied. © 2001 Elsevier Science B.V. All rights reserved.

Keywords: Efficiency ordering; Markov chain Monte Carlo; Metropolis–Hastings algorithm; Peskun ordering; Stationarity preserving and efficiency increasing probability mass transfers

1. Introduction

Consider a distribution of interest $\pi(x)$, $x \in X$, possibly known only up to a normalizing constant. Assume that $X$ is a finite set with $N$ elements. In order to gather information about $\pi$ we construct an irreducible Markov chain with transition matrix $P$ (we will identify Markov chains with the corresponding transition matrices) that has $\pi$ as its unique stationary distribution: $\pi = \pi P$. In particular, following the Markov chain Monte Carlo (MCMC) literature, we estimate $E_\pi f = \mu$ with the ergodic average of $f$ along a realized path of a chain of length $n$:

$$\hat{\mu}_n = \frac{1}{n} \sum_{i=1}^{n} f(X_i).$$
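As an illustration, the ergodic-average estimator above can be sketched in a few lines of code (a minimal sketch: the helper name, the three-state chain and the target distribution are ours, chosen only for demonstration).

```python
import numpy as np

def ergodic_average(P, f, n, x0=0, seed=None):
    """Run a finite-state chain with transition matrix P for n steps
    and return the ergodic average of f along the realized path."""
    rng = np.random.default_rng(seed)
    x, total = x0, 0.0
    for _ in range(n):
        total += f[x]
        x = rng.choice(len(f), p=P[x])
    return total / n

# A small 3-state example with stationary distribution pi = (0.5, 0.3, 0.2):
pi = np.array([0.5, 0.3, 0.2])
P = np.tile(pi, (3, 1))          # every row equals pi: i.i.d. sampling from pi
f = np.array([1.0, 2.0, 3.0])
est = ergodic_average(P, f, n=20000, seed=0)
print(est, pi @ f)               # the estimate approaches E_pi f = 1.7
```

Any other $\pi$-stationary, irreducible $P$ could be substituted for the i.i.d. matrix; the rest of the paper is about which such $P$ gives the estimator the smallest asymptotic variance.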

Let $L^2(\pi)$ indicate the space of functions that have finite variance under $\pi$ and assume that all the functions of interest belong to this space. Let $\mathbf{1}$ denote the $1 \times N$ vector of ones and, likewise, $\mathbf{0}$ denote the $1 \times N$

This work has been supported by EU TMR network ERB-FMRX-CT96-0095 on "Computational and statistical methods for the analysis of spatial data" and by the F.A.R. 2000 of the University of Insubria. E-mail address: [email protected] (A. Mira).


vector of zeros. The inner product on $L^2(\pi)$ is

$$(f, g) = f \Delta g' = \sum_{x \in X} f(x)\, g(x)\, \pi(x),$$

where $\Delta$ is an $N \times N$ diagonal matrix with $\pi_i$ as the $i$th element on the diagonal and $g'$ is the transpose of $g$. A measure of the efficiency of the estimator $\hat{\mu}_n$ is the asymptotic variance $v(f, P)$, the limit, as $n$ tends to infinity, of $n \operatorname{Var}_\pi[\hat{\mu}_n]$. It can be shown (Peskun, 1973) that

$$v(f, P) = f \Delta [2 l_P^{-1} - I - A] f' = (f, [2 l_P^{-1} - I - A] f), \qquad (1)$$

where $I$ is the $N \times N$ identity matrix, $A = \mathbf{1}'\pi$ is the limiting matrix and $l_P = (I - P + A)$ is called the Laplacian of $P$. Notice that, in the formula for the asymptotic variance (1), the only quantity that involves the transition matrix (and thus the only quantity on which we can intervene to improve the performance of our estimates) is the inverse Laplacian.

Given a distribution of interest $\pi$ there is more than one transition matrix that has $\pi$ as its unique stationary distribution. The primary criterion used to choose among them is the asymptotic variance of the (asymptotically unbiased) estimators. To justify the choice of this criterion consider that, in classical statistics, estimates are compared in terms of their asymptotic relative efficiency; likewise, here we will prefer a Markov chain if it produces estimates that are asymptotically more efficient. If the chain is irreducible and the state space is finite, the corresponding MCMC estimates are strongly consistent and normally distributed; therefore efficiency will be measured by the asymptotic variance of estimates. We thus give the following definitions.

Definition 1. If $P$ and $Q$ are Markov chains with stationary distribution $\pi$, then $P$ is at least as efficient as $Q$, relative to the particular function $f$, $P \preceq_f Q$, if $v(f, P) \leq v(f, Q)$.

Definition 2. If $P$ and $Q$ are Markov chains with stationary distribution $\pi$, then $P$ is at least as uniformly efficient as $Q$, $P \preceq_E Q$, if $v(f, P) \leq v(f, Q)$ for all $f \in L^2(\pi)$.

Uniform efficiency has already been studied quite extensively by Peskun (1973), Tierney (1998), Mira and Geyer (1998) and Frigessi et al. (1992).

Definition 3. Given two transition matrices $P$ and $Q$, both reversible with respect to the same distribution $\pi$, $P$ dominates $Q$ in the Peskun sense if each of the off-diagonal elements of $P$ is greater than or equal to the corresponding off-diagonal element of $Q$.

Tierney (1998) extends this ordering from finite to general state spaces.
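Formula (1) is easy to evaluate numerically. The sketch below (the helper and the two-state example are ours) performs two sanity checks: for i.i.d. sampling ($P = A$) the formula returns $\operatorname{Var}_\pi f$, and for the two-state chain that always jumps to the other state it returns zero.

```python
import numpy as np

def asymptotic_variance(P, pi, f):
    """v(f, P) = f Delta (2 l_P^{-1} - I - A) f', with Delta = diag(pi),
    A = 1'pi the limiting matrix and l_P = I - P + A the Laplacian."""
    N = len(pi)
    A = np.outer(np.ones(N), pi)
    l_inv = np.linalg.inv(np.eye(N) - P + A)
    return f @ np.diag(pi) @ (2 * l_inv - np.eye(N) - A) @ f

pi = np.array([0.5, 0.5])
f = np.array([-1.0, 1.0])
P_iid = np.array([[0.5, 0.5], [0.5, 0.5]])   # P = A: i.i.d. sampling from pi
P_flip = np.array([[0.0, 1.0], [1.0, 0.0]])  # always jump to the other state
print(asymptotic_variance(P_iid, pi, f))     # 1.0, i.e. Var_pi(f)
print(asymptotic_variance(P_flip, pi, f))    # 0.0: the antithetic chain
```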
If $P$ dominates $Q$ in the Peskun sense then $P$ is at least as uniformly efficient as $Q$ (Peskun, 1973; Tierney, 1998).

Definition 4. Given two transition matrices $P$ and $Q$, both reversible with respect to the same distribution $\pi$, $P$ dominates $Q$ in the covariance sense if $Q - P$ is a positive definite matrix.

Mira and Geyer (1998) extend this ordering from finite to general state spaces. $P$ is uniformly more efficient than $Q$ if and only if $P$ dominates $Q$ in the covariance sense (Mira and Geyer, 1998).

In this paper we focus on relative efficiency and try to answer the following question: if we have a specific function $f$ in mind and we are only interested in estimating its expected value with respect to $\pi$, which


Markov chain should we use for our simulation study? Intuitively, the transition matrix that we choose, having in mind a specific function to estimate, will perform better than a generic one chosen to minimize the asymptotic variance for all possible functions with finite asymptotic variance under $\pi$.

2. Sufficient conditions for relative efficiency

Assume that the function of interest $f$ is monotone. This is not a restrictive hypothesis since we can always rearrange the state space $X$, and everything else accordingly, to make it monotone. Notice that if $P$ has stationary distribution $\pi$ and we permute the $i$th and the $j$th entries of $\pi$, we have to permute both the corresponding columns and rows of $P$ to get a new matrix stationary with respect to the permuted $\pi$ distribution. Define the summation matrix $T$ to be an $N \times N$ upper triangular matrix with zeros below the main diagonal and ones elsewhere. Define the south-west sub-matrix of the matrix $M$ to be the sub-matrix of $M$ obtained by deleting the first row and the last column.

Definition 5. $P$ dominates $Q$ in the south-west ordering (with respect to some re-ordering of the state space), $P \preceq_{SW} Q$, if all the elements of the south-west sub-matrix of $T\Delta(P - Q)T$ are non-negative.

Theorem 1. Let $P$ and $Q$ be two Markov chains having $\pi$ as their unique stationary distribution. If $l_P^{-1} \preceq_{SW} l_Q^{-1}$ then $P \preceq_f Q$.

Proof. It is sufficient to show that, for the given function, $f\Delta(l_P^{-1} - l_Q^{-1})f' \leq 0$. Consider the identity $f\Delta(l_P^{-1} - l_Q^{-1})f' = fT^{-1}\,[T\Delta(l_P^{-1} - l_Q^{-1})T]\,T^{-1}f'$ and note that: the last column of $T\Delta(l_P^{-1} - l_Q^{-1})T$ equals $T\Delta(l_P^{-1} - l_Q^{-1})\mathbf{1}' = \mathbf{0}'$; the first row of $T\Delta(l_P^{-1} - l_Q^{-1})T$ equals $\mathbf{1}\Delta(l_P^{-1} - l_Q^{-1})T = \pi(l_P^{-1} - l_Q^{-1})T = \mathbf{0}$; $f$ monotone is equivalent to the first $(N-1)$ elements of $T^{-1}f'$ and the last $(N-1)$ elements of $fT^{-1}$ having opposite signs. It follows that $l_P^{-1} \preceq_{SW} l_Q^{-1}$ implies $P \preceq_f Q$.
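Theorem 1 can be checked numerically on a small example (the code and the two chains below are ours, and the matrix form $T\Delta(\cdot)T$ follows our reading of the south-west ordering): the always-jump chain south-west dominates the i.i.d. chain in the inverse-Laplacian sense, and its asymptotic variance is indeed no larger.

```python
import numpy as np

def asym_var(P, pi, f):
    # v(f,P) = f Delta (2 l_P^{-1} - I - A) f', formula (1)
    N = len(pi)
    A = np.outer(np.ones(N), pi)
    l_inv = np.linalg.inv(np.eye(N) - P + A)
    return f @ np.diag(pi) @ (2 * l_inv - np.eye(N) - A) @ f

def sw_submatrix(M, pi):
    # South-west sub-matrix of T Delta M T: drop first row and last column.
    N = len(pi)
    T = np.triu(np.ones((N, N)))               # summation matrix
    S = T @ np.diag(pi) @ M @ T
    return S[1:, :-1]

pi = np.array([0.5, 0.5])
f = np.array([-1.0, 1.0])                      # monotone on the ordered space
P = np.array([[0.0, 1.0], [1.0, 0.0]])         # always-jump chain
Q = np.array([[0.5, 0.5], [0.5, 0.5]])         # i.i.d. chain
A = np.outer(np.ones(2), pi)
l_inv = lambda M: np.linalg.inv(np.eye(2) - M + A)
S = sw_submatrix(l_inv(P) - l_inv(Q), pi)
print(S)                                       # non-negative: P dominates Q
print(asym_var(P, pi, f) <= asym_var(Q, pi, f))  # True, as Theorem 1 predicts
```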

We note that, in the above theorem, the dependence on the function $f$ of the sufficient condition for relative (to $f$) efficiency is hidden in the reordering of the state space implicit in the definition of the south-west ordering. All the orderings defined so far are partial, that is, they do not allow ranking of all transition matrices having a specified stationary distribution.

A possible limitation of Theorem 1 is that it requires one to work with inverse Laplacians, and computing inverses of matrices can be computationally intensive (especially on large state spaces). A way to work with the transition matrices directly is to realize that $(I + P - A)$ provides a first degree approximation to the inverse Laplacian since (Peskun, 1973)

$$l_P^{-1} = I + \sum_{i=1}^{\infty} (P - A)^i,$$

and $\lim_{i\to\infty} (P - A)^i$ equals, by construction, the $N \times N$ zero matrix. Thus a first degree approximation to $v(f, P)$ is given by

$$v(f, P) \approx v(f, A) + 2 f\Delta(P - A)f',$$

where $v(f, A)$ is the theoretical independence sampling variance of $\hat{\mu}_n$ and $f\Delta(P - A)f'$ can be interpreted as a first order covariance if $\pi$ is the distribution of the initial state of the Markov chain. Notice that the smaller the eigenvalues of $P$ in absolute value, the better this first degree approximation to the asymptotic variance is.
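The quality of the approximation can be checked directly (the helpers and the two-state family below are ours). For the symmetric two-state chain with off-diagonal probability $p$ and $f = (-1, 1)$, $f$ is an eigenvector of $P$ with eigenvalue $\lambda = 1 - 2p$, so the exact variance is $(1+\lambda)/(1-\lambda)$ while the first degree approximation is $1 + 2\lambda$: good when $|\lambda|$ is small, poor when it is close to one.

```python
import numpy as np

def exact_var(P, pi, f):
    # exact v(f,P) via the inverse Laplacian, formula (1)
    N = len(pi)
    A = np.outer(np.ones(N), pi)
    l_inv = np.linalg.inv(np.eye(N) - P + A)
    return f @ np.diag(pi) @ (2 * l_inv - np.eye(N) - A) @ f

def first_degree_var(P, pi, f):
    # v(f,A) + 2 f Delta (P - A) f', the first degree approximation
    N = len(pi)
    A = np.outer(np.ones(N), pi)
    D = np.diag(pi)
    return f @ D @ (np.eye(N) - A) @ f + 2 * (f @ D @ (P - A) @ f)

pi = np.array([0.5, 0.5])
f = np.array([-1.0, 1.0])
for p in (0.55, 0.95):                  # eigenvalue lambda = 1 - 2p
    P = np.array([[1 - p, p], [p, 1 - p]])
    print(p, exact_var(P, pi, f), first_degree_var(P, pi, f))
# p = 0.55: exact ~0.818, approx 0.8   (lambda = -0.1: good)
# p = 0.95: exact ~0.053, approx -0.8  (lambda = -0.9: poor)
```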


This approximation justifies the attempt to find $P$ such that $f\Delta(P - A)f'$ is large (in absolute value) and negative. This is nothing more than an attempt to induce negative correlation into the Markov chain Monte Carlo sampler. We thus give the following definition.

Definition 6. If $P$ and $Q$ are Markov chains with stationary distribution $\pi$, then $P$ is at least as first degree efficient as $Q$ for a particular function $f$, $P \preceq_{1f} Q$, if $f\Delta(P - Q)f' \leq 0$.

A reasoning similar to the one in the proof of Theorem 1 shows that, if $P \preceq_{SW} Q$, then $P \preceq_{1f} Q$.

3. Stationarity preserving transfers

Denote by SPT a stationarity preserving (and efficiency increasing) transfer performed on a transition matrix $P$ (with stationary distribution $\pi$) of the following kind: given integers $1 \leq i < j \leq N$ and $1 \leq k < l \leq N$ and a quantity $h > 0$, increase $p_{i,l}$ and $p_{j,k}$ by $h$ and $h\pi_i/\pi_j$, respectively, and decrease $p_{i,k}$ and $p_{j,l}$ by $h$ and $h\pi_i/\pi_j$, respectively. Thus, an SPT is completely defined by the four indices $i, j, k, l$ and the amount by which to increase/decrease the appropriate entries of the matrix on which the transfer is performed. We will thus identify an SPT by $P(i, j, k, l, h)$. The following proposition follows from the definition of SPT and of the south-west ordering.

Proposition 1. If $P$ is derived from $Q$ via a sequence of stationarity preserving and efficiency increasing transfers then $P \preceq_{SW} Q$.

We can thus increase the efficiency of a transition matrix (provided the first degree approximation to the asymptotic variance is good), while preserving its stationary distribution, via a sequence of SPTs. Indeed, we can keep moving probability mass around as long as there exist two indices $i < j$ such that $p_{i,k}$ and $p_{j,l}$ are positive, for some $k < l$. Of course, every time we move some probability mass around, we need to check that the resulting matrix remains a proper (all entries between 0 and 1) irreducible transition matrix.
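A single transfer is a one-line bookkeeping operation (the helper below is ours and uses zero-based indices, whereas the text's are one-based). Stationarity is preserved exactly because the mass moved out of each column is exactly compensated, and for an increasing $f$ the first order covariance $f\Delta(P-A)f'$ changes by $h\pi_i(f_l - f_k)(f_i - f_j) \leq 0$.

```python
import numpy as np

def spt(P, pi, i, j, k, l, h):
    """Stationarity preserving transfer P(i,j,k,l,h), zero-based,
    with i < j and k < l: move mass h within row i from column k to
    column l, and mass h*pi[i]/pi[j] within row j from column l to
    column k."""
    Q = P.copy()
    r = h * pi[i] / pi[j]
    Q[i, k] -= h;  Q[i, l] += h
    Q[j, l] -= r;  Q[j, k] += r
    return Q

pi = np.array([0.5, 0.3, 0.2])
f = np.array([1.0, 2.0, 3.0])        # monotone on the ordered space
P = np.tile(pi, (3, 1))              # start from i.i.d. sampling
Q = spt(P, pi, i=0, j=2, k=0, l=2, h=0.05)
print(pi @ Q)                        # still (0.5, 0.3, 0.2): stationarity kept
print(Q.sum(axis=1))                 # rows still sum to one
print(f @ np.diag(pi) @ (Q - P) @ f) # negative: first degree improvement
```

Larger values of $h$ would drive an entry of `Q` below zero here, which is exactly the "proper transition matrix" check mentioned above.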
As a matter of fact, irreducibility of the resulting matrix is sometimes not required if $f$ takes constant values on different parts of the state space (see Example 2 in Section 4). Notice that knowing $\pi$ only up to a normalizing constant does not constitute a limit to this theory, since only the ratio of specific $\pi$ values is needed to define an SPT.

It is interesting to study the extreme transition matrix obtained by applying a sequence of SPTs, that is, a matrix that cannot be further improved by SPTs. The resulting matrix has at most one non-zero element along the main diagonal and it presents a path of positive entries connecting the north-east to the south-west corner of the matrix. Which specific pattern is optimal depends on $\pi$ and $f$, as the examples in the next section show. If $f$ does not assume the same value on any two points of the state space (so that the reordering of the space that makes $f$ monotone is unique), the final result of a sequence of SPTs is unique and corresponds to the matrix that minimizes the linear function $f\Delta(P - A)f'$ under the set of linear constraints that ensure that the resulting matrix is stochastic and has the proper stationary distribution (these constraints define a convex region and thus the minimum is unique). We will refer to such a matrix as a first degree $f$-optimal matrix, 1-$f$-opt-matrix in short.

Furthermore, the 1-$f$-opt-matrix is reversible with respect to the stationary distribution $\pi$ even if the starting point is only stationary with respect to $\pi$. This may suggest that also in general (without stopping the geometric expansion of the asymptotic variance at the first degree) the optimal transition matrix with respect to the relative (and possibly also absolute) efficiency ordering is reversible. There are a few papers that have discussed the issue of reversibility versus non-reversibility relative to both asymptotic variance and convergence to stationarity (Diaconis et al., 1997; Hwang et al., 1993; Neal, 1998; Mira and Geyer, 2000). The above remark should be helpful in this discussion.
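Since the objective $f\Delta(P - A)f'$ is linear in the entries of $P$ and the stochasticity and stationarity constraints are linear, the 1-$f$-opt-matrix can also be found by linear programming (a sketch with our own helper, using `scipy.optimize.linprog`; the text reaches the optimum via SPTs instead). On a uniform two-state target with $f = (-1, 1)$ it recovers the always-jump matrix.

```python
import numpy as np
from scipy.optimize import linprog

def first_degree_opt(pi, f):
    """Minimize f Delta (P - A) f' over stochastic P with pi P = pi.
    Variables are the N*N entries of P, row-major; the constant A term
    is dropped since it does not depend on P."""
    N = len(pi)
    c = (pi[:, None] * np.outer(f, f)).ravel()   # sum_ij pi_i f_i f_j p_ij
    A_eq, b_eq = [], []
    for i in range(N):                           # row sums: P is stochastic
        row = np.zeros(N * N); row[i * N:(i + 1) * N] = 1.0
        A_eq.append(row); b_eq.append(1.0)
    for j in range(N):                           # stationarity: pi P = pi
        col = np.zeros(N * N); col[j::N] = pi
        A_eq.append(col); b_eq.append(pi[j])
    res = linprog(c, A_eq=np.array(A_eq), b_eq=np.array(b_eq), bounds=(0, 1))
    return res.x.reshape(N, N)

pi = np.array([0.5, 0.5])
f = np.array([-1.0, 1.0])
print(first_degree_opt(pi, f))   # recovers the flip matrix [[0, 1], [1, 0]]
```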


4. Examples of first degree optimal matrices

The first example refers to a state space with $N = 2$. The reason we confine ourselves to such an easy setting is that every function is monotone here. This helps one to understand how the 1-$f$-opt-matrix depends on $\pi$. Furthermore, the resulting matrix is not only first degree optimal but also optimal in both the relative and absolute efficiency ordering. In the second example we focus instead on the dependence of the 1-$f$-opt-matrix on $f$ by fixing the stationary distribution of interest and by considering a function which takes constant values on different parts of the state space, so that more than one reordering makes it monotone.

Example 1. Consider a state space with $N = 2$ and assume that $\pi$ is properly normalized. Consider first the case where $\pi_1 = \frac{1}{2}$. In this setting a transition matrix $P$ has the proper stationary distribution if and only if it is doubly stochastic. For example, a Metropolis–Hastings algorithm (Tierney, 1994) with proposal matrix $K$ given by $k_{12} = p$ and $k_{21} = q$, with $0 \leq p, q \leq 1$, has transition matrix $P$ with $p_{12} = p_{21} = \min\{q, p\}$. How do we optimally choose the proposal distribution, that is, the values of $p$ and $q$? The first-degree-optimal transition matrix has $p_{11} = 0$ and $p_{21} = 1$; thus the path of positive entries is on the north-east to south-west diagonal. This is nothing but a Metropolis–Hastings algorithm where the proposal distribution always proposes to jump to the other state (notice that this proposal is always accepted). The structure of the first-degree-optimal transition matrix is the same for any uniform stationary distribution, that is, we always have a pattern of ones on the north-east to south-west diagonal. The resulting first-degree-optimal transition matrix is periodic since it has an eigenvalue equal to $-1$. This is a problem for convergence in total variation distance to stationarity but it is not an issue if we are interested in convergence of the ergodic average estimate $\hat{\mu}_n$.
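The tension between periodicity and ergodic-average accuracy is easy to see numerically (an illustration of ours): the optimal chain for the uniform two-state target alternates deterministically, so it has eigenvalues $\pm 1$, yet the ergodic average of $f = (-1, 1)$ is exact up to $O(1/n)$ rather than the $O(1/\sqrt{n})$ of i.i.d. sampling.

```python
import numpy as np

P = np.array([[0.0, 1.0], [1.0, 0.0]])       # always jump: first degree optimal
print(np.sort(np.linalg.eigvals(P).real))    # [-1.  1.]: the chain is periodic

f = np.array([-1.0, 1.0])
n = 1001
x, total = 0, 0.0
for _ in range(n):                           # deterministic path 0, 1, 0, 1, ...
    total += f[x]
    x = 1 - x
print(abs(total / n))                        # 1/1001: error decays like 1/n
```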
If $\pi_1 < \frac{1}{2}$ then to obtain the first-degree-optimal transition matrix we have to set $p_{11} = 0$ and $p_{21} = \pi_1/(1 - \pi_1)$. Finally, if $\pi_1 > \frac{1}{2}$ then we have to set $p_{22} = 0$ and $p_{12} = (1 - \pi_1)/\pi_1$ for first-degree optimality.

Example 2. Let the state space be the set

$X = \{(x, y):\ x, y = \pm 1\}$ with $\pi$ defined by

$$\pi((1, 1)) = \pi((-1, -1)) = \tfrac{3}{8} \quad\text{and}\quad \pi((-1, 1)) = \pi((1, -1)) = \tfrac{1}{8}.$$

Let $f(x, y) = x$ be the function of interest. In this setting there is more than one reordering of the state space that makes $f$ monotone. If we consider the following one: $(-1, -1), (-1, 1), (1, -1), (1, 1)$, then the 1-$f$-opt-matrix gives rise to a reducible Markov chain that oscillates between the pair of states $(-1, -1), (1, 1)$ or the pair $(-1, 1), (1, -1)$, depending on the starting point of the chain. For the purposes of estimating $f$ by the ergodic average along the realized path of the chain this is perfectly fine and very efficient: it gives an asymptotic variance equal to zero regardless of the starting point! Of course the chain is periodic (it has two eigenvalues equal to $-1$), thus we do not have convergence in total variation distance of any of the marginals.

Note that MCMC using the 1-$f$-opt-matrix beats i.i.d. sampling in terms of efficiency (this is true in general) since the asymptotic variance of a standard Monte Carlo estimate of $f$ equals the variance of $f$ under $\pi$, $v(f, A)$, which, in this setting, is one. Of course, we cannot beat i.i.d. sampling in terms of convergence to stationarity. Thus, if i.i.d. sampling is available, the suggestion is to get the starting value of the Markov chain by i.i.d. sampling, so that convergence to stationarity is guaranteed, and then use the 1-$f$-opt-matrix to run a Markov chain.

All the other reorderings of the state space that make $f$ monotone also give rise to Markov chains that provide estimates with zero asymptotic variance regardless of the starting point of the chain. The other


1-$f$-opt-matrices are all obtained by permuting the rows and the corresponding columns of the following matrix:

$$\begin{pmatrix} 0 & 0 & 0 & 1 \\ 0 & 0 & \tfrac{1}{3} & \tfrac{2}{3} \\ 0 & 1 & 0 & 0 \\ \tfrac{1}{3} & \tfrac{2}{3} & 0 & 0 \end{pmatrix}$$

These matrices are all irreducible and periodic.

5. Conclusions

Two partial orderings are defined on the space of irreducible transition matrices having a common stationary distribution $\pi$. The south-west ordering involves the inverse Laplacians of the transition matrices (which might be hard to obtain on large state spaces) and provides a sufficient condition for the dominating Markov chain to produce MCMC estimates of $E_\pi f$ with a smaller asymptotic variance. The second ordering is defined on the transition matrices themselves and provides a sufficient condition for the dominating Markov chain to be first degree more efficient. Finally, we define stationarity preserving and efficiency increasing transfers of probability mass within a transition matrix. The optimal transition matrix that results after a sequence of such transfers resembles the matrix that we would observe when studying the distribution of two negatively correlated characters. The literature on this topic might help to extend the ideas explored in the present paper from finite to general state spaces.

A different criterion to judge the performance of MCMC samplers is the speed of convergence to stationarity. The rate of convergence in total variation distance of $P^n$ to $\pi(x)$ is governed, in finite state spaces, by the second largest eigenvalue in absolute value (Besag and Green, 1993). On the other hand, the asymptotic variance of MCMC estimates is a decreasing function of the eigenvalues. We thus have two different notions of optimality: fast convergence to equilibrium is obtained by having all eigenvalues small in absolute value, while good properties in terms of asymptotic variance of ergodic averages are obtained by having small positive and large negative eigenvalues.
We would thus suggest using a transition matrix with good convergence properties for the first part of the simulations (until we believe stationarity is achieved) and then switching to an efficient transition matrix, such as the one obtained by performing a sequence of SPTs.

Acknowledgements

I would like to thank Jim Fill, Peter Green and Gareth Roberts for illuminating discussions, and Valentino Dardanoni for bringing to my attention the economics literature on inequality comparisons and Pigou–Dalton transfers that has inspired this research.

References

Besag, J., Green, P.J., 1993. Spatial statistics and Bayesian computation. J. Roy. Statist. Soc. Ser. B 55, 25–37.
Diaconis, P., Holmes, S., Neal, R.M., 1997. Analysis of a non-reversible Markov chain sampler. Technical Report BU-1385-M, Cornell University.
Frigessi, A., Hwang, C., Younes, L., 1992. Optimal spectral structure of reversible stochastic matrices, Monte Carlo methods and the simulation of Markov random fields. Ann. Appl. Probab. 2, 610–628.


Hwang, C., Hwang-Ma, S., Sheu, S., 1993. Accelerating Gaussian diffusions. Ann. Appl. Probab. 3, 897–913.
Mira, A., Geyer, C., 1998. Ordering Monte Carlo Markov chains. Technical Report 632, University of Minnesota.
Mira, A., Geyer, C., 2000. On non-reversible Markov chains. In: Madras, N. (Ed.), Monte Carlo Methods. Fields Institute Communications, Vol. 26. American Mathematical Society, Providence, RI, pp. 93–108.
Neal, R.M., 1998. Suppressing random walks in Markov chain Monte Carlo using ordered overrelaxation. In: Jordan, M.I. (Ed.), Learning in Graphical Models. Kluwer Academic Publishers, Dordrecht, pp. 205–225.
Peskun, P.H., 1973. Optimum Monte Carlo sampling using Markov chains. Biometrika 60, 607–612.
Tierney, L., 1994. Markov chains for exploring posterior distributions. Ann. Statist. 22, 1701–1762.
Tierney, L., 1998. A note on Metropolis–Hastings kernels for general state spaces. Ann. Appl. Probab. 8, 1–9.