A new cluster algorithm for the Baxter–Wu model

Physica A 390 (2011) 24–30


I.N. Velonakis ∗, S.S. Martinos
University of Athens, Department of Physics, Section of Solid State Physics, GR 15784 Athens, Greece

Article history: Received 6 October 2009; received in revised form 19 February 2010; available online 15 May 2010.

Keywords: Monte Carlo methods; Cluster algorithm; Critical phenomena; Baxter–Wu model

Abstract

We propose a new cluster algorithm for the Baxter–Wu model that significantly reduces critical slowing down. We examine the behavior of the created clusters as we vary the temperature and then specify dynamic exponents. Comparison is made with the Metropolis algorithm and with the other existing cluster algorithm.

© 2010 Elsevier B.V. All rights reserved.

1. Introduction

Algorithms with single-spin-flip dynamics, such as the Metropolis algorithm, are widely used for simulating a variety of models in statistical physics. The Metropolis algorithm was first introduced by Nicholas Metropolis and his co-workers in a 1953 paper on simulations of a hard-sphere gas [1]. Since then, because of its simplicity and efficiency, it has become the first choice for computer simulations. But although a single-spin-flip algorithm works well at temperatures away from the critical point, it is not the proper tool for the study of a second-order phase transition. As we approach critical conditions, the combination of large critical fluctuations and long correlation times makes the errors on measured quantities grow significantly, and we must spend a very long ''time'' (measured in Monte Carlo steps) to reduce them. Of course, critical fluctuations are an intrinsic property of a second-order phase transition for all models, so there is little to be done about them. Moreover, they are one of the most interesting physical effects near such a transition, and we want to study them. On the other hand, the increase of the correlation time close to the phase transition, known as the critical slowing down effect, depends on the particular algorithm being used, and it has been shown that it can be greatly reduced by using appropriate algorithms. Among the most interesting algorithms that reduce critical slowing down are the so-called cluster-flipping algorithms, or simply cluster algorithms [2,3]. In particular, the Wolff algorithm [4] and the more general algorithm of Niedermayer [5] have been used for the Ising model [6,7] in two and three dimensions, as well as for Potts models [8] with various numbers of states, with very successful results. For the more complicated Baxter–Wu model a cluster algorithm has been proposed by Novotny and Evertz [9].
It is based on reducing the three-spin interactions that characterize the model to two-spin interactions, as in the usual Ising model. The reduction is achieved by ''freezing'' one of the three sublattices into which the system can be decomposed [9–11]. Then a Wolff-like approach is used, with some necessary adjustments. The main idea behind these algorithms is to build a cluster of spins in a proper way and then flip the whole cluster instead of flipping single spins. Their main disadvantage is their complexity, which makes their implementation not straightforward for more complicated models. In this article we propose a new cluster algorithm for the Baxter–Wu model starting from a different point. Instead of reducing the interactions to two-spin ones, we build the cluster by adding entire triangles. Of course we must ''freeze'' one of the three sublattices as we flip the cluster, but not as we build it. In Section 2 we review the main features of the model and describe the mechanism of the proposed algorithm. Our results and conclusions are presented in Section 3.

∗ Corresponding author. Tel.: +30 2107276760; fax: +30 2104611774. E-mail address: [email protected] (I.N. Velonakis).

0378-4371/$ – see front matter © 2010 Elsevier B.V. All rights reserved. doi:10.1016/j.physa.2010.05.006

2. Model and method

The model known as the ''Baxter–Wu model'' is a triangular spin lattice with only a triplet interaction [12–17]. Such a system consists of Ising-like spins σ_i = ±1 located at the vertices of a triangular lattice. The three spins of every triangular face (i, j, k) interact with a three-body interaction of strength −J, so that the Hamiltonian of the model, in the absence of an external magnetic field, is

H = −J ∑_{⟨i,j,k⟩} σ_i σ_j σ_k,  (2.1)

where the sum is over all triangular faces of the triangular lattice and the spins σ_i, σ_j, σ_k take on the values ±1, so that

σ_i σ_j σ_k = ±1.  (2.2)
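The triplet Hamiltonian (2.1) is straightforward to evaluate numerically. The following Python sketch (an illustration of ours, not the authors' FORTRAN 2003 code) stores the lattice as an L × L array with periodic boundaries, where each unit cell (a, b) contributes an up-pointing triangle {(a,b),(a+1,b),(a,b+1)} and a down-pointing one {(a+1,b),(a,b+1),(a+1,b+1)}; the storage convention and the function name are our assumptions.

```python
import numpy as np

def baxter_wu_energy(s, J=1.0):
    """Energy H = -J * sum over all triangular faces of s_i * s_j * s_k.

    s is an L x L array of spins +/-1 with periodic boundaries; each unit
    cell contributes one up- and one down-pointing triangle, 2*L**2 in total.
    """
    right = np.roll(s, -1, axis=1)          # neighbor (a, b+1)
    down = np.roll(s, -1, axis=0)           # neighbor (a+1, b)
    diag = np.roll(down, -1, axis=1)        # neighbor (a+1, b+1)
    up_tri = s * down * right               # products on up triangles
    down_tri = down * right * diag          # products on down triangles
    return -J * float(up_tri.sum() + down_tri.sum())
```

As a sanity check, the all-up ferromagnetic state has energy −2JL², and flipping all spins on two of the three sublattices (the symmetry discussed in the text) leaves the energy unchanged.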

Of course, if an external magnetic field H is present, the Hamiltonian becomes

H = −J ∑_{⟨i,j,k⟩} σ_i σ_j σ_k + H ∑_i σ_i.  (2.3)

The triangular lattice can be decomposed into three sublattices [12], so that any triangular face (i, j, k) contains one spin from each of them. Because of this, although the model has no spin-reversal symmetry, it displays a kind of symmetry under reversal of all spins in any two of the three sublattices. It follows from the Hamiltonian (2.1) that the energy of the system remains unchanged, and the resultant state has the same probability as the initial one. The ground state of the Baxter–Wu model is four-fold degenerate. There is one ferromagnetic state with all the spins up, while the three ferrimagnetic ground states have the spins of one of the three sublattices up and the spins of the other two down. At zero temperature the magnetization per site is +1 for the ferromagnetic state and −1/3 for each of the ferrimagnetic states. Various thermodynamic and statistical quantities of this model have been calculated exactly since 1973, when Baxter and Wu published the first analytic calculation of its partition function [13]. Exact solutions for some of them are given in Refs. [13–17], the most important being the calculation of the critical temperature and the critical exponents. The dependence of the magnetization per site on temperature has been obtained by Baxter and Wu [15], and the critical temperature is predicted at the same point as for the square Ising lattice, which means

k_B T_c / J = 2 / ln(1 + √2) ≈ 2.269.  (2.4)
The corresponding critical exponents, however, are different [12–17] and clearly show that this model, although it has the same critical temperature as the two-dimensional Ising model, belongs to the same universality class as the two-dimensional 4-state Potts model [18]. On the other hand, for some other quantities computational methods such as Monte Carlo simulations are necessary. Novotny and Landau [19–22] have used Monte Carlo techniques to study the model in pure form as well as with quenched impurities. Phase diagrams of a more general triangular lattice were obtained by Malakis [23,24], and in Ref. [25] the short-time dynamics of the model was investigated through the relaxation of the order parameter at the critical temperature. Finally, in relatively recent papers [26–28] Martinos et al. have studied the behavior of the magnetization distribution function, as well as other properties of the model at its first- and second-order phase transitions.

In the remainder of this section we describe the implementation of the new cluster algorithm that we propose. As mentioned, the main idea is to build the cluster by successive addition of triplets of spins instead of single spins, as is the usual way. The method consists of the following steps:

1. We choose a spin at random from the lattice.
2. We look at each of the six neighboring triangles of that spin. We add the entire triangle to the cluster with probability Padd(E), where E = −J σ_i σ_j σ_k is the interaction energy of the triplet of spins (obviously E = ±J). The exact values of these probabilities are calculated below. If all six triangles are rejected, we flip the initial spin and the procedure terminates.
3. For each spin added to the cluster in the previous step, we look at its neighboring triangles and add them to the cluster with the same probabilities. A main difference from other cluster algorithms is the exclusion of triangles that have already been considered for addition, as neighbors of other spins in the cluster, but rejected: each triangle is checked only once. When no spins remain whose neighboring triangles are candidates for inclusion, cluster building terminates.
4. We choose at random one of the three sublattices and flip the spins of the cluster that belong to the other two.
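The cluster-building steps above can be sketched as follows. This is a minimal Python illustration, not the authors' implementation (which is FORTRAN 2003 with a circular FIFO buffer); the array storage convention, the triangle bookkeeping and the function name are our assumptions, and the addition probabilities anticipate the Wolff-like choice derived later, Padd(−J) = 1 − e^(−2βJ), Padd(+J) = 0.

```python
import numpy as np
from collections import deque

def grow_cluster(s, beta, J=1.0, rng=None):
    """Grow one cluster by adding entire triangles (steps 1-3 of the text).

    s is an L x L array of spins +/-1 with periodic boundaries; each cell
    (a, b) carries an up triangle {(a,b),(a+1,b),(a,b+1)} and a down
    triangle {(a+1,b),(a,b+1),(a+1,b+1)}.  Returns the set of cluster sites.
    """
    rng = rng if rng is not None else np.random.default_rng()
    L = s.shape[0]
    p_add = 1.0 - np.exp(-2.0 * beta * J)   # Padd(-J); Padd(+J) = 0

    def triangles_of(i, j):
        # the six triangles that contain site (i, j)
        return [('u', i, j), ('u', i - 1, j), ('u', i, j - 1),
                ('d', i - 1, j), ('d', i, j - 1), ('d', i - 1, j - 1)]

    def sites_of(kind, a, b):
        pts = [(a, b), (a + 1, b), (a, b + 1)] if kind == 'u' else \
              [(a + 1, b), (a, b + 1), (a + 1, b + 1)]
        return [(x % L, y % L) for x, y in pts]

    seed = (int(rng.integers(L)), int(rng.integers(L)))
    cluster = {seed}
    checked = set()                 # each triangle is considered only once
    queue = deque([seed])           # FIFO buffer, as in the text
    while queue:
        i, j = queue.popleft()
        for kind, a, b in triangles_of(i, j):
            t = (kind, a % L, b % L)
            if t in checked:
                continue            # already considered (and possibly rejected)
            checked.add(t)
            pts = sites_of(*t)
            e = -J * s[pts[0]] * s[pts[1]] * s[pts[2]]   # E = -J si sj sk
            if e == -J and rng.random() < p_add:
                for p in pts:
                    if p not in cluster:
                        cluster.add(p)
                        queue.append(p)
    return cluster
```

Step 4 is then performed by the caller: pick one of the three sublattices at random and flip the cluster spins belonging to the other two. When every neighboring triangle of the seed is rejected, the returned cluster contains only the seed spin, which is flipped on its own.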

26

I.N. Velonakis, S.S. Martinos / Physica A 390 (2011) 24–30

Notice that every Monte Carlo algorithm is based on a Markov process [2,3], which has to satisfy both the conditions of ergodicity and detailed balance. The first is the requirement that it should be possible for the Markov process to reach any state of the system from any other state if it is run for long enough. The algorithm clearly satisfies this condition for any sensible choice of the addition probabilities Padd(J), Padd(−J), since there is a finite probability that any spin of the lattice will find itself the sole member of a cluster of size one; flipping a succession of such clusters will take us from any state to any other in a finite number of moves. The condition of detailed balance [2,3], on the other hand, ensures that after the simulated system has come to equilibrium we get the correct probability distribution, usually the Boltzmann one. If µ and ν are two states of the simulated system, with non-zero probabilities p_µ and p_ν respectively, the general condition of detailed balance for Monte Carlo algorithms is

p_µ P(µ → ν) = p_ν P(ν → µ).  (2.5)

In this equation P(µ → ν) is the transition probability of going from state µ to state ν, which is usually given as a product of two parts [2,3]:

P(µ → ν) = g(µ → ν) A(µ → ν).  (2.6)

Here g(µ → ν) is the selection probability, which is the probability, given an initial state µ, that the algorithm will generate a new target state ν, and A(µ → ν) is the acceptance ratio (sometimes called the acceptance probability): if we start off in state µ and our algorithm generates a new state ν from it, we accept that state and change our system to it with probability A(µ → ν). Using relation (2.6), Eq. (2.5) can be written

p_µ g(µ → ν) A(µ → ν) = p_ν g(ν → µ) A(ν → µ).  (2.7)

Considering that in our application p_µ and p_ν are Boltzmann probabilities, this becomes

[g(µ → ν) A(µ → ν)] / [g(ν → µ) A(ν → µ)] = exp[−β(E_ν − E_µ)].  (2.8)

Here

β = 1 / (k_B T),  (2.9)

where T is the temperature and k_B is the Boltzmann constant. In the neighborhood of the cluster there are triangles with one or two vertices in the cluster. These triangles have been considered for addition to the cluster and have been rejected. We can divide them into four categories, with numbers m_A, m_B, n_A, n_B respectively: m_A is the number of triangles with energy −J that remain unchanged as we flip the cluster; m_B is the number of triangles with energy −J that will be changed into +J as we flip the cluster; n_A and n_B are the corresponding numbers of triangles with energy +J. Let us call µ the initial state of the lattice and ν the state resulting from flipping the cluster. The selection probability g(µ → ν) is the probability of not adding the above-mentioned triangles to the cluster. That is,

g(µ → ν) = [1 − Padd(J)]^(n_A + n_B) [1 − Padd(−J)]^(m_A + m_B).  (2.10)

For the reverse selection probability g(ν → µ), on the other hand, the roles of the m_B and n_B triangles are interchanged. Therefore

g(ν → µ) = [1 − Padd(J)]^(n_A + m_B) [1 − Padd(−J)]^(m_A + n_B).  (2.11)

On the other hand, the energy change is

E_ν − E_µ = 2J m_B − 2J n_B.  (2.12)

The equation of detailed balance becomes

{[1 − Padd(J)]^(n_A + n_B) [1 − Padd(−J)]^(m_A + m_B) A(µ → ν)} / {[1 − Padd(J)]^(n_A + m_B) [1 − Padd(−J)]^(m_A + n_B) A(ν → µ)} = e^(2βJ(n_B − m_B)).  (2.13)

Finally,

A(µ → ν) / A(ν → µ) = { e^(2βJ) [1 − Padd(−J)] / [1 − Padd(J)] }^(n_B − m_B).  (2.14)

I.N. Velonakis, S.S. Martinos / Physica A 390 (2011) 24–30

27

Any choice of acceptance ratios A(µ → ν) and A(ν → µ) which satisfies relation (2.14) will satisfy detailed balance. All we need to do is choose Padd to satisfy the equation

[1 − Padd(−J)] / [1 − Padd(J)] = e^(−2βJ),  (2.15)

and then we can set both acceptance ratios equal to unity:

A(ν → µ) = A(µ → ν) = 1.  (2.16)

In the present work we have chosen Padd(−J) = 1 − e^(−2βJ) and Padd(J) = 0, so we have a Wolff-like algorithm; a more general (Niedermayer-like) choice is possible. To generalize the algorithm to the case of an external magnetic field [3], we again consider two states µ and ν separated by the flipping of a single cluster, and we suppose that the magnetization of this cluster before the flip is M_cluster, so that after the flip it is −M_cluster. The appropriate generalization of the energy change (2.12) in this case is

E_ν − E_µ = 2J(m_B − n_B) + 2H M_cluster.  (2.17)
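The Wolff-like choice made above can be checked numerically: with unit acceptance ratios, the ratio of the selection probabilities (2.10)/(2.11) must on its own reproduce the Boltzmann factor of the detailed-balance condition (2.13). A small verification script (the function name is ours):

```python
import math

def g_ratio(beta, J, mA, mB, nA, nB):
    """Selection-probability ratio g(mu->nu)/g(nu->mu) of Eqs. (2.10)-(2.11)
    for the Wolff-like choice Padd(-J) = 1 - exp(-2*beta*J), Padd(+J) = 0."""
    q_minus = math.exp(-2.0 * beta * J)   # 1 - Padd(-J)
    q_plus = 1.0                          # 1 - Padd(+J)
    g_fwd = q_plus ** (nA + nB) * q_minus ** (mA + mB)
    g_rev = q_plus ** (nA + mB) * q_minus ** (mA + nB)
    return g_fwd / g_rev

# With A(mu->nu) = A(nu->mu) = 1, detailed balance (2.13) requires this ratio
# to equal exp(2*beta*J*(nB - mB)) for any boundary-triangle counts.
beta, J = 0.7, 1.0
for mA, mB, nA, nB in [(3, 1, 2, 4), (0, 5, 1, 0), (2, 2, 2, 2)]:
    assert abs(g_ratio(beta, J, mA, mB, nA, nB)
               / math.exp(2.0 * beta * J * (nB - mB)) - 1.0) < 1e-12
```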

Substituting this into Eq. (2.8) and rearranging, we find that Eq. (2.14) is now modified as follows:

e^(2βH M_cluster) A(µ → ν) / A(ν → µ) = { e^(2βJ) [1 − Padd(−J)] / [1 − Padd(+J)] }^(n_B − m_B).  (2.18)

The simplest way to satisfy this equation is to choose Padd as in the normal Wolff (2.15) or Niedermayer algorithm, so that the base of the power on the right-hand side of (2.18) becomes 1. Then the ratio of the acceptance probabilities for forward and backward moves is exp[−2βH M_cluster]. The most efficient choice of acceptance ratio satisfying this constraint is

A(µ → ν) = exp[−2βH M_cluster], if H M_cluster > 0;
A(µ → ν) = 1, if H M_cluster ≤ 0.  (2.19)

Notice that (2.19) reduces to (2.16) when H = 0. Of course, with this method the acceptance ratio A(µ → ν) is not always unity, so the algorithm is not as effective as in zero external magnetic field. For rather weak magnetic fields it is therefore usually better to extrapolate the simulation results from H = 0 using the single-histogram method [3]. For the implementation of the algorithm in computer programs we use a data structure known as a first-in/first-out (FIFO) buffer, or queue [3], applied in a way known as circular buffering [3]. We also use features of structured and object-oriented programming in our FORTRAN 2003 code, such as pointers, structures and classes. In order to avoid systematic errors from the combination of the selected algorithm and the random-number generators involved in the programs [29], we have tested several modern fast random-number generators: the sequential generators ran2 (a linear congruential generator improved by a shuffling scheme), ran3 (a subtractive lagged Fibonacci generator) [3,30] and ran (a combination of Marsaglia shift registers) [3,31], as well as the parallel random-number generator ran2 [31], which, according to Press et al. [31], combines three different generators: a subtractive Fibonacci generator, a Marsaglia shift sequence and a linear congruential generator. All of them have very long periods [3,30,31], between 2^32 and 2^55, which are considered sufficient for the needs of our simulations. Moreover, the results of simulations under different conditions did not suffer from systematic errors like the ones referred to in [29]. For most of the simulations of the next section we used the parallel random-number generator ran2 [31] because it combines a long period with the ability to be used efficiently in parallel programming.
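The field-dependent acceptance ratio (2.19) is a one-line function. A sketch in Python (the function name is our choice; the paper's code is FORTRAN 2003):

```python
import math

def acceptance(H, M_cluster, beta):
    """Acceptance probability A(mu -> nu) of Eq. (2.19) for flipping a
    cluster of magnetization M_cluster in an external field H."""
    x = H * M_cluster
    return math.exp(-2.0 * beta * x) if x > 0 else 1.0
```

A proposed cluster flip is then accepted when a uniform random number falls below this value; for H = 0 the function always returns 1, recovering (2.16).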
3. Results and discussion

In this section we investigate the characteristics of the new cluster algorithm, to find out whether it works better than other widely used algorithms, such as the Metropolis algorithm, at least under specific conditions. First we simulate a 30 × 33 and a 60 × 63 Baxter–Wu lattice with periodic boundary conditions [2,3] over a wide range of temperatures in zero external magnetic field. Note that whenever we refer in this section to the lattice size L of the Baxter–Wu lattice, we mean an L × (L + 3) lattice. We avoid square L × L lattices because then the overall shape of the finite lattice is a rhombus, which has different symmetries from the infinite lattice [3]. At each temperature we use 10^7 Metropolis MCS (Monte Carlo sweeps per lattice site) for equilibration and 10^8 cluster Monte Carlo steps for the simulation. In order to estimate the dynamic exponent of the new algorithm (the meaning of this exponent is explained below), we begin by calculating the time-displaced magnetization autocorrelation function χ(t) during our simulations for all possible triangular Baxter–Wu lattices from 24 × 27 up to 102 × 105 under critical conditions, which are well known [12], and we plot each of them individually as a function of time. We then replot the autocorrelation function on semi-logarithmic axes and estimate the correlation time τ by fitting the straight-line portion of the plot with a least-squares method. To check our results, the above procedure is repeated with a well-known single-spin-flip algorithm, the Metropolis algorithm, using 10^7 MCS for equilibration and 10^9 MCS for each simulation. Note that the proposed algorithm must be compared not only to the Metropolis algorithm but also to the cluster algorithm


Fig. 1. Average cluster size (as a fraction of the lattice size) vs. temperature for L = 30 and L = 60. We see that at the critical temperature the cluster size is still very large. Tc is the critical temperature and T(pC) is the temperature corresponding to the percolation threshold pC.

Fig. 2. The magnetization autocorrelation function vs. time for L = 36 at critical point on semi-logarithmic axes. The linear form of the diagram verifies the exponential dependence of the autocorrelation function on time.

of Novotny and Evertz [9]. We restrict the comparison to the critical region because, as is well known [2,3], at low and high temperatures all cluster algorithms become almost equivalent to the Metropolis algorithm. The main feature that describes the efficiency of an algorithm in the critical region is the so-called ''dynamic exponent''. Before proceeding to the estimation of this exponent, however, we consider another feature of a cluster algorithm: the mean size of the created clusters at various temperatures. The dependence of this size on temperature is shown in Fig. 1. As expected, at low temperatures the clusters become percolating, i.e. they fill the whole lattice, while at high temperatures they reduce to isolated points. It is notable that at the critical temperature the mean cluster size is only a little smaller than the whole lattice, while the percolation threshold lies well above the critical temperature. In this respect the Baxter–Wu model differs from the Ising model, where this threshold coincides with the critical temperature; we point out, however, that there are other cases with similar behavior [10,11]. Concerning the dynamic exponent, we have briefly the following [2,3]. Every algorithm that simulates a system gives, after equilibration, successive estimates of some quantity, for example the magnetization. Let m(t) be such an estimate at the instant of ''time'' t. Here by ''time'' we mean the number of steps during the implementation of the algorithm; the exact meaning of ''step'' depends on the kind of algorithm. Because successive states created by the algorithm are not independent, there is a correlation between two estimates separated by a ''time difference'' t. We define the ''autocorrelation function'' as exactly that:

χ(t) = ⟨(m(t′) − ⟨m⟩)(m(t′ + t) − ⟨m⟩)⟩ = ⟨m(t′) m(t′ + t)⟩ − ⟨m⟩².  (3.1)
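Eq. (3.1) can be estimated from a measured time series as follows. This simple estimator (which subtracts the overall sample mean, one common convention, not necessarily the authors' exact normalization) is enough to produce the semi-logarithmic plots described in the text:

```python
import numpy as np

def autocorrelation(m, t_max):
    """Estimate the time-displaced autocorrelation chi(t) of Eq. (3.1) from a
    finite series m(0), ..., m(T-1), averaging over initial instants t'."""
    m = np.asarray(m, dtype=float)
    d = m - m.mean()
    return np.array([np.mean(d[: len(m) - t] * d[t:]) for t in range(t_max)])
```

Note that χ(0) reduces to the sample variance of the series.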

Fig. 3. (a) Energy correlation time τe vs. linear lattice size L on logarithmic axes for the cluster and Metropolis algorithms at the critical point. (b) The same for the magnetization correlation time τm. (In both figures, squares correspond to the results of the Metropolis algorithm and the dashed line is their least-squares line, while circles correspond to the results of the new cluster algorithm and the solid line is their least-squares line. We have taken all possible Baxter–Wu lattices from 24 × 27 up to 102 × 105 at the critical point.)

The brackets denote mean values over the ''initial instants'' t′. The expected dependence of χ(t) on ''time'' has the form

χ(t) ∼ e^(−t/τ).  (3.2)

Indeed, as we see in Fig. 2, the dependence of log χ(t) on t is linear, although not for large t. This is because in practice the initial instant t′ runs up to some t_max and not to infinity, so relation (3.2) is expected to hold only for t ≪ t_max. By a linear fit of log χ(t) we can determine the factor τ, called the ''correlation time''. It represents the ''time interval'' needed between two estimates so that they can be regarded as independent. The dynamic exponent z describes the divergence of the correlation time at the critical temperature (this divergence is the essence of critical slowing down). The divergence follows the law τ ∼ ξ^z and, given that the correlation length diverges according to the power law ξ ∼ |ϵ|^(−ν), where ϵ = (T − T_c)/T_c is the reduced temperature, the corresponding power law for the correlation time is τ ∼ |ϵ|^(−zν). On the other hand, the correlation length obeys a definite finite-size scaling relation, from which we can directly conclude a corresponding scaling relation for the correlation time:

τ ∼ L^z,  (3.3)

where L is the linear size of the lattice. We point out that the dynamic exponent differs from other critical exponents: for a given model it depends on the algorithm used, not on the universality class to which the model belongs. Of course, for every different quantity, such as the energy and the magnetization, we can define a different correlation time and a corresponding dynamic critical exponent [3,32]. These exponents may or may not have the same values, and the lowest of them is considered the system's dynamic critical exponent for a specific algorithm [32]. In this article we use the dynamic critical exponents for magnetization and energy, z_m and z_e respectively. The first is measured from the time-displaced magnetization–magnetization autocorrelation function of Eq. (3.1), while the second is measured from the energy–energy autocorrelation function, given by the relation

χ_E(t) = ⟨(E(t′) − ⟨E⟩)(E(t′ + t) − ⟨E⟩)⟩ = ⟨E(t′) E(t′ + t)⟩ − ⟨E⟩².  (3.4)

Their values turn out to be almost equal, as expected [32], because the observable E is not ''orthogonal'' (completely independent) to m. In practice, the dynamic exponent z is determined using the scaling relation (3.3). In the Metropolis algorithm ''time'' is usually measured in Monte Carlo sweeps. In a cluster algorithm this ''time'' could be measured in cluster reversions; let τ_rev be the corresponding correlation time. But τ_rev is not suitable for comparing the two algorithms, because at each step of the cluster algorithm we reverse only a part of the lattice. In the Wolff algorithm for the Ising model this part is ⟨n⟩/L^d, where ⟨n⟩ is the average cluster size; in our algorithm for the Baxter–Wu model the corresponding part is 2/3 of this, since we flip only the cluster spins belonging to two of the three sublattices. Thus the proper correlation time is

τ = τ_rev · 2⟨n⟩ / (3L^d).  (3.5)

However, this choice does not ensure equivalence with the Metropolis algorithm at high temperatures; that region, though, is not important here. After completing the necessary calculations for all lattices we can find the relation between log τ and log L, which is shown in Fig. 3. The dynamic exponent is obtained from the slope of the linear part of the corresponding diagram. Fig. 3(a) is


concerned with the correlation time for the energy (per lattice site), while Fig. 3(b) is concerned with that for the magnetization (per lattice site). The corresponding dynamic exponents for the proposed cluster algorithm come out with almost the same value: z_m = 1.272 ± 0.003 and z_e = 1.267 ± 0.002 (the statistical errors were calculated by linear fitting). The fact that their values are almost equal is expected [32]. For the Metropolis algorithm we found z_m = 2.30 ± 0.11 and z_e = 2.09 ± 0.07. We conclude that our cluster algorithm significantly reduces critical slowing down in comparison with the Metropolis algorithm, and it is a little better than the Novotny and Evertz cluster algorithm, for which the dynamic exponent for the magnetization is 1.37 [9]. The reduction, however, is not as large as in the Wolff algorithm for the Ising model (z ≈ 0.25 [3]). The reason is probably the position of the percolation threshold, as mentioned above: at the critical temperature the created clusters are still very large, with a resultant decrease in the efficiency of the algorithm. Also, Li and Sokal have proved the inequality z ≥ α/ν [32]. For the universality class of the two-dimensional 4-state Potts model, to which the Baxter–Wu model belongs, we have α/ν = 1 [12], while in the two-dimensional Ising universality class α = 0; so our result for the dynamic critical exponent z is considered satisfactory (indeed, z ≈ 0.25 for the Ising model according to the best known cluster Monte Carlo results [3]).

Acknowledgements

This work was partly supported by the Special Account for Research Grants of the University of Athens under Grant no. 70/40/7677. Ioannis N. Velonakis was also supported during this work by the State Scholarships Foundation (I.K.Y.), Greece.

References

[1] N. Metropolis, A.W. Rosenbluth, M.N. Rosenbluth, A.H. Teller, E. Teller, J. Chem. Phys. 21 (1953) 1087.
[2] K. Binder, D.P. Landau, A Guide to Monte Carlo Simulations in Statistical Physics, Cambridge University Press, Cambridge, 2000.
[3] M.E.J. Newman, G.T. Barkema, Monte Carlo Methods in Statistical Physics, Clarendon Press, Oxford, 2001.
[4] U. Wolff, Phys. Rev. Lett. 62 (1989) 361.
[5] F. Niedermayer, Phys. Rev. Lett. 61 (1988) 2026.
[6] P.D. Coddington, C.F. Baillie, Phys. Rev. Lett. 68 (1992) 962.
[7] R. Matz, D.L. Hunter, N. Jan, J. Stat. Phys. 74 (1994) 903.
[8] P.D. Coddington, C.F. Baillie, Phys. Rev. B 43 (1992) 10617.
[9] M.A. Novotny, H.G. Evertz, in: D.P. Landau, K.K. Mon, H.-B. Schüttler (Eds.), Computer Simulation Studies in Condensed-Matter Physics, vol. VI, Springer, Berlin, 1993, p. 188.
[10] Y. Deng, H.W.J. Blöte, B. Nienhuis, Phys. Rev. E 69 (2004) 026114.
[11] H.W.J. Blöte, J.R. Heringa, E. Luijten, Comput. Phys. Comm. 147 (2002) 58–63.
[12] R.J. Baxter, Exactly Solved Models in Statistical Mechanics, Academic Press, 1989.
[13] R.J. Baxter, F.Y. Wu, Phys. Rev. Lett. 31 (1973) 1294.
[14] D.W. Wood, H.P. Griffiths, J. Phys. C 5 (1972) 253.
[15] R.J. Baxter, M.F. Sykes, M.G. Watts, J. Phys. A 8 (1975) 245.
[16] R.J. Baxter, I.G. Enting, J. Phys. A 9 (1976) 149.
[17] H.E. Stanley, L.L. Luke, Phys. Rev. B 10 (1974) 2958.
[18] J.M. Yeomans, Statistical Mechanics of Phase Transitions, Clarendon Press, Oxford, 1992.
[19] M.A. Novotny, D.P. Landau, Phys. Rev. B 24 (1981) 1468.
[20] M.A. Novotny, D.P. Landau, R.H. Swendsen, Phys. Rev. B 26 (1982) 330.
[21] M.A. Novotny, D.P. Landau, Phys. Rev. B 32 (1985) 3112.
[22] M.A. Novotny, D.P. Landau, Phys. Rev. B 32 (1985) 5874.
[23] A. Malakis, J. Phys. A 14 (1981) 2767.
[24] A. Malakis, J. Stat. Phys. 27 (1982) 1.
[25] M. Santos, W. Figueiredo, Phys. Rev. E 63 (2001) 042101.
[26] S.S. Martinos, A. Malakis, I. Hadjiagapiou, Physica A 331 (2004) 182–188.
[27] S.S. Martinos, A. Malakis, I. Hadjiagapiou, Physica A 352 (2005) 447–458.
[28] S.S. Martinos, A. Malakis, I. Hadjiagapiou, Physica A 355 (2005) 393–407.
[29] A.M. Ferrenberg, D.P. Landau, Y.J. Wong, Phys. Rev. Lett. 69 (1992) 3382.
[30] W.H. Press, S.A. Teukolsky, W.T. Vetterling, B.P. Flannery, Numerical Recipes in FORTRAN: The Art of Scientific Computing, 2nd ed., Cambridge University Press, Cambridge, 1992.
[31] W.H. Press, S.A. Teukolsky, W.T. Vetterling, B.P. Flannery, Numerical Recipes in FORTRAN 90: The Art of Parallel Scientific Computing, 2nd ed., Cambridge University Press, Cambridge, 1996.
[32] A.D. Sokal, Nuclear Phys. B (Proc. Suppl.) 20 (1991) 55–67.