Optimizing kernel methods for Poisson integrals on a uniform grid


Computer Physics Communications 215 (2017) 1–6


D. Gabay a, A. Boag a, A. Natan a,b,*

a Department of Physical Electronics, Tel-Aviv University, Tel-Aviv, 69978, Israel
b The Sackler Center for Computational Molecular and Materials Science, Tel-Aviv University, Tel-Aviv, 69978, Israel

Article history: Received 13 November 2016; Received in revised form 23 January 2017; Accepted 24 January 2017; Available online 2 February 2017.

Abstract

We analyze the error and error propagation in the calculation of the Poisson integral on a uniform grid within Density Functional Theory (DFT) real-space calculations. We suggest and examine several schemes for near neighbors' interaction correction for the Green's function kernel to improve the accuracy. Finally, we demonstrate the effect of the different kernels on DFT eigenvalues and Hartree energy accuracy in systems such as C60 and C40H82.

© 2017 Elsevier B.V. All rights reserved.

Keywords: Poisson solver; Density-Functional-Theory; Electrostatic potential; Ab-initio

1. Introduction

The solution of the Poisson equation plays an important role in first principles methods such as Density Functional Theory (DFT) [1], time dependent DFT (TDDFT) [2], Hartree–Fock [3], GW [4], and others. In recent years this is becoming even more important as many DFT methods utilize hybrid functionals that include a fraction of Fock exchange or screened exchange [1,5–11]. The Poisson equation for the electrostatic potential, V, is given by:

\nabla^2 V(r) = -4\pi \rho(r)    (1)

where ρ is the electronic density. A common approach to solve Eq. (1) is to use iterative solvers such as the conjugate gradient (CG) method [12] after representing it in a given basis. For isolated systems, the equation is supplemented with Dirichlet boundary conditions, obtained by calculating the electrostatic potential outside the domain. In this work, we focus on the solution of Eq. (1) on a uniform discrete grid. In such a representation, the density, ρ, is sampled on the grid and the differential Laplacian operator is replaced by a high order finite difference expression [13–15]. An alternative approach to solving Eq. (1) is to write the equivalent Poisson integral:

V(r) = \int_D \frac{\rho(r')}{|r' - r|} \, dr'    (2)


where D is the computational domain. The advantage of Eq. (2) is that there is no need to calculate the boundary conditions. A disadvantage is that a direct calculation of the integral is too expensive, i.e. O(N²), where N is the number of grid points. This computational difficulty can be resolved by efficient schemes such as Fast Fourier Transform (FFT) [16,17], Fast Multipole Method (FMM) [18,19], auxiliary grids [20,21] and tensor methods [22,23] that achieve O(N log N) or even O(N) scaling. Another potential problem in the numerical evaluation of Eq. (2) is the correct discretization of the integral. A naive discretization leads to an unacceptably large error (as is demonstrated later). Mathematically, this is because high-order integration and a smooth representation of the charge are needed; physically, the Green's function, 1/|r′ − r|, is correct only in the continuum limit, and a different function should be used when the calculation is done on a grid. There are several ways to address this problem: mathematically, higher order integration methods such as Gaussian quadrature can improve the result; physically, one can construct Green's functions (or kernels) that are more suitable for the discrete grid. In this manuscript, we analyze the error of different kernel methods in evaluating the integral of Eq. (2) and compare to the analytical solution for a Gaussian charge distribution and to the solution of Eq. (1) with high order finite difference methods. We use FFT for the analysis, but the conclusions can be applied to any other numerical integration method. We start with a description of the different kernel methods, then analyze the errors, and finally demonstrate how they propagate into errors in eigenvalues of DFT calculations. An efficient approach which we do not compare here is the transformation of the integral in Eq. (2) into an equivalent


integral [24,25]:

\int_{\mathbb{R}^3} \frac{\rho(r')}{|r' - r|} \, dr' = \frac{2}{\sqrt{\pi}} \int_0^{\infty} dt \int_{\mathbb{R}^3} dr' \, e^{-t^2 |r - r'|^2} \rho(r').    (3)

The spatial integral in Eq. (3) can be easily separated into 1D integrals in each coordinate. This transformation can then be used, together with tensor decomposition methods for the density, to yield efficient and accurate calculations [22–29], with implementations for multi-resolution wavelet bases, uniform grids and more general mesh structures.
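The transformation in Eq. (3) rests on the one-dimensional identity 1/x = (2/√π) ∫₀^∞ e^{−t²x²} dt, which can be checked numerically. A minimal illustration, assuming NumPy and SciPy are available:

```python
import numpy as np
from scipy.integrate import quad

# Numerical check of 1/x = (2/sqrt(pi)) * integral_0^inf exp(-t^2 x^2) dt,
# the identity behind the transformation of Eq. (3).
for x in (0.5, 1.0, 3.7):
    integral, _ = quad(lambda t, x=x: np.exp(-(t * x) ** 2), 0.0, np.inf)
    print(f"x = {x}:  2/sqrt(pi) * integral = {2.0 / np.sqrt(np.pi) * integral:.10f},"
          f"  1/x = {1.0 / x:.10f}")
```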

2. Discrete integration schemes

We assume a uniform discrete grid where x_i = ih, y_j = jh, z_k = kh, where i, j, k are integers and h is the grid spacing. We further assume that there is a reversible index function n(i, j, k) that gives a unique integer index for every point r_n = (x_i, y_j, z_k) in the grid. For simplicity, we assume a box shaped domain. With this, we can write the naive discrete approximation for Eq. (2) as:

V(r_n) \simeq h^3 \sum_{m \neq n} \frac{\rho(r_m)}{|r_n - r_m|}.    (4)

However, Eq. (4) has several problems. First, it does not include the self interaction term; second, the near field potential of a cubic voxel with some charge density can deviate significantly from 1/r behavior due to the finite size of the cube. The summation in Eq. (4) can be written in a more general way as:

V(r_n) = h^3 \sum_{m} \rho(r_m) G(r_n - r_m).    (5)

It is easy to choose a G function that reduces Eq. (5) to Eq. (4), but we can also choose different discrete Green's functions that bring the discrete summation of Eq. (5) closer to the exact integral of Eq. (2). Furthermore, Eq. (5) retains the form of a convolution, so it is easy to calculate it efficiently with FFT, as discussed in the next section.

2.1. FFT formulation

A direct calculation of Eq. (5) can be too time consuming for large grids (O(N²)); we therefore use FFT for this calculation. Eastwood and Brownrigg [30] have shown that by zero padding of the density and doubling of the domain in each direction, Eq. (5) can be turned exactly into a cyclic convolution summation. This can be used with FFT, the convolution theorem and inverse FFT to yield a very efficient (O(N log N)) procedure for calculating the summation of Eq. (5). Such schemes have already been implemented in DFT codes [16,17,31]. We use this approach for a fast calculation of Eq. (5) when analyzing the accuracy of different choices of the kernel function G(r_n).
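As an illustration of the zero-padding procedure, the sketch below evaluates Eq. (5) for a Gaussian charge density using the naive kernel of Eq. (4); the same machinery applies to any kernel G. This is a minimal NumPy sketch, not the implementation used in Refs. [16,17,31] or in this work, and the grid parameters are illustrative choices.

```python
import numpy as np

n, h, sigma = 32, 0.25, 0.8                  # grid points per side, spacing, Gaussian width (au)
ax = (np.arange(n) - n // 2) * h             # physical axis, box roughly centered on the charge
X, Y, Z = np.meshgrid(ax, ax, ax, indexing="ij")
rho = np.exp(-(X**2 + Y**2 + Z**2) / (2 * sigma**2)) / (sigma**3 * (2 * np.pi) ** 1.5)

# Kernel on the doubled (2n)^3 grid: index k stands for a displacement of k*h for k <= n
# and (k - 2n)*h for k > n, so the cyclic convolution reproduces Eq. (5) exactly [30].
d = np.arange(2 * n)
d = np.where(d <= n, d, d - 2 * n) * h
DX, DY, DZ = np.meshgrid(d, d, d, indexing="ij")
dist = np.sqrt(DX**2 + DY**2 + DZ**2)
G = np.zeros_like(dist)
G[dist > 0] = 1.0 / dist[dist > 0]           # naive 1/r kernel; self term left at zero (Eq. (4))

# Zero-pad the density, multiply the transforms, and keep the physical block of the result.
rho_pad = np.zeros((2 * n,) * 3)
rho_pad[:n, :n, :n] = rho
V = h**3 * np.real(np.fft.ifftn(np.fft.fftn(rho_pad) * np.fft.fftn(G)))[:n, :n, :n]
```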

2.2. Finite difference delta kernel

One possible way to define a corrected kernel is to numerically solve the Poisson equation with the CG method for a delta function density [21]. This is done once at the beginning of the simulation:

\nabla^2 G_{CG}(r) = -4\pi \delta(r).    (6)

The solution of the continuous form of Eq. (6) is G_CG(r) = 1/r. In the discrete form, we replace the Laplacian with its high order finite difference equivalent [15], and for the density we use the Kronecker discrete delta function. The solution G_CG(r_n) of the discrete high order finite difference equation can now be used as the Green's function kernel in Eq. (5). The solution of the discrete form of Eq. (6) with CG can be done on a domain that is twice as large in each dimension as the original domain, in which case we can use the solution directly as G(r_n). Another possibility is to instead solve the equation on a smaller domain, Ω_CG, and then extend the kernel according to:

G(r_n) = \begin{cases} G_{CG}(r_n), & r_n \in \Omega_{CG} \\ 1/r_n, & \text{otherwise.} \end{cases}    (7)
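A minimal sketch of constructing G_CG on a small grid is shown below. For brevity it uses a second-order finite-difference Laplacian and zero Dirichlet boundaries, instead of the high-order stencil of Eq. (14) and the multipole boundary conditions discussed in Section 3.1.2; SciPy's sparse CG solver is assumed.

```python
import numpy as np
from scipy.sparse import diags, identity, kron
from scipy.sparse.linalg import cg

n, h = 33, 0.5                                  # points per side (odd) and grid spacing (au)

# Second-order 1D Laplacian with zero Dirichlet boundaries; the 3D operator is the
# Kronecker sum of the 1D pieces. (The paper uses a high-order stencil instead.)
L1 = diags([1.0, -2.0, 1.0], [-1, 0, 1], shape=(n, n)) / h**2
I = identity(n)
L3 = kron(kron(L1, I), I) + kron(kron(I, L1), I) + kron(kron(I, I), L1)

# Kronecker delta charge at the central grid point, normalized so that h^3 * sum(rho) = 1.
rho = np.zeros(n**3)
rho[n**3 // 2] = 1.0 / h**3

# Solve -L3 G = 4*pi*rho with CG (-L3 is symmetric positive definite for Dirichlet BCs).
G_flat, info = cg(-L3, 4.0 * np.pi * rho)
assert info == 0
G_CG = G_flat.reshape(n, n, n)                  # kernel values G_CG(r_n) for use in Eq. (5)
```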

2.3. Semi-analytical kernel corrections

Wilton et al. [32] have suggested an effective analytical approximation for the Green's function kernel on discrete grids. The method transforms the three dimensional integral into a single dimensional one by consecutively applying the Gauss integral theorem. This leads to a new analytical approximation for the Green's function kernel, G(r_n), of which the details are provided in the article by Wilton et al. [32]. We refer to this method as the Wilton kernel in the rest of this paper.

2.4. Numerically optimized kernel

While Wilton's method has an elegant analytical form, the accuracy it achieves is typically not sufficient for DFT calculations. One approach is to start with a given kernel (which can be Wilton's) and use numerical methods to minimize the error relative to the known analytical solution for a Gaussian charge density with a specific value of h/σ, where h is the grid step size and σ is the Gaussian standard deviation. We can then check the performance of such an optimized kernel with other Gaussian charge densities (different values of h/σ). We have performed this procedure with the Wilton kernel as a starting point. We minimized the total error, ε_V, as defined later in the text by Eq. (12), relative to the analytical solution for a Gaussian charge density with h/σ = 0.5. We used as optimization parameters the values of the kernel function at points inside a cube extending 1 to 3 neighbors in each direction. The optimization results in a set of correction factors, F(r_n), that give the new, numerically optimized, kernel:

G_{NOPT}(r_n) = F(r_n) \cdot G_{Wilton}(r_n).    (8)

For example, the number of parameters in a cube of 1 neighbor in each direction is 3 × 3 × 3 = 27, but it can be reduced to 4 independent parameters by symmetry considerations. It should be noted that the factors, F(r_n), differ from unity only within the defined cube of near neighbors.
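The following is a sketch of this optimization for the single-neighbor (27-point) cube, reduced to its 4 symmetry-independent parameters. To keep it self-contained it departs from the paper's setup in two ways that are flagged in the comments: the starting kernel is the naive kernel of Eq. (4) (with a crude placeholder self term) rather than the Wilton kernel, and Eq. (5) is evaluated by a direct sum on a small grid rather than by the FFT scheme of Section 2.1.

```python
import numpy as np
from scipy.special import erf
from scipy.optimize import minimize
from scipy.spatial.distance import cdist

# Gaussian test charge on a small uniform grid with h/sigma = 0.5.
n, h, sigma = 13, 0.4, 0.8
ax = (np.arange(n) - n // 2) * h
pts = np.stack(np.meshgrid(ax, ax, ax, indexing="ij"), -1).reshape(-1, 3)
r = np.linalg.norm(pts, axis=1)
rho = np.exp(-r**2 / (2 * sigma**2)) / (sigma**3 * (2 * np.pi) ** 1.5)
V_a = np.where(r > 0, erf(r / (np.sqrt(2) * sigma)) / np.where(r > 0, r, 1.0),
               np.sqrt(2.0 / np.pi) / sigma)          # analytic potential erf(r/(sqrt(2) sigma))/r

# Base kernel: naive 1/r off-diagonal (Eq. (4)) with a crude 1/h placeholder self term
# (the optimizer rescales it). In the paper the base kernel is the Wilton kernel.
D = cdist(pts, pts)
K0 = np.zeros_like(D)
np.divide(1.0, D, out=K0, where=D > 0)
np.fill_diagonal(K0, 1.0 / h)

# The 4 symmetry groups of the 27-point cube, identified here by their distances:
# self (0), face (h), edge (sqrt(2) h) and corner (sqrt(3) h) neighbors.
groups = [np.isclose(D, 0.0), np.isclose(D, h),
          np.isclose(D, np.sqrt(2.0) * h), np.isclose(D, np.sqrt(3.0) * h)]

def eps_V(F):
    """Average relative potential error, Eq. (12), for correction factors F."""
    K = K0.copy()
    for f, g in zip(F, groups):
        K[g] *= f
    V_k = h**3 * (K @ rho)                            # direct evaluation of Eq. (5)
    return np.sum(np.abs(V_a - V_k)) / np.sum(np.abs(V_a))

res = minimize(eps_V, x0=np.ones(4), method="Nelder-Mead")
print("correction factors (self, face, edge, corner):", res.x, " eps_V:", res.fun)
```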

2.5. High order spline basis functions

Another possible way to calculate the Poisson integral is to represent the charge density by known basis functions and calculate the Poisson integral for those basis functions. There can be many choices for such basis functions. In our analysis, we have chosen to use linear and cubic spline basis functions [33]. We can write:

V(r_n) = \int_D \frac{\rho(r)}{|r_n - r|} \, dr = \sum_m \rho(r_m) \int \frac{B(r - r_m)}{|r - r_n|} \, dr    (9)

where we represent ρ(r) by:

\rho(r) = \sum_m \rho(r_m) B(r - r_m)    (10)


Fig. 1. Average relative error in potential, ϵV , for the different kernels as defined by Eq. (12). σ = 0.8 au, L = 16 au.


Fig. 2. Relative error in electrostatic energy, ϵE , for the different kernels as defined by Eq. (13). σ = 0.8 au, L = 16 au.

and B(r) is the spline basis function. It is easy to show that the integral in Eq. (9) depends only on the difference r_n − r_m, and so we can write:

G_{spline}(r_n) \equiv \int_{\Omega_B} \frac{B(r)}{|r - r_n|} \, dr    (11)

where Ω_B is the support of the spline function, B(r). The numerical calculation of the integral of Eq. (11) is performed only once prior to the simulation.
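A sketch of tabulating G_spline(r_n) for the simplest case of a linear ("hat") basis function, using a plain tensor-product Gauss–Legendre rule, is given below. Because the quadrature nodes never coincide with a grid point, the singular integrand causes no division by zero; however, this naive rule converges slowly for the self term, and the scheme of Khayat and Wilton [37] used in this work treats the singularity properly. The grid spacing and number of points are illustrative choices.

```python
import numpy as np

# Linear ("hat") basis: B(r) = B(x)B(y)B(z), with B(x) = max(0, 1 - |x|/h),
# supported on the cube [-h, h]^3.
h = 0.5                                            # grid spacing (au), illustrative
npts = 32                                          # Gauss-Legendre points per axis (even)
t, w = np.polynomial.legendre.leggauss(npts)
x, wx = h * t, h * w                               # map the rule from [-1, 1] to [-h, h]

X, Y, Z = np.meshgrid(x, x, x, indexing="ij")
W = wx[:, None, None] * wx[None, :, None] * wx[None, None, :]
B = (1 - np.abs(X) / h) * (1 - np.abs(Y) / h) * (1 - np.abs(Z) / h)

def G_spline(rn):
    """Quadrature approximation of Eq. (11): integral of B(r)/|r - r_n| over supp(B)."""
    dist = np.sqrt((X - rn[0])**2 + (Y - rn[1])**2 + (Z - rn[2])**2)
    return np.sum(W * B / dist)

# Tabulate once for displacements up to 3 grid points in each direction; the table gives
# the kernel values G_spline(r_n - r_m) used in the summation of Eq. (9).
table = {(i, j, k): G_spline(h * np.array([i, j, k]))
         for i in range(-3, 4) for j in range(-3, 4) for k in range(-3, 4)}
print("self term:", table[(0, 0, 0)], "  nearest neighbor:", table[(1, 0, 0)])
```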

3. Results

We test the different kernels' performance by using Gaussian charge densities with varied width, σ, relative to the grid step, h. A Gaussian charge distribution of the form ρ(r) = e^{−r²/2σ²}/(σ³(2π)^{3/2}) produces the potential V(r) = erf(r/(√2 σ))/r. This allows us to quantify the error for different methods, as was performed in a recent comparison of Poisson solver performance and accuracy [31]. In our analysis, we keep the Gaussian width, σ, fixed and small compared to the domain size to avoid artifacts. We therefore change the grid step h relative to σ and present the error vs. h/σ. Following Ref. [31], we define measures for the error in the following ways. First, the average error in potential is defined as:

\epsilon_V = \frac{\sum_n |V_a(r_n) - V_k(r_n)|}{\sum_n |V_a(r_n)|}    (12)

where V_a is the analytical potential and V_k is the kernel-based calculated potential. In addition, we define the relative error in electrostatic energy as:

\epsilon_E = \frac{|E_a - E_k|}{|E_a|}    (13)

where E_a = ½ h³ Σ_n V_a(r_n)ρ(r_n) and E_k = ½ h³ Σ_n V_k(r_n)ρ(r_n) are the analytic and numerical electrostatic energies, respectively. We have calculated E_a numerically in all of the analyses to follow. In the Supporting Information (SI), we show that the relative difference between the numerical evaluation of E_a and the analytical evaluation of the integral ½ ∫ V_a(r)ρ(r) dr is significantly smaller than all the kernel errors in the range of parameters that were checked.
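As an illustration of these error measures, the sketch below builds the Gaussian reference on a small grid, computes a kernel potential V_k (here, for brevity, by a direct sum with the naive kernel of Eq. (4); any kernel-based potential can be substituted), and evaluates ε_V and ε_E with E_a computed numerically, as in the text. NumPy and SciPy are assumed.

```python
import numpy as np
from scipy.special import erf
from scipy.spatial.distance import cdist

# Gaussian test charge and its analytic potential on a small grid with h/sigma = 0.5.
n, h, sigma = 13, 0.4, 0.8
ax = (np.arange(n) - n // 2) * h
pts = np.stack(np.meshgrid(ax, ax, ax, indexing="ij"), -1).reshape(-1, 3)
r = np.linalg.norm(pts, axis=1)
rho = np.exp(-r**2 / (2 * sigma**2)) / (sigma**3 * (2 * np.pi) ** 1.5)
V_a = np.where(r > 0, erf(r / (np.sqrt(2) * sigma)) / np.where(r > 0, r, 1.0),
               np.sqrt(2.0 / np.pi) / sigma)

# Kernel potential; here from the naive kernel of Eq. (4) via a direct O(N^2) sum.
D = cdist(pts, pts)
K = np.zeros_like(D)
np.divide(1.0, D, out=K, where=D > 0)
V_k = h**3 * (K @ rho)

eps_V = np.sum(np.abs(V_a - V_k)) / np.sum(np.abs(V_a))        # Eq. (12)
E_a = 0.5 * h**3 * np.dot(V_a, rho)                            # numerical E_a
E_k = 0.5 * h**3 * np.dot(V_k, rho)
eps_E = np.abs(E_a - E_k) / np.abs(E_a)                        # Eq. (13)
print(f"eps_V = {eps_V:.3e}   eps_E = {eps_E:.3e}")
```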





Fig. 3. Effect of domain size on the average error of the potential.

3.1. Different kernels accuracy comparison

We have calculated the kernel accuracies as a function of grid step size, h, while keeping the Gaussian width, σ = 0.8 au, and the side length of our cubical domain, L, fixed to L = 16 au. Fig. 1 shows the average error in the potential. It is evident from Fig. 1 that the spline representation method is the most accurate; however, it is also possible to get very good accuracy with the numerically optimized kernel. The CG kernel method and the CG solver (solving the differential equation) produce less accurate results, but are still much better than the Wilton kernel. The results are very similar when we examine the average relative error in the electrostatic energy, as shown in Fig. 2. Another effect that is interesting to check is that of domain size on the error. In Figs. 3 and 4, we show the effect of domain size on the potential and energy errors, respectively, for a Gaussian charge density with σ = 0.8 au and h/σ = 0.5 for the different kernels. One behavior that is obvious from Fig. 3 is that all kernels besides the CG solver and CG kernel improve in relative potential error as the domain gets larger. A possible reason for the different behavior of the CG iterative solver is its default convergence criterion, which is based on the average relative error. When this value is fixed, the change of domain has a much smaller effect since the solver exits at a fixed error level. This also affects the accuracy of the CG kernel. The effect of domain size on the energy error is almost negligible in all methods, consistent with the results of Garcia et al. [31].


Fig. 4. Effect of domain size on the relative error of the electrostatic energy.

Fig. 6. Error in potential as a function of multipole order — the dashed lines show the errors of the CG differential equation solver for orders 1 (red) and 2 (blue). Higher orders improve accuracy only slightly for a Gaussian charge density. The solid line shows the error for the CG kernel. (For interpretation of the references to color in this figure legend, the reader is referred to the web version of this article.)

3.1.1. Numerically optimized kernel error analysis

The parameter that mostly affects the performance of the numerically optimized kernel is the number of neighbors that are corrected. The accuracy improves as we take more neighbors, but there can be a computational cost. This computational cost is negligible when we perform FFT for the integration, but can be more pronounced in other methods such as FMM [34,18] or nonuniform auxiliary grids [21]. In Fig. 5, we show the average errors of the potential as a function of h/σ with a varied number of corrected neighbors — 0 (correcting just the self term), 1 (3 × 3 × 3, 27 pts), 2 (5 × 5 × 5, 125 pts), 3 (7 × 7 × 7, 343 pts). We also show the performance of the Wilton kernel for comparison. It is evident that already with a single neighbor (27 pts) a dramatic improvement is achieved. A larger number of neighbors will typically achieve better accuracy, although there can be exceptions due to numerical errors in the optimization process. See the SI for an example and discussion.

Fig. 5. Potential relative error as a function of h/σ for different number of neighbors in the numerically optimized method. σ = 0.8 au, L = 16 au.


3.1.2. CG kernel specific parameters

The CG method (solving the differential equation) and the CG kernel naturally give very similar results and are affected by the same parameters. To solve the Poisson equation, proper boundary conditions should be defined; this is usually done with the multipole expansion [35,36]. The accuracy of the multipole expansion depends on its order and on the shape of the electronic density. For spherical domains, a multipole order of around 10 is typically used in DFT calculations [36]. Unlike its effect on the CG differential solver, the effect of the multipole order on the CG kernel is very minor. This is because we use the CG solver once to calculate the potential of a delta function, and the delta function does not have moments higher than zero. The effect of multipole order on the CG solver and CG kernel potential error for a Gaussian charge density is shown in Fig. 6. The effect on the energy is shown in the SI. Another parameter that affects accuracy is the order of the finite difference expansion for the Laplacian operator [15]:

\frac{\partial^2 f(x)}{\partial x^2} \simeq \sum_{n=-N}^{N} \frac{c_n}{h^2} f(x + nh) + o(h^{2N}).    (14)
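The coefficients c_n in Eq. (14) can be generated by matching Taylor moments; for equispaced central stencils this reproduces the standard high-order weights (Ref. [15] gives a general algorithm). A small sketch, assuming NumPy:

```python
import numpy as np

def laplacian_coeffs(N):
    """Central FD coefficients c_n of Eq. (14): sum_n c_n n^k = k! * delta_{k,2}, k = 0..2N."""
    offsets = np.arange(-N, N + 1)
    A = np.vander(offsets.astype(float), 2 * N + 1, increasing=True).T  # A[k, i] = offsets[i]**k
    b = np.zeros(2 * N + 1)
    b[2] = 2.0                                                          # 2! keeps only the f''(x) term
    return offsets, np.linalg.solve(A, b)

for N in (1, 2, 3):
    offs, c = laplacian_coeffs(N)
    print(f"2N = {2 * N}:", dict(zip(offs.tolist(), np.round(c, 6).tolist())))
# N = 1 gives the familiar [1, -2, 1]; N = 2 gives [-1/12, 4/3, -5/2, 4/3, -1/12].
```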

The effect of the finite difference order is demonstrated in Fig. 7. Finally, it should be noted that an important parameter that affects the CG solver accuracy (and hence also the CG kernel accuracy) is the iterative solver convergence criterion, which is based on the relative error. As the CG solver is an iterative solver [12], there is a natural trade-off between accuracy and the number of required solver iterations. We have carried out this work with a constant solver relative error level. Depending on h/σ, this produces a given error relative to the analytical solution. With the CG solver, improving the accuracy comes with an increased computational cost. However, for the CG kernel, since we only run the CG solver once before the simulation, it is possible to improve the accuracy without much additional computational cost.


Fig. 7. Potential error as a function of FD order (2N). Solid lines show the error for the CG kernel, dashed lines show the error for CG finite difference solver.

Fig. 8. Average potential and energy errors for spline representation as a function of the number of integration points. Linear spline functions are shown in red, cubic spline functions are shown in blue. (For interpretation of the references to color in this figure legend, the reader is referred to the web version of this article.)

3.1.3. Spline basis specific parameters

Naturally, a higher order spline basis function representation can yield more accurate results; however, the computational cost will be higher. In our analysis, we have found that a cubic spline representation can already achieve a relatively high accuracy. Another practical question is the numerical integration of the Poisson integral for the spline basis functions. This integration is carried out once and then stored as a table. We have used a numerical integration scheme suggested by Khayat and Wilton [37] to integrate the singularity. An important parameter in this scheme is the number of Gauss–Legendre radial sampling points that are used for the integration [37]. The final error for a given h/σ is shown in Fig. 8 as a function of the spline order and number of sampling points.

3.2. Error propagation in Density Functional Theory calculations

The different methods were implemented and tested within the PARSEC DFT real-space package [13,14,38].


Fig. 9. Hartree electrostatic energy (in Ry) for C60 as a function of grid spacing for the different kernels.

We have tested the error in the highest occupied eigenvalue and the Hartree energy as a function of the kernel that was used. The tests were performed for C60 and C40H82. The C60 initial geometry was taken from force field optimized data [39]. The molecular geometries were relaxed with the Local Density Approximation (LDA), and LDA ground state calculations were performed with the different kernels for the relaxed structures. We have used norm conserving pseudopotentials [40] for the C and H atoms, with cutoff radii (in atomic units) of 1.6/1.6 for the C s/p orbitals and 1.39 for the H s orbital. We then solved the Kohn–Sham equations self-consistently with grid spacings of 0.2, 0.3, 0.4, 0.5 and 0.6 atomic units and checked the values of the Hartree energy and the Highest Occupied Molecular Orbital (HOMO) obtained with the different kernels for the two molecules. The domain size was fixed at 28 × 28 × 28 for C60 and 20 × 20 × 140 for C40H82, all in atomic units. The values for both molecules are shown in Table 1; the Hartree energy for C60 is shown in Fig. 9, and additional figures are given in the SI. It is evident that, apart from the Wilton method, all other methods agree well with each other at all grid spacings. The NOPT and spline methods give similar results and were also found in the previous sections to be the most accurate. The CG kernel and CG solver also give similar results to each other.

4. Summary and discussion

We have analyzed the error in calculations of the electrostatic potential on a uniform grid for several choices of kernel Green's functions. We have shown that by a correction of the near neighbors' interaction, it is possible to get accuracy that is higher than typical solutions of the high order finite difference equation with the CG method. This accuracy is also comparable to that of the spectral kernel method [31]. When we examine the error propagation into the DFT results, we can see that with typical grid steps and common tolerance settings, the CG solver and the derived CG kernel also have sufficient accuracy for the calculations. This is mostly because, with typically used pseudopotentials, other errors become more dominant when the grid spacing becomes larger. However, in hybrid-DFT and Hartree–Fock, most of the calculation time is spent on the evaluation of the Fock exchange [41]; this can motivate using the largest possible grid step for this calculation, and so the ability to reach better accuracy at larger grid steps can be very important. Both the numerically optimized method and the spline method are relatively easy to implement and can be used with any integration method for the Poisson integral. We have included in the SI full numerical values for the numerically optimized method with one and three neighbors. Those values can be used as is for the purpose of kernel near neighbors correction.


Table 1. Hartree energy (in Ry) and HOMO (in eV) for C60 and C40H82 as a function of grid spacing h (in au) for the different kernels.

C60 Hartree energy (Ry)

h     Wilton      CG solver   CG kernel   Spline      NOPT
0.2   7896.8168   7896.5905   7896.5969   7896.5848   7896.5846
0.3   7897.2627   7896.7778   7896.7794   7896.7670   7896.7665
0.4   7897.7619   7896.9328   7896.9302   7896.9305   7896.9296
0.5   7899.4439   7898.1917   7898.1863   7898.2105   7898.2091
0.6   7903.1310   7901.3520   7901.3437   7901.3899   7901.3878

C60 HOMO (eV)

h     Wilton     CG solver   CG kernel   Spline     NOPT
0.2   −5.9742    −5.9520     −5.9521     −5.9502    −5.9502
0.3   −6.0083    −5.9580     −5.9582     −5.9536    −5.9536
0.4   −6.0402    −5.9499     −5.9503     −5.9421    −5.9420
0.5   −6.1086    −5.9646     −5.9652     −5.9526    −5.9524
0.6   −6.1112    −5.9003     −5.9013     −5.8832    −5.8830

C40H82 Hartree energy (Ry)

h     Wilton      CG solver   CG kernel   Spline      NOPT
0.2   4248.1061   4247.8296   4247.8245   4247.8088   4247.8084
0.3   4248.4061   4247.7879   4247.7858   4247.7642   4247.7636
0.4   4248.9849   4247.9141   4247.9115   4247.8967   4247.8956
0.5   4251.1889   4249.5326   4249.5260   4249.5313   4249.5294
0.6   4253.5848   4251.2939   4251.2807   4251.3157   4251.3130

C40H82 HOMO (eV)

h     Wilton     CG solver   CG kernel   Spline     NOPT
0.2   −6.9443    −6.9155     −6.9151     −6.9127    −6.9127
0.3   −6.9840    −6.9182     −6.9182     −6.9130    −6.9129
0.4   −7.0358    −6.9187     −6.9189     −6.9103    −6.9101
0.5   −7.0053    −6.8254     −6.8259     −6.8131    −6.8129
0.6   −7.0150    −6.7564     −6.7588     −6.7390    −6.7387

Acknowledgment

A.N. acknowledges financial support from ISF grant 1722/13.

Appendix A. Supplementary data

Supplementary material related to this article can be found online at http://dx.doi.org/10.1016/j.cpc.2017.01.016.

References

[1] W. Koch, M. Holthausen, A Chemist's Guide to Density Functional Theory, Wiley-VCH, 2000.
[2] M. Marques, N. Maitra, F. Nogueira, E. Gross, A. Rubio, Fundamentals of Time-Dependent Density Functional Theory, in: Lecture Notes in Physics, Springer, Berlin, Heidelberg, 2012.
[3] A. Szabo, N. Ostlund, Dover Books on Chemistry, Dover Publications, 1989.
[4] L. Hedin, Phys. Rev. 139 (3A) (1965) A796.
[5] J. Heyd, G.E. Scuseria, M. Ernzerhof, J. Chem. Phys. 118 (18) (2003) 8207–8215.
[6] J. Heyd, G.E. Scuseria, J. Chem. Phys. 120 (16) (2004) 7274–7280.
[7] J. Heyd, G.E. Scuseria, J. Chem. Phys. 121 (3) (2004) 1187–1192.
[8] J. Toulouse, F. Colonna, A. Savin, Phys. Rev. A 70 (6) (2004) 062505.
[9] R. Baer, D. Neuhauser, Phys. Rev. Lett. 94 (4) (2005) 043002.
[10] E. Livshits, R. Baer, Phys. Chem. Chem. Phys. 9 (23) (2007) 2932–2941.
[11] L. Kronik, T. Stein, S. Refaely-Abramson, R. Baer, J. Chem. Theory Comput. 8 (5) (2012) 1515–1531.
[12] Y. Saad, Iterative Methods for Sparse Linear Systems, second ed., Society for Industrial and Applied Mathematics (SIAM), Philadelphia, PA, 2003. URL https://books.google.co.il/books?id=h9nwszYPblEC.
[13] J. Chelikowsky, N. Troullier, K. Wu, Y. Saad, Phys. Rev. B 50 (16) (1994) 11355–11364.
[14] J. Chelikowsky, N. Troullier, Y. Saad, Phys. Rev. Lett. 72 (8) (1994) 1240–1243.
[15] B. Fornberg, Math. Comp. 51 (184) (1988) 699–706.
[16] A. Cerioni, L. Genovese, A. Mirone, V.A. Sole, J. Chem. Phys. 137 (13) (2012) 134108. http://dx.doi.org/10.1063/1.4755349.
[17] N.D.M. Hine, J. Dziedzic, P.D. Haynes, C.-K. Skylaris, J. Chem. Phys. 135 (20) (2011) 204103. http://dx.doi.org/10.1063/1.3662863.
[18] L. Greengard, V. Rokhlin, J. Comput. Phys. 73 (2) (1987) 325–348. http://dx.doi.org/10.1016/0021-9991(87)90140-9.
[19] E.A. Toivanen, S.A. Losilla, D. Sundholm, Phys. Chem. Chem. Phys. 17 (47) (2015) 31480–31490.
[20] S. Li, R. Chang, A. Boag, V. Lomakin, IEEE Antennas Propag. Mag. 54 (5) (2012) 71–87.
[21] M. Zuzovski, A. Boag, A. Natan, Phys. Chem. Chem. Phys. 17 (47) (2015) 31550–31557.
[22] V. Khoromskaia, B.N. Khoromskij, Comput. Phys. Comm. 185 (12) (2014) 3162–3174.
[23] V. Khoromskaia, B.N. Khoromskij, Phys. Chem. Chem. Phys. 17 (47) (2015) 31491–31509.
[24] D. Sundholm, J. Chem. Phys. 122 (19) (2005) 194107.
[25] R.J. Harrison, G.I. Fann, T. Yanai, Z. Gan, G. Beylkin, J. Chem. Phys. 121 (23) (2004) 11587–11598.
[26] S. Losilla, D. Sundholm, J. Jusélius, J. Chem. Phys. 132 (2) (2010) 024102.
[27] T. Yanai, G.I. Fann, Z. Gan, R.J. Harrison, G. Beylkin, J. Chem. Phys. 121 (14) (2004) 6680–6688.
[28] A. Durdek, S.R. Jensen, J. Juselius, P. Wind, T. Flå, L. Frediani, Appl. Numer. Math. 92 (2015) 40–53.
[29] S.R. Jensen, T. Flå, D. Jonsson, R.S. Monstad, K. Ruud, L. Frediani, Phys. Chem. Chem. Phys. 18 (31) (2016) 21145–21161.
[30] J. Eastwood, D. Brownrigg, J. Comput. Phys. 32 (1) (1979) 24–38.
[31] P. García-Risueño, J. Alberdi-Rodriguez, M.J. Oliveira, X. Andrade, M. Pippig, J. Muguerza, A. Arruabarrena, A. Rubio, J. Comput. Chem. 35 (6) (2014) 427–444.
[32] D.R. Wilton, A. Glisson, D. Schaubert, O. Al-Bundak, C.M. Butler, IEEE Trans. Antennas and Propagation 32 (3) (1984) 276–281.
[33] A. Peterson, S. Ray, R. Mittra, IEEE Press Series on Electromagnetic Wave Theory, Wiley, 1998.
[34] V. Rokhlin, J. Comput. Phys. 60 (2) (1985) 187–207.
[35] J. Jackson, Classical Electrodynamics, Wiley, 1975.
[36] W.R. Burdick, Y. Saad, L. Kronik, I. Vasiliev, M. Jain, J.R. Chelikowsky, Comput. Phys. Comm. 156 (1) (2003) 22–42.
[37] M.A. Khayat, D.R. Wilton, IEEE Trans. Antennas and Propagation 53 (10) (2005) 3180–3190.
[38] L. Kronik, A. Makmal, M.L. Tiago, M.M.G. Alemany, M. Jain, X. Huang, Y. Saad, J.R. Chelikowsky, Phys. Status Solidi (b) 243 (5) (2006) 1063–1079. http://dx.doi.org/10.1002/pssb.200541463.
[39] Cn Fullerenes, David Tomanek and Nick Frederick, Michigan State University Computational Nanotechnology Lab. http://www.nanotube.msu.edu/fullerene/fullerene-isomers.html. (Accessed 1 June 2016).
[40] N. Troullier, J.L. Martins, Phys. Rev. B 43 (3) (1991) 1993–2006. http://dx.doi.org/10.1103/PhysRevB.43.1993.
[41] N.M. Boffi, M. Jain, A. Natan, J. Chem. Theory Comput. 12 (8) (2016) 3614–3622.