Computers & Geosciences 70 (2014) 147–153
Development of a SaaS application probe to the physical properties of the Earth's interior: An attempt at moving HPC to the cloud

Qian Huang*

School of Earth Sciences and Engineering, Nanjing University, 163 Xianlindadao Road, Nanjing 210046, China
Article history: Received 29 June 2013; received in revised form 4 June 2014; accepted 5 June 2014; available online 14 June 2014.

Abstract
Scientific computing often requires a massive number of computers for performing large-scale simulations, and computing in mineral physics is no exception. To investigate the physical properties of minerals at extreme conditions, computational mineral physics uses parallel computing technology to speed up performance, processing a computational task on multiple computing resources simultaneously and thereby greatly reducing computation time. Traditionally, parallel computing has been addressed with High Performance Computing (HPC) solutions and installed facilities such as clusters and supercomputers. Today, cloud computing is growing tremendously: Infrastructure as a Service (IaaS), the on-demand, pay-as-you-go model, provides a flexible and cost-effective means to access computing resources. In this paper, a feasibility report of HPC on a cloud infrastructure is presented. It is found that current cloud services at the IaaS layer still need to improve performance to be useful to research projects. On the other hand, Software as a Service (SaaS), another type of cloud computing, is introduced into an HPC system for computing in mineral physics, and an application based on it has been developed. This paper gives an overall description of that SaaS application. This contribution can promote cloud application development in computational mineral physics as well as cross-disciplinary studies.

© 2014 Elsevier Ltd. All rights reserved.
Keywords: High performance computing; Cloud computing; Software as a Service; Phonon calculations; Mineral physics
1. Introduction

We have been involved in research on computational mineral physics for nearly a decade (Zhang et al., 2007; Yin et al., 2008, 2012). Quantifying the physical properties of the minerals that comprise the Earth's lower mantle is important for interpreting seismic observations and for constraining geodynamic models. Our research uses first-principles calculations to determine the properties of lower-mantle minerals at the extreme temperatures and pressures of the deep Earth. These calculations are based on density functional theory (DFT), plane waves, and pseudopotentials (Hohenberg and Kohn, 1964; Kohn and Sham, 1965; Parr and Yang, 1989; Dreizler and Gross, 1990). With its predictive power, this first-principles approach is an ideal tool to complement experiments and to help analyze experimental data. The most famous example may be the MgSiO3 post-perovskite phase transition, which was predicted by first-principles calculations (Oganov and Ono, 2004) and then confirmed in an experiment (Iitaka et al., 2004). The phonon vibrational contribution to the Gibbs free energy is believed to be significant in first-principles computational studies.
* Corresponding author. Tel.: +86 25 89680700; fax: +86 25 83686016.
E-mail address: [email protected]
http://dx.doi.org/10.1016/j.cageo.2014.06.002
0098-3004/© 2014 Elsevier Ltd. All rights reserved.
In our research, phonon calculations from first principles usually rely on density functional perturbation theory (DFPT) (Baroni et al., 1987; Gonze, 1995; Baroni et al., 2001), in which the zero-temperature crystal structure is assumed to be dynamically stable. DFPT is an effective method for calculating phonon properties; in contrast to the direct DFT approach, it is not limited to crystal structures that are stable at 0 K. First-principles calculations are extremely time-consuming in general, and an accurate account of the vibrational effect requires the calculation of full phonon spectra, which has until now been an especially expensive task. Fortunately, the problem can be separated into a number of parallel tasks with little effort. An example of the computational effort needed to perform such a parallel simulation is presented below.

The pressure and temperature dependence of phase transitions is of scientific significance in mineral physics. As shown in Fig. 1, first-principles calculations yield five polymorphs of MgAl2O4: spinel (Sp), corundum (Cor), periclase (Per), spinel with the calcium-ferrite structure (CF), and spinel with the calcium-titanate structure (CT).

Fig. 1. P–T phase diagram of MgAl2O4 polymorphs. Key to phases: Sp – spinel-type; Cor – corundum; Per – periclase; CF – CaFe2O4-type; CT – CaTi2O4-type.

To obtain this phase diagram, the thermodynamic properties of MgAl2O4 must be calculated at finite temperature with the DFPT method. Phonon calculations must consequently be performed for at least 9 different volumes of each crystal; for each volume, phonon calculations are needed at 8 or more q-points in reciprocal space; and for each q-point, self-consistent field calculations must be performed
for the 3N irreducible representations (IRs), where N is the number of atoms in the primitive unit cell; 3N is 42 for Sp and CT, 84 for CF, 90 for Cor, and 6 for Per. Taking CF as an example, the wall time for calculating one IR in parallel is about 2 h on average on a server with 8 CPU cores (Intel Xeon 5560), so completing just one q-point takes about 7 days. With only a single 8-core server of this kind, completing the calculation for this one mineral would take about 504 days. The time spent on simulations must therefore be shortened greatly by exploiting tens, hundreds, or even thousands of CPU cores. As described above, phonon calculations can be split into independent subtasks at the q-point level, or further at the IR level: splitting at the q-point level and distributing the subtasks over 8 servers reduces the time for CF to about 63 days, and splitting at the IR level over 84 servers reduces it further to about 6 days (see the worked arithmetic below). Phonon calculation tasks at different q-points of reciprocal space, or at different IRs, can be executed on multiple processors of a computer cluster without any dependency or communication between the parallel tasks, which makes efficient and scalable massively parallel processing possible. It is important to note, however, that the calculation of a single IR is still time-consuming on one processor and in general needs 8 or more processors; within a q-point or IR, the problem is a distributed computing problem that requires communication between tasks, especially of intermediate results, and this fine-grained parallelism suffers from parallel slowdown. Conventionally, such phonon calculations are performed with dedicated parallel computing solutions, namely High Performance Computing (HPC). In our group GEOHPCLAB¹, an HPC cluster running 64-bit Linux, with 20 nodes of Intel Xeon 5560 processors, is used to perform first-principles calculations; it provides an efficient, flexible, and cost-effective environment for simulations. The computer clustering approach connects a number of readily available computing nodes via a fast local area network, e.g. Gigabit Ethernet. The activities of the computing nodes are orchestrated by "clustering middleware", the TORQUE resource manager², a software layer that sits atop the nodes and allows the user to treat the cluster as one cohesive computing unit.
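The cost figures quoted above follow from a simple product over the task hierarchy; as a worked check, using the CF numbers given in the text:

```latex
T_{q}         = 3N \times t_{\mathrm{IR}} = 84 \times 2\,\mathrm{h} = 168\,\mathrm{h} \approx 7~\text{days}
T_{\mathrm{total}} = n_{V} \, n_{q} \, T_{q} = 9 \times 8 \times 7~\text{days} = 504~\text{days}
T_{\text{8 servers}}  = 504/8  = 63~\text{days}, \qquad
T_{\text{84 servers}} = 504/84 = 6~\text{days}
```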
¹ GEOHPCLAB, http://geohpclab.nju.edu.cn.
² http://www.clusterresources.com/products/torque-resource-manager.php.
TORQUE is based on the original Portable Batch System (PBS) project and integrates with the Moab Workload Manager to improve overall utilization, scheduling, and administration of the infrastructure services. The GEOHPCLAB cluster is configured for computation-intensive scientific calculations and uses a high-availability approach.

In recent years, an increasing number of organizations – universities, research centers, and businesses – have begun to use cloud computing as an essential and promising way to optimize existing computing resources and to expand capacity, in the form of virtual machines offered by providers of Infrastructure as a Service (IaaS), in a more cost-effective manner. However, virtualization may incur significant performance penalties for demanding scientific computing workloads. Section 2 presents a feasibility study of HPC on a cloud infrastructure; it finds that running computation-intensive scientific programs in the current IaaS cloud framework is impractical, because the low performance makes moving HPC to an IaaS cloud unattractive. Instead, Software as a Service (SaaS), another type of cloud technique, is introduced into the HPC cluster. Section 3 gives an overall description of the SaaS application we are currently developing, and Section 4 concludes the paper.
2. HPC on a cloud infrastructure

There are many types of cloud computing services: Software as a Service (SaaS), Platform as a Service (PaaS), Infrastructure as a Service (IaaS), Storage as a Service (STaaS), etc. (Buyya et al., 2011). Cloud infrastructure services deliver multi-user virtual environments as services, providing multiple independent portals, processing, and storage spaces, and they allow any user to create a large number of compute instances fairly easily. Providers of IaaS offer virtual machines, which eliminate the need to maintain expensive computing hardware; they offer a scalable, low-cost computing system adapted to the needs of the client, who pays only for the resources used. Technically speaking, through virtualization, clouds promise to serve a large user base with different needs from the same shared set of physical resources. Clouds thus promise scientists an alternative to clusters, grids, and supercomputers. To obtain extra computing capacity, researchers now have two options: buy more blade servers to extend an existing HPC cluster, or rent virtual machines running personalized applications from commercial providers. The latter is more cost-effective and is rapidly emerging, and many scientific researchers are therefore trying to adopt the cloud architectural style. For example, Huang et al. (2010) deployed the GEOSS clearinghouse on Amazon EC2 to test the use of cloud computing for geoscience applications. However, many scientific applications have complex communication patterns and require the low-latency communication mechanisms and rich sets of communication constructs offered by message-passing systems such as the Message Passing Interface (MPI), and the underlying implementation of clouds is very different from that of traditional HPC clusters. It is thus important to evaluate the performance of HPC applications in today's cloud environments to understand the trade-offs inherent in migrating to the cloud. Several groups have reported studies of the applicability of cloud-based environments for scientific computing on Amazon EC2 (Hazelhurst, 2008; Deelman et al., 2008; Keahey et al., 2008; Keahey, 2009). Various groups have run both standard benchmark suites such as Linpack and NAS (Napper and Bientinesi,
2009; Evangelinos and Hill, 2008; Ostermann et al., 2008; Walker, 2008) and network performance tests (Wang and Ng, 2010). Jackson et al. (2010) presented a comprehensive evaluation comparing a conventional HPC platform to Amazon EC2, using real applications representative of the workload at a typical supercomputing center. Their results indicated that EC2 is six times slower than a typical mid-range Linux cluster and twenty times slower than a modern HPC system. Another evaluation showed that the interconnect of the EC2 cloud platform severely limits performance (Ostermann et al., 2010), and likewise concluded that cloud services need an order-of-magnitude performance improvement before they are useful to the scientific community.
3. Software as a service

As noted above, for most scientific applications the conventional HPC solution offers satisfactory quality relative to the cloud solution at the infrastructure layer. The cloud solution, however, has advantages at other layers, such as the SaaS layer, that can help improve an HPC cluster system. A significant drawback of the conventional HPC solution is its lack of a user-friendly graphical user interface (GUI) at the software layer: simulations and post-processing are generally carried out at the command line of a Linux/Unix system. Basic Linux/Unix commands are not easy to learn, however, and writing specific shell scripts is harder still. This makes it difficult for young researchers and undergraduate students in our group to get started, especially undergraduates with no Linux/Unix experience who fear the learning curve. Getting visual results more easily through a GUI could help them understand numerical simulation methods intuitively. Experienced researchers, on the other hand, find it inconvenient to analyze the huge amounts of data generated: a set of post-processing scripts is needed to compute the derived physical properties and to present the final results as figures and tables. Operating a fleet of post-processing scripts is also a potential source of errors, because the data-analysis scripts must be mutually consistent regarding the input data. In particular, checking whether results are useful is troublesome, since all output files must first be downloaded from the servers, whereas in fact only some subsets of the data are needed. It would be more efficient to take all the steps on the servers – calculating, creating figures and tables, analyzing data and verifying results – and then sift out the relevant files and save them to a local disk. For these reasons it is desirable to introduce Software as a Service (SaaS) into the conventional HPC system; that is the motivation for developing a SaaS model for our computer cluster. The resulting SaaS layer is described below.

Software as a Service (SaaS) is part of the nomenclature of cloud computing, along with Infrastructure as a Service (IaaS), Platform as a Service (PaaS), and so on (Buyya et al., 2011). It is a software delivery model in which software and its associated data are centrally hosted on the cloud, typically accessed by users via a thin client such as a web browser. This allows efficient utilization of software and hardware infrastructure by multiple users, without local software installation and maintenance. Bürger et al. (2012) applied the concept of SaaS and extended it to the integrated hydrologic simulation platform ParFlow via an intuitive web interface. SaaS-like web-based GUI applications existed before the concept of cloud computing was proposed, e.g. the EnginFrame grid portal produced by NICE Inc., which removes the complexity of installation, deployment, and maintenance and provides easy access over an intranet, extranet, or the Internet using Internet languages and protocols. This is also appealing for cluster users.

However, most large-scale simulation applications, whether commercial (e.g. ANSYS, FLUENT) or open source (e.g. VASP, BLAST), are traditional simulation software without a web-based GUI and are usually operated at the command line. There is little precedent for developing a SaaS application around large-scale scientific computing codes, and many technical difficulties had to be overcome to initiate the development project. The developed SaaS layer is divided into two sub-layers: a back-end layer (storage, analysis, computation) and a front-end layer (input file creation, task requests, visualization of data), described in detail in Sections 3.1 and 3.2 respectively.

3.1. The back-end layer: cloud application services
The 'Quantum ESPRESSO' package (Quantum-espresso, 2011) is used as our first-principles code. 'Quantum ESPRESSO' is an integrated suite of computer codes for electronic-structure calculations and materials modeling, based on density functional theory, plane waves, and pseudopotentials. The automated cloud service workflow starts from PWscf, the Plane-Wave Self-Consistent Field code within the ESPRESSO suite. PWscf is a parallelized application, usually submitted to supercomputing resources via a batch queuing system; an automation script for PWscf was written and integrated into the developed cloud application services (a sketch of such a submission step is given below). The next step of the workflow uses the PHonon package of the 'Quantum ESPRESSO' distribution, which implements density functional perturbation theory (Baroni et al., 1987, 2001; Gonze, 1995) to calculate second- and third-order derivatives of the energy with respect to atomic displacements and electric fields. These two programs, PWscf and PHonon, constitute the majority of our simulations. To render the simulation results, e.g. thermal properties, a number of post-processing programs were written. All input and output files are in text format (ASCII), which makes it easy to adapt the calculations to specific problems by replacing default input parameters and changing mineral physical constants. These programs are written mostly in C++, with some parts in C. In the back-end layer, all the required codes have been installed and tested in advance on our dedicated HPC cluster, where the calculations run as soon as the user submits tasks in the front-end layer.

3.2. The front-end layer: a web-based GUI

Corresponding to the back-end layer, the front-end layer consists of three major components: web-based PWgui, web-based PHgui, and the post-processing GUI. They are built with various Java web technologies, e.g. JavaBeans and Java code embedded in Java Server Pages files. JavaScript adds interactivity to the front end and generates a parameter file containing the full set of execution parameters.

3.2.1. Web-based PWgui and PHgui

Web-based PWgui and PHgui are very similar in functionality to the PWgui of the Quantum ESPRESSO package, but they additionally allow direct access to the codes via a web interface. They facilitate access to and use of the PWscf and PHonon codes, removing the need to use a Linux/Unix environment or to know the details of the job queuing and submission procedure. The user builds input files in a web browser; a PWscf input file is generated and moved to a back-end resource, usually the computer that will run PWscf.
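Once the input file reaches the back-end resource, an automation step like the one described in Section 3.1 can submit it to the TORQUE/PBS queue. A minimal C++ sketch of that idea follows; the function name, queue options, and file-naming scheme are hypothetical (the actual Fonon scripts are not published), while qsub, mpirun, and pw.x are the real TORQUE, MPI, and PWscf commands:

```cpp
#include <cstdlib>
#include <fstream>
#include <string>

// Write a TORQUE/PBS job script that runs PWscf (pw.x) on a generated
// input file, then submit it with qsub. All names are illustrative.
int submitPwscfJob(const std::string& inputFile, int cores) {
    const std::string script = inputFile + ".pbs";
    std::ofstream pbs(script);
    pbs << "#!/bin/bash\n"
        << "#PBS -N pwscf_" << inputFile << "\n"
        << "#PBS -l nodes=1:ppn=" << cores << "\n"   // one node, 'cores' processors
        << "cd $PBS_O_WORKDIR\n"
        << "mpirun -np " << cores << " pw.x < " << inputFile
        << " > " << inputFile << ".out\n";
    pbs.close();
    // qsub returns 0 on successful submission
    return std::system(("qsub " + script).c_str());
}
```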
Fig. 2. Snapshot of the web-based PHgui user interface.
PHgui retrieves the output data of PWscf and provides the parameter list for PHonon. Web-based PWgui and PHgui are still under development; a representative screenshot of PHgui is shown in Fig. 2.

3.2.2. Post-processing

Post-processing provides output-data analysis and visualization for the simulations. It consists of four parts: Equation of State (EOS) Fit (Section 3.2.3), Thermodynamics Properties (Section 3.2.4), Phase Diagram Plotting (Section 3.2.5), and Phonon Dispersion Curves Plotting (Section 3.2.6).

3.2.3. EOS fit

EOS Fit, which verifies the validity of phonon calculations, is a key step in post-processing, as it is used many times for verification in a workflow. In the back-end layer, a number of programs calculate the equation of state (EOS); the Murnaghan EOS (Murnaghan, 1944) and the Birch–Murnaghan EOS (Birch, 1947) are supported. The Murnaghan EOS is popular because of its simple functional form, but it does not accurately describe the compression of solids beyond about 10% and should not be used beyond that regime. The Birch–Murnaghan EOS is also one of the most commonly used formulations, especially in the treatment of experimental data in mineral physics. EOS parameters are
obtained by fitting volume–energy or volume–energy–temperature data, and they determine the relationship between the volume and the total energy of the target system. Some nonlinear-model routines from Numerical Recipes, 3rd ed. (Press et al., 2007) are used as part of the EOS Fit code. The web interface to EOS Fit is shown in Fig. 3; it manages the optional settings used to create an EOS Fit input file.

Fig. 3. Snapshot of the input form of the EOS Fit web user interface.

After the form with the volume–energy data set is submitted, an output file is generated. The output file records the fitted values of the four EOS parameters – the static energy E0, equilibrium volume V0, bulk modulus B0, and its pressure derivative B0′ – together with the pressure at every sample V–E point, recalculated from the four parameters, and the mean squared error of the fit.
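For reference, the pressure–volume forms of the two supported equations of state are standard results (not restated in the original text):

```latex
% Murnaghan (1944):
P(V) = \frac{B_0}{B_0'} \left[ \left( \frac{V_0}{V} \right)^{B_0'} - 1 \right]

% Third-order Birch--Murnaghan (Birch, 1947):
P(V) = \frac{3B_0}{2} \left[ \left( \frac{V_0}{V} \right)^{7/3}
     - \left( \frac{V_0}{V} \right)^{5/3} \right]
       \left\{ 1 + \frac{3}{4} \left( B_0' - 4 \right)
       \left[ \left( \frac{V_0}{V} \right)^{2/3} - 1 \right] \right\}
```

The energy form E(V) used to fit V–E data follows from P = −dE/dV, i.e. by integrating −P dV from V0 and adding E0.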
3.2.4. Thermodynamics properties

Calculating the thermodynamic properties of a mineral within the quasi-harmonic approximation is also a key step in post-processing, and a Thermodynamics Properties service has been developed for it. In the back-end layer, a number of programs calculate the thermodynamic properties; in the front-end layer, web-based graphing programs create plots of the data, and Java Server Pages (JSP) web interfaces connect to the back-end codes. The calculated thermodynamic properties include the cell volume, thermal expansion, isothermal bulk modulus, adiabatic
bulk modulus, isochoric specific heat, isobaric specific heat, Grüneisen parameter, entropy, Gibbs free energy, etc. Traditionally, calculating each thermodynamic quantity requires a series of interactive operations in a Linux/Unix environment. The raw simulation output does not by itself show whether it is useful, so the user must download the data, then graph and analyze it. Although many data analysis and graphing tools exist, calculating thermodynamic properties, downloading data, and creating plots remains a tedious process; moreover, a slight adjustment to the temperature range forces the user to repeat the entire process. The Thermodynamics Properties service lets the user complete the entire process automatically. A representative screenshot of its GUI is shown in Fig. 7: a drop-down list for selecting the EOS type, three temperature parameters, 8 checkboxes for specifying the thermodynamic quantities to calculate, and a "Calculations" button.

Fig. 7. Snapshot of the thermodynamics properties web user interface.

After pressing the button, the resulting web page is generated as shown in Fig. 4. The user can skim low-quality figures on the website, which require little data transfer to the browser and serve to examine the calculation results as images.

Fig. 4. Snapshot of the figure viewer of the thermodynamics properties web user interface.

Meanwhile, higher-quality figures (Fig. 5) and the data package are available for download. Fig. 5 shows isochoric specific heat vs. temperature for andalusite at different pressures. If all 8 checkboxes are checked, the figures and data packages for the other 7 thermodynamic quantities are obtained simultaneously.

Fig. 5. Calculated isochoric specific heat vs. temperature curves for andalusite at different pressures.
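All of the quantities listed above derive from the quasi-harmonic Helmholtz free energy. For reference, the standard expressions (assumed here, since the paper does not restate them) are:

```latex
F(V,T) = E(V) + \frac{1}{2} \sum_{\mathbf{q},j} \hbar \omega_{\mathbf{q}j}(V)
       + k_B T \sum_{\mathbf{q},j}
         \ln\!\left[ 1 - e^{-\hbar \omega_{\mathbf{q}j}(V)/k_B T} \right]

G(P,T) = \min_{V} \left[ F(V,T) + PV \right]
```

Here the phonon frequencies ω come from the DFPT calculations; derived quantities follow from derivatives of F, e.g. S = −(∂F/∂T)_V and C_V = −T(∂²F/∂T²)_V.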
3.2.5. Phase diagram plotting

Phase diagrams have great scientific significance for research on mineral assemblages by first-principles computation (Gonze et al., 2002), as they capture the pressure and temperature effects on phase transitions. The phase relations of different phases at finite temperature are determined by the relative values of their Gibbs free energies: the Gibbs free energy is the chemical potential that is minimized when a system reaches equilibrium at constant pressure and temperature, so the most stable phase at a given pressure and temperature is the one with the lowest Gibbs free energy. The calculation of the Gibbs free energy follows the Thermodynamics Properties step, so Phase Diagram Plotting becomes available after that process completes. A two-dimensional phase diagram is obtained by sampling the P–T space on a dense mesh with 0.01 GPa and 1 K spacing; direct comparison of the Gibbs free energies of two phases establishes their thermodynamic phase boundary. Fig. 1 shows four different phase areas in the T–P diagram with a color scale. The web view is a lower-quality image, requiring little data transfer to the browser and intended only for inspecting results; a high-quality figure is available for download.
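The boundary search just described is a direct mesh scan. A minimal C++ sketch of the idea follows; GibbsFn and mapStablePhases are hypothetical names, not taken from the Fonon code:

```cpp
#include <cstddef>
#include <functional>
#include <vector>

// Gibbs free energy G(P,T) of one phase, e.g. interpolated from
// tabulated quasi-harmonic results (hypothetical signature).
using GibbsFn = std::function<double(double P_GPa, double T_K)>;

// Scan a dense P-T mesh (0.01 GPa x 1 K, as in the text) and record,
// at each node, which phase has the lowest Gibbs free energy.
std::vector<int> mapStablePhases(const std::vector<GibbsFn>& phases,
                                 double Pmin, double Pmax,
                                 double Tmin, double Tmax,
                                 double dP = 0.01, double dT = 1.0) {
    std::vector<int> stable;  // flattened (P,T) grid of phase indices
    for (double P = Pmin; P <= Pmax; P += dP) {
        for (double T = Tmin; T <= Tmax; T += dT) {
            int best = 0;
            for (std::size_t i = 1; i < phases.size(); ++i)
                if (phases[i](P, T) < phases[best](P, T))
                    best = static_cast<int>(i);
            stable.push_back(best);  // boundary lies where this index changes
        }
    }
    return stable;
}
```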
3.2.6. Phonon dispersion curves plotting

The "Plot Phonon Dispersion Relations" code was written by Eyvaz Isaev³ and is also included in the latest Quantum ESPRESSO distribution. The code is integrated into the back-end layer, and its output figures can be displayed via the web in the front-end layer (Fig. 6).

Fig. 6. Phonon dispersion curves of FeAFM at zero pressure.

4. Conclusions and future work
This paper presents an attempt to move a conventional HPC system to the current state-of-the-art cloud system. An application in the SaaS layer of the cloud system, called Fonon⁴, has been developed on the HPC platform.
Fonon allows users to access scientific software and hardware resources on demand, from anywhere in the world, with a web browser, omitting the complicated, multi-stage job submissions of the original HPC environment and removing the need to use a Unix/Linux environment. It gives students of different backgrounds an overview of numerical simulation methods for studying mineral physics, and it makes it easier to analyze simulation output and to manage useful results. The web-based PWgui and PHgui services are available only to group users, owing to the permissions required to submit computational tasks on the HPC cluster; the post-processing service, which requires little computing resource, is also available to non-group users. The web-based PWgui and PHgui services are currently being improved in robustness and usability. In future work, more post-processing items will be developed and integrated into the current workflow, such as calculations of elastic properties.
³ http://www.quantum-espresso.org/user_guide/node4.html.
⁴ Fonon, http://219.219.113.45:8080/.
Acknowledgment

This paper is part of research financially supported by the National Natural Science Foundation of China (Contract nos. 40472029 and 40973003).

Appendix A. Supporting information

Supporting information associated with this article can be found in the online version at http://dx.doi.org/10.1016/j.cageo.2014.06.002.

References

Baroni, S., Giannozzi, P., Testa, A., 1987. Green's-function approach to linear response in solids. Phys. Rev. Lett. 58, 1861.
Baroni, S., de Gironcoli, S., Dal Corso, A., Giannozzi, P., 2001. Phonons and related properties of extended systems from density functional perturbation theory. Rev. Mod. Phys. 73, 515.
Birch, F., 1947. Finite elastic strain of cubic crystals. Phys. Rev. 71, 809–824.
Bürger, C.M., Kollet, S., Schumacher, J., Bösel, D., 2012. Introduction of a web service for cloud computing with the integrated hydrologic simulation platform ParFlow. Comput. Geosci. 48, 334–336.
Buyya, R., Broberg, J., Goscinski, A.M., 2011. Cloud Computing: Principles and Paradigms. Wiley Press, New York, pp. 1–44.
Deelman, E., Singh, G., Livny, M., Berriman, B., Good, J., 2008. The cost of doing science on the cloud: the Montage example. In: Proceedings of the 2008 ACM/IEEE Conference on Supercomputing. IEEE Press, pp. 1–12.
Dreizler, R.M., Gross, E.K.U., 1990. Density Functional Theory. Springer, Berlin.
Evangelinos, C., Hill, C., 2008. Cloud computing for parallel scientific HPC applications: feasibility of running coupled atmosphere–ocean climate models on Amazon's EC2. Ratio 2 (2.40), 2–34.
Gonze, X., 1995. Adiabatic density-functional perturbation theory. Phys. Rev. A 52, 1096.
Gonze, X., et al., 2002. First-principles computation of material properties: the ABINIT software project. Comput. Mater. Sci. 25, 478–492. 〈http://www.abinit.org〉 (accessed 30.11.11).
Hazelhurst, S., 2008. Scientific computing using virtual high-performance computing: a case study using the Amazon elastic computing cloud. In: Proceedings of the 2008 Annual Research Conference of the South African Institute of Computer Scientists and Information Technologists on IT Research in Developing Countries: Riding the Wave of Technology. ACM, pp. 94–103.
Hohenberg, P., Kohn, W., 1964. Inhomogeneous electron gas. Phys. Rev. 136, B864–B871.
Huang, Q., Yang, C., Nebert, D., Liu, K., Wu, H., 2010. Cloud computing for geosciences: deployment of GEOSS clearinghouse on Amazon's EC2. In: Proceedings of the ACM SIGSPATIAL International Workshop on High Performance and Distributed Geographic Information Systems. San Jose, California.
Iitaka, T., Hirose, K., Kawamura, K., Murakami, M., 2004. The elasticity of the MgSiO3 post-perovskite phase in the Earth's lowermost mantle. Nature 430, 442–445.
Jackson, K.R., Ramakrishnan, L., Muriki, K., et al., 2010. Performance analysis of high performance computing applications on the Amazon Web Services cloud. In: Proceedings of the 2010 IEEE Second International Conference on Cloud Computing Technology and Science (CloudCom), pp. 159–168.
Keahey, K., Figueiredo, R., Fortes, J., Freeman, T., Tsugawa, M., 2008. Science clouds: early experiences in cloud computing for scientific application. In: Cloud Computing and its Applications, Chicago, IL, USA, pp. 201–206.
Keahey, K., 2009. Cloud computing for science. In: Proceedings of the 21st International Conference on Scientific and Statistical Database Management. Springer-Verlag, p. 478.
Kohn, W., Sham, L.J., 1965. Self-consistent equations including exchange and correlation effects. Phys. Rev. 140, A1133–A1138.
Murnaghan, F.D., 1944. The compressibility of media under extreme pressures. Proc. Natl. Acad. Sci. 30, 244–247.
Napper, J., Bientinesi, P., 2009. Can cloud computing reach the Top500? In: Proceedings of the Combined Workshops on UnConventional High Performance Computing Workshop plus Memory Access Workshop. ACM, pp. 17–20.
Oganov, A.R., Ono, S., 2004. Theoretical and experimental evidence for a post-perovskite phase of MgSiO3 in Earth's D″ layer. Nature 430, 445–448.
Ostermann, S., Iosup, A., Yigitbasi, N., Prodan, R., Fahringer, T., Epema, D., 2008. An Early Performance Analysis of Cloud Computing Services for Scientific Computing. Delft University of Technology, Tech. Rep.
Ostermann, S., Iosup, A., Yigitbasi, N., Prodan, R., Fahringer, T., Epema, D., 2010. A performance analysis of EC2 cloud computing services for scientific computing. In: Cloudcomp 2009, LNICST 34, pp. 115–131.
Parr, R.G., Yang, W., 1989. Density Functional Theory of Atoms and Molecules. Oxford University Press, New York.
Press, W.H., Teukolsky, S.A., Vetterling, W.T., Flannery, B.P., 2007. Numerical Recipes, 3rd ed. Cambridge University Press, Cambridge, England.
Quantum-espresso, 2011. An Integrated Suite of Computer Codes for Electronic-Structure Calculations and Materials Modeling at the Nanoscale. 〈http://www.pwscf.org〉 (accessed 30.11.11).
Walker, E., 2008. Benchmarking Amazon EC2 for high-performance scientific computing. USENIX Mag. 33 (5).
Wang, G., Ng, T.E., 2010. The impact of virtualization on network performance of Amazon EC2 data center. In: Proceedings of IEEE INFOCOM.
Yin, K., Zhou, H.Q., Huang, Q., Sun, Y.C., Xu, S.J., Lu, X.C., 2012. First-principles study of high-pressure elasticity of CF- and CT-structure MgAl2O4. Geophys. Res. Lett. 39, L02307.
Yin, K., Zhou, H.Q., Xu, S.J., 2008. Phonon dispersion relations and thermodynamic properties of magnesium aluminium spinel: a first principle study. J. Nanjing Univ. (Nat. Sci.) 06, 574 (in Chinese).
Zhang, W.X., Zhou, H.Q., Wang, R.C., Wang, D., Yin, K., 2007. Molecular dynamic simulation of MgSiO3 perovskite: the effects of sizes on elasticity properties and equations of state. Acta Petrol. Mineral. 26 (1), 5 (in Chinese).