REVIEWS

Using Molecular Simulations to Probe Pharmaceutical Materials

YONG CUI

Small Molecule Pharmaceutical Sciences, Genentech, Inc., San Francisco, California 94080

Received 17 August 2010; revised 4 October 2010; accepted 5 October 2010. Published online 24 November 2010 in Wiley Online Library (wileyonlinelibrary.com). DOI 10.1002/jps.22392

ABSTRACT: Evolved through the past 60 years, molecular simulations have become one of the most important analytical tools in many theoretical and applied scientific disciplines. This paper provides a brief introduction to molecular simulations as a means of addressing important scientific questions of interest to pharmaceutical scientists. The focus is on fundamental questions such as: (1) Why do simulations work? (2) How to simulate? (3) How to make the results of simulations "real?" (4) Where can simulations be applied? To demonstrate the fundamental rationale of molecular simulations, three perspectives, thermodynamics, statistical mechanics, and general statistics, are compared. The concept of stochasticity is introduced, followed by a brief account of the two major methods used in simulations, molecular dynamics and Monte Carlo simulations. A brief discussion is then given on force fields to indicate their central importance. To facilitate the discussion about possible applications to pharmaceutical systems, the characteristics of molecular simulations are first compared with those of laboratory experiments. Case studies are then introduced to demonstrate the strengths of simulations. Some frequently encountered questions are also presented and discussed. ©2010 Wiley-Liss, Inc. and the American Pharmacists Association J Pharm Sci 100:2000–2019, 2011

Keywords: molecular modeling; molecular dynamics; Monte Carlo; stochasticity; force field; thermodynamics/statistical mechanics; materials science
INTRODUCTION

Starting from the first publication of Monte Carlo (MC) simulations (the Metropolis algorithm) in 1949,1 molecular simulations have evolved through the past 60 years and have become one of the most important analytical tools in many theoretical and applied scientific disciplines. Founded upon the theoretical principles of statistical mechanics, simulation approaches have thrived, as advanced by the increasing power of modern computers, to tackle difficult problems in various areas such as physics, physical chemistry, material sciences, and structural biology. In a survey conducted in 2000,2 the Metropolis algorithm1 was named among the 10 algorithms that have had the greatest influence on the development and practice of science and engineering in the 20th century. In the context of theoretical studies, molecular simulations serve as first-line tests for new theories and hypotheses, whereas in the context of experimental studies, they function as so-called "computer experiments."
A variety of phase transitions and phenomena have been successfully reproduced in simulations, including ones that are of direct interest to the pharmaceutical sciences such as melting,3 crystallization,4 condensation,5 liquid–liquid phase separation,6 liquid crystal formation,7 and cyclodextrin inclusion.8 Through these efforts, a great many microscopic details that were once difficult to access have been revealed. Currently, molecular simulation approaches are an indispensable tool in many scientific fields. In this context, this paper attempts to provide a brief introduction to molecular simulations. The focus is more on fundamentals than on applications. We will try to discuss the following fundamental questions: (1) Why do simulations work? (2) How are simulations performed? (3) How are the results of simulations made physically "real?" (4) Where can simulations be applied? The first question is approached by a brief review of the theoretical foundation upon which simulations are constructed; the second, by an account of the two major simulation approaches, molecular dynamics (MD) and MC simulations. The third question is addressed by invoking some quantum mechanical concepts. Finally, some examples, both from the literature and from our own studies, are discussed
to demonstrate the applicability and power of this technique. It should be stressed that this presentation is an introduction and a perspective rather than a comprehensive review. Given the significance of molecular simulations on many scientific fronts and the vast amount of literature, it is impossible to do justice to all aspects of this topic in a paper of this size. The author elected to focus on addressing the abovementioned fundamental questions, while leaving detailed methodology to be obtained from more general references. Furthermore, with statistical mechanics as their foundation, molecular simulations have traditionally been presented in a statistical–mechanical paradigm. Although rigorous and convenient in many regards, this tradition usually demands familiarity with statistical–mechanical concepts and language. To bypass this demand, the author seeks an alternative approach in this work. Taking advantage of the basic statistical concepts familiar to most scientific disciplines, we try to demonstrate why simulations work. This approach, though more reader friendly, necessarily compromises mathematical rigor to some extent. For the sake of minimizing the burden on the reader, the author believes that this is an appropriate compromise. We hope that, for the reader interested in pursuing this topic further, this can be compensated for by the references provided in the text.
WHY DO MOLECULAR SIMULATIONS WORK?

Why Do Systems Seek to Attain Equilibrium?

Physical states of pharmaceutical systems have long been recognized to influence the physicochemical properties and clinical performance of drug products. Hence, understanding and control of such physical states are critical steps in ensuring the quality of pharmaceutical products. To place this in a broader perspective, although certain material properties are of particular interest to pharmaceutical scientists, such as the solubility and compressibility of a drug substance, they are only a portion of the picture. Material systems exhibit an almost incredible diversity in our observations. Materials, for example, can be in different phases such as gases, liquids, crystals, glasses, and liquid crystals (mesophases). Each object may have a color, an odor, a density, an electric conductivity, a hardness, a compressibility, a brittleness, a shear modulus, a viscosity, a solubility, a saturated vapor pressure, and so on.9 Thus, it is of great interest to study material properties comprehensively and to find the common laws underlying this diversity. Phenomenologically, when given sufficient time, all systems can reach certain steady states under prescribed thermodynamic conditions (e.g., temperature,
pressure, and composition), wherein their macroscopic (thermodynamic) properties, such as the ones listed above, are no longer variable with time. This final destiny, under a given set of thermodynamic conditions, is termed the equilibrium state. The journey of a system from out-of-equilibrium states to the equilibrium state is the equilibration process. A simple example would be to drop some sucrose crystals into a cup of water. The sucrose–water system sets off from a two-phase state (out of equilibrium), undergoes dissolution of the sucrose crystals (equilibration), and finally reaches a solution state (equilibrium) wherein the system properties no longer change over time. As in this example, most material systems encountered in the pharmaceutical field are either in equilibrium or in the equilibration process. To name just a few, processes reaching an equilibrium that are well known among pharmaceutical scientists include drug solubility and solution properties, crystalline states, cocrystallization (hydrates, solvates, and cocrystals), hygroscopicity, mesophases, and liquid crystals. Metastable polymorphs, amorphous solids, emulsions, and liposomes, on the contrary, are in the equilibration process, meaning that they are metastable states and will eventually evolve to equilibrium states. From this incomplete list, it is evident that equilibrium states and equilibration processes are of great concern in the pharmaceutical sciences, and a fundamental understanding of them is of ultimate importance to pharmaceutical scientists. Given this information, a fundamental question arises naturally: Why do all systems equilibrate? In the case of the sucrose–water system, for instance, why do the crystals have to dissolve once dropped into water? This question will be considered, in the context of molecular simulations, from the three perspectives given below. These perspectives provide the basis for the successful application of molecular simulations.
Thermodynamic Perspective

The Second Law of Thermodynamics deals with this question. It points out that the direction of a process or reaction is determined by the entropy of an isolated system, which always increases in a spontaneous process. Consequently, every system equilibrates to a steady state, wherein the system entropy is maximized under the given set of thermodynamic conditions, and thereafter the system properties stay invariable over time. This law is the foundation of physical pharmacy, so it is not new to pharmaceutical scientists. For the purpose of differentiating it from the other perspectives, we will only make a few comments without getting into the details. Historically, the Second Law of Thermodynamics has achieved tremendous success in many scientific disciplines, in general, and in the pharmaceutical sciences, in particular.
Formulated initially by Clausius in 1854 for studying the cycles of heat engines,10 this law was later proven to be of great importance to a wider range of systems by providing a convenient yet fundamental direction by which material states and processes can be accounted for. A remarkable advantage of this law is that it takes just a couple of parameters, for example, temperature, pressure, and concentration, to specify the macroscopic states of complicated systems with billions and billions of particles (molecules).10 For instance, in a one-component system of fixed size, once any two of the three variables, namely, temperature, pressure, and volume, are specified, the equilibrium state of the system is fixed. Furthermore, these variables relate directly to experimental observations, and therefore are relatively easy to interpret, to measure, and to control. Even the most abstract quantity, the entropy, can be accessed indirectly through the relation between reversible heat and temperature, both of which are measurable through experimental techniques. This feature reinforces the wide application of this law. The application of the Second Law of Thermodynamics is so prevalent in chemistry and the pharmaceutical sciences that the conveniences it offers are sometimes taken for granted. Despite many of its advantages, thermodynamics also has some limitations. Being a generalization of experimental observations at macroscopic scales, thermodynamics cannot supply a detailed picture of the microscopic mechanisms that give rise to the collective (macroscopic) properties of a system. As a result, even though it describes the system phenomenology well, it does not explain at the molecular level why one system behaves in a specific manner. For instance, thermodynamics uses the concept of free energy, or chemical potential, to explain why a compound presents a specific solubility in water under a given set of conditions. It is not able, however, to supply a microscopic and quantitative reason why one compound has a solubility of, say, 10 mg/mL, whereas another compound has a solubility of only 1 mg/mL in the same solvent. Another example is associated with phase transitions. Although the Second Law of Thermodynamics explains phase transition phenomena, the fundamental molecular-level details underlying these transitions are absent and cannot be derived directly from thermodynamic principles. This limitation seriously restricts thermodynamics from many mechanistic and predictive tasks. In fact, thermodynamics does not provide a mechanistic proof for its own laws. The Second Law of Thermodynamics, for example, is taken as a basic postulate rather than a rigorously proven theorem in thermodynamics (see "Statistical–Mechanical Perspective"). The concept of entropy also has not been given a clear physical meaning in thermodynamic terms. As we shall see below, these questions can be addressed by statistical mechanics.
Statistical–Mechanical Perspective

Statistical mechanics has its origin rooted in the effort to understand observed system properties in terms of the microscopic states of their constituent particles.11 To do so, statistical mechanics has to, as a first step, define the microscopic states of a system, which include the momentum, spatial position, and, if applicable, the rotational and conformational states of each particle (either atom or molecule) in the system. This effort, in contrast to thermodynamics, entails a formidably large number of variables as the number of particles in the system approaches the large-system limit, say, close to Avogadro's number. Statistical mechanics then tries to relate these particle-level variables to the macroscopic properties of the whole system. Evidently, the large number of microscopic variables makes a statistical presentation essential. Fortunately, one underlying postulate is in play, that is, all system properties are either statistical sums (for extensive properties) or averages (for intensive properties) of the relevant particle properties across the whole system. For instance, the internal energy of the system is the sum of the energies of all particles in the system, whereas the temperature reflects the average kinetic energy of all particles. Without getting into more detail, we note here that in statistical mechanics, all thermodynamic variables can be related to the microscopic states of particles (Ref. 10, chapter 16). We now turn our attention to how statistical mechanics addresses questions concerning equilibrium. The central principle of statistical mechanics is the Boltzmann Distribution Law, which states that in a large system at equilibrium, the probability, $p_i$, of a particle exploring a given energy state i, with energy $E_i$, is proportional to the Boltzmann factor $e^{-E_i/kT}$:

$$p_i = K e^{-E_i/kT} \qquad (1)$$
where T is the temperature, k is the Boltzmann constant, and K is a normalizing factor chosen so that the probabilities, $p_i$, of all possible microscopic states of the system sum to one. This law indicates that the particle energy states in an equilibrium state follow a particular distribution. When such a distribution has not yet been established, the system is out of equilibrium and is in the process of shifting to this distribution via equilibration. As such, the Boltzmann Distribution Law points out a direction toward which the system will evolve, similar to what the Second Law of Thermodynamics does. To understand the relation between the Boltzmann Distribution Law and equilibria, it should be noted that, while
an equilibrium state presents time-independent thermodynamic (macroscopic) properties, the microscopic states of the particles in the system are inherently time dependent and heterogeneous. Specifically, the microscopic states of particles vary with their position, momentum, and energy. Nevertheless, despite the fact that individual particle states change constantly with time, the distribution of particles over the various energy states follows the Boltzmann distribution and is time independent. Consequently, the macroscopic properties of the system, the statistical sums and averages over all particles, are time independent. As such, the Boltzmann Distribution Law harmonizes time-independent thermodynamic properties and time-dependent particle states, thereby bridging the gap between the microscopic and macroscopic characteristics of a large system. Nonetheless, one question remains. The Boltzmann Law indicates that more particles are found at states with lower energies, and fewer particles at higher energies, when systems are at equilibrium. But why do particles tend to sample lower energy states? Particles are not aware of high or low energy; how could they show this intrinsic preference? To explain this, we need to invoke the Equal a priori Probability Postulate, known also as the "Fundamental Postulate," in statistical mechanics.12 This postulate states that for a given set of thermodynamic conditions, for example, a given total energy, the system at equilibrium is equally likely to be in any of its accessible microstates satisfying the macroscopic conditions of the system.12 In other words, systems have no preference for sampling any particular microstates, as long as these satisfy the macroscopic constraints, for example, a given total energy. Here, the concept of the "microstate" should be clarified. Take, as an example, a simple system containing five particles of the same type. Assume that the total system energy is set as E = 5, and that each particle can occupy any of the energy levels $E_i = 0, 1, 2, \ldots, 5$, where i denotes the ith particle. To satisfy the total system energy E = 5, there are a number of ways, counted by $\Omega$, to assign energy to the five particles, such as (5, 0, 0, 0, 0), (0, 5, 0, 0, 0), (1, 1, 1, 1, 1), (1, 2, 0, 1, 1), and so on. Each of these arrangements is called a "microstate." According to the Equal a priori Probability Postulate, each microstate occurs with the same probability, $1/\Omega$. Considering only the first particle, the only microstate in which Particle 1 samples the highest energy state, $E_1 = 5$, is (5, 0, 0, 0, 0), whereas the microstates in which Particle 1 samples a lower energy state, for example, $E_1 = 1$, are numerous, including (1, 1, 1, 1, 1), (1, 2, 0, 1, 1), (1, 0, 2, 1, 1), (1, 0, 0, 3, 1), and so on. Because all five particles are identical, this applies to all of them. The result is that there are many more microstates, and accordingly a higher probability, for particles to sample the lower energy states.13 Thus,
as a system, the particles exhibit a preference for lower energy states, and the probability distribution over particle energy follows the Boltzmann Distribution Law. Note that the highest probability occurs at $E_i = 1$ instead of at $E_i = 0$ because the total system energy has to amount to 5. The analysis above indicates a critical and fundamental point: Although the Boltzmann Law points out a macroscopic direction that a system or a process must follow, and thus explains the Second Law of Thermodynamics, the Boltzmann Law itself is simply a result of the statistical behavior of the many particles in the system rather than of any specific physicochemical properties of the particles. This is a very important insight because it reveals the root cause of equilibrium. There must be a large number of particles in a system, so that the particles collectively show the Boltzmann distribution, and as a result, the system as a whole gravitates toward a particular direction (equilibration). This statistical behavior will be further discussed in the next section. One more comment can be made about the concept of entropy. The Second Law of Thermodynamics introduces the concept of entropy without endowing it with a clear physical meaning. This was accomplished by the celebrated expression $S = k\ln\Omega$ derived by Boltzmann.13 Here, $\Omega$ is exactly the quantity defined in the example above, counting the number of microstates that a system may sample. It is often interpreted as "the degree of disorder." More discussion on the relation between the Second Law of Thermodynamics and the Boltzmann Distribution Law can be found in chapter 6 of Ref. 14.
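The counting argument can be made concrete in a few lines of code. The sketch below (an illustrative enumeration added here for clarity, not part of the original presentation) lists every way of distributing the fixed total energy E = 5 over the five particles and counts how many of these equally probable microstates place Particle 1 at the highest energy versus at a low energy.

```python
# Enumerate the microstates of the toy system above: five identical particles,
# integer energy levels 0-5, total energy fixed at E = 5. Under the Equal a
# priori Probability Postulate every such microstate is equally probable, so
# counting microstates directly gives the relative probabilities.
from itertools import product
from math import log

levels, n_particles, e_total = range(6), 5, 5
microstates = [s for s in product(levels, repeat=n_particles) if sum(s) == e_total]

omega = len(microstates)                                # Omega in S = k ln(Omega)
n_highest = sum(1 for s in microstates if s[0] == 5)    # Particle 1 at E1 = 5
n_low = sum(1 for s in microstates if s[0] == 1)        # Particle 1 at E1 = 1

print("Omega =", omega, "  S/k = ln(Omega) =", round(log(omega), 3))
print("microstates with E1 = 5:", n_highest)            # -> 1, i.e., (5,0,0,0,0) only
print("microstates with E1 = 1:", n_low)                # -> 35, far more likely
```

The single microstate with $E_1 = 5$ against 35 microstates with $E_1 = 1$ is precisely the asymmetry that, for macroscopic particle numbers, becomes the Boltzmann distribution.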
Statistical mechanics relates microscopic particle states, such as velocity and position, to macroscopic system properties. This approach has achieved great success in offering theoretical and mechanistic explanations of many physical phenomena, for example, a mechanistic explanation for thermodynamics. However, applications of this approach to specific material systems have encountered great difficulty because microscopic particle states cannot be accessed easily through experimental tools. Molecular simulations, on the other hand, can carry out this task: generating microstates in computerized model systems, from which the macroscopic system properties can be extracted. Of course, the underlying assumption for this to work is that the computer-based model systems must also equilibrate, just as real material systems do, as demonstrated above. To further demonstrate this characteristic, we turn to the next section.

Statistical Perspective

The discussion in the last section points to the fact that the Boltzmann distribution, in fact, represents the statistical behavior of a many-body system. To advance this idea, we consider a basic statistical law: the
Central Limit Theorem, which states that the mean of a sufficiently large number of independent random variables, each with finite mean and variance, will be approximately normally (Gaussian) distributed.15 This statistical law is familiar to every scientific discipline. In fact, almost all of our observations, measurements, surveys, and trials follow this law. We now consider the applicability of the Central Limit Theorem to material systems. The Central Limit Theorem says that when a system contains a large number of independent variables, the population will gravitate toward a normal distribution, which, once attained, remains stable.16 Imagine a material system containing a large number of molecules, each moving independently with translational, orientational, and rotational freedom. This certainly is a large number of independent variables. The motion states of the particles in this system, therefore, should gravitate toward a normal distribution according to the Central Limit Theorem. This inference clearly shows that a many-body system has a direction along which the population will evolve. Once this particular distribution of particle motions has been reached, the system is stabilized, meaning that the distribution, and thereby the averaged particle properties, no longer change over time. This is exactly what a macroscopic equilibrium state is. Therefore, without getting into detailed mathematical derivations, the logical connections between the Central Limit Theorem and the Boltzmann Distribution Law, and thereafter the Second Law of Thermodynamics, are apparent: the Central Limit Theorem is the foundation of the Boltzmann Law, which in turn is the foundation of the Second Law of Thermodynamics. The latter, the most familiar one to us, is just a macroscopic presentation of the former statistical laws. In the last section, the link between the Second Law of Thermodynamics and the Boltzmann Distribution Law was discussed qualitatively and a reference was provided (chapter 6 of Ref. 14). For the connection between the Central Limit Theorem and the Boltzmann Distribution Law, we note here that they are physically equivalent.17 The difference is that the Boltzmann Law describes the particle distribution as a function of particle energy, $E_i$, whereas a normal distribution applies to particle motion such as velocities or momenta. Because the kinetic energy is proportional to the square of the velocity, that is, $E_k = \frac{1}{2}mv^2$, substituting this into Eq. (1) (which applies when a particle's energy is simply kinetic energy, as in gases) leads to a normal distribution of particle velocities (a general form of the normal distribution is $p = ae^{-bx^2}$, where p is the probability and a and b are constants). In fact, the normal distribution of particle velocities was discovered by Maxwell in 1860, before the Boltzmann Law was proposed.13 This distribution was later shown to be equivalent to the Boltzmann distribution in gases, and is now known as the Maxwell–Boltzmann distribution.
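This equivalence is easy to verify numerically. The short sketch below (an illustration in arbitrary reduced units, added here for concreteness) normalizes the Boltzmann factor $e^{-\frac{1}{2}mv^2/kT}$ over a grid of velocities and compares it with a Gaussian of variance kT/m; the two curves coincide to machine precision.

```python
# Verify that the Boltzmann factor for a purely kinetic energy, exp(-m v^2 / 2kT),
# is a normal (Gaussian) distribution in the velocity v. Reduced, arbitrary units.
import numpy as np

m, kT = 1.5, 0.8
v = np.linspace(-6.0, 6.0, 2001)

boltzmann = np.exp(-0.5 * m * v**2 / kT)
boltzmann /= np.trapz(boltzmann, v)        # normalize to a probability density

sigma2 = kT / m                            # variance of the equivalent Gaussian
gauss = np.exp(-(v**2) / (2.0 * sigma2)) / np.sqrt(2.0 * np.pi * sigma2)

print("max |difference|:", np.abs(boltzmann - gauss).max())   # ~1e-16
```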
Interestingly, in the field of statistics, Maxwell's finding in 1860 is considered to be one of the first demonstrations of the Central Limit Theorem (the Herschel–Maxwell derivation16). This fact itself probably already signifies the inherent statistical nature of the equilibration behavior of many-body systems. In fact, the Boltzmann Law can be derived strictly through a statistical approach,18 in a way similar to the microstate-counting example given in the last section on the Equal a priori Probability Postulate. Meanwhile, the physicochemical nature of the system does not even need to be considered. The above-mentioned statistical perspective demonstrates that the equilibration of material systems is nothing mysterious or strange. It is simply a manifestation in material systems of the Central Limit Theorem, a general statistical law that governs all multivariable systems. This insight greatly facilitates our understanding of the equilibration process, and stimulates the desire to model it. Indeed, because the applicability of the Central Limit Theorem demands no more than the presence of multiple independent variables, it seems logical and reasonable to reproduce equilibria or equilibrations with artificial models, as long as the models contain multiple degrees of freedom for particle motions. Historically, this idea was explored before the advent of computer simulations. For example, so-called "mechanical simulations" were conducted to study atomic liquids.19 The models in these studies were constructed by using a large number of plastic or steel balls to represent atoms, and the results were found in some aspects to be quite realistic. Nevertheless, the mechanical models had limited applicability, and studying them was very tedious and laborious.

How Can System Equilibration Be Simulated on Computers?

So far, it has been shown that many-body statistics govern the equilibration of material systems. We now consider the scope of validity of these statistics. The Central Limit Theorem is known to apply to a wide range of systems, including physical, biological, social, and economic systems. Yet, all these cases are real systems. Is the theorem also valid for an "imaginary" many-particle system constructed on a computer? This question is central to the validity of computer simulations. If computer-based systems do not obey the Central Limit Theorem, they are not going to equilibrate. Without spontaneous equilibration, the computer-based models are completely subject to arbitrary manipulations and various computer algorithms, and, therefore, the value of such simulations is necessarily doubtful.
To answer this question, we may first try to build such a system. Suppose we construct a container box of a certain volume on a computer and place a few tens of particles in this box. We then arbitrarily assign a position and a velocity to each particle, for example, positioning the particles on a crystal lattice with half of them moving toward the left and the other half toward the right (see Fig. 1a). The particle speed distribution in this case is plotted in Figure 1b. We then let the particles move following the Newtonian laws, with the computer now at work calculating the trajectories of the particles. When two particles collide with each other, or collide with the container wall, the new directions and velocities caused by the collision are calculated, again following the Newtonian laws of motion. This system presents multiple independent motional degrees of freedom, and, hence, there seems to be no reason why the Central Limit Theorem would not apply. To show this, we turn back to the computer-run system. Without actually calculating the detailed particle trajectories, it is conceivable that, after a while, the particles will
exchange energy and momentum via collisions, and the system will gradually become disordered (Fig. 1c). If a detailed calculation of the particle trajectories is conducted, a normal distribution of particle speeds will gradually appear, as shown in Figure 1d,20 signifying that an equilibrium state has been reached. In other words, the computer-run system equilibrates under the same law as any macroscopic equilibration, for example, sucrose crystals and water equilibrating to a solution. Although the computerized model does not show any perceivable macroscopic indications of equilibration that are familiar to us, such as crystal dissolution or temperature variations, the physical nature of the process is the same as that of any macroscopic equilibration. To further demonstrate this, suppose we arbitrarily change the initial state of the system to, for example, the one in Figure 2a. The initial speed distribution is plotted in Figure 2b. In this case, all particle positions are changed from Figure 1a, and some particles are assigned zero or higher-than-average velocities. The average particle speed, however,
Figure 1. Equilibration of a multiparticle system constructed on a computer. (a) Initial state, (b) particle speed distribution of the initial state, (c) final state, and (d) particle speed distribution of the final state.
Figure 2. Equilibration of a multiparticle system with a different initial state. (a) Initial state, (b) particle speed distribution of the initial state.
remains unchanged (this means that the temperature of the system is unchanged). We then restart the run. Again, it is easy to imagine, without more extensive calculation, that after a while the system will become disordered again. When the particle distribution is plotted against speed, the same normal distribution will appear, identical to that in Figure 1d.20 This means that the direction of the process and the final state are independent of the initial state and the equilibration path, a fundamental characteristic of equilibria as portrayed in thermodynamics. This evidence shows convincingly that models so constructed on computers will also equilibrate, a fact that sets the stage for simulating the equilibration process via computerized models.
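A computer experiment of this kind can be written in a few dozen lines. The sketch below is illustrative only: it uses soft Lennard–Jones particles in a periodic box rather than hard spheres bouncing off walls, and all parameters are arbitrary reduced units. It starts the particles on a lattice with half moving left and half moving right, as in Figure 1a, integrates Newton's equations with the velocity-Verlet scheme, and ends with the particle speeds scattered around a Maxwell-like (normal) distribution.

```python
# Minimal molecular dynamics "computer experiment": N Lennard-Jones particles in a
# 2D periodic box, started on a lattice with half moving left and half moving right.
# Velocity-Verlet integration; reduced units (epsilon = sigma = mass = 1).
import numpy as np

N, L, dt, steps = 64, 10.0, 0.002, 5000

side = int(np.sqrt(N))
x = (np.array([(i, j) for i in range(side) for j in range(side)], float) + 0.5) * (L / side)
v = np.zeros((N, 2))
v[: N // 2, 0], v[N // 2 :, 0] = -1.0, 1.0           # ordered initial motion (cf. Fig. 1a)

def forces(x):
    """Pairwise Lennard-Jones forces with the minimum-image convention."""
    f = np.zeros_like(x)
    for i in range(N - 1):
        d = x[i + 1 :] - x[i]
        d -= L * np.round(d / L)                      # minimum image
        r2 = np.sum(d * d, axis=1)
        inv6 = 1.0 / r2**3
        fmag = (48.0 * inv6 * inv6 - 24.0 * inv6) / r2
        fij = fmag[:, None] * d                       # force on the j-particles
        f[i] -= fij.sum(axis=0)
        f[i + 1 :] += fij
    return f

f = forces(x)
for _ in range(steps):                                # velocity-Verlet loop
    v += 0.5 * dt * f
    x = (x + dt * v) % L                              # periodic wrap
    f = forces(x)
    v += 0.5 * dt * f

speeds = np.linalg.norm(v, axis=1)
print("initial speeds were all 1.0; final speeds are now spread out:")
print("mean =", round(speeds.mean(), 3), " std =", round(speeds.std(), 3))
```

Histogramming the final speeds reproduces the qualitative change from Figure 1b to Figure 1d, and restarting from a different initial assignment with the same average speed leads to the same final distribution.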
Stochastic Versus Deterministic Processes

The sections above discussed the scientific rationale for computer simulations. One philosophical question remains: how can a deterministic tool such as a computer follow and reproduce a stochastic process such as an equilibration? Here, "deterministic" refers to a process wherein the states of a system are completely dependent on their former states. For example, the Newtonian laws of motion describe a particle trajectory in which the position and velocity of the particle at any given time are determined by those at an earlier time point. The computer algorithms employed in simulations carry out these Newtonian motion calculations. Hence, they are deterministic in nature. "Stochastic," on the contrary, refers to a process wherein the later states of a system are independent of the earlier states. For example, as exemplified by the systems shown in Figures 1 and 2, no matter how the initial states (i.e., the particle positions and velocities) are assigned, the final state is always the same. Evidently, therefore, equilibrations are stochastic processes. The question now is how to reconcile the conflict between deterministic computer algorithms and stochastic equilibration. To answer this question, we consider another example. Suppose the box shown in Figure 1 contains only one particle. We initiate the particle motion by arbitrarily assigning a position and a velocity to the particle. The trajectory of this particle is completely calculable and predictable, and thereby deterministic, as shown in Figure 3a. Now, we add a second particle to the container and initiate the motions again. Although the particle trajectories are still calculable and predictable, the prediction is made with some difficulty because it requires solving two equations of motion analytically. The difficulty arises from occasional collisions between the two particles, which
Figure 3. The effect of multiple degrees of freedom of motion in creating system stochasticity. (a) System containing only one particle, (b) system containing two particles, and (c) system containing multiple particles.
may dramatically change the subsequent trajectories (Fig. 3b). If more particles are added to the system (Fig. 3c), the trajectories are still calculable, but less and less predictable owing to the growing frequency of particle collisions. When there is a sufficiently large number of particles with motional freedom, the particle positions and velocities at a given time depend on their earlier positions and velocities only within a short period of time (i.e., the correlation time), beyond which the dependence deteriorates. In other words, the system will "forget" where it was a short while ago and evolve toward a disordered state, which is the normally distributed equilibrium. This example shows clearly the power of multiple random variables, from which the stochastic equilibration arises. Even deterministic computer algorithms do not compromise the stochasticity of a many-body system. This is exactly why computer simulations can work: Although computer-based models are heavily charged with subjective manipulations and deterministic computer algorithms, the final equilibrium state, and the gravitation of the system toward this endpoint, are beyond their control. Rather, they are dictated by statistical laws. The factors determining the final equilibria are just the thermodynamic conditions and the interactions between particles (in the case shown, the interaction is a hard-sphere collision). The former are defined based upon the needs of the study, and the latter represent the physicochemical nature of the specific system being studied (see more in "The Role of Particle Interaction Functions"). Furthermore, this example also demonstrates that although equilibria are calculable, and therefore can be simulated, they are not predictable. That is, the particle trajectories over time, from which the macroscopic properties can be extracted, cannot be predicted or obtained by analytically solving the Newtonian equations of motion. The only way to acquire this information is to calculate, step by step, the particle positions and velocities and trace them into trajectories. This approach, known also as the "numerical method" for solving many-body equations of motion, is in fact the primary task for computers in simulations. The physics, or statistics, of equilibration processes actually has nothing to do with computers. It should also be noted that, because of this unpredictability, the results of simulations are often unexpected, and this is another reason for conducting simulations. Sometimes, when the result of a simulation is consistent with our expectation or speculation, one might think that the simulation was not worthwhile. The fact is that one may speculate about potential results based upon current theoretical understanding, but without confirmation by a relevant simulation, or by some sort of experiment if feasible, the result is indeed unpredictable and will always remain merely a speculation, even if a reasonable one. We will revisit
this topic in "Where Can Molecular Simulations Be Applied?" More discussion on the topic of this section can be found in chapter 1 of Ref. 20.
HOW TO SIMULATE?

Molecular Dynamics Simulations

On the basis of the design of the methods, molecular simulations can be categorized into two classes: the molecular dynamics (MD) and Monte Carlo (MC) approaches. The example given in "How Can System Equilibration Be Simulated on Computers?" and Figure 1 provides a brief description of an MD simulation. Essentially, on the basis of the system under study, we first need to identify the components (molecules and their concentrations) that will be built into the model system. Then, we need to define the interaction functions that exist between these component molecules. These interaction functions, also known as "force fields," express the potential energy or interaction force as a function of interatomic distance. Some well-known examples include the Lennard–Jones function (p 427 of Ref. 10), which is often used to approximate the van der Waals force, and Coulomb's law, which applies to electrostatic interactions. Further discussion of force fields will be provided in the next section. For now, let us assume that an appropriate force field is available for the system to be studied. The next step is to construct on the computer a model system containing the desired number of components under the relevant thermodynamic conditions, typically density, pressure, and temperature. The initial positions of the molecules are assigned in a way close to that of the system under study. For example, if a crystal is involved, the molecules should be placed on the relevant crystal lattice. For a solution, the solute molecules are typically dispersed homogeneously among the solvent molecules. The initial velocities can be randomly assigned, with the average kinetic energy corresponding to the study temperature. Here, "randomly" may mean that the velocities are randomly assigned to particles, but made to follow a normal distribution. Although this is not necessary, it makes the initial state closer to equilibrium, which, in turn, often reduces the equilibration time. After the simulation starts, thermodynamic parameters such as temperature, pressure, and density may change or fluctuate. This is normal and is no different from a macroscopic equilibration process in a real material system. However, if an isothermal–isobaric condition is intended for the simulation, temperature and pressure controllers are needed. Similar to a real laboratory sample whose temperature and pressure are held constant by a controlled environment, computerized models can be subjected to various temperature and pressure controllers (computer algorithms).
Furthermore, an important technique used is the design of the boundary conditions. Compared with any laboratory sample, which contains billions and billions of molecules, computer models are very small in size, typically containing only hundreds or thousands of molecules. Hence, the fraction of molecules located on the system surfaces (or interfaces) is considerably larger in these small models. To minimize this artifact, the model system is often defined as periodic, with its boundary placed next to a secondary system identical to the primary one. When one molecule moves out of one side of the primary system (molecules are not bounced back by a boundary so designed), another molecule of the same type is forced to enter the system with the same velocity from the opposite side. This design is referred to as "periodic boundary conditions." Although this design reduces surface effects, the intrusion of a molecule into the system in this way is at least not completely physical, and sometimes artifacts can be observed. One way to alleviate this impact is to increase the system size. Thus, a large system size is always preferred in simulations. Of course, this preference demands higher computational power and cost, and therefore a balanced selection of system size should be kept in mind. A detailed introduction to MD simulation methods can be found in chapters 2 and 3 of Ref. 20; computer algorithm examples are analyzed in detail in Ref. 21. With the introduction above, we can see that an MD simulation is designed to mimic exactly what happens in a real material system, provided their initial states are made identical. It not only provides information on the final equilibrium state but also tracks the equilibration process, which can be valuable for mechanistic elucidation of the process. Thus, when constructing the model, it is desirable to make its initial state as close as possible to that of the real material system under study.
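Two of the ingredients above, the periodic boundary treatment and a temperature controller, can be sketched in a few lines. The helpers below are illustrative only (reduced units, unit masses, kB = 1); the velocity-rescaling "thermostat" shown is the crudest possible choice, whereas production codes typically use more elaborate schemes (e.g., Berendsen or Nosé–Hoover type algorithms) built on the same idea of steering the kinetic temperature toward a set point.

```python
# Illustrative helpers for two ingredients of an MD run: periodic boundary
# conditions (with the minimum-image convention) and a crude temperature
# controller that rescales velocities toward a target temperature.
# Reduced units: kB = 1, all masses = 1, 2D system.
import numpy as np

def wrap_positions(x, box):
    """Map coordinates back into the primary box [0, box): a particle leaving one
    side re-enters from the opposite side with the same velocity."""
    return x % box

def minimum_image(disp, box):
    """Shortest displacement vector between two particles in a periodic box."""
    return disp - box * np.round(disp / box)

def rescale_velocities(v, target_kT):
    """Crude 'thermostat': scale all velocities so that the instantaneous kinetic
    temperature (in 2D, <v^2>/2 per particle) matches target_kT exactly."""
    kinetic_kT = np.mean(np.sum(v * v, axis=1)) / 2.0
    return v * np.sqrt(target_kT / kinetic_kT)

# Example usage with arbitrary data
rng = np.random.default_rng(1)
box = 10.0
x = rng.normal(5.0, 6.0, size=(100, 2))        # some particles lie outside the box
v = rng.normal(0.0, 1.3, size=(100, 2))

x = wrap_positions(x, box)
v = rescale_velocities(v, target_kT=0.7)
print("kinetic temperature after rescaling:", np.mean(np.sum(v * v, axis=1)) / 2.0)  # -> 0.7
```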
Monte Carlo Simulations

Monte Carlo simulations take a different approach from MD simulations, and are considered logically more challenging despite the fact that the MC method historically appeared before the MD method. Recall that in "Statistical–Mechanical Perspective" we introduced the concept of the microstates that a system may sample. At equilibrium, the Boltzmann distribution applies not only to particles but also to microstates because, for example, a microstate with a higher energy must arise from the particles in this microstate sampling higher energy levels. If particles follow the Boltzmann distribution, the microstates should do so as well. By grouping the microstates based on their energy levels, the probability, $p_\nu$, for a microstate group $\nu$ with energy $E_\nu$ can be calculated via the Boltzmann Law. Any observable macroscopic property of the system, $G_{\mathrm{obs}}$, can be calculated through the probability-weighted summation of the same property over all microstates (p 57 of Ref. 11):

$$G_{\mathrm{obs}} = \sum_{\nu} p_{\nu} G_{\nu} \qquad (2)$$
where $G_\nu$ is the property G of the microstate group $\nu$. This formula allows the macroscopic properties of the equilibrium state to be extracted from the microstates, as long as all microstates of a system can somehow be captured. However, it is generally difficult to capture all the microstates that a system may sample. Recall that at the beginning of "Statistical–Mechanical Perspective," it was pointed out that the number of independent degrees of freedom of motion for a sizable system is formidably large. Even for a small model system, it is difficult to guarantee that all possible arrangements (microstates) of particles can be captured in a given operation. This makes it difficult to carry out calculations using Eq. (2). The way to overcome this difficulty is to let the system sample a representative set of microstates, whose probability distribution approaches the distribution of the complete ensemble of all microstates. Take the model system in Figure 1 as an example. Starting from the initial state in Figure 1a, we can randomly pick a particle and let it move randomly a small distance from its original position. This generates a new microstate, whose energy can be calculated on the basis of the force field assigned. We then compare this energy to that of the previous state. If the new arrangement produces a lower energy, the computer will accept this move (because the new microstate presents a higher probability according to the Boltzmann Law) and repeat the operation on another randomly selected particle. Otherwise, the computer will accept the move only with a probability calculated from the Boltzmann Law (the Boltzmann probability is calculated on the basis of the energy difference between the two states). In this way, the system will sample lower-energy microstates more frequently and those with higher energy less frequently. The frequency of the sampling follows the Boltzmann distribution. Because the initial states are often randomly assigned, they are likely out of equilibrium and will present higher energies. Thus, at the initial stages of the MC simulation, the random moves often generate lower-energy microstates, which are accepted immediately. Consequently, the lattice structure in Figure 1a will dissolve quickly, signifying that the system is moving toward equilibrium. After a certain number of steps, the microstates sampled by the system show the features of a steady state, suggesting that the system has arrived at equilibrium. Further sampling at the steady state should generate a set of microstates representative of the equilibrium. The macroscopic properties at equilibrium can then be calculated via Eq. (2).
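The acceptance rule just described is the Metropolis criterion, and a bare-bones displacement-move cycle can be sketched as follows (illustrative only, in reduced units with kB = 1; the Lennard–Jones pair energy here simply stands in for whatever force field has been assigned):

```python
# Bare-bones Metropolis Monte Carlo with single-particle displacement moves.
# Reduced units (kB = 1); a Lennard-Jones pair energy stands in for the force field.
import numpy as np

rng = np.random.default_rng(0)
N, box, kT, max_disp, sweeps = 36, 8.0, 1.0, 0.2, 200

side = int(np.ceil(np.sqrt(N)))
grid = np.array([(i, j) for i in range(side) for j in range(side)], float)[:N]
x = (grid + 0.5) * (box / side)                      # start from a lattice (cf. Fig. 1a)

def energy_of(i, pos):
    """Interaction energy of particle i with all others (minimum image, LJ)."""
    d = np.delete(pos, i, axis=0) - pos[i]
    d -= box * np.round(d / box)
    r2 = np.sum(d * d, axis=1)
    inv6 = 1.0 / r2**3
    return float(np.sum(4.0 * (inv6 * inv6 - inv6)))

accepted = 0
for _ in range(sweeps):
    for _ in range(N):
        i = rng.integers(N)                           # random particle
        trial = x.copy()
        trial[i] = (x[i] + rng.uniform(-max_disp, max_disp, 2)) % box
        dE = energy_of(i, trial) - energy_of(i, x)
        # Metropolis criterion: accept downhill moves outright, uphill moves with
        # the Boltzmann probability exp(-dE/kT).
        if dE <= 0.0 or rng.random() < np.exp(-dE / kT):
            x, accepted = trial, accepted + 1

print("acceptance ratio:", accepted / (sweeps * N))
```

In the early sweeps most trial moves lower the energy and are accepted; as the system approaches its equilibrium ensemble the acceptance ratio settles, and equilibrium averages can then be accumulated as in Eq. (2).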
In a sense, the MC simulation is similar to a card game, wherein a new card is compared with the old one, and the next move is determined on the basis of which card has the smaller number and how big the difference is. Note that the computer algorithms are always designed to comply with the Boltzmann Law. Hence, once the initial state is assigned, the simulation process is no longer under the control of the simulator or the computer. This is similar to the design of MD simulations. More detailed methodology concerning MC simulations can be found in Ref. 22. In contrast to the MD approach, the MC simulation probes only the equilibrium state. Although it does have an equilibration stage, this process is not physical in that the equilibration path generated by accepting a series of qualified random moves is not what a real material system undergoes from the same initial state. In other words, it is an artificial path that arrives at the same equilibrium state. Thus, MC simulations are often used to study equilibrium problems rather than equilibration processes, whereas MD simulations can be applied on both fronts, as discussed earlier. Because of this advantage, MD is generally favored over the MC method (p 321, Ref. 22). Nonetheless, this drawback of the MC method can be taken advantage of for more advanced simulation needs, as will be shown in the next section. It has been demonstrated that, despite the significant differences in their designs, MD and MC simulations generate remarkably consistent equilibrium properties (pp 139–140, Ref. 20). This fact indicates that there must be a certain fundamental rule in play, which, obviously, is the Boltzmann Distribution Law. Thus, from a fundamental perspective, the MD and MC algorithms are no more than Boltzmann sampling schemes, irrespective of how they are designed. This is why we say that simulation schemes are independent of subjective manipulations and computer algorithms, assuming that the algorithms are implemented correctly. In a sense, molecular simulation schemes are strictly defined testing protocols (which may be called a "Boltzmann test") to which different particle interaction functions (force fields) can be subjected. The latter (i.e., the force fields), on the contrary, are to some extent prone to subjective manipulation. We will discuss force fields further in "The Role of Particle Interaction Functions."

Advanced Simulation Methods

To strengthen the capability and improve the efficiency of the conventional MD and MC methods, a variety of advanced simulation approaches have been proposed over the years. The core algorithms of these approaches, however, are all based upon the MD and MC schemes. Here, we will only mention two categories of methods that are designed to deal with the two major drawbacks of the conventional simulation approaches.
The first drawback is that entropy and free energy are not directly accessible by conventional MD and MC methods. These properties are related to the total number of microstates (that a system can sample at equilibrium) rather than to simple averages of any of the properties of the microstates. Thus, sampling just a representative ensemble of microstates, as is done in conventional simulations, cannot directly yield these values. This drawback indeed brings about many inconveniences because the free energy is often the quantity desired in many chemical and pharmaceutical problems. Note, however, that this drawback is not unique to simulations. In experimental studies, the same problem occurs: entropy or free energy also cannot be measured directly by experiments (p 169, Ref. 22). Instead, experiments measure a derivative of the free energy such as pressure, internal energy, or enthalpy. To solve this problem experimentally, the free energy difference between the state under study and a selected reference state is often evaluated by integrating a measurable derivative of the free energy, as given above. To do so, it is necessary to designate a reference state and to design a reversible path between the reference and the state under study. For example, to measure the free energy difference between two polymorphs under isobaric conditions, one polymorph at a certain temperature T is held as the reference. The reversible path could be that of heating the second polymorph from T to its melting point → melting → cooling (or heating) to the melting point of the reference polymorph → crystallization to the reference polymorph → cooling to T. The free energy difference can be obtained by appropriately integrating the enthalpy (heat capacity) along this path. Similarly, this approach can be adopted in computer simulations. The difference is that simulations offer more flexibility in the selection of reference states and integration paths. For example, to evaluate the free energy change arising from the formation of a certain hydrogen bond (HB), the state devoid of this bond can be designated as the reference state. One feasible path could be that the HB is gradually (reversibly) "turned on" from zero strength (the reference state) to its full strength (the state under study). In practice, the hydrogen-bonding strength is divided into multiple levels, the system energies at equilibrium are measured at each level via simulations, and these are then integrated appropriately to yield the free energy difference. Obviously, this reversible path is not physical and cannot be realized through experiments (i.e., HBs of varying strengths for the same molecules cannot be attained experimentally). In simulations, however, this is readily achievable, and the physical principle remains valid. This approach is known as "thermodynamic integration." A detailed introduction to this method and its variations can be found in chapter 7 of Ref. 22.
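The bookkeeping of such a calculation is easy to illustrate with a toy problem whose answer is known analytically. In the sketch below (not a molecular system; a single harmonic "bond" whose stiffness is switched reversibly from k0 to k1), the coupling parameter λ plays the role of the gradually varied interaction strength, and the free energy difference is recovered by integrating the average of ∂U/∂λ over λ. In a real study each average would come from an MD or MC run at fixed λ; here the equilibrium ensemble at each λ is an exact Gaussian, so it is sampled directly.

```python
# Toy thermodynamic integration: U(x; lam) = 0.5 * k(lam) * x^2 with
# k(lam) = (1 - lam) * k0 + lam * k1, so dU/dlam = 0.5 * (k1 - k0) * x^2.
# Integrating <dU/dlam> over lam from 0 to 1 gives the free energy difference,
# which for this toy equals 0.5 * kT * ln(k1 / k0).
import numpy as np

rng = np.random.default_rng(2)
kT, k0, k1, n_samples = 1.0, 1.0, 4.0, 200_000
lambdas = np.linspace(0.0, 1.0, 11)

du_dlam = []
for lam in lambdas:
    k = (1.0 - lam) * k0 + lam * k1
    x = rng.normal(0.0, np.sqrt(kT / k), n_samples)   # equilibrium sample at this lambda
    du_dlam.append(np.mean(0.5 * (k1 - k0) * x**2))

dF_ti = np.trapz(du_dlam, lambdas)                    # thermodynamic integration
dF_exact = 0.5 * kT * np.log(k1 / k0)
print(f"TI estimate: {dF_ti:.4f}   exact: {dF_exact:.4f}")
```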
The second issue to consider with the conventional MD and MC simulations is a time limitation. Designed to follow exactly how real systems equilibrate, the MD approach in theory should simulate at least as much equilibration time as is needed for the event to occur in the real world, discounting the much smaller system sizes in simulations. In contrast, current computational power can typically support an equilibration time of only a few to hundreds of nanoseconds. Therefore, any events or processes that take longer than a few hundred nanoseconds to occur will be difficult to reproduce directly by MD simulations. Meanwhile, the MC approach encounters similar problems. Slow equilibration processes often arise from high energy barriers between the nonequilibrium states and equilibrium. In MC simulations, higher energy barriers result in a low probability for the algorithm to accept the trial move over the barrier, so that the system spends too much time finding its chance to overcome the barrier. This certainly slows down the equilibration process. The time limitation is a physical one because high energy barriers also result in rare events in real material systems, which indeed take longer to occur. From this perspective, we may say that the simulation techniques so designed are quite "realistic" in that they not only mimic "good" behaviors but also copy "bad" behaviors from the real world. The time limitation is one of the most challenging problems facing workers in the simulation field. Many phenomena, especially those occurring in the solid state, have a much longer time frame than what is feasible in conventional simulations. To name just a few, crystallization from either solution or the amorphous state often takes days to years to occur. Polymorph transitions in the solid state are another activity whose time frame ranges from days to months. Without a sampling scheme that can computationally speed up the equilibration process, these phenomena will defy simulation efforts. The challenge of designing such schemes is that, on the one hand, the sampling should follow representatively what goes on in real material systems, whereas on the other hand, the method must be made artificially faster. This conflict has continually prompted the innovation of novel simulation methods over the years. For the purpose of this work, we will not get into the details of these approaches except to outline some general thinking. It seems that most of these developments take advantage of the unphysical nature of MC simulations. For example, to assist the system in overcoming high energy barriers, an unusually large MC move can be introduced, followed by arbitrarily accepting this move (a biased MC scheme). The normal simulations, either MD or MC, can thereafter be continued.
These large moves are obviously not physical because particles will not act in this way in the real world. However, an artificial jump may help the system move out of energy traps, and thereby accelerate the equilibration process. A variety of advanced schemes aimed at this goal have been designed by compounding conventional MD or MC schemes with jumping, insertion, and exchange types of MC moves. More details can be found in chapters 13–16 of Ref. 22.
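The flavor of such schemes can be conveyed with a one-dimensional toy. In the sketch below (illustrative only; unlike a truly biased scheme, the occasional large displacement here is still accepted or rejected with the ordinary Metropolis rule, so the sampling remains unbiased), a single coordinate sits in a double-well potential whose barrier is about 20 kT. With small trial steps alone, the walker essentially never crosses the barrier in the allotted steps, whereas mixing in occasional large "jump" trials produces frequent crossings.

```python
# A single coordinate in a double-well potential U(x) = barrier * (x^2 - 1)^2,
# sampled by Metropolis MC. Compare small trial steps only with a scheme that
# occasionally attempts a large "jump" (both proposals are symmetric, so the
# usual Metropolis acceptance keeps the sampling unbiased).
import numpy as np

rng = np.random.default_rng(3)
kT, barrier, n_steps = 0.25, 5.0, 100_000

def U(x):
    return barrier * (x * x - 1.0) ** 2

def run(jump_prob):
    x, crossings = -1.0, 0
    for _ in range(n_steps):
        half_width = 2.0 if rng.random() < jump_prob else 0.1
        x_new = x + rng.uniform(-half_width, half_width)
        if rng.random() < np.exp(-(U(x_new) - U(x)) / kT):   # Metropolis rule
            if x * x_new < 0.0:                              # sign change = crossed the barrier
                crossings += 1
            x = x_new
    return crossings

print("crossings with small moves only :", run(0.0))
print("crossings with 10% jump moves   :", run(0.10))
```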
THE ROLE OF PARTICLE INTERACTION FUNCTIONS

The sections above have shown that equilibration is a characteristic of many-body systems governed by the laws of general statistics. One obvious question that arises is, what makes one system different from another? In other words, what gives rise to the diverse properties in the material world? The simple answer is that diversity is caused by specific particle interactions. In simulations, the particle interaction functions are the only input information representing the specificity of the materials considered in any study, and it is this that gives rise to the various physicochemical properties of different systems. Although the initial conditions, such as temperature, pressure, concentration, and particle position, are also important input information used in simulations, they are study specific rather than material specific. That is, the initial conditions are bounded by the study purposes rather than by the nature of the specific molecules. Hence, to make simulation results mimic the behavior of a corresponding physical system, it is imperative to use molecular interaction functions that accurately describe the actual molecular physics in the study. Mathematically, all particle interaction functions represent potential energy as a function of particle separation. Alternatively, the interaction functions can be expressed in terms of interaction forces, and are thus called "force fields." These two approaches are equivalent because the interaction force is the negative gradient of the potential energy along the vector connecting the interacting particles, $f_{ij} = -\mathrm{d}\phi_{ij}/\mathrm{d}r_{ij}$, where $f$, $\phi$, and $r$ are the interaction force, potential, and connecting vector (particle separation), respectively, and i and j denote the two interacting particles. More fundamentally, all particle interactions arise from the interactions of the electrons and nuclei in the system,9 that is, electron–electron, electron–nucleus, and nucleus–nucleus interactions. Hence, particle interaction functions are essentially the summation of the electrical interactions that occur between these charged bodies. Depending on their characteristics, the interaction potential may be further categorized into four classes: Coulombic, polarization, dispersion, and Pauli repulsion energies. The Coulombic potential is given by the
formula $\phi_{\mathrm{coul}} = \dfrac{q_i q_j}{4\pi\varepsilon_0 r_{ij}}$, where q is the particle electrostatic charge and $\varepsilon_0$ is the vacuum permittivity (electric constant). This potential decays with charge separation on the order of $r^{-1}$. The polarization potential arises from the formation of dipoles induced by an electric field. For example, when approached by the electric field of a neighboring polar molecule or chemical group (a dipole), the electron distribution of an atom will change, resulting in its polarization. The polarization potential at position i imposed by a dipole at position j may be approximated as $\phi_{\mathrm{pol}} = C\,q^2(\varepsilon_i - \varepsilon_j)^2 / r_{ij}^{6}$,
where g is the strength of the electric field and C is a constant related to the polarizability of the particle.9 This potential decays with charge separation on the order of r−6 . The dispersion potential is fundamentally also a polarization potential. It is, however, generated by a transient dipole (i.e., oscillating electron density) rather than by a static dipole. This potential is more prevalent as a factor in apolar particles, in which static dipoles are minimal and weak. Similarly to the polarization potential, the dispersion potential decays with distance as r−6 .9 Finally, the Pauli repulsion potential originates from the quantum mechanical incompatibility of electrons bearing the same quantum numbers, a rule known also as the “Pauli exclusion principle” (p 66 of Ref.9 ). This potential prevents the electron density of two close-shell atoms from overlapping. The repulsion potential changes sharply, with an approximation of rep = Krij−12 ,9 as the electron density of two atoms starts to overlap. From the discussion above, it is evident that a typical force field should contain a r−1 term representing the Coulombic potential, a r−6 term denoting the polarization and dispersion potentials, and a rij−12 term realizing the repulsion. The repulsion (rij−12 ) term is sometimes replaced by an exponential term, e−rij , with a similar feature of sharply changing potential. Thus, the particle interaction potential may be approximated by:9 ij = A + exp(−Brij ) − Crij−6 +
qi qj −1 r 4Bg◦ ij
(3)
where A, B, and C are parameters representing the Coulombic, polarization, dispersion, and Pauli repulsion interactions of the specific charges involved. In this expression, the dependence of the interaction potential on the charge separation $r_{ij}$ is explicitly provided. The dependence on the specific charge distribution $q_k$ (where k denotes a spatial location) is explicit in the Coulombic term (the $r_{ij}^{-1}$ term), but is implicit in the other terms, where it is manifested in the parameters A, B, and C. With this expression or many other similar approximations, it is clear that the accuracy of a force field depends heavily on the spatial charge distribution in the material system.
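As a concrete illustration of Eq. (3), the short sketch below evaluates the exp-6 plus Coulomb pair potential and the corresponding force for a range of separations. The parameter values are arbitrary placeholders in reduced units, not those of any published force field, and the force is obtained numerically to echo the relation $f_{ij} = -d\phi_{ij}/dr_{ij}$ given earlier.

```python
import numpy as np

# Arbitrary illustrative parameters (NOT a published force field): A, B, C of
# Eq. (3) plus two point charges, in reduced units with 1/(4*pi*eps0) = 1.
A, B, C = 1.0e4, 3.5, 12.0
q_i, q_j = 0.30, -0.30

def pair_potential(r):
    """phi(r) = A*exp(-B*r) - C*r^-6 + q_i*q_j/r  (Eq. 3, reduced units)."""
    return A * np.exp(-B * r) - C / r**6 + q_i * q_j / r

def pair_force(r, h=1e-6):
    """f = -d(phi)/dr, here evaluated by a central finite difference."""
    return -(pair_potential(r + h) - pair_potential(r - h)) / (2 * h)

if __name__ == "__main__":
    r = np.linspace(0.8, 3.0, 12)
    for ri, u, f in zip(r, pair_potential(r), pair_force(r)):
        print(f"r = {ri:4.2f}   phi = {u:10.4f}   force = {f:10.4f}")
```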
There are in general two classes of methods that enable the calculation of a force field as given in Eq. (3) or other related expressions. The first is a more empirical approach. Taking Eq. (3) at face value, if A, B, C, and the charge distribution $q_k$ can somehow be parameterized for various atoms in diverse chemical environments (e.g., chemical bonding and chemical groups), a set of preliminary force fields applicable to atoms in these environments is then available. For instance, the charge distribution of the carbon and hydrogen atoms of a methyl group can be parameterized for the case in which the methyl group is connected to another carbon atom, and assigned differently if the methyl group is linked to an oxygen atom. With these building blocks, a force field for a whole molecule can be defined. Feeding this set of force fields into molecular simulations allows the equilibrium properties of various materials to be obtained. By comparing the simulated results with the corresponding experimental data, the parameterization of the force fields can be adjusted and calibrated until the results conform satisfactorily to the experimental observations. Many well-known classical force fields have been developed in this way. For example, CHARMM23 and AMBER24 are force fields that incorporate several tens of atom types in up to a couple of hundred bonding environments, in which almost all building blocks of proteins and nucleic acids can be found. Hence, the atomic interaction functions for proteins and nucleic acids can be effectively constructed. Obviously, this empirical approach suffers from the drawback that its derivation is not based upon the actual physical charge distribution in the molecule. The focus of the parameterization of these force functions is to reproduce the macroscopic properties of the molecules and the systems, whereas the real physical meaning of these parameters is blurred during the process of parameter adjustment. Because no information on the actual electron density distribution is fed into the calculation of these parameters, this approach is fundamentally still a phenomenological one, although at a deeper level than thermodynamics. That is, the equilibration process in MD simulations is completely physical and mechanistic when these force fields are employed, whereas the source of the force field is empirical. Additionally, this approach makes the assumption that the same atom engaged in the same chemical bonding environment should have the same charge distribution and thereby the same force field. This is not exactly true, as evidenced by the slight shifts often observed in various spectroscopic signals of the same chemical groups in different molecules. Nonetheless, various empirical force fields developed in this way have achieved great success and robustness in a variety of material systems. It is somewhat surprising that, after numerous approximations, the force fields so constructed have been able to reproduce
a diverse range of material properties with quite reasonable accuracy. In contrast to the above-mentioned empirical approach, Eq. (3) can also be parameterized physically. Recall that the parameters in Eq. (3) are all determined by the actual charge distribution $q_k$. Hence, once $q_k$ is known, these parameters can be obtained. The charged bodies in a material system are electrons and nuclei. While nuclei have relatively well-defined spatial positions, electrons move at a much faster speed and their positions cannot be accounted for by classical mechanics. Instead, quantum mechanical theories, specifically the Schrödinger equation, delineate the spatial probability distribution of electrons. Unfortunately, the Schrödinger equation is not solvable analytically for multielectron species such as a nonhydrogen atom or a polyatomic molecule. To solve the equation, various approximations and simplifications are necessary. Among these approaches, density functional theory (DFT) has emerged as a reliable and convenient approach that enables the calculation of the electron density distribution.25 Computer algorithms executing DFT calculations have recently been widely applied to predict the electron density distribution of molecules. This information, once obtained, can be projected onto a three-dimensional grid with the electron density defined for each lattice point; together with the positively charged nuclei, the charge distribution of a molecule is then available. This information can be fed directly into MD or MC simulations to calculate the interparticle interactions (e.g., the Pixel method; see pp 304–314 of Ref.9). The drawback, however, is that each lattice point has to be treated as an independent charged body, and thus the computational cost is quite high. A more common alternative is to "condense" the charge grid onto the atomic centers, or nuclei, so that a point charge is approximated for each atom. To do this, Eq. (3) or related formulae can be parameterized to fit the electric field of an atom computed from the calculated electron density grid.9 The parameterized function then represents the force field of this specific atom; this approximation greatly reduces the computational workload. Some new-generation force fields have been constructed in this way, such as the COMPASS force field.26 One might ask, are these physically parameterized force fields consistent with the empirically derived ones? Indeed, many successful examples have been reported (see, e.g., Ref.27). However, it is not uncommon that, when calibrating the force fields generated from electron density distributions against experimental data, the parameters need to be further adjusted. These adjustments, unfortunately, often obscure the relation between the parameters obtained and the underlying physics. Regardless of how the force fields are generated, they will need to be calibrated against some sort of
experimental data. Typically, three classes of data are employed to conduct calibrations (p 109 of Ref.9). These include thermodynamic data such as sublimation energies, structural data from crystal structures, and spectrometric data such as vibrational frequencies. Force fields constructed through the various approaches described above inevitably involve multiple approximations and simplifications at this stage, and this certainly gives rise to concerns about their reliability. What attitude, then, should we hold with regard to these force fields? This question may be considered from the following perspectives. First, as the end users of the force fields, our goal is to investigate pharmaceutical phenomena. From a practical standpoint, a force field may be useful only when it can reproduce the phenomena of interest. Note that careful construction and extensive calibration may improve the reliability of a force field in general, but they cannot guarantee that the particular phenomena of interest are reproduced. Hypothetically speaking, if the special features that give rise to a particular phenomenon are not captured by the force field construction process, and are unrelated to the experimental data used for calibration, the force field may fail to reproduce the phenomenon under study, even though it is still reliable in predicting many other properties of the system. Thus, the chief concern for reliability is whether the force field works for the phenomenon of interest rather than how it is derived. Second, although a clear and solid physical meaning for force field parameters is important and is the current direction of force field development, it is generally believed that we should not abandon simulations while waiting for a "perfect" force field to emerge. One reason for saying this is that the more a force field is used, the better we know where it is likely to be reliable or unreliable. For example, some force fields may be more appropriate for protein systems, and others may be more suitable for small molecules. Some force fields work well in dilute phases; others are good in condensed phases. This knowledge helps in the selection of force fields, and thereby strengthens their reliability. Ideally, one force field should accurately reproduce all material properties as listed in the beginning of "Why Do Systems Seek to Attain Equilibrium." In reality, however, success in solving one problem does not guarantee effectiveness in another. Hence, this concern for reliability should be addressed with a dynamic and flexible approach. In summary, reliability can only be proven or disproven by carrying out more simulation studies (pp 112–113 of Ref.9).
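To make the calibration step described at the beginning of this section concrete, the sketch below adjusts a single force-field parameter until a simulated sublimation energy matches an experimental target. It only illustrates the feedback loop: the function standing in for the simulation is an arbitrary analytic surrogate (so that the script runs), whereas in practice each evaluation would be a full lattice-energy or MD/MC calculation with the trial parameter.

```python
# Hypothetical calibration loop: adjust a single dispersion-type parameter C so
# that a simulated sublimation energy matches an experimental target value.
def simulated_sublimation_energy(C):
    # Placeholder surrogate: in a real calibration this would launch an
    # equilibration run with the trial parameter and return the computed
    # sublimation energy. The linear form below is purely illustrative.
    return 40.0 + 2.5 * C          # kJ/mol

def calibrate(target=120.0, c0=10.0, c1=50.0, tol=1e-3, max_iter=50):
    """Secant iteration on C until the surrogate matches the experimental target."""
    f0 = simulated_sublimation_energy(c0) - target
    for _ in range(max_iter):
        f1 = simulated_sublimation_energy(c1) - target
        if abs(f1) < tol:
            return c1
        c0, c1, f0 = c1, c1 - f1 * (c1 - c0) / (f1 - f0), f1
    return c1

if __name__ == "__main__":
    print("calibrated C:", calibrate())   # 32.0 for the surrogate above
```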
With this in mind, should one be concerned that a reproduction of the targeted phenomena may arise from certain artifacts in the force fields? Certainly, one should always be vigilant. A couple of suggestions from the author are provided below. First, using standard force fields developed for general chemistry purposes may be a good way to initiate a study. Suppose a standard force field is selected, and the particular phenomenon of interest is successfully reproduced via simulations. Because the force field was not derived or adjusted specifically for the studied system or phenomenon, this success is likely more than just a coincidence. It suggests that the particular phenomenon of interest has been captured in the standard force field. Although the possibility of an artifact cannot be excluded immediately, the odds for such a coincidence are probably low. By contrast, while a custom-adjusted force field may appear to be more accurate for the system and phenomenon under study, the transferability of the study results and the force field to other systems may be in question, and the artifact concern is still not completely ruled out. Additionally, the cost and effort of customizing the force field for each study are high. Hence, adopting a standard force field, at least in a first trial, is not a bad option. Second, the treatment of force fields should always take into consideration the purpose of the study. For example, if we intend to gain an accurate prediction of certain properties, such as equilibrium solubility or melting point, a custom-adjusted force field may be necessary. By contrast, if a microscopic mechanism of a qualitative nature is of interest, a standard force field may provide sufficient accuracy. Finally, just as would be true in conducting any experiment, a control simulation is helpful. For example, a control simulation employing an alternative force field may verify whether or not an artifact exists. This option is probably less expensive and more convenient than customizing the force field for each study. In summary, particle interactions dictate the properties of both everyday material systems and the simulated systems. MD and MC algorithms are essentially Boltzmann sampling schemes. In molecular simulations, the model construction process is primarily the force field assignment and its validation. Once the proper force field is assigned, the simulation outcomes are essentially set, despite the lengthy and strenuous MD or MC equilibration runs needed to complete the process.
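As a final illustration of the force field assignment step, the sketch below shows the bookkeeping behind "condensing" a computed electron-density grid onto atomic point charges, as described above for the physically parameterized force fields. The nearest-atom assignment used here is a deliberate simplification (real charge-derivation schemes typically fit the electrostatic potential instead), and all names and numbers are illustrative.

```python
import numpy as np

def condense_to_point_charges(grid_xyz, grid_charge, atom_xyz, nuclear_charge):
    """Crude 'condensation' of a charge grid onto atomic centers.

    Each grid element's (negative) electronic charge is assigned to its nearest
    atom and combined with the nuclear charge to give one point charge per atom.
    Force-field charges are usually obtained by fitting the electrostatic
    potential instead, but the bookkeeping looks like this.
    """
    point_q = np.array(nuclear_charge, dtype=float)
    for r, q in zip(grid_xyz, grid_charge):
        nearest = np.argmin(np.linalg.norm(atom_xyz - r, axis=1))
        point_q[nearest] += q
    return point_q

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    atoms = np.array([[0.0, 0.0, 0.0], [1.1, 0.0, 0.0]])     # a mock diatomic
    nuclei = [6.0, 8.0]                                       # e.g., C and O
    grid = rng.uniform(-2, 3, size=(5000, 3))                 # mock density grid
    e_per_point = np.full(5000, -14.0 / 5000)                 # total -14 e spread over the grid
    print("atomic point charges:",
          condense_to_point_charges(grid, e_per_point, atoms, nuclei))
```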
WHERE CAN MOLECULAR SIMULATIONS BE APPLIED?

Molecular simulations offer some unique advantages over conventional experimental tools. The first is the high resolution obtained in both space and time. In simulation studies, the spatial resolution is usually at the atomic level (assuming that the force field is atom based), whereas the time resolution is often on the order of femtoseconds ($10^{-15}$ s), a timescale not easily accessible by conventional experimental
techniques. Furthermore, simulations yield multiple types of information concurrently, including structural, dynamic, and energetic information. Experimental tools often have to access these data separately, likely with different quality and resolution. Nevertheless, as described above, molecular simulations also suffer from some weaknesses, including smaller system sizes and shorter equilibration times. Hence, molecular simulations and experiments should be considered complementary, wherein one probes microscopic properties and the other characterizes macroscopic properties. Furthermore, from a cost perspective, these two approaches can also be complementary. For property measurements under conventional conditions, experimental tools can provide data more rapidly and easily than simulations. For example, when determining a solubility value or a melting point, experiments are much more straightforward and cost-effective than simulations. By contrast, experiments can present serious challenges if they are intended for exotic conditions such as a pressure of 1000 atm or a temperature of one million kelvin. Simulation tools can tackle these problems at relatively low cost. Hence, the selection of simulation versus experimental approaches must be driven by the needs of the study. Molecular simulations have been employed in probing some pharmaceutically relevant systems. For example, Xiang and Anderson28 used MD simulations to study the molecular mobility of a peptide in an amorphous solid dispersion. Both the temporal and spatial heterogeneity of the translational, rotational, and conformational motions of the peptide were revealed, with resolutions hardly achievable by conventional experimental tools. A similar study29 by the same authors on the effect of water content on molecular mobility in poly(vinylpyrrolidone) glass revealed important microscopic information such as the water distribution at the molecular level, the polymer segmental relaxation rates, and the local plasticization effect of water on solute diffusivity. These studies supplied detailed microscopic pictures of the molecular activities, which can be correlated with macroscopic observables such as the glass transition temperature and the relaxation dynamics. Xiang and Anderson30 also studied the conformational changes of small peptides in solvents using MD simulations. The structural, dynamic, and energetic data for different solvation states were collected concurrently to provide a comprehensive description of peptide behavior in these solutions. This study clearly demonstrated that, with a comprehensive package of data supplied by the simulation approach, the conclusions drawn are more definitive.
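As a simple illustration of the kind of dynamic information extracted in mobility studies such as those above, the sketch below estimates a translational diffusion coefficient from the mean-squared displacement of a trajectory via the Einstein relation. The synthetic random-walk "trajectory" merely stands in for real MD output, and the frame spacing and units are illustrative.

```python
import numpy as np

def diffusion_coefficient(traj, dt_ps, dim=3):
    """Estimate D from the mean-squared displacement (MSD) of a trajectory.

    traj: array of shape (n_frames, n_particles, dim) of unwrapped coordinates.
    Uses the Einstein relation MSD(t) ~ 2*dim*D*t, fitted over the longer lag
    times to avoid the short-time ballistic/caged regime.
    """
    n = traj.shape[0]
    lags = np.arange(1, n // 2)
    msd = np.array([np.mean(np.sum((traj[l:] - traj[:-l])**2, axis=-1)) for l in lags])
    t = lags * dt_ps
    half = len(lags) // 2
    slope = np.polyfit(t[half:], msd[half:], 1)[0]
    return slope / (2 * dim)

if __name__ == "__main__":
    # Synthetic random-walk "trajectory" standing in for an MD output,
    # 500 frames of 20 particles with 1 ps between frames.
    rng = np.random.default_rng(2)
    steps = rng.normal(0.0, 0.1, size=(500, 20, 3))          # nm per frame
    traj = np.cumsum(steps, axis=0)
    print("D ~", diffusion_coefficient(traj, dt_ps=1.0), "nm^2/ps")
```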
In another study, employing both MC and MD simulations, Conrad and de Pablo31 investigated the potential cryoprotecting mechanism of trehalose in the presence of residual water. In that study, evidence was found to help differentiate between several competing hypotheses. One observation, for example, is that water molecules are found "trapped" in the trehalose matrix as the system approaches its glass transition, a result that had been speculated but not previously confirmed. The simulations thereby lent significant credence to this hypothesis. These studies nicely demonstrate how the simulation approach may be used: when microscopic details are lacking, when a comprehensive picture of a phenomenon is desired, and when experimental observations yield controversial conclusions. Solubility prediction is an area of enormous importance to the pharmaceutical industry. The common practice today is to rely on quantitative structure–property relationships,32 wherein a correlation function relating the solubility to selected compound properties, including functional group contributions, is constructed statistically. Although this approach is often fast, its reliability and accuracy may not be satisfactory. Westergren and coworkers33–35 employed molecular simulation tools to develop a method for estimating the free energy changes upon solubilization. In the proposed paradigm, they divided the solubilization process into steps such as moving a drug molecule from the amorphous state to vacuum, creating a cavity in liquid water, and placing the drug molecule in the cavity followed by turning on the drug–water interactions. The free energy changes in these steps were calculated through MD simulations, and the results were further approximated by a function relating them to the drug surface tension and internal energy components. This hybrid approach significantly improved the prediction reliability while maintaining physical insight into the solubilization process and keeping the computational cost in check. To predict drug solubility in lipid excipients, Huynh et al.36 employed molecular simulation tools in combination with the Flory–Huggins theory to estimate the free energy of mixing, which was thereafter used to estimate the drug solubility in these excipients. Similar approaches may be extended to the estimation of drug–polymer miscibility in amorphous solid dispersions. Rane and Anderson37 used thermodynamic perturbation and integration approaches to calculate the functional group contributions of a set of anthracene derivatives to the free energy changes of transfer between different lipid media. This approach can be effectively employed in rank-ordering the solubility of structurally similar compounds in lipids. In addition, they proposed an appropriate scaling factor for the atomic charges of solutes in these lipids to account for the weaker polarization effect of these lipid media relative to water. This result supplies a good reference for future attempts at predicting drug solubilities in other lipids.
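As a pointer to how the Flory–Huggins route mentioned for Huynh et al.36 can be turned into numbers, the sketch below evaluates the Flory–Huggins free energy of mixing per lattice site for a drug–excipient pair. The interaction parameter chi and the size ratio are placeholders; in such an approach they would be estimated from simulation-derived interaction energies.

```python
import numpy as np

def fh_free_energy_of_mixing(phi_drug, chi, n_drug=1.0, n_poly=100.0):
    """Flory-Huggins free energy of mixing per lattice site, in units of kT.

    phi_drug : drug volume fraction; chi : interaction parameter (e.g., estimated
    from simulated contact energies); n_drug/n_poly : relative molecular sizes.
    """
    phi_p = 1.0 - phi_drug
    return ((phi_drug / n_drug) * np.log(phi_drug)
            + (phi_p / n_poly) * np.log(phi_p)
            + chi * phi_drug * phi_p)

if __name__ == "__main__":
    chi = 0.8   # placeholder value; a simulation-based estimate would replace it
    for phi in (0.05, 0.10, 0.20, 0.40):
        print(f"phi_drug = {phi:4.2f}   dG_mix/kT per site = "
              f"{fh_free_energy_of_mixing(phi, chi):+.4f}")
```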
The use of cyclodextrin–drug "host–guest" complexation is a technique widely applied in solubility enhancement and compound separation. Although applications of this technique have been widely reported, mechanistic understanding and interpretation based on experimental results are not always straightforward. In recent years, however, molecular simulations have been used to probe the mechanisms of this phenomenon. Zhang et al.,38 for example, reported a simulation study of the cyclodextrin inclusion of puerarin and daidzin, two structurally similar compounds. In addition to revealing a great many details of the inclusion processes, the study also supplied a mechanistic explanation for the chromatographic separation of these compounds. Figueiras et al.39 simulated the synergistic effect of cyclodextrin and L-arginine in enhancing the solubility of omeprazole. They found that the omeprazole molecule was too long to be completely included in the cavity of β-cyclodextrin; the hydrophobic tail of omeprazole that was left outside the cyclodextrin cavity was then surrounded by the arginine molecules in the solution. This aggregation reduced the exposure of the hydrophobic omeprazole tail to the aqueous environment, and therefore further promoted the solubility of omeprazole. Furthermore, Yong et al.40 studied the structural behavior of 2-hydroxypropyl-β-cyclodextrin (HPβCD) in aqueous solution. In the case of HPβCD, simulation studies supply critical structural information because HPβCD is an amorphous material, and therefore its structural features cannot be accessed through single-crystal structure determination. Molecular simulations have also been used in studying protein aggregation, which is a major source of instability for antibody therapeutics.41 Recently, for example, Vagenende et al.42 employed MD simulations to study the preventive effect of glycerol on protein aggregation. They found two mechanisms that may account for the stabilization effect of glycerol. The first is the electrostatic interactions between glycerol and proteins, which induce a more compact and stable protein conformation. The second is the preferential interaction between glycerol and the contiguous hydrophobic patches on protein molecules. This effect allows glycerol molecules to form an amphiphilic interface between the hydrophobic patches and the aqueous media, inhibiting the aggregation of protein molecules. A brief account of a similar study on the stabilization effect of arginine is also available.43 Chennamsetty et al.44 employed MD simulations to probe the equilibrium conformations of antibody proteins. They further defined a parameter termed the spatial aggregation propensity, which can be used to identify the spatial regions on the proteins
that are prone to aggregation. By targeted mutations of these regions, they were able to enhance the stability of the mutants significantly, while still managing to maintain the activity of the antibodies. The strength of simulations may also be demonstrated with another case study involving hydrotropic solubilization, an old technique often used to solubilize poorly water-soluble compounds.45 Several competing mechanisms have been proposed to account for this phenomenon, each with supporting and opposing evidence reported in the literature (for a more detailed account see Ref.46). These proposals include, among others, the complexation of solutes and hydrotropic agents via specific interactions, the alteration of solvent properties by the presence of the hydrotropic agent, and molecular aggregation.47,48 This case is a good example of a situation in which controversy arises when the experimental techniques lack the resolution required to reveal the microscopic details. Specifically, for example, the complexation mechanism, which suggests that solubilization is a result of stoichiometric complexation between the solutes and the hydrotropic agents,49 was supported by fitting phase solubility data to the relevant kinetics50 or by spectrometric data.51 Although both experimental techniques are common tools in the pharmaceutical arena, neither can provide a direct and reliable microscopic picture of the hydrotropic solution. For example, the phase solubility data may result from multiple kinetic phenomena, and therefore one good fit cannot necessarily exclude other potential mechanisms. The spectrometric data, by contrast, reveal the intermolecular interactions, but based on this information it is often uncertain how to differentiate a mechanism of stoichiometric complexation from one of molecular aggregation (both mechanisms may cause the same type of interactions). In a simulation study of such systems, the author investigated the solubilization effects of several urea derivatives, including urea, methylurea, ethylurea, and butylurea. The study showed that solutes in these solutions undergo aggregation after an equilibration time of 3 ns (see Fig. 4). Estimates of the energy changes in the system suggested that aggregation is associated with the solubility increase of the model compound nifedipine. In other words, solubilization was reproduced in the simulations, and aggregation appeared to be a major phenomenon associated with the solubilization. By monitoring the evolution of hydrogen bonding during equilibration, it was found that aggregation and solubilization were primarily due to hydrophobic interactions. Now one might say, by inspecting the structures of the urea derivatives, that this finding is quite apparent and does not need a simulation study. Indeed, with a hydrophilic urea head and a hydrophobic alkyl tail, the aggregation of urea derivatives conforms to the
current understanding of hydrophobic interactions. Nevertheless, two points are worth considering: (1) our current understanding of hydrophobic interactions does not necessarily lead to a prediction of how the aggregation of urea derivatives occurs. One apparent alternative that may arise from hydrophobic interactions, for example, is a self-assembly of the amphiphilic solutes in solution, such as micelle formation. Without a relevant study, either a simulation or an experiment, it is impossible to predict whether nonstoichiometric aggregation, micelle formation, or some other event occurs in these solutions; (2) "predicting" by inference is different from "predicting" through simulations. Although simulations are by nature predictions, they are rigorously derived projections that comply with the Boltzmann distribution law and can be tested quantitatively. Inferred predictions, by contrast, are qualitative and vague speculations. They often cannot yield an exact and exclusive conclusion such as the one obtained in this case. The last point may be more evident in a second study, in which another common hydrotropic agent, nicotinamide (NA), was simulated.46 In this case, the structure of NA did not directly afford a clear mental prediction of hydrophobic interactions. Yet the simulations demonstrated that nonstoichiometric aggregation occurred in this solution, which could be accounted for by hydrophobic interactions. Furthermore, in a third simulation study, on the solubilization of riboflavin by caffeine,52 it was found that the solutes formed parallel stacks in solution, a result consistent with the structure projected by NMR studies.53 Despite the different modes of solute clustering in these systems, hydrophobic interactions were also found to be the driving mechanism in the caffeine–riboflavin solution. Combining all three studies, simulations were able to provide a unifying mechanism for hydrotropic solubilization despite the structural diversity of the hydrotropic agents. Note that in the simulation studies mentioned above, hydrogen bonds (HBs) between the drugs and the solubilizing agents were indeed found.46,52 However, they accounted for at most a few percent of the total increase in HBs in the solutions, and therefore could not be the critical factor in the solubilization process (the dominant portion of the HB increase was contributed by water–water HBs). This result shows that the specific interactions between the drug and the solubilizing agents were correctly detected by some spectroscopic experiments.51 However, this experimental evidence was incomplete and could not fully support the conclusion that solubilization arises from specific solute–agent interactions. This example demonstrates that experimental evidence does not always guarantee the validity of a conclusion derived from the experiment.
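For readers curious how the hydrogen-bond evolution mentioned above is typically monitored, the sketch below counts donor–H···acceptor contacts in a single frame using a simple geometric criterion. The cutoff values are commonly used defaults rather than those of the cited studies, and the random coordinates merely stand in for a real simulation snapshot.

```python
import numpy as np

def count_hydrogen_bonds(donor_xyz, hydrogen_xyz, acceptor_xyz,
                         r_cut=0.35, angle_cut_deg=30.0):
    """Count donor-H...acceptor contacts with a simple geometric criterion.

    A hydrogen bond is scored when the donor-acceptor distance is below r_cut (nm)
    and the H-donor-acceptor angle is below angle_cut_deg. These cutoffs are common
    default choices, not those of any specific study.
    """
    n_hb = 0
    for d, h in zip(donor_xyz, hydrogen_xyz):
        vec_da = acceptor_xyz - d                      # donor -> each acceptor
        dist = np.linalg.norm(vec_da, axis=1)
        vec_dh = h - d                                 # donor -> its hydrogen
        cosang = vec_da @ vec_dh / (dist * np.linalg.norm(vec_dh) + 1e-12)
        angle = np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0)))
        n_hb += int(np.sum((dist < r_cut) & (angle < angle_cut_deg)))
    return n_hb

if __name__ == "__main__":
    rng = np.random.default_rng(3)
    donors = rng.uniform(0, 2, (50, 3))                # mock donor O positions (nm)
    hydrogens = donors + rng.normal(0, 0.05, (50, 3))  # an H near each donor
    acceptors = rng.uniform(0, 2, (80, 3))             # mock acceptor positions
    print("hydrogen bonds found:", count_hydrogen_bonds(donors, hydrogens, acceptors))
```

Repeating such a count frame by frame, for solute–agent, solute–water, and water–water pairs separately, gives the HB evolution curves referred to in the text.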
Figure 4. The equilibrium states of hydrotropic solutions containing nifedipine and urea derivatives. Water molecules are omitted for clarity; nifedipine is shown in color and the urea derivatives in black. (a) One nifedipine and 72 urea molecules in 1000 water molecules, (b) one nifedipine and 72 methylurea molecules in 1000 water molecules, (c) one nifedipine and 72 ethylurea molecules in 1000 water molecules, and (d) one nifedipine and 37 butylurea molecules in 1000 water molecules.
The case studies mentioned above demonstrate the strengths of simulation tools. Although these tools may greatly strengthen our capability to probe pharmaceutical materials, it should be stressed that molecular simulations and experiments are complementary rather than competing. This point is particularly important for a judicious selection between these tools.
A FEW MORE FREQUENTLY ASKED QUESTIONS

Some common questions encountered in molecular simulation studies are given below. The answers reflect the viewpoints of the author; they are intended to prompt further discussion and should not be considered final.

Should Simulation Studies Always Be Confirmed by Experimental Observations?

Molecular simulations are by nature predictions. Therefore, ultimately their results cannot be considered conclusive until proven or disproven by relevant experimental observations. From this perspective, the answer to the question above should always be yes. Nevertheless, not all experiments are created equal. To really "confirm" the simulation
results, the "relevant" experiments have to provide direct and sufficient evidence. For example, in the studies on hydrotropic solubilization, the simulations yielded information on solute clustering schemes (i.e., aggregation versus parallel stacking) and on water–water HB changes. To "confirm" these results, experiments should somehow demonstrate unequivocally that the solutes indeed cluster into aggregates rather than stacks, and that water–water HBs indeed increase. A calorimetric experiment, for example, may help verify whether the enthalpy change is consistent with what was observed in the simulations. Nevertheless, this experiment is insufficient to prove the simulation results because the same enthalpy change can result from aggregation, stacking, or other events. Therefore, a calorimetry study may function as a calibration for the force field (see "The Role of Particle Interaction Functions") rather than as a direct confirmation of the solubilization mechanism obtained from the simulations. Hence, although simulations should always be confirmed by experimental observations wherever possible, not every experiment can be used as explicit confirmatory evidence. In fact,
it is probably more often than not that conducting an experimental confirmation is not immediately feasible, because simulations are employed exactly when there are no handy experimental tools available or when the experimental tools available do not have sufficient resolving power. This situation necessarily raises the next question.

How Should We Treat the Conclusion of a Simulation Study?

Because simulation results are often difficult to prove by experiments, at least immediately, should they be treated as "problem solved" or just as hypotheses? The author thinks that they should always be considered initially as hypotheses requiring further testing, as would be true of any experiment. Consider again the case of hydrotropic solubilization. Numerous experimental studies have prompted several potential mechanisms, all of which are in fact hypotheses. In other words, experimental data do not guarantee the validity of any conclusions. When new evidence emerges, either experimental or simulated, the hypotheses will be subjected to reexamination. When there are no better or more reliable theories, simulations yield hypotheses and insights, which may enrich our understanding. Until these hypotheses are proven wrong or incomplete, either by experiments, by theoretical derivations, or by other simulations, their contributions to our understanding of the topic should be welcomed.

Should Simulation Studies Always Be Published Together with Corresponding Experimental Studies?

Not necessarily. It is evident from the discussion above that finding an exact experimental proof for a simulation is not always easy. Facing this reality, the question is: should we wait until a relevant experimental tool is available, or should we proceed with simulation studies, recognizing that such studies are predictions that need to be confirmed in the future? Apparently, the benefits outweigh the concerns when taking the second approach. Currently, it is accepted in many scientific fields that molecular simulations are a mature science, and that studies in this area can be reported as stand-alone research. Sometimes, simulations are treated as an independent approach in addition to experimental and theoretical research. For example, "J Phys: Condens Matter" states in its scope that ". . . Papers may report experimental, theoretical, and simulation studies" (http://iopscience.iop.org/0953-8984/page/Scope). Furthermore, a quick survey of the current issue (Vol. 114, Issue 22) of "J Phys Chem B" shows that six out of 33 papers are on molecular simulations. Only one contains parallel experimental studies, whereas the rest are all stand-alone simulation reports. Hence,
treating simulation studies as stand-alone reports is consistent with current common practice.
CONCLUDING REMARKS

This review introduced the scientific rationale, the logical procedures, and the strengths and weaknesses of molecular simulations. It demonstrated that the equilibration processes of many-body systems are governed by general statistical laws, which remain valid in computerized model systems. These statistical laws can be enforced via MD and MC simulation schemes, which, in combination with reliable force fields, can reproduce equilibration processes through computation. Simulation approaches can provide a comprehensive characterization of the states of materials and of the processes they undergo, with highly resolved microscopic detail. In conclusion, it appears clear that the use of these tools can greatly strengthen our capability to probe pharmaceutical materials and enhance our understanding of their molecular mechanisms. As familiarity with such approaches grows, it appears certain that molecular simulations will find more applications in pharmaceutical systems.
ACKNOWLEDGMENTS This review was initially suggested to the author by Professor Emeritus George Zografi from the University of Wisconsin-Madison. The author also thanks Professor Zografi for many important comments and suggestions, and extensive editing of the manuscript.
REFERENCES

1. Metropolis N, Ulam S. 1949. The Monte Carlo method. J Am Stat Assoc 44:335–341.
2. Beichl I, Sullivan F. 2000. The Metropolis algorithm. Comput Sci Eng 2:65–69.
3. Gavezzotti A. 1999. A molecular dynamics view of some kinetic and structural aspects of melting in the acetic acid crystal. J Mol Struct (L. S. Bartell Special Issue) 486:485–499.
4. Svishchev IM, Kusalik PG. 1995. Crystallization of molecular liquids in computer simulations: Carbon dioxide. Phys Rev Lett 75:3289–3292.
5. Laasonen K, Wonczak S, Strey R. 2000. Molecular dynamics simulations of gas–liquid nucleation of Lennard-Jones fluid. J Chem Phys 113:9741–9747.
6. Ferrari ES, Burton C, Davey RJ, Gavezzotti A. 2006. Simulation of phase separation in alcohol/water mixture using two-body force field and standard molecular dynamics. J Comput Chem 27:1211–1219.
7. Yoneya M, Nishikawa E. 2004. Spontaneous three-dimensional nanostructure formation of perfluoroalkyl terminated liquid crystal: A molecular dynamics simulation study. J Chem Phys 121:7520–7525.
8. Sellner B, Zifferer G, Kornherr A, Krois D, Brinker UH. 2008. Molecular dynamics simulations of β-cyclodextrin–aziadamantane complexes in water. J Phys Chem B 112:710–714.
9. Gavezzotti A. 2007. Molecular aggregation. New York: Oxford University Press Inc. Chapt. 4. pp 87–113.
10. Silbey RJ, Alberty RA, Bawendi MG. 2005. Physical chemistry. 4th ed. Hoboken, New Jersey: John Wiley & Sons, Inc. p 77.
11. Chandler D. 1987. Introduction to modern statistical mechanics. New York: Oxford University Press. Chapt. 3.
12. Agarwal BK, Eisner M. 1998. Statistical mechanics. 2nd ed. New Delhi, India: New Age International Ltd. p 12.
13. Uffink J. 2004. Boltzmann's work in statistical physics. In The Stanford encyclopedia of philosophy (Winter 2004 Edition); Zalta EN, Ed. http://plato.stanford.edu/archives/win2004/entries/statphys-Boltzmann/. The Metaphysics Research Lab, Stanford University, Stanford, California. Last accessed 03-Nov-2010.
14. Dill KA, Bromberg S. 2003. Molecular driving forces: Statistical thermodynamics in chemistry and biology. New York: Garland Science. p 175.
15. Grinstead CM, Snell JL. Introduction to probability. 2nd ed. Providence, Rhode Island: American Mathematical Society. Chapt. 9. pp 325–355.
16. Jaynes ET. 2003. Probability theory: The logic of science; Bretthorst GL, Ed. Cambridge, UK: Cambridge University Press. Chapt. 3.
17. Sinha SK. 2005. Introduction to statistical mechanics. Oxford, UK: Alpha Science International Ltd. pp 372–373.
18. Carter AH. 2001. Classical and statistical thermodynamics. New Jersey: Prentice-Hall, Inc. Chapt. 13. pp 233–238.
19. Bernal JD. 1964. The Bakerian lecture, 1962. The structure of liquids. Proc R Soc 280:299–322.
20. Haile JM. 1997. Molecular dynamics simulation. New York: John Wiley & Sons, Inc. pp 117–127.
21. Rapaport DC. 1995. The art of molecular dynamics simulation. Cambridge, UK: Cambridge University Press. Chapts. 2 and 3.
22. Frenkel D, Smit B. 2002. Understanding molecular simulation: From algorithms to applications. San Diego, California: Academic Press. Chapt. 3.
23. MacKerell AD, Bashford D, Bellott M, Dunbrack RL, Evanseck JD, Field ML, Fischer S, Gao J, Guo H, Ha S, Joseph-McCarthy D, Kuchnir L, Kuczera K, Lau FTK, Mattos C, Michnik S, Ngo T, Nguyen DT, Prodhom B, Reiher WE III, Roux B, Schlenkrich B, Smith JC, Stote R, Straub J, Watanabe M, Wiorkiewicz-Kuczera J, Yin D, Karplus M. 1998. All-atom empirical potential for molecular modeling and dynamics studies of proteins. J Phys Chem B 102:3586–3616.
24. Weiner SJ, Kollman PA, Nguyen TA, Case DA. 1986. An all atom force field for simulations of proteins and nucleic acids. J Comput Chem 7:230–252.
25. Wallace DC. 2002. Statistical physics of crystals and liquids. Singapore: World Scientific Publishing Co. Pte. Ltd. Chapt. 2.
26. Sun H. 1998. COMPASS: An ab initio force field optimized for condensed-phase applications: Overview with details on alkane and benzene compounds. J Phys Chem B 102:7338.
27. Mooij WTM, van Duijneveldt FB, van Duijneveldt-van de Rijdt JGCM, van Eijck BP. 1999. Transferable ab initio intermolecular potentials. 1. Derivation from methanol dimer and trimer calculations. J Phys Chem A 103:9872–9882.
28. Xiang TX, Anderson BD. 2003. A molecular dynamics simulation of reactant mobility in an amorphous formulation of a peptide in poly(vinylpyrrolidone). J Pharm Sci 93:855–876.
29. Xiang TX, Anderson BD. 2005. Distribution and effect of water content on molecular mobility in poly(vinylpyrrolidone) glasses: A molecular dynamics simulation. Pharm Res 22:1205–1214.
30. Xiang TX, Anderson BD. 2005. Conformational structure, dynamics, and solvation energies of small alanine peptide in water and carbon tetrachloride. J Pharm Sci 95:1269–1287.
31. Conrad PB, de Pablo JJ. 1999. Computer simulation of the cryoprotectant disaccharide α,α-trehalose in aqueous solution. J Phys Chem A 103:4049–4055.
32. Jain N, Yalkowsky S. 2001. Estimation of the aqueous solubility I: Application to organic nonelectrolytes. J Pharm Sci 90:234–252.
33. Westergren J, Lindfors L, Hoglund T, Luder K, Nordholm S, Kjellander R. 2007. In silico prediction of drug solubility: 1. Free energy of hydration. J Phys Chem B 111:1872–1882.
34. Luder K, Lindfors L, Westergren J, Nordholm S, Kjellander R. 2007. In silico prediction of drug solubility: 2. Free energy of solvation in pure melts. J Phys Chem B 111:1883–1892.
35. Luder K, Lindfors L, Westergren J, Nordholm S, Kjellander R. 2007. In silico prediction of drug solubility: 3. Free energy of solvation in pure amorphous matter. J Phys Chem B 111:1872–1882.
36. Huynh L, Grant J, Leroux JC, Delmas P, Allen C. 2008. Predicting the solubility of the anti-cancer agent docetaxel in small molecule excipients using computational methods. Pharm Res 25:147–157.
37. Rane SS, Anderson BD. 2008. Molecular dynamics simulations of functional group effects on solvation thermodynamics of model solutes in decane and tricaprylin. Mol Pharm 5:1023–1036.
38. Zhang HY, Feng W, Li C, Tan TW. 2010. Investigation of the inclusions of puerarin and daidzin with β-cyclodextrin by molecular dynamics simulation. J Phys Chem B 114:4876–4883.
39. Figueiras A, Sarraguca JMG, Pais ACC, Carvalho RA, Veiga JF. 2010. The role of L-arginine in the inclusion complexes of omeprazole with cyclodextrin. AAPS PharmSciTech 11:233–240.
40. Yong CW, Washington C, Smith W. 2008. Structural behavior of 2-hydroxypropyl-β-cyclodextrin in water: Molecular dynamics simulation studies. Pharm Res 25:1092–1099.
41. Shire SJ, Shahrokh Z, Liu J. 2004. Challenges in the development of high protein concentration formulations. J Pharm Sci 93:1390–1402.
42. Vagenende V, Yap MGS, Trout BL. 2009. Mechanisms of protein stabilization and prevention of protein aggregation by glycerol. Biochemistry 48:11084–11096.
43. Shukla D, Schneider CP, Trout BL. 2010. Mechanism by which arginine inhibits protein aggregation. Abstracts of Papers, 239th ACS National Meeting, San Francisco, California, March 21–25.
44. Chennamsetty N, Voynov V, Kayser V, Helk B, Trout BL. 2009. Design of therapeutic proteins with enhanced stability. Proc Natl Acad Sci 106:11937–11942.
45. Boylan J. 1986. Liquids. In The theory and practice of industrial pharmacy; Lachman L, Lieberman HA, Kanig JL, Eds. 3rd ed. Philadelphia, Pennsylvania: Lea and Febiger. p 246.
46. Cui Y, Xing C, Ran Y. 2010. Molecular dynamics simulations of hydrotropic solubilization and self-aggregation of nicotinamide. J Pharm Sci 99:3048–3059.
47. Mukerjee P. 1974. Micellar properties of drugs: Micellar and nonmicellar patterns of self-association of hydrophobic solutes of different molecular structures - monomer fraction, availability, and misuses of micellar hypothesis. J Pharm Sci 63:972–981.
48. Ni N, Sanghavi T, Yalkowsky SH. 2002. Solubilization and preformulation of carbendazim. Int J Pharm 244:99–104.
49. Kenley RA, Jackson SE, Winderle JS, Shunko Y, Visor GC. 1986. Water soluble complexes of the antiviral drugs, 9-[(1,3-dihydroxy-2-propoxy)methyl]guanine and acyclovir: The role of hydrophobicity in complex formation. J Pharm Sci 75:648–653.
50. Rasool AA, Hussain AA, Dittert LW. 1991. Solubility enhancement of some water-insoluble drugs by nicotinamide and related compounds. J Pharm Sci 80:387–393.
51. Baranovskii SF, Bolotin PA. 2007. Association of riboflavin, caffeine, and sodium salicylate in aqueous solution. J Appl Spectrosc 74:211–218.
52. Cui Y. 2010. Parallel stacking of caffeine with riboflavin in aqueous solutions: The potential mechanism for hydrotropic solubilization of riboflavin. Int J Pharm 397:36–43.
53. Evstigneev MP, Rozvadovskaya AO, Santiago AA, Mukhina YV, Veselkov KA, Rogova OV, Davies DB, Veselkov AN. 2005. A 1H NMR study of the association of caffeine with flavine mononucleotide in aqueous solutions. Russ J Phys Chem 79:573–578.