Rational design of dental materials using computational chemistry


Dental Materials (2005) 21, 47–55


Andrew J. Holder*, Kathleen V. Kilway

Department of Chemistry, University of Missouri-Kansas City, 5110 Rockhill Rd, RHFH 410H, Kansas City, MO 64110, USA

* Corresponding author. Tel.: +1 816 235 2293; fax: +1 816 235 6543. E-mail address: [email protected] (A.J. Holder).

KEYWORDS: Rational design; Computational chemistry; Semiempirical methods; Restorative dental materials; Polymer composite

Summary: One of our primary research emphases is the rational design of biomaterials. In this effort we apply a combination of theoretical and experimental approaches, both of which contribute directly to the completion of such a project. Computational chemistry has achieved routine status in modern chemical investigations, and the state of the art is briefly summarized. In such a difficult endeavor as the systematic development of biomaterials, computational chemistry is a natural partner for traditional chemists. Herein, we describe several successful applications of this synergy to the design of dental biomaterials. These include reactivity modeling, sensitization, and density. We also report preliminary progress of polymerization volume change experiments on epoxides that have been specifically designed to provide standardized results for computational analysis. © 2004 Published by Elsevier Ltd on behalf of the Academy of Dental Materials.

Introduction

The materials available to a society define its level of technology. This can be quite easily seen, as many historical periods are named for the new or predominant materials available to artisans to produce objects. For instance [1]:

Stone age
Neolithic ('new' stone age)
Pottery/glass age
Copper age
Bronze age
Iron/steel age
Electronic materials age
'Composites' age

A further characteristic of materials development is that earlier materials are continually improved. The glass available today is certainly much more varied in properties and of immensely better quality than the glass available to the ancient Egyptians. Given this, one of the primary tasks of researchers is to improve and expand the quality, variety, availability, and cost-effectiveness of such substances. With this in mind, we here illustrate how the fusion of experimental and computational approaches can accelerate and improve the process of materials development. Computational chemistry as a discipline has matured substantially in recent decades, due largely to the ready availability of high-speed computers. Calculations that theoreticians have known how to do for many years have now become feasible to perform on a routine basis.




This has revolutionized many areas of chemical research. The advantages of using computational chemistry as a research partner have come more slowly to materials science than to other chemical disciplines because of the size of the molecules to be addressed. This has changed in recent years, as ever more powerful computer systems and more rapid and reliable computational algorithms have been developed. It must be stated directly and understood clearly that computational chemistry cannot stand alone. It functions best as part of a cooperative effort, where it can serve as an advisor and tester of ideas. The specific computational methods applied here are quantum mechanics (both semiempirical and ab initio) and quantum mechanical quantitative structure-activity relationships (QMQSAR). The combination of techniques facilitates the development of predictive models as well as an increased understanding of the underlying physical phenomena governing the performance of a material in different environments. Such computational chemical methods underlie a systematic and progressive molecular rational design process that is intended to achieve performance goals in as few iterations as possible. This process is shown graphically in Fig. 1 and involves several steps. The objective is to continue to cycle through the steps (specifically D-G) until a material with acceptable characteristics is discovered, and the model system used to discover the material is refined and developed for further use. We believe that such a rational design process is the best way to develop new materials.

Computational chemistry in context

Computational chemistry is one of today's most rapidly expanding and exciting areas of scientific endeavor. New computer technologies have made the purchase and maintenance of computational resources less expensive than most other major chemical instrumentation, and a trained researcher can usually perform an extensive computational study in much less time than is required for a complete experimental counterpart. Given the rising cost of maintaining laboratories and trained personnel, computational chemistry often becomes a cost-effective alternative and companion to traditional bench chemistry. As with any other scientific method, the results from computational chemistry require careful interpretation. The limitations and results of each type of technique must be considered in the broader context of all the information available for a particular system.

Computational chemistry can be divided in a number of ways, but one convenient manner is to think of it as four branches (as shown in Fig. 2), all of which can be tied together through graphical input of the molecule and analysis of the results. Each of these branches will be discussed briefly below. Different computational methods provide different types of information at different levels of quality. Examples of the information available from the various methods are listed in Table 1. The level of computational effort also increases (exponentially) with the completeness of results, the quality of results, and the size of the treated system. Thus, the objectives of a particular project and the information it will require must be carefully matched against the computational methods to be employed and the associated effort and expense engendered.


Figure 1. Rational design process.

Figure 2. Branches of computational chemistry.

Table 1. Information available from computational methods. The table compares three classes of methods (molecular mechanics; semiempirical; ab initio/DFT) against the following data items, indicating for each method whether the property is predicted directly or is only indirectly available: heat of formation, entropy of formation, free energy of formation, heat of activation, entropy of activation, free energy of activation, heat of reaction, entropy of reaction, free energy of reaction, strain energy, vibrational spectra, dipole moments, optimized geometries, electronic bond order, electronic distribution, and transition states.


Graphical input and analysis

As computational programs have developed, the use of graphical tools for interpretation has become commonplace. Indeed, recent computational chemistry graduates may never have prepared a z-matrix or Cartesian coordinate definition for a molecule by hand, except as part of a class exercise! Furthermore, they are able to utilize display methodologies that were unknown just a few years ago to extract information from the (sometimes bewildering) mass of data and numbers that most computational chemistry programs produce. This results in a more intuitive and natural grasp of the results of the calculations and a concomitant greater utility of the methods. Graphical user interfaces (GUIs) are now almost as important a component in computational chemistry as the underlying theories of the chemical models themselves.

Molecular mechanics

One group of popular computational methods is based on the approximation of the various interactions present in molecular systems in terms of a force field. These procedures fall in the general category of Molecular Mechanics (MM). They are parameterized to fit a large body of experimental data by defining simple, conceptually intuitive algebraic expressions for bond stretching, angle bending, torsion, and other factors. For instance, some form of Hooke's Law describing the behavior of a spring is almost always used to express bond-stretching potentials. The constants that scale these expressions are different for different chemical environments. Molecular mechanics techniques do not, however, treat electrons, which limits the relevance of the MM model in many situations where more fundamental chemical information is required. MM finds its greatest utility in the prediction of conformational energies and structures, and is often applied in the study of biological systems because of their size and MM's computational speed and efficiency. Calculations using MM models are the fastest and least expensive of the computational chemistry approaches.
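A minimal sketch of such a harmonic (Hooke's Law) bond-stretch term is given below. The force constant, reference bond length, and coordinates are illustrative placeholders, not parameters taken from any published force field.

```python
# Sketch of a molecular-mechanics bond-stretch term: E = k * (r - r0)^2 summed over bonds.
# The constants below are illustrative placeholders, not values from a real force field.
import math

def bond_length(a, b):
    """Distance between two atoms given as (x, y, z) tuples, in angstroms."""
    return math.dist(a, b)

def stretch_energy(bonds, coords, k=300.0, r0=1.53):
    """Harmonic bond-stretch energy (kcal/mol) for a list of bonded atom-index pairs.
    k (kcal/mol/A^2) and r0 (A) are placeholder values roughly appropriate for a C-C bond."""
    return sum(k * (bond_length(coords[i], coords[j]) - r0) ** 2 for i, j in bonds)

# Example: a single C-C bond stretched slightly beyond its reference length.
coords = [(0.0, 0.0, 0.0), (1.58, 0.0, 0.0)]
print(stretch_energy([(0, 1)], coords))  # small positive strain energy
```

Angle bending, torsion, and nonbonded interactions are handled analogously, each with its own parameterized expression.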

Electronic structure methods


Two other types of procedures, the semiempirical and ab initio molecular orbital approaches, are examples of electronic structure methods applying quantum mechanics. They explicitly consider the electron distribution in chemical systems as a function of nuclear position. Electronic structure theories vary greatly in complexity and accuracy. Some models are essentially conceptual, requiring no computations, and are best suited to paper and pencil. Others only become usable when powerful computers are applied to their solution. At present, the best and most flexible chemical models are derived from quantum mechanics and are based on the Schrödinger Equation. The Schrödinger Equation relates the properties of electrons to those of waves and permits a mathematical description of atomic, and by extension, molecular characteristics. However, the standard form of the Schrödinger Equation can be solved exactly for only the simplest case, the hydrogen atom. Significant approximations are required to apply the Schrödinger wave function approach to problems of general chemical interest.
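In standard notation, the time-independent Schrödinger Equation takes the form

\[ \hat{H}\,\Psi = E\,\Psi, \]

where \(\hat{H}\) is the Hamiltonian operator containing the kinetic- and potential-energy terms for the electrons and nuclei, \(\Psi\) is the wave function, and \(E\) is the total energy of the system. Hartree-Fock theory, described next, approximates \(\Psi\) as a single antisymmetrized product of one-electron orbitals that are determined self-consistently.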

A commonly-used and essentially standard set of approximations and assumptions has come to be known as Hartree-Fock (HF) Theory and is the basis for the majority of work done in quantum chemistry at present.

Semiempirical HF methods

Semiempirical calculations ignore some of the less important aspects of HF theory that full ab initio treatments explicitly compute, so that fewer actual operations are performed. Also, the semiempirical approach uses empirically-determined parameters and parameterized functions to replace some sections of a more complete HF treatment. The approximations in semiempirical theory result in much more rapid single-point energy calculations than in either HF ab initio or density functional theory (DFT, discussed below) methodologies. The advantage gained in the energy calculations leads to semiempirical methods being some 100–1000 times faster overall than ab initio HF or DFT methods of comparable predictive quality. The most popular of the semiempirical methods are those developed by or derived from work by Michael J. S. Dewar [2]. They include MINDO/3 [3], MNDO [4], PM3 [5,6], AM1 [7], and SAM1 [8].

Ab initio HF methods

A rigorous execution of Hartree-Fock Theory is called an ab initio ('from first principles') approach. These calculations involve a nearly complete mathematical treatment of the theoretical model underlying Hartree-Fock Theory. Comprehensive calculations of this type result in a potentially enormous number of integrations and differentiations of complex algebraic formulae. The sheer number of separate computations can easily become so vast that only high-end workstation-class computers have the requisite speed, memory, and disk storage space for even moderately sized systems (i.e. fewer than 100 non-hydrogen atoms). Ab initio methods, by virtue of being derived from the Hartree-Fock assumptions and approximations, have theoretical inaccuracies that cause difficulties in certain cases. Perhaps the most important of these is the neglect of dynamic electron correlation effects in the motion of electrons within the self-consistent field used in the iterative solution process. Ab initio corrections are available for correlation (e.g., Møller–Plesset (MP) perturbation theory, configuration interaction (CI), or multi-configuration SCF (MCSCF)), but they enormously increase the expense and time required for the computation.

However, for certain problems, these treatments are required for sufficient accuracy. It should be emphasized that in the great majority of cases where ab initio methods have been rigorously applied, the results have been very good. However, the constraints listed above strictly limit the size and complexity of the systems feasible for full ab initio treatment.

Density functional theory methods

Another computational method, similar to other HF molecular orbital techniques, is density functional theory (DFT). The basic principle behind DFT is that electron density is a fundamental quantity that can be used to develop a rigorous many-body theory, applicable to any atomic, molecular, or solid-state system. In the mid-1960s, Kohn, Hohenberg, and Sham derived a formal proof of this principle as well as a set of equations (the Kohn–Sham equations), which are similar in form and function to the Hartree-Fock equations of molecular orbital theory. This formalism is such that electron correlation is inherently included in the method at no extra cost in computational efficiency, unlike the case with HF ab initio methods. The basic difference between HF theory and DFT can be summarized in the following way: for a given set of atomic positions, HF theory expresses the total energy of the system of nuclei and electrons as a function of the total wave function, whereas DFT expresses this total energy as a functional of the total electron density. Current DFT methods contain no empirical or adjustable parameters and are thereby acknowledged to be 'ab initio' approaches. (Note that both HF and DFT methods use wavefunctions described by mathematical expressions and that the coefficients of these are adjusted in such a manner as to mimic experimental results for the particular atoms.) Acceptance of DFT methods by chemists is growing now that general-purpose computational packages are available. This acceptance is also based on literature reports of systematic comparisons of DFT methods with experiment as well as with HF and post-HF methods. Chemists are also quickly learning that DFT methods are slightly more efficient than ab initio HF methods. Therefore, DFT calculations are practical for molecular systems which may be too large and/or troublesome for ab initio HF techniques. For example, DFT methods have been particularly successful in predicting the properties of transition metal systems, which have been notoriously difficult for HF and post-HF techniques.
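This distinction can be summarized symbolically: HF theory obtains the total energy from the many-electron wave function,

\[ E_{\mathrm{HF}} = \langle \Psi \,|\, \hat{H} \,|\, \Psi \rangle, \]

whereas DFT expresses the energy as a functional of the electron density,

\[ E_{\mathrm{DFT}} = E[\rho(\mathbf{r})], \qquad \rho(\mathbf{r}) = \sum_{i}^{\mathrm{occ}} |\phi_i(\mathbf{r})|^2, \]

where the \(\phi_i\) are the occupied Kohn–Sham orbitals from which the density is built.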


Quantitative structure–activity relationships (QSAR) methods

In the case of bulk properties and performance characteristics of materials, the linkage between theory and experiment has often been difficult to establish. In 1963–1964, papers by Hansch and Fujita [9,10] and Free and Wilson [11] introduced a new concept in computational chemistry: bulk properties and even biological activities could be successfully predicted using models based on quantitative information extracted from the structures of molecules. Procedurally, these 'descriptors' are combined with experimental information regarding a property or activity of interest, and a correlative relationship is mathematically derived. Such a relationship is conveniently cast in terms of a multilinear equation and is termed a quantitative structure–activity relationship (QSAR). Modern QSAR methods are quite advanced and are routinely applied to investigations in many areas [12,13]. QSAR models have both explanatory and predictive power. The explanatory power of QSAR models lies in elucidation of the specific structural features of molecules that, when expressed in bulk, lead to particular chemical activities and properties. Many well-known guidelines and rules of thumb already exist for predicting a particular activity from a molecular structure. These function as 'empirical QSARs'. An example of this is the well-known correspondence between branching and boiling points as described in sophomore organic chemistry.

For such a study to be conducted, high-quality experimental data paired with chemical structures must be gathered. Next, QM calculations are run on these molecules to obtain information that will be used in the correlative process to generate descriptors (chemical information). Descriptors fall into a number of different categories, including the following: constitutional, topological, geometric, electrostatic, quantum mechanical, thermodynamic, vibrational, and solvation. Once descriptors have been extracted and computed, correlated models can be constructed by any one of several approaches. A number of advanced regression analysis methods are present in the program CODESSA. These include linear, multilinear, principal components analysis, and nonlinear partial least squares. Each can be applied separately to derive correlations from the massive amount of data generated from the AM1 (QM) results. CODESSA has excellent graphical utilities that show plots of the various correlations vs. the data.
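To make the correlative step concrete, the sketch below fits a multilinear QSAR of the form y = b0 + b1·d1 + b2·d2 by ordinary least squares. The descriptor values and activities are toy numbers invented for illustration; in practice the descriptors would be generated from the QM results by a package such as CODESSA, as described above.

```python
# Minimal multilinear QSAR sketch (toy data; descriptor values are hypothetical).
import numpy as np

# Rows = molecules; columns = two hypothetical descriptors
# (e.g., a reactivity index and a surface-area term).
descriptors = np.array([
    [0.12, 45.0],
    [0.35, 52.0],
    [0.28, 61.0],
    [0.50, 48.0],
    [0.44, 70.0],
])
observed = np.array([1.1, 2.0, 2.2, 2.6, 3.1])  # measured property/activity (toy values)

# Add an intercept column and solve the least-squares problem.
X = np.column_stack([np.ones(len(observed)), descriptors])
coeffs, *_ = np.linalg.lstsq(X, observed, rcond=None)
predicted = X @ coeffs

print("coefficients (b0, b1, b2):", coeffs)
print("predicted values:", predicted)
```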


The accuracy and reliability of the model are assessed by statistical analysis of the model's ability to reproduce experimental results. Such quality scores as t-test values, coefficients of correlation, cross-validated coefficients of correlation, and F-statistics are applied to the entire correlation, whereas variance inflation factors (VIFs) and significance levels (p values) are computed for each descriptor. The methods applied in this research have a long and successful history of use for the prediction of both physical and chemical properties and biological activities. The chemical significance of particular descriptors to a certain property is a decision that must be made by a knowledgeable scientist. Many correlations are rejected based on the realization that the descriptors from the automated selection are only accidentally related to the property under investigation.
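The quality scores mentioned above can be computed directly from a fitted multilinear model. The sketch below shows the standard formulas for R², the F ratio, and per-descriptor variance inflation factors; the data are the same kind of toy values used in the previous sketch and carry no experimental meaning.

```python
# Sketch of QSAR quality statistics: R^2, F ratio, and variance inflation factors (VIFs).
import numpy as np

def r_squared(y, y_hat):
    ss_res = np.sum((y - y_hat) ** 2)
    ss_tot = np.sum((y - y.mean()) ** 2)
    return 1.0 - ss_res / ss_tot

def f_ratio(y, y_hat, n_descriptors):
    """Overall F statistic for a multilinear fit with n_descriptors terms plus an intercept."""
    n, r2 = len(y), r_squared(y, y_hat)
    return (r2 / n_descriptors) / ((1.0 - r2) / (n - n_descriptors - 1))

def vifs(descriptors):
    """VIF_k = 1 / (1 - R^2_k), regressing descriptor k on the remaining descriptors."""
    out = []
    for k in range(descriptors.shape[1]):
        y = descriptors[:, k]
        X = np.column_stack([np.ones(len(y)), np.delete(descriptors, k, axis=1)])
        coeffs, *_ = np.linalg.lstsq(X, y, rcond=None)
        out.append(1.0 / (1.0 - r_squared(y, X @ coeffs)))
    return out

# Toy example (placeholder values only).
D = np.array([[0.12, 45.0], [0.35, 52.0], [0.28, 61.0], [0.50, 48.0], [0.44, 70.0]])
y = np.array([1.1, 2.0, 2.2, 2.6, 3.1])
X = np.column_stack([np.ones(len(y)), D])
b, *_ = np.linalg.lstsq(X, y, rcond=None)
print(r_squared(y, X @ b), f_ratio(y, X @ b, D.shape[1]), vifs(D))
```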

Software tools

A variety of software tools are available that implement the methods described here. They range widely in capabilities, expense, and complexity. In the infancy of computational chemistry, programs were often obtained free of charge directly from a university or research group. As methods have become standardized, companies now supply most of the software used in computational chemistry. This author primarily uses AMPAC with Graphical User Interface [14], CODESSA [15], and GAUSSIAN03 [16].

Reactivity modeling

Chemical reactivity is the basis of polymerization reactions. Without a clear understanding and control of such reactivity, most chemical research becomes Edisonian in approach. The basic premise in reactivity modeling is that for a polymeric system to be feasible for dental applications it must have a certain reactivity profile. This profile will define how rapidly and completely the monomer will be converted to polymer in the oral environment. A vast array of factors bears on this critical performance issue, some of the more important of which are amenable to treatment by computational chemical methods. As an example of how our methods can be used to study reactivity, we present a summary of results for the reaction of BADGE and BMOCHM-TOSU. These results are more fully described in a complete paper [17].



The combination of these two molecules through cationically-initiated photopolymerization leads to over forty different reaction modes due to the significant number of nucleophilic and electrophilic active sites (see Fig. 3 for a graphical summary). (In Fig. 3, the unsubstituted TOSU is presented to illustrate the reactivity potential for the TOSU core of the BMOCHM-TOSU.) The result of this study was to classify the possible products into three categories based on the energies of activation of the various reaction modes. With this information in hand, the experimentalists were able to quickly and reliably determine which products were indeed present in the final reaction mixture by eliminating a significant number of potential products.
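The classification step can be pictured with a short sketch that bins computed activation energies into three categories. The threshold values and energies below are entirely hypothetical and are not taken from the study; they only illustrate the sorting logic.

```python
# Hypothetical sketch: sort reaction modes into three categories by activation energy (kcal/mol).
# Thresholds and energies are illustrative placeholders, not values from the BADGE/BMOCHM-TOSU study.
def classify_mode(ea, low=20.0, high=35.0):
    if ea <= low:
        return "likely product"
    if ea <= high:
        return "possible product"
    return "unlikely product"

modes = {"mode A": 14.2, "mode B": 27.5, "mode C": 41.0}  # made-up activation energies
for name, ea in modes.items():
    print(name, "->", classify_mode(ea))
```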

QSAR for skin sensitization

The ability of computational methods to predict biological outcomes is also critical for their applicability in the development of dental materials. As an example of the capability of computational methods in this area, we here present a brief summary of a study we have performed on skin sensitization using the mouse L929 local lymph node assay (LLNA).

Figure 3. Reactive sites.

These results are more fully described in a complete paper [18]. In this QSAR study, we focused on the LLNA because it provides highly reproducible and quantitatively variable results with minimal stress on relatively inexpensive experimental animals. The LLNA's correspondence with human skin sensitization is 72%, as high as that of the more widely used Guinea Pig Maximization Test (GPMT). Also, according to literature results, the LLNA agrees with the GPMT on sensitizing potential in 89% of cases [19]. We elected to carry out our study by transforming the experimental dose/response data into stimulation indexes, yielding the concentrations of the various materials that would cause triple the sensitization of a control solution. We selected a training set of 50 molecules and developed a two-descriptor QSAR model with an R² coefficient of 0.770. The F ratio for this correlation was 78.6, and the variance inflation factor (VIF) for both descriptors was near 1.0. All of the quality indicators suggest that the variance in the data has been appropriately accounted for by the equation. One of the descriptors was related to reactivity, and the other to electrophilic surface area. These fit nicely with a physical/chemical explanation of sensitization involving transport across the membrane and reactivity of the material once past the membrane (see Fig. 4 for the correlation). While this level of correlation is strongly suggestive of a clear relationship, we have chosen to treat the data and results in such a fashion as to classify materials as weakly/non-sensitizing, moderately sensitizing, or strongly sensitizing (see Fig. 5). Using these classifications, we were able to correctly classify 59 out of 67 molecules (85%) based on previously reported values from the literature (see Fig. 6). This is a real success story for the application of general-purpose computational chemistry methods to an experimental system.
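A small sketch of how such a three-class scheme can be scored against literature values is shown below. The cutoff values, predictions, and reported classes are placeholders, not those of the published model or training set.

```python
# Hypothetical three-class sensitization scoring sketch (all values are placeholders).
def sensitization_class(value, weak_cutoff=1.0, strong_cutoff=2.0):
    if value < weak_cutoff:
        return "weakly/non-sensitizing"
    if value < strong_cutoff:
        return "moderately sensitizing"
    return "strongly sensitizing"

predicted = [0.4, 1.3, 2.6, 0.9]  # model predictions (toy values)
reported = ["weakly/non-sensitizing", "moderately sensitizing",
            "strongly sensitizing", "moderately sensitizing"]

hits = sum(sensitization_class(p) == r for p, r in zip(predicted, reported))
print(f"correctly classified {hits} of {len(reported)}")
```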



Figure 4. Correlation of experimental and calculated data for LLNA skin sensitization.


Volume change upon polymerization

One of the most difficult materials problems in polymeric dental restoratives is the mitigation of polymerization shrinkage and the attendant stress on the restorative and the surrounding tooth structure. Computationally-directed rational biomaterials design can significantly shorten the discovery cycle for new materials by focusing experimental attention on promising candidates and by suggesting changes to their structures in a systematic fashion in order to improve their desirable properties. One of the key problems in applying computational chemical (and especially QMQSAR) methods here is the lack of coherent data sets from which to develop models. In the case of polymerization volume change, such data simply do not exist in the current literature. Even though some results exist, there are no systematic studies available keeping the same variables (such as filler, method, and reaction environment) constant, making the data unusable for computational prediction.

Figure 5. Classification scheme for skin sensitizing agents.

Figure 6. Training set by sensitizing class.

The phenomenon of volume change upon polymerization has long been observed [20]. This point is evident in an excellent review by Sakaguchi and co-workers [21]. In that work, a comparison of four different volume change techniques was carried out on a single preparation of a model dental composite. Not only were the results different, but the strain values ranged from 0.09 to 4.23%, and this range correlated with the constraint of the experiment. Previously, data collection was centered on particular formulations with potential clinical relevancy. The goal of our volume change data collection is to provide systematic data for understanding the how and the why of polymerization volume change [17]. To this end, we have limited as many variables as possible so as to simplify the problem while still yielding general conclusions. We are utilizing a wide variety of structural moieties in our study so as to obtain data that will allow us to discern how the chemical structure of a monomer correlates with its observed polymerization behavior. Our chosen mechanism is cationic and utilizes standard photoacid methods. We have also standardized our filler, a silicon oxide material that has been passivated with respect to acidic functionalities and has not been treated with a silanizing agent. Our method for the experimental measurement of polymerization volume change is mercury dilatometry. There are several reasons for selecting mercury dilatometry as our standard technique: it is a well-known procedure; the constraint factor for this method is similar to that of most dental preparations; it excludes interaction between the polymerizing composite and the environment; and, finally, the collection of the experimental data is rapid, reproducible, and researcher-independent.



Figure 7. Structures and designations of oxirane monomers.

Since much of our initial work has focused on oxirane-containing composite materials (NIDCR DE09696, running through September of 2006) and there is little experimental data available for these systems, our initial set of monomers included eight mono- and dioxiranes. The complete experimental details are included in a submitted publication. The structures of the molecules and their designations are shown in Fig. 7. The experimental results, including the final monomer volume change values, are presented in Table 2. During the experimental volume change measurement, the density of the original monomer must be used to determine the volume change of the monomer alone in the composite. In this process, it is assumed that the filler's volume does not change and that the observed composite volume change is due only to reaction of the monomer. The Rule of Mixtures has been used to extract the pure monomer volume change data from the composite volume change results. While most of the densities of the liquid monomers are known, there were some compounds in our study for which the values were unknown. For this reason, we have developed a QMQSAR model to predict the densities of organic monomers. As can be seen from Table 2, the volume changes predicted using the two sets of densities are essentially identical for this set of materials, and this result demonstrates the efficacy of using our density QMQSAR when experimental densities are unavailable.
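A minimal sketch of this rule-of-mixtures correction is given below, assuming that the measured composite shrinkage is attributed entirely to the resin volume fraction computed from the component masses and densities. The function name and the numerical values are illustrative only, not data from Table 2.

```python
# Sketch: back out the monomer-only volume change from a measured composite volume change
# via a simple rule of mixtures, assuming the filler volume does not change.
# All numbers are placeholders, not measurements from this study.
def monomer_volume_change(composite_dV_pct, m_monomer, rho_monomer, m_filler, rho_filler):
    v_monomer = m_monomer / rho_monomer       # resin volume before cure (mL)
    v_filler = m_filler / rho_filler          # filler volume, assumed constant (mL)
    resin_fraction = v_monomer / (v_monomer + v_filler)
    return composite_dV_pct / resin_fraction  # shrinkage attributed to the resin alone

# Example: a 50:50 (by mass) mix of a monomer (1.10 g/mL) and a silica filler (2.20 g/mL).
print(monomer_volume_change(-1.0, m_monomer=1.0, rho_monomer=1.10,
                            m_filler=1.0, rho_filler=2.20))
```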

Table 2. Results of oxirane polymerization volume change experiments (densities in g/mL).

Compound        Composite ΔV (%)   Expt'l density   Monomer ΔV, expt'l density (%)   Calc'd density   Monomer ΔV, calc'd density (%)
EEB (a)         −0.12 ± 0.09       1.054            −0.21 ± 0.17                      1.071            −0.21 ± 0.17
DO (b)          −4.1 ± 0.9         0.997            −9.3 ± 1.2                        0.997            −9.3 ± 1.2
DCO (a)         −0.31 ± 0.23       1.138            −0.56 ± 0.42                      1.124            −0.55 ± 0.42
BGE (a,c)       −1.5 ± 0.5         –                –                                 1.160            −2.6 ± 0.3
GMPE (a,c,d)    −1.4 ± 0.1         –                –                                 1.162            −2.0 ± 0.1
ECHMECHC (b)    −2.0 ± 0.4         1.170            −3.7 ± 0.7                        1.223            −3.7 ± 0.7
GPE (a,d)       −4.2 ± 0.2         1.109            −8.0 ± 0.4                        1.101            −7.9 ± 0.4
DGCDC (a)       −0.87 ± 0.13       1.220            −1.6 ± 0.2                        1.19             −1.6 ± 0.2

(a) Composite did not polymerize to hardness.
(b) Composite did polymerize to hardness.
(c) Experimental densities are not known for these materials.
(d) Resin was melted in hot water and then blended with filler prior to measurement.

All monomers in this preliminary study exhibited shrinkage to some degree. The results were reproducible (i.e. independent of researcher). The volume change was also measured for a semisolid, glycidyl phenyl ether, extending the melting point range of our monomer set to compounds that melt above room temperature. This experimental and theoretical study is a clear example of true synergy between the computational and experimental sides of chemistry.

Summary

Computational chemistry has contributed substantial new capabilities and methodologies to chemical research in recent decades. As shown here, various methods in this field have now begun to be applied to biomaterials research. The application of these techniques has allowed experimental groups to focus their efforts more closely by screening candidates for suitability and by providing information used in the rational design of new monomers. Is an expert required for the application of modern computational techniques? We think so. The interpretation of the results and the explanation of the meaning of descriptors is certainly a general chemical problem, but running the programs and obtaining data is no trivial task. It requires specialized knowledge and skill, just as does any other area of chemical research. As has become apparent, the era of the lone investigator blazing new trails is largely past, and team efforts are best for advancing knowledge. We feel that collaboration between theorists and experimentalists offers the best hope for success.

Acknowledgements

Financial support was provided by NIH Grants DE09696 and DE07294 (AJH), NIH Grant R15BM61314 (KVK), the University of Missouri-Kansas City, and Semichem.

References

[1] Sass SL. The substance of civilization. New York: Arcade Publishing; 1998.
[2] Most of this work was accomplished at the University of Texas at Austin.
[3] Dewar MJS, Bingham RC, Lo DH. Ground states of molecules. XXV. MINDO/3: improved version of the MINDO semiempirical SCF-MO method. J Am Chem Soc 1975;97:1285–93.


[4] Dewar MJS, Thiel W. Ground states of molecules 38. The MNDO method. Approximations and parameters. J Am Chem Soc 1977;99:4899–907.
[5] Stewart JJP. Optimization of parameters for semiempirical methods I. Method. J Comput Chem 1989;10:209–20.
[6] Stewart JJP. Optimization of parameters for semiempirical methods II. Applications. J Comput Chem 1989;10:221–64.
[7] Dewar MJS, Zoebisch EG, Healy EF, Stewart JJP. AM1: a new general purpose quantum mechanical molecular model. J Am Chem Soc 1985;107:3902–9.
[8] Dewar MJS, Jie C, Yu G. SAM1: the first of a new series of general purpose quantum mechanical molecular models. Tetrahedron 1993;49:5003–38.
[9] Hansch C, Fujita T. ρ-σ-π analysis: a method for the correlation of biological activity and chemical structure. J Am Chem Soc 1964;86:1616–26.
[10] Hansch C, Muir RM, Fujita T, Maloney P, Geiger E, Streich M. The correlation of biological activity of plant growth regulators and chloromycetin derivatives with Hammett constants and partition coefficients. J Am Chem Soc 1963;85:2817.
[11] Free SM, Wilson SW. A mathematical contribution to structure–activity studies. J Med Chem 1964;7:395–9.
[12] Beck GM, Neau SH, Holder AJ, Hemmenway JN. Evaluation of quantitative structure property relationships necessary for enantioresolution with lambda-carrageenan and sulfobutylether lambda-carrageenan. Chirality 2000;12:688–96.
[13] Yourtee DM, Holder AJ, Smith R, Morrill JA, Kostoryz E, Brockman W, et al. Quantum mechanical QSARs to avoid mutagenicity in dental monomers. J Biomat Sci, Polym Ed 2001;12:89–105.
[14] AMPAC with Graphical User Interface, Ver. 8.0. Semichem, Shawnee Mission, KS; 2004. Web: www.semichem.com.
[15] CODESSA. Semichem, Shawnee Mission, KS; 1995.
[16] Frisch MJ, Trucks GW, Schlegel HB, Scuseria GE, Robb MA, Cheeseman JR, Montgomery JA Jr, Vreven T, Kudin KN, Burant JC, Millam JM, Iyengar SS, Tomasi J, Barone V, Mennucci B, Cossi M, Scalmani G, Rega N, Petersson GA, Nakatsuji H, Hada M, Ehara M, Toyota K, Fukuda R, Hasegawa J, Ishida M, Nakajima T, Honda Y, Kitao O, Nakai H, Klene M, Li X, Knox JE, Hratchian HP, Cross JB, Adamo C, Jaramillo J, Gomperts R, Stratmann RE, Yazyev O, Austin AJ, Cammi R, Pomelli C, Ochterski JW, Ayala PY, Morokuma K, Voth GA, Salvador P, Dannenberg JJ, Zakrzewski VG, Dapprich S, Daniels AD, Strain MC, Farkas O, Malick DK, Rabuck AD, Raghavachari K, Foresman JB, Ortiz JV, Cui Q, Baboul AG, Clifford S, Cioslowski J, Stefanov BB, Liu G, Liashenko A, Piskorz P, Komaromi I, Martin RL, Fox DJ, Keith T, Al-Laham MA, Peng CY, Nanayakkara A, Challacombe M, Gill PMW, Johnson B, Chen W, Wong MW, Gonzalez C, Pople JA. GAUSSIAN03. Pittsburgh, PA: Gaussian, Inc.; 2003.
[17] Manuscript accepted in Macromolecular Theory and Simulation.
[18] Manuscript in preparation.
[19] Hanneke KE, Tice RR, Carson BL, Margolin BM, Stokes WS. ICCVAM evaluation of the murine local lymph node assay. Regul Toxicol Pharm 2001;34:274–86.
[20] Sadhir RJ, Luck RM. Expanding monomers: synthesis, characterization, and applications. Boca Raton, FL: CRC Press; 1992.
[21] Sakaguchi RL, Wiltbank BD, Shah NC. Critical configuration analysis of four methods for measuring polymerization shrinkage strain of composites. Dent Mater 2004;20(4):388–96.