Predicting interactions from mechanistic information: Can omic data validate theories?

Christopher J. Borgert

Applied Pharmacology and Toxicology, Inc., 2250 NW 24th Avenue, Gainesville, FL 32605, USA
Center for Environmental and Human Toxicology, Department of Physiological Sciences, College of Veterinary Medicine, University of Florida, Gainesville, FL 32605, USA

Received 27 June 2006; revised 14 December 2006; accepted 3 January 2007. Available online 12 January 2007.
Abstract

To address the most pressing and relevant issues for improving mixture risk assessment, researchers must first recognize that risk assessment is driven by both regulatory requirements and scientific research, and that regulatory concerns may expand beyond the purely scientific interests of researchers. Concepts of “mode of action” and “mechanism of action” are used in particular ways within the regulatory arena, depending on the specific assessment goals. The data requirements for delineating a mode of action and predicting interactive toxicity in mixtures are not well defined from a scientific standpoint due largely to inherent difficulties in testing certain underlying assumptions. Understanding the regulatory perspective on mechanistic concepts will be important for designing experiments that can be interpreted clearly and applied in risk assessments without undue reliance on extrapolation and assumption. In like fashion, regulators and risk assessors can be better equipped to apply mechanistic data if the concepts underlying mechanistic research and the limitations that must be placed on interpretation of mechanistic data are understood. This will be critically important for applying new technologies to risk assessment, such as functional genomics, proteomics, and metabolomics. It will be essential not only for risk assessors to become conversant with the language and concepts of mechanistic research, including new omic technologies, but also for researchers to become more intimately familiar with the challenges and needs of risk assessment.

© 2007 Elsevier Inc. All rights reserved.

Keywords: Mechanism of action; Mode of action; Drug interaction; Chemical mixtures; Toxicogenomics
Introduction

Toxicity data are lacking for most drug and chemical mixtures, and so risk estimation typically relies on model predictions. Predicting mixture toxicity has traditionally been approached in one of two ways. The first is to predict toxicity of the mixture of interest based on a similar mixture for which toxicity data are available. The challenges presented by this approach have been summarized (Borgert, 2004) and will not be expanded here. The second approach is to predict the toxicity of the mixture from toxicity data on individual components of the mixture, including the pharmacological and toxicological interactions that may be produced by combined action of the components. This paper addresses the second approach.
[email protected]. 0041-008X/$ - see front matter © 2007 Elsevier Inc. All rights reserved. doi:10.1016/j.taap.2007.01.002
Throughout this paper, “chemical” is used in the general sense and may include drugs, pesticides, food ingredients, etc. While toxicity data on individual chemicals are often available, reliable and relevant data on interactions are lacking for most chemical combinations (Borgert et al., 2001; EPA, 1988; Hertzberg and MacDonell, 2002; Hertzberg and Teuschler, 2002). For this reason, mixture risk assessments must often predict the combined action of chemicals in mixtures in the absence of clear empirical data on which to base those predictions. Here again, at least two fundamentally different approaches are possible. One approach would be to make mathematical predictions about combined action based purely on the shape of the dose–response curves of the individual components of the mixture; in other words, to mathematically predict the aggregate dose–response curve from knowledge about the individual dose–response curves. This approach has received little attention, probably because a clear biological basis for it has never been developed.
The more widely used approach predicts which of two mathematical models of combined action best applies based on mechanistic presumptions about the individual chemicals in the mixture. The rationale behind using mechanistic understanding to predict the combined action of chemicals in mixtures relies implicitly on the terminology and concepts used to define interaction, non-interaction, and mode or mechanism of action. This paper explores these concepts and the assumptions and limitations inherent in each, and suggests ways of integrating research on interactions more tightly with the needs of risk assessment.

Interaction and mode of action

Interactions are of two fundamental types: synergistic or antagonistic. Synergism means that when chemicals are administered together, more of a particular response is produced than was expected, or that a given level of a particular response is produced with a lower dose of chemicals than expected. Antagonism means the converse: that when chemicals are administered together, less of a particular response is produced than expected, or that a higher dose than expected is required to produce a given level of the particular response. These definitions create a logical conundrum if the goal is to predict interactions, which are defined as the unexpected. Furthermore, interactions with significant biological consequences may be relatively rare, and synergisms may be opposed by antagonisms. Together, these factors may justify, at least in part, the implicit assumption used in mixture risk assessment that non-interaction is expected.

The concept of non-interaction has both pharmacologic and mathematical implications, but these implications are not necessarily related to one another. A true test of pharmacological non-interaction would require a comparison of biological outcome between conditions that allow and disallow interactions. Since few experimental systems provide the opportunity to conduct such comparisons, non-interaction has been defined on the basis of mathematical models. Two classical models for non-interaction have gained acceptance and widespread use in pharmacology and toxicology. These are Loewe additivity (Loewe and Muischneck, 1926), also called dose addition, and Bliss independence (Bliss, 1939), also called response addition. Both models involve simple addition of parameters related to the components of the mixture, but as implied by their common names, Bliss adds responses whereas Loewe adds doses. Although neither model has a clear biological basis, deductive reasoning has been used to align the parameters added by each model with notions about the mechanistic relatedness of the components of the mixture.

Bliss independence assumes that a non-interacting chemical produces a certain level of response as if the other chemicals in the mixture are not present. Predicting the response of the mixture then becomes an exercise in summing the level of response produced by each individual component of the mixture.
If the level of response produced by the individual components of a mixture is independent and can be combined by simple summation, inductive reasoning leads to the assumption that those levels of response must have been produced by independent pharmacologic/toxicologic mechanisms. Thus, Bliss independence (response addition) is used in mixture risk assessments to predict the combined response to a mixture of chemicals believed to produce that response by different (independent) mechanisms of action.

Loewe additivity assumes that chemicals producing a particular response can be treated as simple dilutions of one another, differing only in potency, and thus, Loewe additivity (dose addition) has been aligned with the combined response to chemicals that share a similar mechanism of action. This alignment is a logical extension of Berenbaum's assertion (Berenbaum, 1981) that the pharmacologic response to multiple doses of a single chemical must always be additive, by mathematical definition. In other words, two 325 mg aspirin tablets must produce the same level of response as one 650 mg aspirin tablet, by simple mathematical definition. He argued that this must be true regardless of whether the chemical shows “self-interaction,” and thus, that dose addition serves as a clear sham model of non-interaction for interaction studies.¹

Although Berenbaum argued that the choice of a non-interaction model for chemical combination studies should not be based on presumptions about mechanisms of action (Berenbaum, 1989), his argument that multiple doses of a single chemical serve as the proof-of-principle for dose addition appears to have led to the assumption that since multiple doses of the same chemical have the same mechanism of action and are dose additive, mixtures of different chemicals with similar mechanisms will also behave by dose addition. It is important to appreciate the logical impediment to testing this assumption empirically, and why caution must be exercised when interpreting studies that purport to have empirically tested the correlation of similar mechanism and dose addition; confirmation (or refutation) of the correlation depends entirely on the accuracy and level of mechanistic understanding of the chemicals used to test the correlation (Borgert et al., 2004; Greco, 2001; Berenbaum, 1989).

Regardless of the strength of the empirical basis for correlating similar and dissimilar mechanisms with dose and response addition, predicting the toxicity of mixtures for risk assessment has become, de facto, an exercise in predicting the mechanistic similarity and dissimilarity of mixture constituents. Mechanistic similarity, however, is a vague concept that has yet to be clarified scientifically (Borgert et al., 2004). Imprecise use of terminology is largely to blame for at least some of the vagueness, so it is important to clarify a few definitions.² A mechanism of action is the detailed, step-wise sequence of events proceeding from absorption of an effective dose of a substance to the manifestation of a specific biological effect. A “mode of action” is a category of mechanisms that share common key features. Many more chemicals will share modes of action than have the same mechanism.

¹ In this context, “self-interaction” would mean that there is not a constant proportionality between changes in response relative to changes in dose.
² The definitions of mode and mechanism of action provided here are consistent with those in the recent published literature (Dellarco and Wiltse, 1998; Schlosser and Bogdanffy, 1999; Sonich-Mullin et al., 2001; Borgert et al., 2004).
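To make the two reference models concrete, the following minimal sketch computes a response-addition (Bliss) and a dose-addition (Loewe) prediction for a hypothetical two-chemical mixture. The Hill-type curves, EC50 values, and doses are assumed purely for illustration and are not taken from any of the studies cited here.

```python
# Illustrative sketch only: a hypothetical two-chemical mixture with Hill-type
# dose-response curves. All parameter values are assumed for illustration.

def effect(dose, ec50, hill=1.0, emax=1.0):
    """Fractional response (0-1) of a single chemical under a hypothetical Hill model."""
    return emax * dose**hill / (ec50**hill + dose**hill)

# Assumed potencies: chemical B is 10-fold less potent than chemical A,
# with parallel (same-slope) dose-response curves.
ec50_a, ec50_b = 1.0, 10.0
dose_a, dose_b = 0.5, 5.0          # component doses in the mixture (arbitrary units)

# Bliss independence (response addition): each component is assumed to act as
# if the other were absent, and the fractional responses combine probabilistically.
e_a, e_b = effect(dose_a, ec50_a), effect(dose_b, ec50_b)
bliss = e_a + e_b - e_a * e_b

# Loewe additivity (dose addition): components are treated as dilutions of one
# another. In general this is the isobole equation d_a/EC_a(E) + d_b/EC_b(E) = 1;
# with parallel curves it reduces to summing potency-scaled doses, which is also
# why two 325 mg doses of one chemical must behave like a single 650 mg dose
# (Berenbaum's sham-combination argument).
relative_potency_b = ec50_a / ec50_b
loewe = effect(dose_a + dose_b * relative_potency_b, ec50_a)

print(f"Bliss independence (response addition) prediction: {bliss:.2f}")
print(f"Loewe additivity (dose addition) prediction:       {loewe:.2f}")
```

Even for this simple hypothetical mixture, the two reference models give different predictions for the same component doses, which is one reason the mechanistic grounds for choosing between them matter for risk estimation.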
If the mechanism of action has been defined, the mode of action is eminently determinable because it is a subset of what is known. Knowing the mode of action, however, leaves many aspects of the mechanism unaddressed. Nonetheless, “mode” and “mechanism” are often used ambiguously and interchangeably in the literature, and so understanding clearly what is being discussed often requires a consideration of the context in which the terms are used and the underlying data to which the terms are applied (Borgert et al., 2004). Even when the terminology is clear, two fundamental questions remain: what are the minimum data requirements for determining a mode of action, and is that amount of data sufficient to warrant assuming dose addition in mixtures? Extensive discussions of those issues have recently been published (McCarty and Borgert, 2006a, 2006b; Borgert et al., 2004) and only a brief overview will be provided here.

Mode of action in mixtures risk assessment

It is important to recognize that the regulatory paradigm for using mechanistic data in risk assessment is still evolving, but currently ranges from cancer dose–response assessment to categorization of chemicals into groups for mixture risk assessment. Individual data requirements appear to have been developed to meet these various assessment needs rather than to conform to an overarching scientific theory about mixture toxicity. To depart from the default non-threshold assumption for carcinogenesis, EPA's cancer assessment guidelines (US EPA, 2005) require mechanistic data to assess whether a particular chemical operates by a mechanism that would produce a carcinogenic threshold, below which there is no calculable risk of cancer. These guidelines recommend that sufficient mechanistic data be developed to establish the mode of action, which is said to include mechanistic data from several levels of biological organization, including both pharmacokinetic and pharmacodynamic characterizations, and that the causal links between the key mechanistic steps and cancer be established. It is noteworthy that this relatively rigorous standard for mechanistic characterization is applied for the purpose of establishing a dose–response feature for single chemicals.

Several guidance documents address the use of mechanistic data for the purpose of assessing dose–response characteristics of mixtures. EPA's Office of Pesticides uses mechanistic data to decide which pesticides share a common mechanism of action and would be expected to be dose additive for cumulative risk estimation and registration (US EPA, 1999). According to a working group convened to decide the appropriate criteria, pesticides acting via a common mechanism of toxicity cause the same critical effect, act on the same molecular target in the same target tissue, and share the same biochemical mechanism, possibly through a common toxic intermediate (Mileson et al., 1998). Ambiguous as it seems to define “mechanism of toxicity” on the basis of “biochemical mechanism,” the working group concluded that these succinct criteria indeed identify pesticides that would be dose additive in mixture.
EPA's supplementary guidance for the risk assessment of chemical mixtures (US EPA, 2000) and ATSDR's guidance for developing interaction profiles (US DHHS, 2001) call for the application of dose-addition models to chemicals with similar modes of toxicity, and response-addition (independence) models for chemicals that operate by dissimilar modes of toxicity. These documents do not require more than similar target organs to apply dose addition, but EPA states that establishing a similar mode of action constitutes a stronger basis for concluding that a mixture would produce dose additive toxicity. This formulation provides the opportunity for considerable professional judgment about the data needed to establish that modes of toxicity are sufficiently similar to use dose addition to predict mixture toxicity.

EPA uses a more stringent set of criteria, first proposed by Safe (1990), for assessing risks of mixtures of polychlorinated dibenzodioxins and dibenzofurans by a toxic equivalency factor approach. The so-called TEF approach is simply an application of dose addition based on binding affinity for the Ah receptor, the obligate event for some effects of 2,3,7,8-TCDD. The assumptions implicit in utilizing the TEF approach, and thus the factors considered necessary to validate its application, include that the individual compounds all act through the same biologic or toxic pathway, that the effects of individual chemicals in a mixture are dose additive at submaximal levels of exposure, that the dose–response curves for different congeners are parallel, and that the organotropic manifestations of all congeners are identical over the relevant range of doses. The approach was intended for true chemical congeners, which would be expected to share detoxification and elimination pathways. It is the only formulation of mechanistic criteria that includes the more rigorous pharmacological requirement of an empirical demonstration of dose addition.

The U.S. Food and Drug Administration (FDA) focuses primarily on drug metabolism and transport processes in assessing the potential for serious drug interactions. Although FDA notes the potential importance of pharmacodynamic interactions, its guidance to industry involves identifying the major metabolic pathways that affect the test drug and its metabolites, including specific enzymes that mediate the production of metabolites, drug elimination, and formation of intermediates (US FDA, 2004). FDA expects industry to explore and anticipate the effects of the test drug on the metabolism of other drugs and the effects of other drugs on its metabolism, but pharmacologic studies to explore the effects of the test drug and its major metabolites are recommended only if feasible. FDA's approach is thus largely mechanistic, but focused specifically on the mechanisms of drug interaction known to be most likely to produce clinically significant drug interactions.

Although these various sets of criteria were developed for different purposes, at their core, they purport to define the types and extent of mechanistic data required to determine the mode of action for a chemical or to establish that groups of chemicals share a common mode of action. As shown in Table 1, although no clear consensus can be discerned regarding the types or amounts of data necessary to make determinations about modes of action, a few common features appear to be agreed upon.
Table 1
Mode of action classification criteria

The table indicates which classification criteria are included in each of five frameworks: EPA CanR (US EPA, 2005), ILSI CumR (Mileson et al., 1998), EPA Mix (US EPA, 2000), ATSDR (US DHHS, 2001), and TEF (Safe, 1998). The criteria considered are: molecular target, cellular target, physiological target, target organ, toxic intermediates, causality of steps, pharmacokinetics, detoxification pathways, parallel dose–response curves, and dose addition.
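Operationally, the TEF approach described above reduces to a relative-potency weighting of congener doses under an assumption of dose addition. The sketch below illustrates that arithmetic; the congener names are real, but the TEF values and concentrations are hypothetical placeholders chosen for illustration, not the regulatory values.

```python
# Illustrative sketch of the toxic equivalency factor (TEF) arithmetic.
# TEF values and concentrations below are hypothetical, not the WHO/EPA values.

tef = {                       # relative potency vs. 2,3,7,8-TCDD (assumed values)
    "2,3,7,8-TCDD": 1.0,
    "1,2,3,7,8-PeCDD": 0.5,
    "2,3,4,7,8-PeCDF": 0.3,
}

concentration = {             # pg/g in a sample (hypothetical values)
    "2,3,7,8-TCDD": 2.0,
    "1,2,3,7,8-PeCDD": 10.0,
    "2,3,4,7,8-PeCDF": 5.0,
}

# Dose addition: each congener is converted to its TCDD-equivalent amount and
# the equivalents are summed into a single toxic equivalent quantity (TEQ).
teq = sum(tef[c] * concentration[c] for c in tef)
print(f"TEQ = {teq:.1f} pg TCDD-equivalents/g")   # 2.0 + 5.0 + 1.5 = 8.5
```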
There is a general recognition that the molecular targets of the toxicity are essential for defining the mode of action, as is the target organ. These would appear to be only the least common denominator, however, as there also appears to be general agreement that more information is needed beyond these two parameters. The difficulty is in defining what additional information is critical – short of defining the entire mechanism of action – for determining whether two chemicals will interact or be dose additive or independent. Based on the nature of drug and chemical mechanisms, it is reasonable to expect that different data requirements will apply for different modes of action and for different classes of chemicals, precluding a single set of universally applicable criteria.

Mechanisms are multi-dimensional processes that may involve the complete spectrum of biological organization from the molecular to the population level (Fig. 1). The key events critical for production of a particular biological effect – i.e., the mode of action – may involve all of these levels of biological organization, or only a few, depending on the effect and chemical class under consideration. For example, the mode of action for a particular class of enzyme inhibitors might be relatively succinct, involving only the enzyme (molecular level), target organ, and a few key pharmacokinetic steps; in contrast, the mode of action for a particular hormonal effect might involve specific hormone receptors (molecular level) located in affected cells (cellular level) within a specific organ (organ level), as well as feedback inhibition of endogenous hormone production through second messenger systems (biochemical level) located in particular endocrine glands (tissue level). Such complexity has been described for estrogenic modes of action (Moggs, 2005), which may be quite tissue-specific. Thus, determining that two enzyme inhibitors have the same mode of action, i.e., have similar mechanisms, may require much less mechanistic data than for two hormonally active agents.

Identifying the set of mechanistic data that predicts dose addition in mixtures may be a complex endeavor. Not only may key mechanistic events occur at different levels of biological organization for different chemicals, but also, mechanisms of action may involve dynamic interplay between different levels of biological organization.
Molecular events that lead to changes in organ function may themselves be affected by the very changes in organ function they induce, causing secondary changes in biochemical pathways unaffected by the molecular changes alone. There are many examples of this type of action. Some of the best known are the compensatory feedback responses that occur following stimulation of the adrenergic nervous system by sympathomimetic agents. Due largely to the complexity of the system affected, even sympathomimetic agents that are closely related structurally, such as ephedrine and epinephrine, differ in qualitative ways for specific endpoints. Those differences preclude a simple quantitative comparison (Hoffman, 2001) as is required for a relative potency calculation and the assumption of dose addition.

Adrenergic agents also illustrate other important issues related to whether data requirements for identifying the mode of action can predict the type of combined action that will occur in mixtures. Although toxicologists often speak in terms of distinct modes of action underlying any particular response, mechanisms are likely to be dose-dependent (Slikker et al., 2004). This means that there may be transitions from one mechanism to another as dose increases, implying the possibility for transitions in the type of combined action displayed by mixtures of chemicals. Chemicals may be dose additive in one dose range but response additive in another, and interactive – i.e., synergistic or antagonistic relative to one or the other model of non-interaction – at other doses. This underscores the importance of using data from environmentally relevant exposure scenarios for mixture risk assessments, and of conducting mixture experiments at doses below the observed effect levels for the individual components, since many environmental exposures are below no-effect levels for individual chemicals.

The concept of dose itself is further complicated for mixtures, particularly because many interactions have a pharmacokinetic rather than pharmacodynamic basis (Krishnan and Brodeur, 1991).
Fig. 1. Mechanisms of chemical action are multidimensional not only in progression from the molecular to the organ level, but also in time. Furthermore, as effects at the molecular and cellular level lead to effects at the tissue and organ level, those changes can reciprocally impact cells and cellular processes.
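The practical consequences of non-parallel dose–response curves and of dose range for the choice of non-interaction model can be illustrated numerically. In the sketch below (all curves and parameter values are hypothetical, assumed purely for illustration), the dose-addition and response-addition predictions for the same two-chemical mixture diverge by different amounts in different dose ranges, so the importance of the model choice is itself dose-dependent.

```python
# Illustrative sketch only: hypothetical mixture of two chemicals with
# NON-parallel dose-response curves (different Hill slopes). All parameter
# values are assumed for illustration.

def effect(dose, ec50, hill):
    """Fractional response (0-1) under a hypothetical Hill model."""
    return dose**hill / (ec50**hill + dose**hill)

def inverse_effect(e, ec50, hill):
    """Dose of a single chemical producing fractional response e."""
    return ec50 * (e / (1.0 - e)) ** (1.0 / hill)

ec50_a, hill_a = 1.0, 1.0      # chemical A (assumed)
ec50_b, hill_b = 10.0, 3.0     # chemical B: less potent, steeper curve (assumed)

for scale in (0.05, 0.5, 2.0):           # low, mid, and high dose ranges
    dose_a, dose_b = scale * ec50_a, scale * ec50_b

    # Response addition (Bliss independence)
    e_a = effect(dose_a, ec50_a, hill_a)
    e_b = effect(dose_b, ec50_b, hill_b)
    bliss = e_a + e_b - e_a * e_b

    # Dose addition (Loewe): solve d_a/EC_a(E) + d_b/EC_b(E) = 1 by bisection
    lo, hi = 1e-9, 1.0 - 1e-9
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        s = dose_a / inverse_effect(mid, ec50_a, hill_a) \
            + dose_b / inverse_effect(mid, ec50_b, hill_b)
        lo, hi = (mid, hi) if s > 1.0 else (lo, mid)
    loewe = 0.5 * (lo + hi)

    print(f"dose scale {scale:>4}: dose addition {loewe:.2f}, "
          f"response addition {bliss:.2f}")
```

In this particular sketch the two models nearly coincide at the lowest doses and differ most in the mid range, consistent with the point above that the relevant exposure range must inform both experimental design and model selection.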
Table 2
Levels of biological organization and various toxicity metrics

The table arrays seven levels of biological organization (ecosystem, community, population, individual, organ/tissue, cell, and molecule) against four toxicity metrics (dose, biological effect, interaction, and adverse effect). In the dose column, only the individual level, the level at which doses are typically administered, is marked as known (X); the remaining levels are marked with question marks, indicating that corresponding dose metrics are generally unavailable.
Experimenters often administer doses to intact animals, measure molecular or cellular changes, interpret those changes in a biochemical context, and extrapolate those interpretations back to whole animals or populations of organisms, all without data regarding the dose metrics at any level of biological organization other than the level at which doses were administered (Table 2). In the absence of data or reliable interconversion factors for dose metrics across the various levels of biological organization, inferences about modes of action become extremely tenuous and uncertain. Since predicting combined action involves the dose–response characteristics of multiple chemicals, it would seem risky to do so on the basis of mechanistic data that lack corresponding dose information. This is not to deny that there are data to support the general premise that mechanistic similarity or dissimilarity predicts whether dose or response addition will hold (reviewed in US EPA, 2000), but not all the data support this interpretation, and all are plagued by the conundrum of discerning what was tested, i.e., the premise or the accuracy of the mechanistic understanding (Borgert et al., 2004). When components of a mixture are present below their no-effect levels for overt toxicity, dose addition may hold only for general narcotic effects related to the organic chemical load (Verhaar et al., 1992; Vaes et al., 1998; Freidig et al., 1999; Vaal et al., 2000; Escher and Hermens, 2002).

The future: new technologies and new thinking

Any current discussion about the use of mechanistic data in risk assessment, whether for single chemicals or mixtures, must include the topic of new technology. So much has been written about the potential for omic technologies (genomic, transcriptomic, proteomic, metabolomic) to improve risk assessment (Cunningham et al., 2003; Bernauer et al., 2005) that it might seem these technologies will soon overcome the challenges presented by mixtures. Several of the many advancements promised from omic technologies are germane to predicting interactions and combination effects (Amin et al., 2002), including the promise to categorize chemicals by their omic profiles, identify key mechanistic steps at the molecular level, identify fundamentally similar modes of toxic action, and elucidate complete toxicologic pathways. There is little doubt that omic technologies will be useful tools for understanding mechanisms of action and will provide unprecedented information at the molecular level; however, several caveats must be kept in mind (Morgan et al., 2002).
The first caveat is that a fundamental difference in perspective appears to be developing between researchers who use omic technologies to study toxicological questions and risk assessors who attempt to incorporate omic data in a mechanistic context for risk assessment. Some researchers appear to assume that omic responses to chemical exposures are adverse (Waring and Ulrich, 2003), and therefore interpret omic data purely in the context of adverse responses and mechanisms underlying toxicity. Such interpretations presuppose a bona fide link between the genomic event and adverse outcomes in an organism or population. Risk assessors, on the other hand, hope that omic data will help to define which biological responses constitute adverse effects, which chemicals cause those adverse effects, and at what levels of exposure. If researchers working in toxicogenomics presume that all biological responses to chemicals are linked to adverse effects, yet risk assessors presume that toxicogenomics will help them distinguish adverse effects from compensatory, adaptive, or unrelated responses, the two disciplines seem destined for miscommunication!

Another caveat relates to the comprehensiveness of the information provided by omic data. Any single omic technology captures only a segment of the molecular steps in biological function; therefore, approaches that produce omic data from several molecular species – a ‘polyomic’ approach – may be necessary to obtain meaningful information about modes of toxicity. The need for polyomic approaches is underscored by the fact that at each macro-molecular level (DNA, pre-RNA, mRNA, protein), numerous steps and modifying events present potential targets for interference by chemicals. For example, DNA is modified by methylation, transpositions, and gene amplification, all of which could conceivably be targets for toxic effects (Fig. 2). Likewise, the process of transcription involves RNA capping, splicing, and various ‘editing’ steps that could be interrupted by a mechanism of toxicity. The well-known effects of various mutations become manifest during the process of translating RNA to primary proteins and can lead to aberrant protein folding and three-dimensional structural deficiencies. Furthermore, polyomic approaches provide molecular detail, but may not unequivocally identify the higher order physiological process affected by various genes, proteins, or metabolites, nor which changes at the molecular level will manifest as detrimental physiological or organ-level changes.
Fig. 2. Component molecular mechanistic steps of omics.
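One of the promised applications mentioned above, categorizing chemicals by their omic profiles, is in practice typically approached with unsupervised methods such as hierarchical clustering (cf. Waring and Ulrich, 2003). The sketch below illustrates the idea on a toy expression matrix; the chemical names, gene count, and values are invented for illustration and carry no toxicological meaning.

```python
# Illustrative sketch only: clustering hypothetical omic profiles to group
# chemicals by apparent similarity. Real analyses involve thousands of
# features, normalization, replication, and statistical filtering.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import pdist

chemicals = ["chem_A", "chem_B", "chem_C", "chem_D"]
# Rows: chemicals; columns: log2 fold-changes for five hypothetical genes.
profiles = np.array([
    [ 2.1,  1.8, -0.2,  0.1,  0.0],   # chem_A
    [ 1.9,  2.2,  0.0, -0.1,  0.1],   # chem_B: profile similar to A
    [-0.1,  0.2,  1.5,  1.7, -2.0],   # chem_C
    [ 0.0, -0.2,  1.6,  1.9, -2.2],   # chem_D: profile similar to C
])

# Correlation distance groups chemicals with similar response patterns.
dist = pdist(profiles, metric="correlation")
tree = linkage(dist, method="average")
groups = fcluster(tree, t=2, criterion="maxclust")

for chem, g in zip(chemicals, groups):
    print(f"{chem}: cluster {g}")
# Similar profiles cluster together, but cluster membership by itself says
# nothing about whether the underlying responses are adverse, adaptive, or
# compensatory, nor whether co-clustered chemicals will be dose additive.
```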
Ultimately, it will be necessary to understand the precise relationship between polyomics and higher order functions in order to make meaningful interpretations of omic data for risk assessment. Merely identifying molecular events that occur with exposure is not enough; it is essential to determine whether these molecular events are obligate (or contributory) steps in a pathway that leads to toxicity, and if so, under what circumstances of dose and physiology those toxic responses are manifested.

In order to use information provided by these new technologies most efficiently, and to resolve some of the current ambiguities regarding the data requirements for mode of action assessments, it may be useful to formulate a more comprehensive conceptual framework about modes of action and mechanisms of interaction (McCarty and Borgert, 2006b). Such a framework would help to guide more focused research on integrating dose–response models across several levels of biological organization; help to define experiments that differentiate between pre-toxicological, compensatory, and protective responses to chemicals; identify dose-dependent transitions in mechanisms that most impact risk estimation; and ultimately, allow accurate predictions about interactions at environmentally relevant levels for chemical mixtures.

As John Ashby stated in his 1999 Gerhard Zbinden Memorial Lecture, “Toxicology is entering a new phase wherein powerful model systems will become available to predict toxicity and to study mechanisms of action. For these new techniques to achieve their potential, it will be necessary for toxicologists to pose precise questions, and to design experiments to answer those questions unequivocally.” (Ashby, 2000).

References

Amin, R.P., Hamadeh, H.K., Bushel, P.R., Bennett, L., Afshari, C.A., Paules, R.S., 2002. Genomic interrogation of mechanism(s) underlying cellular responses to toxicants. Toxicology 181–182, 555–563.
Ashby, J., 2000. The Gerhard Zbinden memorial lecture. Are environmental chemicals affecting the integrity or expression of the human genome? Toxicol. Lett. 112–113, 3–8.
Berenbaum, M.C., 1981. Criteria for analyzing interactions between biologically active agents. Adv. Cancer Res. 35, 269–335.
Berenbaum, M.C., 1989. What is synergy? [published erratum appears in Pharmacol. Rev. 1990 Sep;41(3):422]. Pharmacol. Rev. 41, 93–141.
Bernauer, U., Oberemm, A., Madle, S., Gundert-Remy, U., 2005. The use of in vitro data in risk assessment. Basic Clin. Pharmacol. Toxicol. 96 (3), 176–181.
Bliss, C.I., 1939. The toxicity of poisons applied jointly. Ann. Appl. Biol. 26, 585–615.
Borgert, C.J., 2004. Chemical mixtures: an unsolvable riddle? Hum. Ecol. Risk Assess. 10, 619–629.
Borgert, C.J., Price, B., Wells, C., Simon, G.S., 2001. Evaluating chemical interaction studies for mixture risk assessment. Hum. Ecol. Risk Assess. 7, 259–306.
Borgert, C.J., Quill, T.F., McCarty, L.S., Mason, A.M., 2004. Can mode of action predict mixture toxicity for risk assessment? Toxicol. Appl. Pharmacol. 201, 85–96.
Cunningham, M.L., Bogdanffy, M.S., Zacharewski, T.R., Hines, R.N., 2003. Workshop overview: use of genomic data in risk assessment. Toxicol. Sci. 73, 209–215.
Dellarco, V.L., Wiltse, J.A., 1998. US Environmental Protection Agency's revised guidelines for carcinogen risk assessment: incorporating mode of action data. Mutat. Res. 405, 273–277.
Escher, B.I., Hermens, J.L., 2002. Modes of action in ecotoxicology: their role in body burdens, species sensitivity, QSARs, and mixture effects. Environ. Sci. Technol. 36, 4201–4217.
Freidig, A.P., Verhaar, H.J.M., Hermens, J.L.M., 1999. Comparing the potency of chemicals with multiple modes of action in aquatic toxicology: acute toxicity due to narcosis versus reactive toxicity of acrylic compounds. Environ. Sci. Tech. 33, 3038–3043.
Greco, W.R., 2001. Evaluating chemical interaction studies for mixture risk assessments—comments. Hum. Ecol. Risk Assess. 7, 306.
Hertzberg, R.C., MacDonell, M.M., 2002. Synergy and other ineffective mixture risk definitions. Sci. Total Environ. 288, 31–42.
Hertzberg, R.C., Teuschler, L., 2002. Evaluating quantitative formulas for dose–response assessment of chemical mixtures. Environ. Health Perspect. 110 (Suppl. 6), 965–970.
Hoffman, B., 2001. Chapter 10: catecholamines, sympathomimetic drugs, and adrenergic receptor antagonists, In: Hardman, J.G., Limbird, L.E., Gilman, A.G. (Eds.), Goodman and Gilman's The Pharmacological Basis of Therapeutics, 10th ed. McGraw-Hill, New York, pp. 215–268.
Krishnan, K., Brodeur, J., 1991. Toxicological consequences of combined exposure to environmental pollutants. Arch. Complex Environ. Stud. 3, 1–106.
Loewe, S., Muischneck, H., 1926. Effect of combinations: mathematical basis of problem. Arch. Exp. Pathol. Pharmakol. 114, 313–326.
McCarty, L.S., Borgert, C.J., 2006a. Review of the toxicity of chemical mixtures containing at least one organochlorine. Regul. Toxicol. Pharmacol. 45, 104–118.
McCarty, L.S., Borgert, C.J., 2006b. Review of the toxicity of chemical mixtures: theory, policy, and regulatory practice. Regul. Toxicol. Pharmacol. 45, 119–143.
Mileson, B.E., Chambers, J.E., Chen, W.L., Dettbarn, W., Ehrich, M., Eldefrawi, A.T., Gaylor, D.W., Hamernik, K., Hodgson, E., Karczmar, A.G., Padilla, S., Pope, C.N., Richardson, R.J., Saunders, D.R., Sheets, L.P., Sultatos, L.G., Wallace, K.B., 1998. Common mechanism of toxicity: a case study of organophosphorus pesticides. Toxicol. Sci. 41, 8–20.
Moggs, J.G., 2005. Molecular responses to xenoestrogens: mechanistic insights from toxicogenomics. Toxicology 213, 177–193.
Morgan, K.T., Brown, H.R., Benavides, G., Crosby, L., Sprenger, D., Yoon, L., Ni, H., Easton, M., Morgan, D., Laskowitz, D., Tyler, R., 2002. Toxicogenomics and human disease risk assessment. Hum. Ecol. Risk Assess. 8 (6), 1339–1353.
Safe, S., 1990. Polychlorinated biphenyls (PCBs), dibenzo-p-dioxins (PCDDs), dibenzofurans (PCDFs), and related compounds: environmental and mechanistic considerations which support the development of toxic equivalency factors (TEFs). Crit. Rev. Toxicol. 21, 51–88.
Safe, S.H., 1998. Hazard and risk assessment of chemical mixtures using the toxic equivalency factor approach. Environ. Health Perspect. 106, S1051–S1058.
Schlosser, P.M., Bogdanffy, M.S., 1999. Determining modes of action for biologically based risk assessments. Regul. Toxicol. Pharmacol. 30, 75–79.
Slikker Jr., W., Andersen, M.E., Bogdanffy, M.S., Bus, J.S., Cohen, S.D., Conolly, R.B., David, R.M., Doerrer, N.G., Dorman, D.C., Gaylor, D.W., Hattis, D., Rogers, J.M., Woodrow Setzer, R., Swenberg, J.A., Wallace, K., 2004. Dose-dependent transitions in mechanisms of toxicity. Toxicol. Appl. Pharmacol. 201, 203–225.
Sonich-Mullin, C., Fielder, R., Wiltse, J., Baetcke, K., Dempsey, J., Fenner-Crisp, P., Grant, D., Hartley, M., Knaap, A., Kroese, D., Mangelsdorf, I., Meek, E., Rice, J.M., Younes, M., 2001. IPCS conceptual framework for evaluating a mode of action for chemical carcinogenesis. Regul. Toxicol. Pharmacol. 34, 146–152.
United States Department of Health and Human Services (US DHHS), 2001. Guidance for the preparation of an interaction profile. US Department of Health and Human Services, Public Health Service, Agency for Toxic Substances and Disease Registry (ATSDR), Washington, DC.
United States Environmental Protection Agency (US EPA), November, 1988. Technical support document on risk assessment of chemical mixtures. EPA/600/8-90/064. US EPA, Office of Research and Development, Washington, DC.
United States Environmental Protection Agency (US EPA), January 29, 1999. Guidance for identifying pesticide chemicals that have a common mechanism of toxicity. US EPA, Washington, DC.
United States Environmental Protection Agency (US EPA), 2000. Supplementary Guidance for Conducting Health Risk Assessment of Chemical Mixtures. EPA/630/R-00/002. US EPA, Washington, DC.
United States Environmental Protection Agency (US EPA), March 2005. Guidelines for Carcinogen Risk Assessment. EPA/630/P-03/001F. US EPA, Washington, DC.
United States Food and Drug Administration (US FDA), October 1, 2004. Drug interaction studies—Study design, data analysis, and implications for dosing and labeling. Preliminary concept paper. Available at http://www.fda.gov/ohrms/dockets/ac/04/briefing/2004-4079B1_04_Topic2-TabA.pdf
Vaal, M.A., van Leeuwen, C.J., Hoekstra, J.A., Hermens, J.L.M., 2000. Variation in sensitivity of aquatic species to toxicants: practical consequences for effect assessment of chemical substances. Environ. Manag. 25, 415–423.
Vaes, W.H.J., Urrestarazu Ramos, E., Verhaar, H.J.M., Hermens, J.L.M., 1998. Acute toxicity of nonpolar versus polar narcosis: is there a difference? Environ. Toxicol. Chem. 17, 1380–1384.
Verhaar, H.J.M., van Leeuwen, C.J., Hermens, J., 1992. Classifying environmental pollutants: 1. Structure activity relationships for prediction of aquatic toxicity. Chemosphere 25, 471–491.
Waring, J.F., Ulrich, R.G., 2003. Chapter 13: unsupervised hierarchical clustering of toxicants. Introduction to Toxicogenomics. CRC Press, pp. 261–284.