
Journal Pre-proofs

Artificial Intelligence for Chemical Risk Assessment

Clemens Wittwehr, Paul Blomstedt, John Paul Gosling, Tomi Peltola, Barbara Raffael, Andrea-Nicole Richarz, Marta Sienkiewicz, Paul Whaley, Andrew Worth, Maurice Whelan

PII: S2468-1113(19)30034-9
DOI: https://doi.org/10.1016/j.comtox.2019.100114
Reference: COMTOX 100114

To appear in: Computational Toxicology

Received Date: 23 July 2019
Revised Date: 10 September 2019
Accepted Date: 25 November 2019

Please cite this article as: C. Wittwehr, P. Blomstedt, J.P. Gosling, T. Peltola, B. Raffael, A-N. Richarz, M. Sienkiewicz, P. Whaley, A. Worth, M. Whelan, Artificial Intelligence for Chemical Risk Assessment, Computational Toxicology (2019), doi: https://doi.org/10.1016/j.comtox.2019.100114

This is a PDF file of an article that has undergone enhancements after acceptance, such as the addition of a cover page and metadata, and formatting for readability, but it is not yet the definitive version of record. This version will undergo additional copyediting, typesetting and review before it is published in its final form, but we are providing this version to give early visibility of the article. Please note that, during the production process, errors may be discovered which could affect the content, and all legal disclaimers that apply to the journal pertain.

© 2019 Published by Elsevier B.V.


Artificial Intelligence for Chemical Risk Assessment

Clemens Wittwehra,1, Paul Blomstedtb, John Paul Goslingc, Tomi Peltolab, Barbara Raffaela, Andrea-Nicole Richarza, Marta Sienkiewicza, Paul Whaleyd, Andrew Wortha, Maurice Whelana

a European Commission, Joint Research Centre (JRC), Ispra, Italy
b Aalto University, Espoo, Finland
c Leeds University, Leeds, UK
d Lancaster Environment Centre, Lancaster University, Lancaster, UK

1 To whom correspondence should be addressed at European Commission, Joint Research Centre, Via E. Fermi 2749 – TP 126, Ispra 21027, Italy. E-mail: [email protected].


Table of Contents

Abstract
Introduction
    Workshop
    Further work
Discussion - Topics in Chemical Risk Assessment that AI could support
    Scientific-technical evaluation process
        Identifying and Prioritising Problems
            Example 1: Organising the chemical space for evaluation
        Enhancing the Evidence Base
            Example 2: Systematic Review
            Example 3: Evaluating complex exposure scenarios
        Knowledge Discovery
    Social aspects and the decision making process
        Enhancing the Evaluation Process
            Optimizing Expert Identification
            Enhancing Expert Collaboration
        Simulating Expert Judgement
            Process Simulation
            Building Cognitive Models
        Step-by-step approach
Conclusions
Bibliography
Figures


Abstract

As the basis for managing the risks of chemical exposure, the Chemical Risk Assessment (CRA) process can impact a substantial part of the economy, the health of hundreds of millions of people, and the condition of the environment. However, the number of properly assessed chemicals falls short of societal needs due to a lack of sufficient experts for evaluation, interference of third-party interests, and the sheer volume of potentially relevant information on the chemicals from disparate sources.

In order to explore ways in which computational methods may help overcome this discrepancy between the number of chemical risk assessments required on the one hand and the number and adequacy of assessments actually being conducted on the other, the European Commission's Joint Research Centre organised a workshop on Artificial Intelligence for Chemical Risk Assessment (AI4CRA).


The workshop identified a number of areas where AI could potentially increase the number and quality of regulatory risk management decisions based on CRA, involving process simulation, supporting evaluation, identifying problems, facilitating collaboration, finding experts, evidence gathering, systematic review, knowledge discovery, and building cognitive models. Although these are interconnected, they are organised and discussed under two main themes: the scientific-technical evaluation process, and social aspects and the decision making process.


Introduction

Chemical Risk Assessment (CRA) is a discipline at the science-policy interface that informs far-reaching decisions about the placing of chemical compounds onto the market, thereby having a significant impact on a multi-billion industry, the health of hundreds of millions of people and the condition of the environment. The four elements of CRA, i.e. hazard identification, hazard characterisation, exposure assessment and ultimately risk characterisation (Figure 1), depend on the integration of different types of information from different sources, increasingly in unmanageable volumes, including regulatory dossiers, study reports and scientific literature. To add to the problem, there might be special interest influences, biased opinions, and political preferences (see the current glyphosate and acrylamide discussions). As a scientific body supporting the development of chemical risk assessment methodologies in the European Union, the European Commission's Joint Research Centre (JRC) has therefore launched and coordinated an initiative to explore the applicability of AI to CRA as a means of supporting and improving regulatory decisions, and increasing public trust in the way the decisions are reached.

Similar efforts are being reported from the International Collaboration for the Automation of Systematic Reviews (ICASR) [(Beller, et al., 2018) (O'Connor A., Tsafnat, Gilbert, Thayer, & Wolfe, 2018) (O'Connor A. M., et al., 2019)], which suggests a critical mass of interest in applying AI to assessments across environmental and biomedical disciplines. Establishing collaboration with ICASR to exploit a common focus will be a priority.

Workshop

A JRC-organised workshop, titled "AI4CRA – Artificial Intelligence for Chemical Risk Assessment", was held in Ispra, Italy on 4-5 April 2018. The participants were scientists from both the AI and the CRA fields, drawn from Aalto University (Finland), Lancaster University and Leeds University (UK), the European Food Safety Authority (EFSA) and the JRC. The participants explored how, and to what extent, applying AI methods could support both the quantitative (the large number of insufficiently assessed chemicals and the unmanageable extent of existing information sources) and the qualitative (speed, transparency, reproducibility, reliability, objectivity, credibility) aspects of CRA.

The following topics were identified as prime candidates to be examined more closely when further efforts are undertaken to make AI an enabling technology in CRA:

- identifying problems
- gathering evidence
- systematic review
- knowledge discovery
- supporting evaluation
- finding experts
- facilitating collaboration
- process simulation
- cognitive modelling

Further work

In the weeks following the workshop, the authors of this manuscript continued their discussion about the most promising areas that could be supported, facilitated or even made possible via the integration of AI. Based on the topics identified in the workshop, and following some regrouping, merging and additional input, two major overarching themes were identified:

- Scientific-technical evaluation process
- Social aspects and the decision making process


While the first theme touches upon more content-related issues, like actually identifying the problems to be addressed with CRA and gathering the scientific evidence via e.g. AI-supported systematic review and general knowledge discovery, the second focuses on the social interaction between actors and stakeholders, by enhancing the human-led evaluation process and introducing the notion of simulating and exploring human expert judgement.

The level of AI-readiness of the individual topics (illustrated by their penetration of the AI area in Figure 2) will also indicate the lower-hanging fruit and where quick wins can potentially be made. The ultimate goal of AI-powered Chemical Risk Assessment is of course the total overlap between the topics and AI.


Discussion - Topics in Chemical Risk Assessment that AI could support

Scientific-technical evaluation process

Figure 3 illustrates several possible roles of AI in aggregating evidence and using the information to derive knowledge in the scientific-technical process. These are expanded upon below.


Identifying and Prioritising Problems


Since a CRA can only assess what is targeted for evaluation, AI could be trained to find the gaps, based on the experience of previous CRAs, thereby identifying problems that are less obvious to experts.

For example, AI could help identify the actual problems and questions to be asked in a CRA, and screen and select/prioritise the chemicals to be assessed in the first place. AI could assist in identifying emerging risks, i.e. new adverse effects not typically covered by the standard regulatory endpoints.

Robust evidence surveillance methods are fundamental to evidence-informed risk management. Without robust evidence surveillance, it is not possible to know where there are significant evidence gaps and evidence gluts, and therefore not possible to efficiently prioritise where to commit limited research capacity, either to synthesise existing evidence or to commission novel primary studies. The result of a lack of robust evidence surveillance is inefficiency, as empty, inconclusive or irrelevant reviews might be undertaken, unnecessary primary studies might be commissioned, and necessary studies might remain unconducted.

In addition to analysing the evidence and supporting judgements, AI can support selecting and prioritising the chemicals for evaluation in the first place. A general challenge in CRA is the slow pace of evaluation of individual substances compared with the number of existing chemicals for which the safety of use, or respective risk management measures, have to be established. How should the chemicals to evaluate first be selected from the overall universe of chemicals? AI can help screen all available information or indications from different sources – which should comprise the whole universe of available information, including grey literature or social media – on properties and effects of chemicals, as well as the potential exposure to them, based on information on production and on intended and actual uses. Starting from criteria for identifying the most critical chemicals (those posing the highest risk, or meeting otherwise defined priority criteria), AI would learn and build indicators to flag priority chemicals for evaluation. Work of this nature has started on the high-throughput ToxCast data [(Kleinstreuer, 2014) (Liu, 2015)].
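To make the idea of learned priority indicators concrete, here is a minimal sketch in which a classifier is trained on a handful of previously evaluated chemicals and then ranks unevaluated candidates. The feature set, the training labels and the candidate chemicals are all invented placeholders, not data from ToxCast or any real inventory.

```python
# Hedged sketch: learning a priority indicator from previously evaluated chemicals.
# All feature values and labels below are invented placeholders.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Features per chemical: [bioactivity score, production volume (log t/yr), exposure proxy]
evaluated = np.array([
    [0.9, 4.1, 0.8],   # chemical judged high priority in a past CRA
    [0.2, 1.3, 0.1],   # low priority
    [0.7, 3.5, 0.6],
    [0.1, 2.0, 0.2],
    [0.8, 2.8, 0.9],
    [0.3, 1.1, 0.3],
])
priority_label = np.array([1, 0, 1, 0, 1, 0])  # 1 = flagged for early evaluation

model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(evaluated, priority_label)

# Unevaluated candidates are scored and ranked by predicted priority.
candidates = {
    "chem_A": [0.85, 3.9, 0.7],
    "chem_B": [0.15, 1.5, 0.2],
    "chem_C": [0.60, 2.9, 0.5],
}
scores = {name: model.predict_proba([feats])[0, 1] for name, feats in candidates.items()}
for name, score in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(f"{name}: priority score {score:.2f}")
```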

A helpful concept in this context is the evidence map (also referred to as a systematic map or scoping review), which is gaining visibility as a useful systematic analysis product. In brief, evidence maps use systematic review methods to identify the amount and type of evidence available to address a particular topic [(Miake-Lye, Hempel, Shanman, & Shekelle, 2016) (Wolffe, Whaley, Halsall, Rooney, & Walker, 2019)]. They may include critical appraisal of studies, but there is no attempt to synthesise the evidence to answer an assessment question. In this way, evidence maps also identify critical data gaps or reveal bodies of evidence where “traditional” evidence (experimental animal or epidemiological studies) is available for consideration, or where analysis tools for data-poor chemicals would be more appropriate. Ideally, evidence is summarised in a structured format to promote dissemination in searchable databases, and results are presented in a visual figure to keep the report concise. Evidence maps may be useful in the assessment update process to determine whether new evidence may result in a change to an existing health reference value. Increasingly, journals are providing author guidance on the publication of evidence maps, including environmental journals [(CEE (Collaboration for Environmental Evidence), 2019) (EHP (Environmental Health Perspectives), 2019) (EI (Environment International), 2019)]. With the use of specialised systematic review software, in conjunction with AI tools for automating the mapping process, preparing evidence maps could be a rapid process (weeks, not months).
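As a rough illustration of automating the mapping step, the sketch below tallies machine-extracted study records into a simple evidence-map matrix of evidence stream by endpoint; the records and field names are fabricated for the example, and real evidence maps would carry far richer metadata.

```python
# Hedged sketch: turning extracted study metadata into a simple evidence-map tally.
# The study records below are invented placeholders, not extracted from real papers.
from collections import Counter

records = [
    {"chemical": "chem_X", "stream": "animal",       "endpoint": "liver toxicity"},
    {"chemical": "chem_X", "stream": "epidemiology", "endpoint": "liver toxicity"},
    {"chemical": "chem_X", "stream": "in vitro",     "endpoint": "endocrine activity"},
    {"chemical": "chem_X", "stream": "animal",       "endpoint": "endocrine activity"},
    {"chemical": "chem_X", "stream": "animal",       "endpoint": "liver toxicity"},
]

# Count studies per (evidence stream, endpoint) cell of the map.
cells = Counter((r["stream"], r["endpoint"]) for r in records)

streams = sorted({r["stream"] for r in records})
endpoints = sorted({r["endpoint"] for r in records})

# Print the map; empty cells reveal evidence gaps, crowded cells reveal gluts.
print("stream".ljust(14) + "".join(e.ljust(22) for e in endpoints))
for s in streams:
    print(s.ljust(14) + "".join(str(cells.get((s, e), 0)).ljust(22) for e in endpoints))
```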

Example 1: Organising the chemical space for evaluation

One approach to facilitating the evaluation of the large number of existing chemicals is to organise the chemical space and establish groups of similar chemicals that can be evaluated together, enabling a faster and more efficient evaluation than single-chemical assessment [(Government of Canada, 2011) (ECHA, 2019)]. Computational approaches are already used for this purpose. Whether AI can improve this process should be evaluated.
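A minimal sketch of such grouping is given below: chemicals are clustered on precomputed descriptor vectors using scikit-learn. The descriptor values and chemical names are made up, and a real grouping exercise would rely on curated structural, physicochemical and mechanistic information rather than three toy numbers.

```python
# Hedged sketch: grouping chemicals by similarity of (invented) descriptor vectors.
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import AgglomerativeClustering

names = ["chem_A", "chem_B", "chem_C", "chem_D", "chem_E", "chem_F"]
# Placeholder descriptors, e.g. logKow, scaled molecular weight, a structural feature count.
descriptors = np.array([
    [3.1, 0.45, 2],
    [3.0, 0.47, 2],
    [0.5, 0.20, 0],
    [0.6, 0.22, 0],
    [5.2, 0.90, 5],
    [5.0, 0.88, 5],
])

X = StandardScaler().fit_transform(descriptors)
labels = AgglomerativeClustering(n_clusters=3).fit_predict(X)

groups = {}
for name, label in zip(names, labels):
    groups.setdefault(label, []).append(name)
for label, members in groups.items():
    print(f"group {label}: {members}")  # candidate groups for joint evaluation
```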

Enhancing the Evidence Base

The collation of relevant evidence and evaluation of its quality and suitability for use in the assessment process are crucial steps at the beginning of a CRA.


AI could be trained to crowd-source multi-disciplinary expertise, and add complementary and useful knowledge normally not considered in a specific context.

New streams of data from biomonitoring and epidemiological studies can contribute to the overall picture [(National Academies of Sciences, Engineering, and Medicine (NAS), 2017)] but might be difficult to analyse “manually”. Here natural language processing could be employed to derive useable information from disparate types and sources of data (similar to the knowledge discovery employed in [(Korhonen, 2012)]). AI will be particularly useful for dealing with unstructured content.

In line with Wilson's statement that “we are drowning in information, while starving for wisdom” [(Wilson, 1998)], there are many different sources providing information for CRA, and the amount of data is rapidly increasing, for example through high-throughput screening or omics technologies. These “raw data” have to be processed and interpreted to provide useful insights for CRA, which means that AI could filter out the relevant data and synthesise information to build knowledge.

Not only scientific literature, study reports and submitted dossiers should be considered for CRA; other valuable “unofficial” sources of information should also be included and mined, such as “grey” literature and other information from the web and social media, e.g. Twitter or blogs. These could contribute to identifying trends or emerging risks, by providing information for example on actual use (or misuse) of, and exposure to, chemicals.


AI could contribute to building good quality data for CRA. In addition to analysing evidence more efficiently, AI could identify data gaps and help to formulate the corresponding research questions. AI might contribute to setting up the experimental design in the most efficient and effective way to close the knowledge gaps.


The main obstacle to keeping on top of research is that scientific papers are being published at ever increasing rates. To keep abreast of all research which is relevant to a topic or policy area is impossible without some means of automating the aggregation and summarisation of the content. Without technology, research synthesis efforts remain limited and incomplete.


In the near term, AI machine-reading tools will enable the characteristics and results of hundreds of thousands of studies relevant to assessing chemical health risks, currently locked away in individual manuscripts, to be converted into composite, queryable databases. These databases can be set up with flagging systems for regulators, automatically identifying new evidence streams which should be analysed, and they should be queryable in terms of concepts essential to their decision-making process. This way, risk managers can be confident that they are acting on all relevant evidence, not just part of it, and their reasoning for focusing on one part of the evidence base rather than another can be made fully transparent.
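To make the notion of a composite, queryable evidence base with regulator-facing flags more tangible, the following sketch uses Python's built-in sqlite3 module: extracted study records are stored in a single table and a simple query flags evidence added for a watched chemical-endpoint combination since the last review. The schema, the records and the watched combination are illustrative assumptions, not a proposal for an actual database design.

```python
# Hedged sketch: a toy queryable store of machine-extracted study records with a
# simple "new evidence" flag. Schema and records are invented for illustration.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE studies (
        id INTEGER PRIMARY KEY,
        chemical TEXT,
        endpoint TEXT,
        effect_direction TEXT,
        date_added TEXT
    )
""")
conn.executemany(
    "INSERT INTO studies (chemical, endpoint, effect_direction, date_added) VALUES (?, ?, ?, ?)",
    [
        ("chem_X", "liver toxicity", "adverse",   "2019-01-10"),
        ("chem_X", "liver toxicity", "no effect", "2019-06-02"),
        ("chem_X", "neurotoxicity",  "adverse",   "2019-07-15"),
    ],
)

# Flagging query: evidence on a watched chemical/endpoint added since the last review date.
watched = ("chem_X", "liver toxicity", "2019-03-01")
rows = conn.execute(
    "SELECT effect_direction, date_added FROM studies "
    "WHERE chemical = ? AND endpoint = ? AND date_added > ? ORDER BY date_added",
    watched,
).fetchall()
print(f"new evidence for {watched[0]} / {watched[1]}: {rows}")
```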

Example 2: Systematic Review

In the near term, AI could dramatically increase the number of systematic reviews (SRs) conducted, which in the long run could lead to a culture of evidence use based primarily on the systematic gathering and appraisal of evidence streams. This would reduce the risk of a 'single study' influence on policy decisions, and ease the process of obtaining scientific consensus. Both are vital for areas where facts are heavily contested and opinions highly polarised.

An SR aims to provide a complete summary of the current evidence relevant to answering a research question. SRs are characterised by the effort made to minimise the risk of bias in the evidence review process, emphasising sensitive search strategies, comprehensive inclusion criteria, and transparent and reproducible approaches to critically appraising existing evidence and synthesising it into summary results (via qualitative and/or quantitative meta-analytical methods).

While the transparency and minimised risk of bias of an SR give it strong appeal as a technique for improving the methodological quality of CRA, SRs are lengthy, labour-intensive projects to undertake, which require a great deal of specialised skill across multiple disciplines to complete successfully. This is proving a significant barrier to their uptake in regulatory contexts, where timeframes are often relatively short, and capacity often limited in comparison to the requirements of the methodology.

AI stands to shift this balance in many ways, potentially at every stage of the review process. AI can facilitate each individual step of an SR (see Figure 4), lowering the entry barrier to the use of SR methods and therefore enabling their application in previously inaccessible contexts. The way individual studies are conducted and reported is also expected to change as AI becomes mainstream, with new reporting standards developed to facilitate automated search, selection, quality assessment and statistical synthesis.
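As one concrete example of AI support for a single SR step, the sketch below trains a small text classifier to rank unscreened titles by likely relevance, mimicking the screening-prioritisation idea found in SR automation tools; the labelled records, the chemical name and the labels are fabricated for illustration.

```python
# Hedged sketch: ranking unscreened abstracts for a systematic review by predicted relevance.
# Training labels and texts are invented placeholders.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

labelled_texts = [
    "Chronic oral exposure to chem X induces hepatic lesions in rats",
    "Epidemiological cohort study of chem X exposure and liver enzyme markers",
    "Synthesis route optimisation for chem X production at industrial scale",
    "Market analysis of solvent demand in the coatings sector",
    "In vitro assay of chem X cytotoxicity in HepG2 cells",
    "Review of packaging regulations unrelated to chemical safety",
]
labels = [1, 1, 0, 0, 1, 0]  # 1 = relevant to the review question, 0 = exclude

screener = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
screener.fit(labelled_texts, labels)

unscreened = [
    "Subchronic toxicity of chem X in mice: histopathology of the liver",
    "Quarterly sales figures for speciality chemicals",
]
probs = screener.predict_proba(unscreened)[:, 1]
for text, p in sorted(zip(unscreened, probs), key=lambda tp: -tp[1]):
    print(f"{p:.2f}  {text}")  # reviewers screen the highest-ranked records first
```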

Example 3: Evaluating complex exposure scenarios

AI could help with complex evaluations, for example to conclude on possible effects of combined exposure to chemicals and to simulate exposure to chemicals for a multitude of different possible exposure scenarios and probabilities – across different sectors and pathways, such as via food, consumer goods, occupational and environmental exposure, for different populations in terms of geography and susceptibility, and over different periods of time. Similarly, in the direct assessment of chemical mixtures, even in the ideal case that the effects were known for all individual chemical components, the sheer number of possible combinations of several compounds at many possible concentrations is too high to allow simulation of the mixtures and measurement of the effects of all these theoretically possible combinations in a laboratory [(Bopp, et al., 2019)]. AI could help with simulations of all possible mixture (compound and concentration) combinations and thus the prediction of their combined effects, or the determination of the worst case. AI might also contribute to predicting interactions between chemicals.
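The sketch below illustrates what such combinatorial screening could look like: random compound/concentration combinations are sampled and each mixture is scored with a concentration-addition style hazard index (the sum of each component's concentration divided by its reference level), flagging the worst sampled case. The component names, reference values and concentration ranges are invented, and concentration addition is only one of several possible mixture models.

```python
# Hedged sketch: brute-force screening of mixture combinations with a concentration-addition
# style hazard index HI = sum_i (c_i / RfC_i). Component data are invented placeholders.
import itertools
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical per-component reference concentrations (e.g. mg/L) below which no effect is expected.
reference = {"comp_A": 2.0, "comp_B": 0.5, "comp_C": 5.0, "comp_D": 1.0}
components = list(reference)

results = []
for subset_size in (2, 3, 4):
    for subset in itertools.combinations(components, subset_size):
        # Sample a few random concentration vectors for this subset of compounds.
        for _ in range(200):
            conc = {c: rng.uniform(0.0, 1.0) for c in subset}
            hazard_index = sum(conc[c] / reference[c] for c in subset)
            results.append((hazard_index, subset, conc))

results.sort(key=lambda t: -t[0])
hi, subset, conc = results[0]
print(f"worst sampled mixture {subset}: hazard index {hi:.2f}")
print({c: round(v, 2) for c, v in conc.items()})
# A hazard index above 1 would flag the combination for closer (e.g. AI-assisted) evaluation.
```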


Knowledge Discovery


As defined in the Commission Communication [ (European Commission, 2016)], knowledge “is acquired through analysis and aggregation of data and information, supported by expert opinion, skills and expertise”.

Information from distinct sources might give new insights when brought together. However, so far there might be no awareness of the existence of, and/or the connections between, these different sources. AI could help to screen and systematically map the “universe of information”, extracting information also from sources that are not easily processable, or hidden in scanned documents or graphs, and subsequently combining it. Pattern recognition techniques can help to make connections and identify relationships that would be too complex to recognise through human screening and processing, and can support the discovery of correlations. Furthermore, very different types of data and information have to be integrated to enable conclusions and decision making. AI could help by making sense of data and building knowledge. An example is the Adverse Outcome Pathway (AOP) approach for systematically describing mechanisms of chemical toxicity, and the possibility to find new AOPs or build interconnected AOP networks, based on many scattered pathways or fragments of pathways [(Wittwehr, 2018)].
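A small sketch of the AOP-network idea follows, using the networkx graph library: scattered key-event relationships (invented here) are merged into one directed network, which is then searched for complete routes from a molecular initiating event to an adverse outcome that no single fragment described on its own.

```python
# Hedged sketch: assembling fragments of adverse outcome pathways into one network
# and searching it for complete routes. The key events and links are invented examples.
import networkx as nx

# Each tuple is a key-event relationship (upstream event -> downstream event) taken from
# a different hypothetical source.
fragments = [
    ("receptor binding", "enzyme inhibition"),
    ("enzyme inhibition", "oxidative stress"),
    ("oxidative stress", "hepatocyte apoptosis"),
    ("hepatocyte apoptosis", "liver fibrosis"),
    ("receptor binding", "altered gene expression"),
    ("altered gene expression", "hepatocyte apoptosis"),
]

aop_network = nx.DiGraph()
aop_network.add_edges_from(fragments)

mie, adverse_outcome = "receptor binding", "liver fibrosis"
for path in nx.all_simple_paths(aop_network, mie, adverse_outcome):
    print(" -> ".join(path))
# Paths that no single source described in full emerge from combining the fragments.
```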

AI could be trained to identify other substances that could inform the CRA, based on similarity in chemical structure, biological behaviour or the nature of the available data. Once enough data have accumulated in the chemical risks database, AI could be used to build multidimensional models of patterns latent within the data (such as trends in toxicity), and to compare the “profiles” of different chemicals. By comparing such profiles, risk assessors will be able to base predictions of health risks such as carcinogenicity on all the available evidence, including hidden similarities.
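To illustrate the profile-comparison idea, this sketch predicts an outcome for a data-poor chemical from its nearest neighbours in a made-up multidimensional profile space, a simple read-across-style scheme; real profiles would combine structural, bioactivity and kinetic evidence and would need careful validation.

```python
# Hedged sketch: nearest-neighbour "read-across" over invented chemical profiles.
import numpy as np
from sklearn.neighbors import NearestNeighbors

# Rows: chemicals with known outcomes; columns: dimensions of a latent profile
# (e.g. bioactivity pattern, structural features). All values are placeholders.
profiles = np.array([
    [0.9, 0.1, 0.8, 0.2],
    [0.8, 0.2, 0.7, 0.3],
    [0.1, 0.9, 0.2, 0.8],
    [0.2, 0.8, 0.1, 0.9],
])
known_outcome = np.array([1, 1, 0, 0])  # e.g. 1 = positive in a carcinogenicity-related assay

nn = NearestNeighbors(n_neighbors=2).fit(profiles)

target_profile = np.array([[0.85, 0.15, 0.75, 0.25]])  # data-poor chemical
distances, indices = nn.kneighbors(target_profile)

# Read-across estimate: average outcome of the most similar, data-rich chemicals.
estimate = known_outcome[indices[0]].mean()
print(f"nearest neighbours: {indices[0]}, distances: {np.round(distances[0], 2)}")
print(f"read-across estimate of positive outcome: {estimate:.2f}")
```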

The authors recognise that multiple initiatives exist in this space, e.g. at ICASR, US EPA, NTP etc., which are working on operationalising crowd-sourcing options with a focus on ensuring interoperability of tools, onboarding of AI, development of training data sets for model development, mapping of various vocabularies into an ontology framework that can be used to help with AOP development and other evidence synthesis activities, and incorporation of tenets of systematic review into the AI pipeline. As a result of the workshop, the JRC is reaching out to various actors to encourage further collaboration and streamlining of efforts.


Social aspects and the decision making process

Enhancing the Evaluation Process

After aggregating the evidence, the next step in a CRA is evaluation. The compiled lines of evidence are assessed, considering the quality of the data, their reliability and associated uncertainties, as well as the relevance for the problem formulated, in order to make an overall judgement. The assessment needs to integrate results from different types of approaches and should be performed in an unbiased way while weighing their respective contributions and handling possibly conflicting results. Evaluation is typically an expert judgement driven process, with expert knowledge and experience being required to make decisions regarding the available data and the conclusions based on them.

Different groups of experts make CRAs in different legal frameworks based on the data made available at a certain point in time. In hindsight, it is often difficult to understand whether and which data were actually considered and how different data were weighed in reaching final conclusions and recommendations regarding the possible or restricted uses of a substance. The discrepancy between decisions taken by different expert groups can lead to mistrust in the resulting CRAs and expert recommendations. Generally, AI could play a role in carrying out the evaluation process in a more neutral, consistent and transparent way.


AI could learn how to perform a CRA which could then be peer-reviewed by real experts, and this would be the basis for an iterative learning process for both the machine and the experts.

Optimizing Expert Identification

In terms of expert panel work, there are several ways in which AI could support the CRA process. For example, since the success of a group decision process depends on the dynamics of the group, on the complementary strengths and areas of expertise of its members and their knowledge of the topic, and on avoiding potential bias due to the experts' background, finding an optimal mix of experts is a crucial task which AI could support. A selection based on limited or biased selection criteria, or only on already established networks, can reduce the chance of finding the right balance of experience, competence and different views which are beneficial for a fruitful discussion. AI could be used to create a mapping system that takes into consideration not only the potential experts' publications on the topic, but also parameters that are normally not considered and are difficult to retrieve manually, such as membership in other similar panels, presentations at conferences, presence in committees, etc. Combined with a broad and up-to-date search for the information to map the expertise, this process could reduce the risk of bias and broaden the pool of possibilities, not relying on personal experience or potentially biased judgement criteria of the selectors.
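A toy version of such an expert-mapping system is sketched below: fictitious candidate experts are scored on a few signals of expertise and a panel is selected greedily under a simple diversity constraint. The candidates, the signals and the weights are invented; a real system would mine publication databases, committee rosters and conference programmes for these inputs.

```python
# Hedged sketch: scoring and selecting a balanced panel of (fictitious) candidate experts.
candidates = [
    {"name": "Expert 1", "affiliation": "university A", "publications": 12, "panels": 3, "talks": 8},
    {"name": "Expert 2", "affiliation": "university A", "publications": 20, "panels": 5, "talks": 4},
    {"name": "Expert 3", "affiliation": "agency B",     "publications": 6,  "panels": 7, "talks": 2},
    {"name": "Expert 4", "affiliation": "institute C",  "publications": 9,  "panels": 1, "talks": 10},
    {"name": "Expert 5", "affiliation": "university D", "publications": 4,  "panels": 2, "talks": 6},
]

def expertise_score(c):
    # Arbitrary illustrative weighting of topic-relevant signals.
    return 0.5 * c["publications"] + 1.0 * c["panels"] + 0.3 * c["talks"]

panel, used_affiliations = [], set()
for c in sorted(candidates, key=expertise_score, reverse=True):
    # Greedy selection with a simple diversity constraint: one expert per affiliation.
    if c["affiliation"] not in used_affiliations:
        panel.append(c["name"])
        used_affiliations.add(c["affiliation"])
    if len(panel) == 3:
        break

print("proposed panel:", panel)
```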

Enhancing Expert Collaboration

Furthermore, AI could monitor or guide the process in an interactive Weight of Evidence (WoE) assessment and thus help to rationalise the decisions made in the process. A possible direction is to use AI as an external objective observer. It could automatically create and maintain an organised, up-to-date record of topics discussed, and provide further context by linking to relevant information from available sources. This AI-driven external observation could take the form of a dynamic ‘mind map’, generated on the fly as the work of the group progresses, following and facilitating the process in a transparent way. AI would have the ability to cope with large amounts of data regarding individual chemicals, similar chemicals, chemicals in combination, and experience gained in earlier risk assessments, recognising differences between different evaluations of one chemical or of similar chemicals in different contexts.

AI as an instant feedback instrument could also point out all possible variables and their impact, the pros and cons of different options that the group may be faced with, as well as the consequences of the judgements made for the final outcome. This would include tools to explain the consequences of decisions, thereby informing the weighing of economic and health-related impacts.

In terms of evaluating uncertainties in CRA, AI can take probabilistic assessment to the next level by placing uncertainties explicitly at the core of the risk assessment methodology.
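As a minimal illustration of placing uncertainty at the core of the assessment, the sketch below propagates assumed probability distributions for exposure and for a health-based guidance value through a Monte Carlo simulation and reports the probability that exposure exceeds the guidance value; every distribution and parameter is an invented placeholder.

```python
# Hedged sketch: probabilistic risk characterisation by Monte Carlo.
# Distributions and parameters below are invented for illustration only.
import numpy as np

rng = np.random.default_rng(42)
n = 100_000

# Uncertain daily exposure (mg/kg bw/day), lognormal to reflect a skewed population distribution.
exposure = rng.lognormal(mean=np.log(0.02), sigma=0.6, size=n)

# Uncertain health-based guidance value, reflecting uncertainty in the point of departure
# and in the assessment factors applied to it.
guidance_value = rng.lognormal(mean=np.log(0.05), sigma=0.3, size=n)

risk_quotient = exposure / guidance_value
prob_exceedance = float(np.mean(risk_quotient > 1.0))

print(f"median risk quotient: {np.median(risk_quotient):.2f}")
print(f"probability that exposure exceeds the guidance value: {prob_exceedance:.1%}")
```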

Another direction is a more active involvement of AI in the group decision process, by directly interacting with the members of a group. The contribution of AI could even consist of being a full member of the expert team and contributing an additional expert opinion, drawn from the available data. Going beyond the role of making evaluation more efficient, AI could develop into an overarching universal expert entity in itself, by learning about human expert decisions based on the available evidence. AI would build a collective memory of, and accumulate the experience from, many experts and learnings from previous cases. Furthermore, AI could derive the implicit knowledge from experts in the first place. This could be valuable for industry and regulators because the potential loss incurred when key experts are unavailable for risk assessments could be mitigated. The accumulated knowledge from all AI-assisted group decision processes could act as a shared memory or knowledge resource.


Simulating Expert Judgement

Process Simulation

Once the scope of a CRA is defined, the CRA process is based on the four steps of hazard identification, hazard characterisation, exposure assessment, and risk characterisation (Figure 1). Since the risk assessor needs to make a number of methodological choices at each step, the CRA process can in principle follow a number of different pathways, leading to potentially different conclusions and risk management measures. In practice, the risk assessor reaches a conclusion based on just one of these pathways, without exploring or acknowledging the other possibilities. AI would provide a means of simulating all possible pathways, i.e. the "multiverse" of risk assessment outcomes for a given defined problem formulation [(Steegen, 2016) (Wacker, 2017)]. This would provide a means of characterising the uncertainties in the CRA process and provide the risk manager with transparent documentation on the range of possible CRA outcomes from the most to the least conservative assessment. These uncertainties cover not only issues related to experimental design and the inherent variability of experimental data, but also the impact of subjective choices, such as the choice of one data type over another, or of one data analysis procedure over another. In the short term, simulating the CRA process would provide a means of judging the reliability of risk assessments already conducted, especially in the case of controversial chemicals of high societal concern, thereby informing actions to reduce specific uncertainties where these are deemed too large. In the long term, the ability to simulate the CRA process will largely obviate the need for the risk assessor, thereby freeing resources to place more emphasis on problem formulation, taking into account a wider range of chemical concerns than is addressed by the existing regulatory framework.
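A toy version of this "multiverse" idea is sketched below: methodological choices at each step are enumerated with itertools.product, a derived risk quotient is computed for every pathway, and the spread from the least to the most conservative outcome is reported. The choice options and numbers are invented, and a real implementation would wrap actual dose-response and exposure models rather than this toy arithmetic.

```python
# Hedged sketch: enumerating the "multiverse" of methodological choices in a toy CRA.
# All options and numbers are invented; the point is the enumeration, not the toxicology.
import itertools

# Candidate points of departure (mg/kg bw/day) from alternative key studies/endpoints.
points_of_departure = {"study_1_NOAEL": 10.0, "study_2_BMDL10": 6.0}
# Alternative overall assessment factors.
assessment_factors = {"default_100": 100, "extra_db_factor_300": 300}
# Alternative exposure estimates (mg/kg bw/day) from different scenarios.
exposure_estimates = {"mean_consumer": 0.01, "high_consumer": 0.08}

outcomes = []
for (pod_name, pod), (af_name, af), (exp_name, exp) in itertools.product(
    points_of_departure.items(), assessment_factors.items(), exposure_estimates.items()
):
    health_based_value = pod / af
    risk_quotient = exp / health_based_value
    outcomes.append((risk_quotient, pod_name, af_name, exp_name))

outcomes.sort()
least, most = outcomes[0], outcomes[-1]
print(f"{len(outcomes)} assessment pathways explored")
print(f"least conservative pathway: RQ={least[0]:.2f} via {least[1:]}")
print(f"most conservative pathway:  RQ={most[0]:.2f} via {most[1:]}")
```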

Building Cognitive Models

Human processes, especially those related to decision making and government, are often assumed to be rational and logically structured. The calls for transparency of decision making are motivated by the need to scrutinise the process, i.e. to verify how much it follows this clear, predefined, logical pathway of reasoning. Certainly, this assumption of rationality is prevalent in strongly codified areas such as risk assessments, but it also holds true in other complex co-decision processes where many sources of knowledge, actors and interests are involved.

However, the assumption that decision making is mostly rational and strongly relies on an objective analysis of facts is often discussed as simplistic [(Damasio, 1994) (Lehrer, 2009) (Pessoa, 2013)]. Our rationality is more subjective and context-bound than is commonly assumed [(Epley & Gilovich, 2016) (Kahnemann, 2011) (Fisher, Knobe, Strickland, & Keil, 2016)], and our values and emotions have an important influence on the perception of facts and the willingness to act on them [(Baekgaard, 2017) (Okon-Singer, 2018) (Volz & Hertwig, 2016)]. Moreover, our cognitive processes are influenced by cognitive biases, hence not all facts and all positions in policy and decision making processes are equally considered (even if it seems so to the actors involved), which affects the results [(Banuri, 2017) (Hallsworth, 2018)]. Therefore, decision making is not guaranteed to improve simply because more or better science is introduced to a process: the perception and interpretation of facts depends on many factors.

It is unlikely, however, that the decisions are made in a completely haphazard way. There must be underlying mechanisms of how they happen, with patterns of interplaying factors contributing to particular results. Therefore, it can be beneficial to better understand our cognition and group dynamics in the context of chemical risk assessment, science advice and decision making. As risk assessment processes and the precautionary principle are often contested, a constructive rethinking of them would benefit from a better understanding of the factors at play and their interconnections.

For complex processes, a single person, or even a team routinely involved in them, will have difficulty in systematically auto-analysing these patterns of human thinking, interacting, and decision making. This is where AI comes in as a potential solution: it could help model cognitive and decision-making processes [(John, Rosoff, & Pynadath, 2016) (Wall, Blaha, Franklin, & Endert, 2017)]. Acting as a neutral observer, silently shadowing and recording the decision making steps, AI could model them and uncover the logic of their structure, which is not necessarily as rational as assumed. These objective models could identify pitfalls not observable by individuals participating in the process, spot the major bottlenecks and, as such, bring more awareness about the realities of the decision making, knowledge elicitation or risk assessment processes.
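As a deliberately simple illustration of such modelling, the sketch below fits a shallow decision tree to fabricated records of past panel decisions and prints the decision rules it recovers, the kind of pattern an AI "observer" might surface; the features, cases and outcomes are entirely invented.

```python
# Hedged sketch: recovering the implicit structure of (fabricated) panel decisions
# with an interpretable model. All records are invented for illustration.
from sklearn.tree import DecisionTreeClassifier, export_text

# Features per past decision: [evidence strength (0-1), public concern (0-1), data gaps (count)]
observed_cases = [
    [0.9, 0.2, 1],
    [0.8, 0.9, 0],
    [0.3, 0.8, 4],
    [0.2, 0.1, 5],
    [0.7, 0.5, 2],
    [0.4, 0.9, 3],
]
decision = [1, 1, 1, 0, 1, 0]  # 1 = restrictive measure recommended, 0 = no action

cognitive_model = DecisionTreeClassifier(max_depth=2, random_state=0)
cognitive_model.fit(observed_cases, decision)

# The extracted rules make the (possibly non-rational) structure of past decisions explicit.
print(export_text(
    cognitive_model,
    feature_names=["evidence_strength", "public_concern", "data_gaps"],
))
```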

The objectives would vary slightly, depending on the scenario:

1. Where rationality is strongly expected and clearly codified, AI observations could reveal the actual complications of these assumptions, pointing out areas worth improving in the process.
2. Where the process is already not fully clear (especially from an individual's perspective in complex systems), such models give a tool to understand what is going on. This could be highly interesting information, e.g. for researchers interested in providing their input to policy effectively, as they often struggle with an overly simplified policy cycle model.

Models produced by AI could provide empirical evidence of the functioning of decision-making processes. Through such cognitive models, weaker and stronger points of complex processes can be identified and a more thorough reform can be foreseen. If focused specifically on scientific support, such modelling could help map the best ways of achieving impact with evidence.

In practical terms, this would require a sufficient number of observable cases (e.g. recording the meetings, mapping the information flows, related decision trees/patterns, etc.), which in turn requires a lot of trust on the side of the various institutions that manage or participate in these processes.

Step-by-step approach

Full exploitation of all benefits of AI for CRA cannot be a short-term operation. Prior to use for decision-making, users will need to see evidence of the quality of AI for accurate collection of information reported in studies (both from journals and grey literature) as well as from data sources that may not be reported in single studies (high-throughput screening results). Also, the development of AI-based evidence integration approaches will have to be done with awareness of the structured frameworks being used in environmental health for weight of evidence (e.g., approaches developed by NTP, US EPA IRIS, GRADE).


Conclusions

The workshop participants concluded that AI can support and facilitate important processes and activities in the multifaceted CRA discipline. While the introduction of AI to support CRA will not be a trivial undertaking, the current problems that CRA is facing (inadequate resources, cases of information overload, public scepticism) will not be solved by conventional means. "More of the same" will not lead to better and more reliable CRA, but AI offers the opportunity to support CRA in innovative ways. These range from supporting the gathering and analysis of information as the basis for the CRA, and facilitating different aspects of expert evaluation (including AI as an additional expert around the table), to simulating all possible assessment pathways and the range of possible outcomes, as well as creating new knowledge and cognitive models of decision-making.

AI-assisted innovation in evidence gathering could therefore build an automated system for transforming data into knowledge with much more efficiency. Freed-up brain power could be used to identify important gaps in knowledge and frame the right questions. Altogether this leads to a more robust evidence base for policy making, and hopefully to a more transparent decision making process.

Using AI is not only beneficial for humans, but can in turn help develop better AI. For example, it would allow developing better systems for human-AI collaboration and AI-based argumentation in the CRA process, building on the complementary strengths of humans and machines. In particular, cognitive models not only help AI understand humans better (e.g., goals, preferences, decision making processes) but also help develop human-understandable and transparent algorithms that can explain the reasons (including emotions) behind their actions.


It is expected that further exploration of the topics covered in this workshop could lead to executable plans for pilot projects, with a view to eventually increasing the efficiency and effectiveness of the CRA process. However, the only way to fully realize the applications of AI for CRA is global and discipline-crossing collaboration. The AI method development and deployment needs (including associated user dashboards) are too large and complex to be developed by any single entity. The development of AI should include a focus on interoperability of tools to best take advantage of the talent and resources available in both the environmental and biomedical sciences.



Bibliography

Baekgaard, M. (2017). The role of evidence in politics: Motivated reasoning and persuasion among politicians. British Journal of Political Science.

Banuri, S. (2017). Biased Policy Professionals. Policy Research Working Paper. Washington, DC: World Bank Group.

Beller, E., Clark, J., Tsafnat, G., Adams, C., Diel, H., Lund, H., et al. (2018). Making progress with the automation of systematic reviews: principles of the International Collaboration for the Automation of Systematic Reviews (ICASR). Systematic Reviews, 7(77).


Bopp, S. K., Kienzler, A., Richarz, A.-N., Van der Linden, S. C., Paini, A., Parissis, N., et al. (2019). Regulatory assessment and risk management of chemical mixtures: challenges and ways forward. Critical Reviews in Toxicology.


CEE (Collaboration for Environmental Evidence). (2019). Journal of Environmental Evidence. Retrieved from https://environmentalevidencejournal.biomedcentral.com/

Damasio, A. (1994). Descartes' Error: Emotion, Reason, and the Human Brain. New York: Grosset/Putnam.

ECHA. (2019). Mapping the chemical universe to address substances of concern. Retrieved July 16, 2019, from the ECHA website: https://www.echa.europa.eu/documents/10162/27467748/irs_annual_report_2018_en.pdf/69988046-25cc-b39e-9d43-6bbd4c164425


EHP (Environmental Health Perspectives). (2019). Environmental Health Perspectives. Retrieved from https://ehp.niehs.nih.gov/


EI (Environment International). (2019). Environment International. Retrieved from https://www.sciencedirect.com/journal/environment-international


Epley, N., & Gilovich, T. (2016). The mechanics of motivated reasoning. Journal of Economic Perspectives, 30(3), 133-140.


European Commission. (2016). Communication to the Commission. Data, Information and Knowledge Management at the European Commission. Retrieved from https://ec.europa.eu/transparency/regdoc/rep/3/2016/EN/C-2016-6626-F1-EN-MAIN.PDF


Fisher, M., Knobe, J., Strickland, B., & Keil, F. C. (2016). The Influence of Social Interaction on Intuitions of Objectivity and Subjectivity. Cognitive Science, 41(4).


Government of Canada. (2011, October 8). Plan to Address Certain Substances on the Domestic Substances List. Canada Gazette, Part I: Vol 145, No. 41.


Hallsworth, M. (2018). Behavioural Government. Using behavioural science to improve how governments make decisions. London: The Behavioural Insights Team, The Institute for Government.

John, R., Rosoff, H., & Pynadath, D. (2016). Semi-automated construction of decision-theoretic models of human behavior. International Conference on Autonomous Agents and Multiagent Systems (AAMAS).


Kahnemann, D. (2011). Thinking, fast and slow. London: Allen Lane.


Kleinstreuer, N. (2014). Phenotypic screening of the ToxCast chemical library to classify toxic and therapeutic mechanisms. Nature Biotechnology.


Korhonen, A. (2012). Text mining for literature review and knowledge discovery in cancer risk assessment and research. PloS One.

Lehrer, J. (2009). How We Decide. Boston: Houghton Mifflin Harcourt.

Liu, J. (2015, April 28). Predicting Hepatotoxicity Using ToxCast in Vitro Bioactivity and Chemical Structure. Chem. Res. Toxicol., 738-751.


Miake-Lye, I., Hempel, S., Shanman, R., & Shekelle, P. (2016). What is an evidence map? A systematic review of published evidence maps and their definitions, methods, and products [Review]. Systematic Reviews 5(1).


National Academies of Sciences, Engineering, and Medicine (NAS). (2017). Using 21st Century Science to Improve Risk-Related Evaluations. Washington, DC: The National Academies Press.

O'Connor, A. M., Tsafnat, G., Gilbert, S. B., Thayer, K. A., Shemilt, I., Thomas, J., et al. (2019). Still moving toward automation of the systematic review process: A summary of discussions at the third meeting of the International Collaboration for Automation of Systematic Reviews (ICASR). Systematic Reviews, 8.


O'Connor, A., Tsafnat, G., Gilbert, S. B., Thayer, K. A., & Wolfe, M. S. (2018). Moving toward the automation of the systematic review process: a summary of discussions at the second meeting of International Collaboration for the Automation of Systematic Reviews (ICASR). Systematic Reviews, 7(3).


Okon-Singer, H. (2018). The Interplay of Emotion and Cognition. In A. Fox, R. Lapate, A. Shackman, & R. Davidson, The nature of emotion: Fundamental questions (pp. 181-185). New York: Oxford University Press.


Pessoa, L. (2013). The Cognitive-Emotional Brain: From Interactions to Integration. Cambridge: MIT Press.


Steegen, S. (2016). Increasing Transparency Through a Multiverse Analysis. Perspectives on Psychological Science 11, 702-712.


Volz, K. G., & Hertwig, R. (2016). Emotions and Decisions: Beyond Conceptual Vagueness and the Rationality Muddle. Perspectives on Psychological Science, 11(1), 101-116.


Wacker, J. (2017). Increasing the Reproducibility of Science through Close Cooperation and Forking Path Analysis. Frontiers in Psychology 8, 1332.


Wall, E., Blaha, L., Franklin, L., & Endert, A. (2017). Warning, Bias May Occur: A Proposed Approach to Detecting Cognitive Bias in Interactive Visual Analytics. Retrieved from IEEE Visual Analytics Science and Technology (VAST): https://www.cc.gatech.edu/~ewall9/media/papers/BiasVAST17.pdf

Wilson, E. O. (1998). Consilience: The Unity of Knowledge. Alfred A. Knopf.


Wittwehr, C. (2018). Use and Acceptance of AOPs for Regulatory Applications. In N. Garcia-Reyero, & C. Murphy, A Systems Biology Approach to Advancing Adverse Outcome Pathways for Risk Assessment (pp. 379-390). Springer.


Wolffe, T. A., Whaley, P., Halsall, C., Rooney, A. A., & Walker, V. R. (2019). Systematic evidence maps as a novel tool to support evidence-based decision-making in chemicals policy and risk management. Environment International, 130.


Figures

Figure 1 - Steps in the Chemical Risk Assessment Process
Figure 2 – AI and Chemical Risk Assessment
Figure 3 - Possible roles of AI in CRA
Figure 4 - Steps of a Systematic Review


Figure 1 - Steps in the Chemical Risk Assessment Process


Figure 2 – AI and Chemical Risk Assessment


Figure 3 - Possible roles of AI in CRA


Figure 4 - Steps of a Systematic Review


Research Highlights for manuscript "Artificial Intelligence for Chemical Risk Assessment"

- Artificial Intelligence (AI) has potential to improve chemical risk assessment (CRA) and associated regulatory decisions
- AI could influence the scientific-technical evaluation process and the social aspects of the CRA decision making process

- Systematic Reviews are among the CRA processes that will profit most from the application of AI techniques
- Using AI as an external objective monitor could make Weight of Evidence considerations less subjective and more trustworthy
- AI could identify pitfalls not observable by individuals participating in the CRA process
