Verbal and visual causal arguments

Cognition 75 (2000) 65–104
www.elsevier.com/locate/cognit

Uwe Oestermeier*, Friedrich W. Hesse

Department of Applied Cognitive Science, the German Institute for Research on Distance Education at the University of Tübingen, Konrad-Adenauer-Str. 40, D-72072 Tübingen, Germany

Received 26 January 1998; received in revised form 4 June 1999; accepted 4 January 2000

Abstract

The present paper analyzes how verbalizations and visualizations can be used to justify and dispute causal claims. The analysis is based on a taxonomy of 27 causal arguments as they appear in ordinary language. It is shown how arguments from spatio-temporal contiguity, covariation, counterfactual necessity, and causal mechanisms, to name only a few, are visualized in persuasive uses of tables, graphs, time series, causal diagrams, drawings, maps, animations, photos, movies, and simulations. The discussion centers on how these visual media limit the argumentative moves of justifying, disputing, and qualifying claims; how they constrain the representation of observational, explanatory, and abstract knowledge in the premises of causal arguments; and how they support and externalize argument-specific inferences, namely generalizations, comparisons, mental simulations, and causal explanations. © 2000 Elsevier Science B.V. All rights reserved.

Keywords: Causal reasoning; External representations; Visual arguments

1. Introduction

Causal arguments are pieces of reasoning consisting of causal claims (or conclusions, theses, points, etc.) and premises (or grounds, justifications, data, etc.) that support causal claims as reasons. This definition encompasses verbal as well as visual forms of reasoning. In verbal arguments, reasons and conclusions can be easily distinguished if connectives like "because", "therefore", "so" or other explicit markers are used: "Bad charts caused the Challenger accident because the accident would not have happened if the engineers had used better charts to convince the NASA officials that a start in cool weather is risky". (See Tufte, 1997, for an extended version of this argument.)

* Corresponding author. Fax: +49-7071-979100. E-mail address: [email protected] (U. Oestermeier). 0010-0277/00/$ – see front matter © 2000 Elsevier Science B.V. All rights reserved. PII: S0010-0277(00)00060-3

In most argumentative uses of visualizations, claims and reasons are not explicitly marked. The following examples attempt to convince their recipients of causal claims, but they do so without any explicit argumentative vocabulary. Instead, they arrange information in a manner that makes it easy for the recipient to draw the intended causal conclusion him- or herself. Perhaps the most famous example is Dr. John Snow's cholera map (Tufte, 1997). Snow investigated the 1853–54 cholera outbreak in London. At that time, the general medical opinion ascribed the disease to "miasmas" and other such emanations from the swamps and mud of the Thames River. However, Dr. Snow suspected infected water supplies to be the source of the disease. He backed up his hunch by plotting the homes of 500 victims in a street map of Soho (Fig. 1). All victims had drunk from the Broad Street pump at the center of the "cholera field". Nowadays similar maps are frequently used in epidemiology and health care (Monmonier, 1991).

The second example is taken from Otto Neurath, who attempted to raise the level of education of the working masses by concise visualizations (Neurath, 1991). The

Fig. 1. A map of deaths from the 1853–54 cholera outbreak in the Broad Street area. From Tufte, 1983, p. 24.


Fig. 2. Neurath's visualization of the influence of alcohol on interventions by the police. The rows show the days from Sunday to Saturday; the dark column represents the condition "alcohol involved", the light column the condition "no alcohol involved". Thursdays ("Donnerstage") and Saturdays ("Samstage") are marked as paydays ("Lohnauszahlungstag"). From Neurath, 1991, p. 53.

exhibition poster in Fig. 2 shows how alcohol increases the frequency of police interventions. Neurath certainly intended to endorse the conclusion that drinking causes unnecessary problems. Other current examples are animated maps that visualize the effects of the El Niño phenomenon, advertisements that compare the effectiveness of two detergents in split screens, computer simulations that are used to reconstruct the causes of traffic accidents, and NATO aerial photos that demonstrate the destructive power of bombs by showing targets before and after air strikes.

These examples already show some of the obstacles in the way of a comprehensive analysis of visual causal arguments. A major obstacle, the implicitness of visual arguments, has already been mentioned. If an argumentative vocabulary is missing, additional contextual information is required for an understanding of the argumentative intention of the visualization in question. Accordingly, visual arguments are highly context dependent. A related difficulty is the fact that visualizations are only rarely used in isolation. As the examples show, most visualizations need additional symbolic labels or verbal comments to be understandable. The question thus arises: What format of representation exactly carries the argument? Are visualizations only illustrations of points stated mainly verbally? Or do they fulfill a genuine argumentative function independent of the verbal supplements? And if so, how exactly can they fulfill this function? Which premises are representable in which visual format, and which causal inferences are supported by which visual means?

Behind all this is one key problem: the richness of argument patterns and visual


formats used in everyday communication. Can all these varieties be reduced to a limited (but not necessarily small) number of argument patterns that show up in various visualizations? Can we specify an analytical framework that is rich enough to capture verbal as well as visual causal arguments?

To answer these questions we discuss verbal and visual arguments in two strands. Section 2 of the present paper attempts to give a tentative answer to the basic question "Which forms of causal evidence exist?" and Section 3 proceeds by asking "Which media transport which forms of causal evidence?" Both strands of the discussion start with summarizing tables. Table 1 in Section 2 presents a taxonomy of 27 verbal causal arguments. This taxonomy was developed on the basis of several text corpora and tries to capture all important causal argument patterns in ordinary language. Table 2 in Section 3 picks up this taxonomy and presents an argument–media matrix, which provides a detailed depiction of the argument patterns supported by different forms of visualization, namely, tables, graphs, time series, causal diagrams, drawings, maps, animations, photos, movies, and simulations. The other subsections explain the main distinctions of our taxonomy of causal arguments and their relations to argumentative uses of visualizations.

• Firstly, our taxonomy distinguishes the basic argumentative moves of defending, attacking, and qualifying claims. Whereas pros, cons, and qualifications can easily be articulated verbally (see Section 2.2), visualizations are seriously limited in their ability to express counter-arguments and qualifications (see Section 3.2).
• Secondly, our taxonomy specifies three types of premises involved in causal arguments: observational (i.e. spatial, temporal, or episodic), explanatory (i.e. intentional or causal), and abstract knowledge (i.e. conceptual knowledge about criteria for causation). Symbolic texts and verbalizations are unlimited in their ability to express all these types of premises (see Section 2.3), whereas indexical, iconic, and diagrammatic media are restricted to observational and explanatory knowledge (see Section 3.3).
• Thirdly, our taxonomy specifies the inference patterns which are needed to come up with a causal conclusion, namely, inferences from observations, generalizations, comparisons, mental simulations, and causal explanations (Section 2.4). It will be shown how visual means like super- and juxtapositions externalize and support these inferences (Section 3.4).

The concluding Section 4 summarizes the main aspects of our analysis in a single diagram (see Fig. 9) and outlines directions for further research.

2. Verbal causal arguments

In our analysis, we consider argumentations as communicative acts (Kjørup, 1978) directed at convincing someone by means of reasons. This definition includes persuasive speech acts and uses of pictures, movies, diagrams, simulations, etc., as well as the special case of arguing with oneself, with or without the help of external

Table 1
Causal arguments, their premises and inference patterns. (References indicate where these arguments and their preconditions are discussed in more detail. The "premises" and "inferences" are discussed below.)

Arguments for causal claims

Circumstantial evidence

1. Spatio-temporal contiguity (Einhorn & Hogarth, 1986; Hume, 1739/1978)
Schema: A caused B because B happened at A / at nearly the same time as A.
Example: It was probably the drink because he fell in love when he was drinking the cocktail.
Premises: Spatial or temporal knowledge about the contiguity or simultaneity of two objects or events.
Inferences: Inference from contiguity to causality.

2. Co-occurrences (Hume, 1739/1978; Kuhn, 1991)
Schema: A caused B because As frequently occur together with Bs.
Example: Printing caused the crashes: when I tried to print, the system always crashed.
Premises: Episodic knowledge about multiple instances of As and Bs.
Inferences: Generalization about multiple instances and inference from constant conjunction to causality.

3. Similarity of cause and effect (Einhorn & Hogarth, 1986)
Schema: A caused B because A is similar to B.
Example: The stain on the carpet is from your dirty boots. The stain on the carpet and the stain on your boots have the same color.
Premises: Episodic knowledge about similar structural properties of A and B.
Inferences: Comparison of a putative cause with its putative effect and inference from similarity cues to causality.

Contrastive evidence

4. Covariation (Mill, 1979)
Schema: A caused B because B changes with A.
Example: This must be the right wheel because turning this wheel reduces the brightness of the screen.
Premises: Episodic knowledge about observations under different conditions.
Inferences: Comparison of variable conditions and outcomes (Mill's "method of difference") and inference from covariation to causality.

5. Statistical covariation (Cheng, 1993; Eells, 1991)
Schema: A caused B because A increases the probability/risk/percentage of B. Special case of (4) stressing a probabilistic regularity between A and B.
Example: Smoking causes cancer because smokers have a much higher risk of getting cancer.
Premises: Episodic knowledge about multiple observations under different conditions.
Inferences: Statistical generalization about multiple observations, comparison of outcomes, and inference from covariation to causality.

6. Before–after comparison (Ducasse, 1993)
Schema: A caused B because B exists after A but not before A. Special case of (4) stressing the temporal asymmetry between A and B.
Example: The new diet was very effective. Before the diet, I was fat, and now I am slim.
Premises: Temporal knowledge about distinct observations.
Inferences: Comparison of before and after and inference from a temporal asymmetry to a causal relation.

7. Experimental comparison (von Wright, 1971)
Schema: A caused B because action A led to B (in comparison to ¬A). Special case of (4) stressing the manipulability of B by A.
Example: Look, if I click the "Cancel" button the system crashes.
Premises: Episodic knowledge about experimental observations and about the intentional manipulation of an experimenter.
Inferences: Comparison of intervention vs. no-intervention conditions and inference from manipulability to causality.

8. Counterfactual vs. factual (conditio sine qua non; Hume, 1739/1978; Mackie, 1974)
Schema: A caused B because B would not have happened if A had not happened. Special case of (4) stressing the counterfactual necessity of A for B.
Example: Because of you, I'm late. If you had arrived on time, I would have been able to leave my office earlier.
Premises: Episodic knowledge about an observed factual and a non-observable fictitious episode. The fictitious episode is typically reconstructed from causal knowledge.
Inferences: Comparison of the consequences of the factual occurrence A with a mental simulation of the fictitious condition ¬A.

Causal explanations

9. Causal mechanism (Ahn, Kalish, Medin & Gelman, 1995; Shultz, 1982)
Schema: A caused C because A led to C via the process/mechanism B.
Example: His anger caused the accident. It affected his concentration.
Premises: Specific causal knowledge about mechanisms and causal chains.
Inferences: Transitive inferences along causal chains or other complex inferences about causal structures.

10. No alternative (Kuhn, 1991)
Schema: A caused B because there is no other explanation for B.
Example: The new word processor crashed the system because it was the only program running at that time.
Premises: Specific causal knowledge about multiple plausible explanations for B.
Inferences: Search for explanations and negation of all possibilities.

11. Typical effect (Thagard, 1999)
Schema: A caused B because B happened and B is a typical effect of A.
Example: That must be the word processor. It often crashes the system.
Premises: Specific causal knowledge about multiple plausible explanations for B and their typicality/frequency.
Inferences: Search for a plausible explanation of B and generalization about multiple instances.

Arguments against causal claims

Counterevidence

12. Wrong temporal order
Schema: A has not caused B because A happened after B.
Example: The server problems have not caused your system crash; the server problems occurred afterwards.
Premises: Episodic knowledge about the observed temporal order of the events A and B.
Inferences: Inference from a temporal relation to the negation of a causal relation.

13. No contact
Schema: A has not caused B because A and B were not connected/in contact.
Example: How could the server cause the system crash? Your computer was not connected to the server at that time.
Premises: Spatio-temporal knowledge about missing contact or interrupted processes.
Inferences: Inference from a great distance or missing links between A and B to a negation of a causal link.

14. Free decision
Schema: A has not caused C because between A and C the free decision B occurred.
Example: It was not the traffic situation that was responsible for the accident. He decided of his own volition to speed up in this situation.
Premises: Intentional knowledge about actions and their mental preconditions (such as motives, desires, etc.).
Inferences: Inference from a free act breaking the causal nexus to a negation of a causal relation.

15. Insufficient cause
Schema: A has not caused B because A sometimes happens without producing B.
Example: Smoking does not cause cancer because my uncle smoked all the time and never became ill.
Premises: Episodic knowledge about counterexamples or general knowledge that A does not generally cause B.
Inferences: Refutation of a general conclusion by a search for counterexamples or inconsistencies with general knowledge.

16. Unnecessary cause
Schema: A has not caused B because B sometimes happens without A.
Example: Smoking does not cause cancer. Non-smokers can get cancer as well as smokers.
Premises: Episodic knowledge about counterexamples or general knowledge that B does not generally presuppose A.
Inferences: Refutation of a general conclusion by a search for counterexamples or inconsistencies with general knowledge.

Alternative explanation

17. More plausible alternative (Thagard, 1999)
Schema: A has not caused B, but C has (because C is more probable, because C was contiguous to B, etc.).
Example: It was not the snow but rather his risky driving that caused the accident.
Premises: Specific knowledge about alternative explanations of B.
Inferences: Negation of a causal conclusion by search for a competing better explanation.

Insufficiency of evidence

18. Fallacy post hoc ergo propter hoc (Walton, 1989)
Schema: The fact that A happened before B does not prove that A caused B.
Example: The fact that the system crashed after you started the printing job does not prove that something is wrong with the printing routines.
Premises: Abstract knowledge stating that temporal sequence does not prove causation.
Inferences: Application of strict criteria for the cogency of an argument. A temporal sequence does not prove causation.

19. Low force of statistical data (Huff, 1991)
Schema: The statistical connection between A and B does not prove a causal link.
Example: If, statistically speaking, violent behavior and the amount of television viewing correlate, this would prove nothing.
Premises: Abstract knowledge stating that statistical relationships do not prove causation.
Inferences: Application of strict criteria for the cogency of an argument. A statistical correlation does not prove causation.

20. Low force of single cases
Schema: That A causes B cannot be proven by a single observation.
Example: Whether or not this new medicine causes the trouble cannot be said after just a single case.
Premises: Abstract knowledge stating that single cases do not prove a general causal connection.
Inferences: Application of strict criteria for the cogency of an argument.

21. Unknown mechanism
Schema: That A causes B is not proven as long as a detailed mechanism connecting A and B is not known.
Example: Whether smoking causes cancer cannot be said as long as the mechanisms are not known.
Premises: Abstract knowledge stating that causal knowledge is still incomplete.
Inferences: Application of strict criteria for the cogency of an argument. As long as detailed mechanisms are unknown, one has not proven causation.

Arguments qualifying causal claims

Causal complexities

22. Partial cause
Schema: A is not the cause of B; it is only a partial cause.
Example: The traffic situation was only a partial cause of the accident.
Premises: Abstract causal knowledge about the fact that effects can have multiple causes, or specific knowledge that several factors are involved.
Inferences: Inference from a complex causal structure to a qualification of a causal conclusion.

23. Indirect cause
Schema: A is not the cause of B; it is only an indirect cause of B.
Example: The rain was only an indirect cause of the accident.
Premises: Abstract causal knowledge that causes lead via chains to their effects, or specific knowledge about the links between A and B.
Inferences: Transitive inferences from chained links to a qualification of a causal conclusion.

24. Common cause
Schema: A is not the cause of B; both are effects of C.
Example: Smoking and cancer are both effects of a certain way of living.
Premises: Abstract causal knowledge that spurious causation can be explained by a common cause, or specific knowledge that C causes A as well as B.
Inferences: Inference from a complex causal structure to a qualification of a causal conclusion.

25. Interaction
Schema: A is not the cause of B; A and B cause each other.
Example: Homelessness causes unemployment and vice versa.
Premises: Abstract causal knowledge that many factors influence each other, or specific knowledge about an interaction (for an interaction in this ordinary sense, no specific statistical knowledge is required).
Inferences: Inference from a complex causal structure to a qualification of a causal conclusion.

26. Mix-up of cause and effect (Walton, 1989)
Schema: A is not the cause of B; A is the effect of B.
Example: The prison system is not the effect of the growing violence. It is its cause.
Premises: Specific knowledge about the causal direction.
Inferences: Inference from the direction of causal links to a negation of the direction of a causal relation.

Causation without responsibility

27. No intention (Hart & Honoré, 1985; Weiner, 1995)
Schema: Action A caused B but person P did not intend B.
Example: It's true, John broke Peter's toy, but he did so unintentionally.
Premises: Specific knowledge about actions and their intentions.
Inferences: Inference from the (lack of) intentionality of an action to the negation of responsibility.
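The contrastive patterns (4–8) in Table 1 all turn on comparing outcomes across conditions. For the statistical variant (5), this comparison is often formalized as the probabilistic contrast ΔP = P(B|A) − P(B|¬A) (cf. Cheng, 1993). A minimal sketch in Python, using hypothetical frequencies rather than any data from the paper:

```python
def probabilistic_contrast(observations):
    """Delta-P = P(effect | cause) - P(effect | no cause) over (cause, effect) pairs."""
    with_cause = [e for c, e in observations if c]
    without_cause = [e for c, e in observations if not c]
    return sum(with_cause) / len(with_cause) - sum(without_cause) / len(without_cause)

# hypothetical frequencies: 30 of 100 smokers fall ill, 5 of 100 non-smokers do
obs = ([(True, True)] * 30 + [(True, False)] * 70
       + [(False, True)] * 5 + [(False, False)] * 95)
print(round(probabilistic_contrast(obs), 2))  # -> 0.25: the cause raises the risk
```

A positive contrast supports the pro-argument (5); a contrast near zero is exactly what counter-arguments like (15) and (16) exploit.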

representations. By causal arguments (Walton, 1989; Weddle, 1978) we mean pieces of reasoning consisting of causal claims or conclusions (e.g. "smoking causes cancer") and premises which support the causal claims as reasons (e.g. "because smokers have a much higher risk of getting cancer").

In ordinary language, arguments are very often implicit (Toulmin, 1958). One can argue for and against causal claims without using the words "cause", "effect", "causing", "because", "therefore", "so", etc. Depending on the context, the utterance "Excuse me, but my colleague was late" might be explicated as a complete causal argument: "I'm late because of my colleague. For, if he had arrived on time I would have been able to leave earlier and therefore arrive at this meeting on time". This implicitness makes arguments difficult to analyze, but an analysis of causal arguments does not have to start from scratch. Following Hume and Mill, many philosophers have tried to define causation and, in doing so, have developed criteria for causal claims. Most of them argue normatively, i.e. they try to give an account of causation that is as coherent, unified, formal, and general as possible. The most important


analyses are based on the criteria of counterfactual necessity (e.g. Lewis, 1973) and statistical relevance (e.g. Eells, 1991). The descriptive ordinary language literature, on the other hand, has stressed that the concept of causation is not a unitary one but rather a conceptual cluster with related yet different meanings (Hart & Honoré, 1985). Ordinary language philosophy has always been cognizant of the fact that various normatively correct and incorrect criteria are used in argumentative discourse.

We believe that this richness of argument patterns has to be acknowledged if an attempt is made to understand the role of external representations in causal cognition. Like verbalizations, visualizations employ normatively correct and incorrect argument patterns (Huff, 1991). Therefore, our analysis of visualizations starts with a survey of verbal arguments in ordinary discourse. As the uses of visualizations in conference talks, advertisements, and textbooks show, this survey is indispensable because ordinary language is always, by default, the main code to rely on. Verbal comments and explanations are integral parts of visual forms of communication (Lake & Pickering, 1998), and the interplay of visualizations and speech acts has to be addressed by all relevant theories of mediated causal cognition.

2.1. A taxonomy of causal arguments in ordinary language

Our taxonomy of ordinary language arguments (see Table 1) was developed inductively and iteratively. As a first step, we collected argument patterns from several sources: (a) the general philosophical, psychological and rhetorical literature on causal cognition and causation, (b) Kuhn's interview study (Kuhn, 1991), (c) a pilot study of our own in which 16 participants of a philosophy seminar were asked to argue for and against a claim about the cause of a car accident, and (d) a content analysis of 42 articles in a local newspaper (Schwäbisches Tagblatt Tübingen, from 6 September 1996 to 30 January 1997). This led to a first version of the taxonomy with detailed classificatory criteria for subsequent use by raters. To each argument were attached typical examples, syntactical cues, and explicit descriptions of the premises and inference patterns.

The next stage took the form of a random sample of 60 articles drawn from a collection of German newspaper texts from the Frankfurter Rundschau on the ECI/MCI CD-ROM (European Corpus Initiative, 1996). Two independent raters classified 20 of these articles. Disagreements were discussed and used to refine the taxonomic criteria. With these refined criteria at hand, the remaining 40 articles were independently rated so as to calculate three measures of interrater agreement according to Cohen's kappa (Cohen, 1960). The raters were asked to judge (a) whether or not a sentence contained an implicit or explicit causal claim, (b) whether the surrounding text mentioned specific grounds for the claim, i.e. contained a complete argument, and (c) the taxonomization of an argument. The agreements were 0.81 for (a), 0.74 for (b), and 0.66 for (c). According to Fleiss (1981), kappas of 0.40 to 0.60 can be characterized as fair, 0.60 to 0.75 as good, and over 0.75 as excellent. In addition, the taxonomy was applied by one of the raters to a large text corpus.
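Cohen's kappa corrects raw interrater agreement for the agreement expected by chance. A minimal sketch of the computation (the rating data below are hypothetical, not the study's):

```python
from collections import Counter

def cohens_kappa(rater1, rater2):
    """Cohen's kappa (Cohen, 1960): chance-corrected agreement of two raters."""
    assert len(rater1) == len(rater2)
    n = len(rater1)
    # observed proportion of agreement
    p_observed = sum(a == b for a, b in zip(rater1, rater2)) / n
    # agreement expected if both raters labeled independently at their base rates
    c1, c2 = Counter(rater1), Counter(rater2)
    p_expected = sum(c1[cat] * c2[cat] for cat in c1) / n ** 2
    return (p_observed - p_expected) / (1 - p_expected)

# hypothetical judgments: does a sentence contain a causal claim?
r1 = ["claim", "none", "claim", "claim", "none", "none", "claim", "none", "claim", "none"]
r2 = ["claim", "none", "claim", "none", "none", "none", "claim", "none", "claim", "claim"]
print(round(cohens_kappa(r1, r2), 2))  # -> 0.6, "good" by Fleiss's rule of thumb
```

Here the two raters agree on 8 of 10 sentences (0.8 raw agreement), but since both use each label half the time, chance alone would yield 0.5, giving kappa = (0.8 − 0.5)/(1 − 0.5) = 0.6.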


All Frankfurter Rundschau articles on the ECI/MCI CD-ROM were scanned for the keyword "verursach*" (German for "to cause"). This method allowed a rapid scan of thousands of articles at the cost of ignoring synonyms, counterfactuals and implicit causal statements. The content analysis of the resulting sample of 1024 articles, and a discussion between us as authors and the raters, led to the final version with 27 arguments in Table 1. In the 1024 articles, 285 arguments were found. The classification of the arguments revealed that arguments explaining mechanisms were by far the most common (78.2%), followed by partial causes (3.9%), statistical covariations (3.5%), experimental comparisons (2.8%), more plausible alternatives (2.4%) and unspecific covariation arguments (2.1%) (in Table 1 these arguments are numbered 9, 22, 5, 7, 17, and 4, respectively). All other arguments remained below 2%.

Surely, there are other ways of classifying causal arguments (see Kuhn, 1991). The taxonomy presented here and its fundamental distinctions reflect our present goal: a comprehensive analysis of the premises and inference patterns of visual arguments. As we shall see, the expressive power of visualizations largely depends on the following aspects (in the following, numbers in brackets refer to arguments in Table 1):

• Whether a visualization is capable of expressing argumentative moves such as pros (1–11), cons (12–21), or qualifications (22–27).
• Whether a visualization is capable of representing all the necessary types of premises, namely, observational spatial (1, 2, 13), temporal (1, 2, 6, 12), and episodic (3–8) knowledge; explanatory intentional (7, 14, 27) and causal knowledge (9–11, 22–27); as well as abstract knowledge of the concept of causation (18–21).
• Whether a visualization supports special inference patterns, namely, generalizations (2, 5, 11), comparisons (3–8), mental simulations (8), and causal explanations (9–11, 17).

2.2. Argumentative moves

In Table 1, the distinction between pros, cons, and qualifications is the major line of division. Pro-arguments are of the form "A caused B, because…", counter-arguments or rebuttals of the form "¬(A caused B), because…", and qualifications or differentiations of the forms "A is causally connected to B but…" or "¬(A is the cause of B) but…". These basic schemas reflect the turn-taking situation of defending, attacking, and qualifying claims in argumentative discourse. We have organized our taxonomy along these lines because visualizations differ drastically with respect to the various argument moves they can support. As we proceed we shall see that only language is flexible enough to articulate all moves, whereas pictorial codes are strongly limited in their power to express negations and qualifications.

2.2.1. Pros

Arguments for causal claims range from (a) circumstantial and (b) contrastive


evidence to (c) causal explanations. Traditionally arguments of type (a) and (b) are classi®ed as inductive, whereas arguments of type (c) are considered as deductive and abductive. Most analyses in the philosophy of science and cognitive science have focused on normatively correct forms of evidence, i.e. inferences that can be easily framed in logical or statistical formalisms. There have only been a few attempts to incorporate insecure temporal and similarity cues as well (Einhorn & Hogarth, 1986). In ordinary discourse, people cite a great variety of normatively correct and incorrect evidence, as our content analysis and the few systematic studies of causal argumentation show (Kuhn, 1991). Therefore, theories concentrating on one type of evidence can only be part of the truth (Oestermeier, 1997; Thagard, 1999). (a) Circumstantial evidence is based on de®cient and indirect observational data and, in most cases, leads only to suspicions or suppositions of causal connections. The general argument form is: ``A caused B because R(A, B)'', where R is a symmetric relationship of spatial and/or temporal contiguity (1), repeated conjunction (2), or similarity (3). Yet, one could urge that, in causal contexts, asymmetric relations are what is meant, even if symmetric expressions are used. Ordinary language is notoriously vague in this respect and the temporal ``at'' in argument (1), for instance, can mean ``at nearly the same time'' as well as ``soon after''. Although temporal order is an important cue for causal order, this ambiguity often cannot be resolved. In a courtroom, for instance, the evidence provided by witnesses may rely on hearsay or disrupted observations and the exact order of events remains forever unknown. (b) Contrastive evidence. Covariation arguments compare conditions in respect to an outcome and are all of the form ``A caused B because B, under condition A, was different from B under condition : A''. 
The common idea behind these arguments seems to be that a cause is something that makes a difference (Lewis, 1973). Nevertheless, the underlying conditionals have different interpretations: (4) is the least speci®c formulation of a covariation; it simply compares the values of (at least) two changing variables. The other covariation arguments are more speci®c: (5) compares two conditional probabilities, (6) two events or qualities at two consecutive time points, (7) the outcomes under two conditions of an experiment and (8), an observed reality with a ®ctional alternative. Accordingly, these arguments stress different aspects of causation: (5) the regularity of a general probabilistic law, (6) the asymmetry of time, (7) the asymmetry between an independent and dependent variable (i.e. the intentionally manipulated condition of an experiment and its outcome), and (8) the counterfactual necessity of a cause. 1 (c) Causal explanations. As a third main group, we found arguments that offer 1

¹ These criteria can be combined. A before–after comparison (6), for instance, can be conducted with statistical data (5) measured under experimentally controlled conditions (7): "Our traffic checks have been very effective. After the start of our campaign, violations of the speed limit decreased in the participating cities from 40 to 10% whereas they stayed at this level in other cities". We decided to omit such combinations from our taxonomy as they were rare in our text corpora and we wanted to avoid a proliferation of argument patterns. No such combination was found within the corpus which was used to calculate the interrater agreement.

78

U. Oestermeier, F.W. Hesse / Cognition 75 (2000) 65–104

causal explanations instead of observations as evidence, and presuppose causal relations to the right of the connective "because". The specifics of these arguments are as follows: (9) presupposes knowledge about mechanisms, i.e. complex causal structures (e.g. chains and forks) that provide detailed explanations of how a cause brings about its effect, (10) knowledge about multiple possible explanations of an effect, and (11) knowledge about the frequency of causal regularities. In the interview study by Kuhn (1991) and our own studies, such arguments were the most frequently used (especially the variants of (9), see above).

2.2.2. Cons
Just as one can argue for, one can also argue against causal claims in numerous ways. General argumentative moves, which apply to non-causal and causal problems alike, question the expertise of a proponent or provide counter-examples for a general conclusion. Here, such moves are omitted. As specific counter-moves we only list arguments (12–16), which question the plausibility of a causal claim ("¬(A caused B), because…"), and arguments (17–22), which question the cogency of a causal argument ("that A caused B has not been proven, because…"). These counter-arguments differ with regard to the aspect of a pro-argument they dispute and in their abstractness. The more specific counter-arguments either directly contradict circumstantial and contrastive evidence or offer competing hypotheses as alternative explanations. Arguments (12–16) provide concrete information about temporal features (12), spatio-temporal contacts (13), intentional acts (14), and counter-examples or general evidence against the necessity or sufficiency of a cause (15, 16). The more theoretical argument (17) offers an alternative causal explanation as a counter-move, which seems to be the most common way to rebut a pro-argument.
Arguments (18–22) are the most abstract and, in essence, state that evidence concerning temporal successions (6), single cases (1, 4), and statistical correlations (5) remains, as such, insufficient for proving causation.

2.2.3. Qualifications
Causal problems are very often so complex that judgments of the form "A is the cause of B" are too sweeping. A natural argumentative counter-move, therefore, claims that the statement is not completely wrong but in need of qualification and amendment. Narrowing of the arteries of a leg, for instance, may be a result of heavy smoking, but one must admit that the relation between smoking and narrowing is not direct and simple: other factors like nutrition and genetic predispositions are also relevant. As our studies show, most of these arguments, which consider more complex and non-obvious causal structures like multiple causes (22), chains (23), forks (24), and bi-directional connections (25), are given without further backing. This indicates an implicit consensus that things are normally not as simple as one explicitly states.

2.3. Arguments from observations, explanations, and conceptual knowledge
Up to now we have tried to make clear that the full range of causal arguments presupposes different argument schemas, various types of premises, and inferences.


Most of these aspects have already been discussed in the existing literature, but a similarly extended discussion of how different media induce different types of knowledge and processing is missing.

Observations. Evidence about observable or measurable properties is the basis for the induction and justification of causal claims (1–7). Media and cognitive artifacts extend the domain of the observable and the measurable dramatically (Thagard, 1999). We know, for example, about the hole in the ozone layer from satellite maps in newspapers. No single individual can observe these long-term and global environmental changes directly. Observational media such as protocols, data tables, documentary photos and films are common means of communicating such knowledge.

Causal explanations. Many people have a rough idea about the mechanisms which lead to the hole in the ozone layer without ever having observed and explored these phenomena themselves. Besides explanatory texts, they know about the problem from info-graphics in newspapers and animations and simulations on TV. These popular forms of visual explanations (Tufte, 1997) are of utmost importance because, by themselves, individuals are rarely able to develop causal theories of complex non-local processes.

Conceptual knowledge. Less common, though of great relevance in scientific contexts, are applications of strict criteria for causation. One may argue, for example, that correlation does not prove causation, that single observations are insufficient for proving causation, and so on. As we will show, linguistic means are practically the only way to express such explicit abstract knowledge about the concept of causation and its related criteria (18–21). This form of knowledge, though, is also important for a critical assessment of visualizations: impressive correlations in statistical graphs are not necessarily impressive evidence for causality (Huff, 1991).
However important the distinction between these types of knowledge or premises may be, it is difficult to give clear-cut definitions for these concepts. It is a commonplace in the philosophy of science that observations are theory-laden. This commonplace also has important consequences for our present topic: external representations of data are nearly always produced and interpreted with a causal theory in mind. It would have been pure chance, for instance, if Dr. Snow had included the correct cause in his map (Fig. 1) of the observed effects without prior causal knowledge.² Only by virtue of the fact that he already had a theory about the link between water and the disease was Snow able to arrange the data in such a convincing way (see Tufte, 1997, for a historical discussion). Similarly, observations are often automatically interpreted in causal terms (Strawson, 1985). In everyday life, for instance, we describe behavior in intentional terms ("as everybody could see, he spilled the wine because he was upset about his son"), although, strictly speaking, intentions can only be inferred as hidden causes of overt behavior. It is this eager willingness of recipients to draw their own causal conclusions that enables authors to convince them of implicit causal claims by presenting

² We would like to thank an anonymous reviewer who brought this point to our attention.


observations. Indeed, many visualizations, e.g. movies, cannot be understood without a causal interpretation of the seen. That observations and causal interpretations are intertwined makes them difficult to separate in practice, but not indistinguishable in principle. As we use the terms, observations are directly linked via perceptual processes to the observed concrete episodes. Causal theories, on the other hand, are general, pre-structured, and schematized products of foregoing experiences. Causal explanations apply these theories to concrete problems, i.e. they interpret the observed and observable in terms of a theory and prior causal knowledge. These principal distinctions remain valid even if observations and theories are mediated. Photos, maps, protocol lists, etc. are often produced with the intention of illustrating a theory. However, even if their existence depends on theoretical and explanatory intentions, they are still causally linked, via the processes of recording and measurement, to the concrete represented episodes. Representations of theories in causal diagrams, simulations, etc., on the other hand, are articulated forms of an author's general knowledge. A rough criterion for distinguishing between mediated observation and theory is whether or not the involved representations contain explicitly encoded information. All intentionally produced signs that go beyond direct measurements and exact replicas of reality are potential products of inferential, and hence theory-laden, cognitive processes.

2.4. Inference patterns and their verbalization
Generalizations, comparisons, explanations, and mental simulations are essential parts of any serious cognitive activity. Therefore, it ought not to be surprising that these fundamentals show up in a comprehensive taxonomy of causal arguments. Language offers many means to indicate and express such cognitive processes. However, an all-embracing list of these means is beyond the scope of this paper.
Some examples might illustrate the numerous possibilities of how these basic inference patterns show up on the linguistic surface: generalizations, for instance, can be expressed in plural constructions or with quantifiers (e.g. "all", "most", "many"); comparisons can be expressed by comparatives (e.g. "bigger") or oppositions (e.g. "small–big"); mental simulations, and especially counterfactuals, can be expressed with the help of the subjunctive or by labeling scenarios as "possible" or "fictitious"; and causal explanations describe causal structures by verbs with implicit causal content (e.g. "to pull", "to help") or explicit causal phrases (e.g. "causes", "this led to"). Although these forms of reasoning are verbally expressible, linguistic codes do not always support them efficiently: Neurath's poster in Fig. 2, for instance, presents statistical data along a symmetry axis (humans are especially suited to detect deviations from symmetries) and uses the contrast of black and white to invite recipients to compare the data for themselves. The same set of data could have been presented in sentences or lists, but this would make it more difficult for the recipient to arrive at similar conclusions. This is what is meant by our claim that visual means can facilitate tasks that are supported neither by the vocabulary, nor the grammar, nor the linearity of texts.


After providing an impression of the multiple facets of causal argumentative patterns and forms of causal knowledge, we now turn to our main issue: how argumentative patterns are implemented in the use of external representations and cognitive artifacts. We concentrate on those visual media that are integral parts of causal cognition in modern culture, namely tables, graphs, time series, causal diagrams, drawings, maps, animations, photos, films, and simulations.

3. Visual causal arguments
Humans interpret the world, whether real or represented, in causal terms. Therefore, nearly all media can be used to evoke causal inferences on the side of the recipients. Accordingly, experiments in causal cognition have been conducted with texts (e.g. Ahn et al., 1995), tables (e.g. Ward & Jenkins, 1965), drawings (e.g. Ferguson & Hegarty, 1995), animations (e.g. Heider & Simmel, 1944), videos (e.g. Storms, 1973), and simulations (e.g. White, 1993). Only a few studies compare different external representations of causal information: some find effects of the presentation format (e.g. Ward & Jenkins, 1965; Anderson & Sheu, 1995; Lober & Shanks, 2000) and others do not (e.g. Wasserman, 1990). In spite of these divergent findings and the manifold media used in everyday cognition, a comprehensive "representational analysis" (see Zhang, 1996) of the various media is still missing. The fundamental question, namely which types of evidence can be mediated by which form of external representation, has largely been ignored. To answer this question, we propose in this section to analyze uses of visualizations as argumentations, i.e. we assume that verbal and visual arguments express the same basic ideas.

Although nearly all experiments in the domain of causal cognition have been conducted with external representations, causal reasoning is still widely considered as something that happens in the head of a reasoner. In one sense, this is a truism.
The interpretation of an external representation depends on the individual reasoner and his knowledge, and it is only the interpretation which makes an argument sound and plausible, or a conclusion true or false. But in another sense, reasoning can be considered as an interaction of one or more individuals with their information environments (Clark, 1997; Hutchins, 1995; Zhang & Norman, 1994), i.e. as a discourse not merely taking place in the head, because many argumentative steps are delegated to other people or tools. With paper and pencil or computers we can externalize these steps, transform abstract relationships into visible spatial ones, and thereby inspect and control argumentative and causal relationships in a way completely unknown to illiterate societies. Scientific forms of causal reasoning, especially, are unthinkable without external support (Crosby, 1997; Donald, 1991; Thagard, 1999). The specific roles of visualizations in this process of argumentation are twofold: they provide the necessary knowledge about the premises of an argument, and they invoke, support or externalize the specific inference patterns that lead from the presented premises to the intended conclusions. In both roles, the representational and the inferential, visual arguments drastically differ from verbal ones: maps,


for instance, are special tools for the representation of large-scale areas, but they cannot visualize intentions (because they do not show the behavior or facial expressions of individuals) and they cannot represent knowledge of the concept of causation (they only show concrete spatial areas). Linguistic codes are unlimited in this respect; the producer can express everything as explicitly as he likes. In consequence, there is an important asymmetry between verbal and visual codes. Articulate argumentations, weighing pros and cons, are easily expressible linguistically and can be performed without visualizations, whereas most visualizations are incomprehensible without accompanying speech acts (Fleming, 1996; Messaris, 1997). In principle, we do not rule out the possibility that contextual information and non-linguistic codes can be used to disambiguate and comment on visual arguments, but in practice an extended exchange of arguments cannot be achieved without words.

Why are visual arguments used at all if, on the one hand, the expressive power of visual non-linguistic representations is so limited that they need "contexts", and, on the other hand, all argument patterns can, in principle, be expressed propositionally? The answer is simple: unlike language, visualizations are specialized tools for special cognitive tasks. Word sequences are not well suited to support perceptual inferences, whereas non-uniformly structured visualizations include the highly developed visual apparatus in higher cognitive processes (Larkin & Simon, 1987; Rumelhart, 1989). Externalization of reasoning means that visualizations supplant, support and facilitate such inference tasks. Comparisons, for instance, can be supported by split screens, and counterfactual "what-if" analyses by interactive computer simulations.

3.1. A matrix of causal arguments and their visualizations
Visualizations are special tools for special cognitive tasks.
Accordingly, not all forms of causal evidence can be expressed in every visualization in the same way, and not all forms of reasoning are externalized by all media in a similar manner. The taxonomy of causal arguments will serve as a guideline for the representational limitations and inferential support of the various visual media. The media–argument matrix in Table 2 summarizes the analysis and shows the argument patterns supported by each medium. The rest of this paper mainly defends and explains the entries in this matrix. After briefly explicating the definitions of media as well as the background assumptions in Sections 3.1.1 and 3.1.2, the general principles used for the construction of the matrix will be discussed in Sections 3.2–3.4.

3.1.1. Background assumptions and definitions
To begin with the definitions and assumptions: isolated and pure visual forms are rarely used in real life. Text boxes, for instance, are integral parts of causal diagrams and maps. If we take such textual elements as genuine parts of a visualization and allow for arbitrary texts, we are in danger of overrating the expressive power of visualizations, because everything can be represented propositionally (Anderson, 1978). To avoid such trivialities, we concentrated on single representations (i.e.

Table 2
Media and their visual support of causal arguments a

Media (columns), grouped as in the original: Symbols: Table. Diagrams: Graph, Time series, Causal diagram. Pictures: Drawing, Map, Animation. Indexical pictures: Photo, Movie. Hybrids: Simulation.

Arguments (rows):
Pros. Circumstantial evidence: 1. Contiguity; 2. Co-occurrences; 3. Similarity of cause and effect. Contrastive evidence: 4. Covariation; 5. Statistical covariation; 6. Before–after comparison; 7. Experimental comparison; 8. Counterfactual. Causal evidence: 9. Mechanism; 10. No alternative; 11. Typical effect.
Cons. Counterevidence: 12. Wrong temporal order; 13. No contact; 14. Free decision; 15. Insufficient cause; 16. Unnecessary cause. Alternative explanation: 17. More plausible alternative. Insufficiency of evidence: 18. Post hoc ergo propter hoc; 19. Low force of stat. data; 20. Low force of single cases; 21. Unknown mechanism.
Qualifications: 22. Partial cause; 23. Indirect cause; 24. Common cause; 25. Interaction; 26. Mix-up of cause and effect; 27. Not intended.

[The individual ++/+/− cell entries of the matrix are not legible in this copy and are omitted.]

a Cells of the matrix indicate whether a medium is able to visualize an argument pattern without restrictions (++), in part (+), or not at all (−). See Table 1 for the premises and inference patterns of the arguments and the text for a discussion of how media characteristics constrain the range of visualizable causal arguments.


single photos, maps etc.) and relatively pure visual forms (e.g. pictures without texts, movies without sound tracks) in Table 2. Here we are interested only in the argumentative functions of visual formats as such; accordingly, one must carefully distinguish between symbolization and visualization. As we use the terms, a representation symbolizes information or knowledge if linguistic or numeric codes are used, and visualizes information or knowledge if pictorial or diagrammatic formats are used. A combination of visual and symbolic codes is constitutive for many forms of representation: causal diagrams, for instance, visualize causal relations by arrows and symbolize the relata by letters. A representation need not visualize or symbolize all the premises and all steps of inference. As the matrix shows, at least three cases have to be distinguished. (a) The premises of an argument are visualized and the inferences are visually supported, i.e. everything that is necessary for the recipient to draw the intended causal conclusion is available in a visual format. Symbolic labels may be part of the representation but are not essential. Symbolized information is not essential if the symbols can be removed or substituted by arbitrary signs and the argument pattern remains recognizable. (b) Premises and inferences are visualized only in part, i.e. symbolized information is essential (e.g. the colors in a thematic map cannot be understood without legends). (c) Premises and inferences cannot be visualized within a representation format and are only expressible if symbolic notations or contextual additions are used (e.g. in historical maps the years of battles have to be annotated numerically). In Table 2, case (a) is marked by "++", (b) by "+", and (c) by "−".

3.1.2.
Media and their characteristics
The distinctions mentioned above have to be kept in mind when trying to understand the characteristics of the various media and their specific weaknesses and strengths, to which we now turn. The general principles that constrain the expressive power of these media will be discussed in the subsequent Sections 3.2–3.4.

Tables. Webster's New Encyclopedic Dictionary (1993) defines a table as "a systematic arrangement of data in rows or columns for ready reference". As we use the term, table entries symbolize and do not visualize the data. Tables as a whole, however, can be considered as visual forms of argumentation because their columns and rows support visual comparisons (3–8) and generalizations (2, 5) of data (Ward & Jenkins, 1965). By juxtaposing the data of different conditions of an experiment, for instance, the eye can easily switch between these conditions. In principle, tables can contain implicit causal and intentional descriptions, as well as explicit explanations and arguments, but such text tables are not considered here. Tables with digital data have some advantages over pure graphs because they can describe values and differences exactly, can lead to better quantitative estimates, and are subject to effective heuristics in complex tasks (see Meyer, 1996, for a review).

Tables arrange symbolic information spatially, but space has no denotative function in tables. Diagrams are different in this respect. Characteristic for diagrams is


that they represent non-spatial relations such as temporal, causal, logical, and statistical ones by spatial and topological relationships. The most important and popular diagrams are graphs and time series, which visualize quantitative knowledge, and causal diagrams, which visualize qualitative causal knowledge.

Graphs. A graph is "a diagram that represents change in one variable factor in comparison with that of one or more other factors" (Webster's New Encyclopedic Dictionary, 1993). Examples are dot plots showing multiple measurements, and line or bar graphs showing means. Such graphs represent quantities by spatial extensions and positions, i.e. they presuppose a basic understanding of Cartesian geometry.³ Like tables, graphs can be used to make comparisons of measurements easier and thus support the same forms of contrastive evidence as tables (4–8). Frequently, however, graphs are a better method to show the relations among data. Especially if the data are diffuse, they can reveal general trends that remain hidden in tables.

Time series are graphs that represent temporal changes in one or more variables. Time is mainly visualized analogously along a spatial dimension, but symbolic markers are commonly used to denote temporal units (days, hours, etc.) and exact time points. Time series support the same inferences as graphs but stress such temporal aspects as simultaneity (1), temporal asymmetry (6, 12) and variation. Compared with tables of temporal data, they are especially useful for the visualization of cyclic symptoms of repetitive mechanisms (see Fig. 3).

Causal diagrams denote causal relationships by the spatial and topological relationships of connected arrows. They abstract completely from temporal, spatial, and other non-causal forms of episodic knowledge and visualize causal structures of arbitrary complexity (9, 22–25) in what is perhaps the purest possible way.
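Stripped of its layout, the arrow structure of a causal diagram is a directed graph, and the perceptual inferences it supports (reading off multiple causes, following chains) correspond to simple graph operations. The following sketch uses an invented set of edges, loosely echoing the paper's smoking example, purely for illustration:

```python
# A causal diagram reduced to its arrow structure: a directed graph.
# The edges are invented for illustration (not taken from the paper's figures).
edges = {
    "smoking": ["tar deposits", "nicotine"],
    "tar deposits": ["narrowed arteries"],
    "nutrition": ["narrowed arteries"],
    "nicotine": [],
    "narrowed arteries": [],
}

def causes_of(effect):
    """Multiple causes of an effect can be read off at a glance (cf. 22, 24)."""
    return [cause for cause, effects in edges.items() if effect in effects]

def reachable(cause, effect, seen=None):
    """Follow a transitive causal chain along the arrows (cf. 9, 23)."""
    seen = seen or set()
    if cause == effect:
        return True
    seen.add(cause)
    return any(reachable(e, effect, seen) for e in edges[cause] if e not in seen)

print(causes_of("narrowed arteries"))  # the partial causes of the effect
print(reachable("smoking", "narrowed arteries"))  # an indirect cause via a chain
```

The explicit arrow direction also makes a mix-up of cause and effect (26) visible: `reachable` succeeds only along the arrows, never against them.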
Like other diagrams, they permit perceptual inferences, "which are extremely easy for humans" (Larkin & Simon, 1987, p. 98). Multiple causes and effects (22, 24) can be seen at a glance, and transitive causal chains (9, 23) can literally be followed by the eye (see Fig. 5). Additionally, the clearly visible direction of causation minimizes the error of confusing cause and effect (26). Considered as visualized search spaces for causal links, they also support the search for alternative explanations (10, 17). Even very complex structures which remain incomprehensible in textual or pictorial representations can be visualized in this way.

Whereas diagrams use spatial relations to denote abstract non-spatial relations, pictures represent concrete spatial forms. Typical pictures, as they are considered here, are iconic representations and are, in certain relevant ways, similar to the things they represent. Spatial relationships like contiguity and contact (1, 13) are represented by spatial relationships. From a cognitive point of view, this means that icons can be understood if one grasps the nature of the similarity between the representation and the represented. Iconic representations such as photos and films provide information structures which are similar to the natural environment (Gibson, 1986). The corresponding perceptual processes evolved long before language and other cultural codes appeared. With the help of these processes

³ In Table 2 we assume that graphs represent neither longitude, latitude, nor time along the x- and y-axis. Such graphs are equivalent to maps or time series.


Fig. 3. A time series which shows the monthly outgoing mail of the U.S. House of Representatives from 1967 to 1973. Peaks regularly occur every 2 years. As a non-citizen of the United States one must know the election days to understand the causality behind these peaks: Representatives make heavy use of their free mailing privileges to foster their re-election. After Tufte, 1983, p. 37.

pictures mainly support perceptual inferences from observations; explicit and articulated theoretical knowledge cannot be represented in realistic pictures. This might explain why these representations are so rapidly processed and seldom the subject of articulate forms of reasoning.

Drawings are pictures that represent objects by means of lines and shapes. Drawings often show structures more clearly than realistic pictures (Ferguson, 1992). In contrast to photos, drawings can visualize past and future as well as factual and fictitious scenes (8). They can also visualize temporal sequences by juxtaposing consecutive stages of a process, and movements by lines or arrows (see Fig. 4). Thus, drawings can be used to illustrate concrete mechanisms of arbitrary complexity (9, 22–25). Even hidden mechanisms can be illustrated in exploded views of cars and other machines. Such technical drawings are important aids in the search for explanations (10, 17) of defects.

Thematic maps are drawings of non-spatial properties within an area (see Figs. 1 and 6). The spatial distribution of dots or symbols represents knowledge about places and regions. Conventional shadings or colors visualize thematic features and are mostly explained in the form of a legend. As overviews of large areas, maps support searches for co-occurrences of different event types (2), as well as comparisons between different regions (5). As specialized representations of large-scale areas


Fig. 4. Technical drawings from H.T. Brown's 1868 book Five Hundred and Seven Mechanical Movements (from Ferguson, 1992, p. 118). Such drawings illustrate complex explanations by clearly reducing mechanisms to their component parts.

they can only illustrate mechanisms that operate in geographic dimensions (e.g. maps that show the operation of the Gulf Stream).

Animations are dynamic sequences of non-photographic pictures. Thus, they combine the representational properties of drawings with a direct representation of the flow of time and movements, which cannot be directly observed in static pictures. They can represent arbitrary fictitious episodes (8) as well as hidden causal structures (9), especially in their temporal aspects (6).

Indexical pictures such as photos and movies are generated with the help of cameras. Thus, they are more independent of the intentions and manual abilities of their producers than drawings and paintings. In argumentative contexts they are mainly used because the camera guarantees a causal link from the represented to the representation (Messaris, 1997). They endorse the illusion of being an eyewitness who can observe the situation itself, despite the fact that the photographer, and not the recipient, controls the shown sector of reality.

Photos are static pictures (semi-)automatically produced by a camera. A short exposure of the film material freezes a single time point (we will not be considering multiple exposures or long exposure times here). Hence, single documentary


Fig. 5. A causal model of a population system in Stella. This modeling system uses a variant of causal diagrams to explicate the causal dependencies between the variables. At the same time, the graphical interface provides access to the mathematical relationships between the variables.

photos can prove the simultaneity of the depicted events (1). Before–after comparisons (6) require at least two photos with additional information about the temporal order. In advertisements for diets, juxtaposed photos often show overweight persons "before" and "after" the diet. In contrast to drawings, photos can only visualize mechanisms operating at the "surface" and not hidden by other objects (we will not be considering X-rays or infrared cameras here). They are not able to represent the structure of economic, social, cognitive and other complex abstract phenomena.

Movies are dynamic sequences of photos produced by a special film camera. Whereas animations extend drawings, movies mainly add temporal aspects to the underlying photographic format. Thus, temporal order (6, 12) can be represented in single shots. Cuts, split screens, and multiple sequences are filmic means which support the representation of contrastive (4–7) and general (2, 5) visual evidence.

Simulations. Nowadays, simulations are substitutes for experiments in many fields of engineering and science. This would be a waste of time and money if they were not considered convincing arguments for causal claims. Examples are flight simulators, numeric simulations with graphic interfaces, and modeling systems. In these computer systems, the user can see the effects of his intentional manipulations directly, in a way that traditional static media cannot match. Simulations combine counterfactual arguments (8) and arguments from interactivity (7) with the explanatory power of quantitative scientific models. They allow the user to observe the behavior of a system under quasi-experimental conditions governed by causal background theories of varying degrees of complexity (9–11, 22–25). A systematic comparison between the results of various simulation runs can be supported by additional tabular displays.
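The combination of counterfactual (8) and quasi-experimental (7) evidence can be sketched as two runs of the same model under different parameter settings. The following minimal stock-and-flow sketch is in the spirit of the population model of Fig. 5; the numbers and the growth rule are invented for illustration:

```python
# A minimal stock-and-flow model: a population stock grows by a birth-rate flow.
# Two runs implement a "what-if" comparison between a factual and a
# counterfactual condition. All numbers are invented for illustration.
def simulate(population, birth_rate, years):
    history = [population]
    for _ in range(years):
        population += population * birth_rate  # the built-in causal rule
        history.append(round(population, 1))
    return history

factual = simulate(100.0, 0.05, 5)         # the observed growth rate
counterfactual = simulate(100.0, 0.02, 5)  # what if the rate had been lower?
print(factual[-1], counterfactual[-1])
```

Comparing the two final values externalizes the counterfactual inference: the difference between the runs is attributable to the manipulated parameter, because everything else was held constant.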
The main idea behind a computer simulation is that the simulation follows algorithmic rules, just as real events follow natural laws. Thus, causality is built into simulations. In most cases, the encoded causal premises are completely hidden from the user and only explicit to the programmer (mainly in terms of ``if…then…'' constructs and assignments to variables). There are only a few simulation packages which use a special notation for visualizing causal relationships within the user interface (see Fig. 5). In sum, simulations combine diagrammatic, pictorial, and symbolic codes. In all their varieties, they carry more causal argument patterns than any other representations discussed so far.

U. Oestermeier, F.W. Hesse / Cognition 75 (2000) 65–104

Fig. 6. A thematic map showing the populations of the peppered moth (Biston betularia) in the United Kingdom. The dark varieties carbonaria and insularis frequently occur in industrial regions. Only with knowledge of industrial regions in the U.K. (or the reading of this caption) does the intended causal argument become obvious. (After Sauer & Müller, 1987, p. 36.)

Having discussed the limitations and strengths of specific media, we now turn to the general principles behind the items in the media-argument matrix in Table 2. These principles are derived from the main distinctions of our taxonomy of causal arguments: the argumentative moves, the types of premises, and the involved inference patterns.

3.2. Argumentative moves

As can be seen in the matrix, media differ in their ability to convey the building blocks of any debate: pros, cons, and qualifications. It is clear from the examples above (see Figs. 1–6) that visualizations can provide evidence for causal conclusions. But can they also be used to deny and qualify causal statements? Many


authors are skeptical of this, and some theorists even argue that visual argument is a contradiction in terms (Fleming, 1996) because ``Pictures can't say ain't'' (Worth, 1975) and debates cannot take place without negations of claims. It is true that pictures have no explicit sign for negation. Nevertheless, they can be used to trigger inferences with negative conclusions (Messaris, 1994; Lake & Pickering, 1998). In the courtroom, for instance, a photo can refute a driver's claim that he always buckles up if it clearly shows him driving without a seat belt. However, such visualizations provide no special support, in the form of perceptual cues, for detecting that obvious contradictions between person A's observations and person B's claims exist. This indicates that knowledge of the turn-taking situation between a proponent and an opponent is sufficient for identifying communicative acts as claims and counter-claims, negations or affirmations, defenses or attacks. Pictures and photos can be used in both ways, although it seems more common to use pictures to demonstrate that something happened than to prove that something did not happen. A photo, for example, can easily show that two persons met and thus were able to influence each other, but it cannot show that two persons have never met in their lifetime. The latter is difficult to communicate by means of pictures because everybody knows that pictures only represent a small spatio-temporal sector of reality. Thus, pictures can provide evidence for negative conclusions but, as such, do not support negative claims. 4

The limitations become more obvious if we consider counter-arguments that question the sufficiency of evidence provided by a proponent (18–21). It is difficult to imagine how the thought ``a correlation does not prove causation'' can be expressed without symbols, although this objection is important in the evaluation of many statistical graphs.
Doubts about whether or not necessary or sufficient conditions for the application of a concept are violated can easily be expressed linguistically, but a picture or visualization does not explain its doubtful cogency of itself (Gombrich, 1982). Similarly, pictures offer no systematic way of introducing new conceptual distinctions and qualifications into a discourse. How should one visualize, for instance, that one considers personal responsibility and not causality the issue at stake (27)? Language is unrestricted in this respect: new distinctions and combinations can be constructed from a large initial vocabulary (e.g. the concept of an ``indirect cause'' from ``indirect'' and ``cause''). Such combinations of ideas cannot generally be achieved in visualizations. Nevertheless, a number of important distinctions are built into some visual codes. Causal diagrams, for instance, clearly visualize partial, indirect, and common causes. But all qualifications that go beyond such clearly visible patterns need additional verbal comments in order to be understandable.

4 Only interactive displays can be considered an exception to this rule. While simulation packages typically do not include explicit signs for negation, they can be used to change variables and thus compare a value p with a value ¬p. The act of negation is built into the manipulation of conditions. It is self-evident that someone argues from different premises if he changes the parameters of a simulation, even if these parameters are pictorially encoded. Therefore, in Table 2 simulations were marked as the only visualizations that fully support counter-arguments.


Thus, visualizations are not capable of weighing all pros and cons. Nevertheless, this does not render them worthless in the context of argumentation. Concrete representations are often easier to interpret and process than abstract codes. Compared to symbolic codes, visualizations have specific advantages for the support and externalization of mental operations. There is a fundamental trade-off: what one loses in representational power, one gains in inferential effectiveness (Levesque & Brachman, 1987; Stenning & Oberlander, 1995). The following sections turn to these representational limitations and inferential strengths.

3.3. Arguments from observations, explanations, and conceptual knowledge

As we have seen in Section 2.3, arguments from observations and explanations have a parallel in observational and explanatory media. Maps and graphs, for instance, directly map concrete spatial and episodic data on to a plane surface, but intentional and causal relations cannot be represented directly in these visualizations. Abstract knowledge about concepts is clearly beyond the representational power of these media. Our distinction of arguments from observations, explanations, and abstract knowledge about the concept of causation serves, therefore, as a major guide to the representational limitations of visualizations. The question we want to address now is: which principles determine these representational limitations? As will be seen, the following media characteristics are relevant here:

• Whether the medium uses symbolic, diagrammatic, pictorial, or hybrid codes. This distinction is relevant because symbolic codes are unlimited in their ability to express abstract conceptual knowledge, whereas diagrammatic and pictorial codes are restricted to observations and explanations.
• Whether the medium is static, dynamic, or interactive. This distinction is relevant because it determines whether spatial, temporal, and causal relations are intrinsic to these media or require more explicit articulations.
• Whether or not the representation is indexical, i.e. causally linked to the thing represented. Indexical media are restricted to observations and measurements and cannot support the elaboration of fictitious scenarios.

We will discuss these aspects along our taxonomic distinction of observational, theoretical, and abstract conceptual knowledge.

The visualization of observations. There is no doubt that much of our causal knowledge relies on observable spatio-temporal positions of events and objects and their other observable and measurable properties like weights, movements, temperatures, velocities, etc. All these data can be described verbally or encoded digitally in tables in an abstract and static way and without any similarity between the representation and the represented. 5 Pictorial visualizations, on the other hand, preserve the spatial arrangement of the

5 Realistic pictures can show digital or analog measurement devices in addition to the objects under investigation, as is frequently the case in textbooks. Such indirect pictorial replicas of non-pictorial codes have not been included in the present paper. We consider only those cases in which data are directly mapped on to spatial or temporal dimensions of a representation.


represented objects, and dynamic visualizations preserve, at least within one scene, the represented flow of time. Following Palmer (1978), one can speak of intrinsic representations because an isomorphism between the representation and the represented exists, i.e. spatial contiguity (1) and contact (13) cannot be left unspecified in pictures, just as temporal contiguity (1) and sequence (6, 12) cannot be left unspecified in dynamic media. These relations can be efficiently read off from the display and need not be inferred from symbolic descriptions and coordinates. They are processed as important cues for causality (Messaris, 1997; Shanks et al., 1989) even if they are irrelevant. In any case, the direct experience of the continuous spatio-temporal flow of events is a crucial factor in the direct perception of causal relations (phenomenal causality), which can be described but not experienced in abstract and static media (Michotte, 1963; Schlottmann & Anderson, 1993).

Thus, spatial and temporal observations can easily be visualized, but other non-spatio-temporal episodic data are more difficult to visualize in pictures. Does a mass accelerate at a constant rate or not, does an object become hotter or colder? These questions are almost impossible to answer on the basis of concrete and realistic pictures alone. In such cases, graphs that represent these non-spatial properties within a spatial coordinate system are useful. In them, equality (3) and increases and decreases (4) of values can easily be seen, whereas in reality these critical comparative relationships are often difficult to discriminate. In such graphs the isomorphism between the representation and the represented is more abstract than in the case of pictures. In consequence, these abstractions cannot be understood without symbolic labels that explain the meaning of the visual elements of the graph.

The visualization of explanations.
Arbitrarily complex causal structures (9, 22–25) can be visualized explicitly in causal diagrams or modeling systems and implicitly in pictorial media. If the causal relations between the components are either directly visualized (as in the case of causal diagrams) or obvious to competent recipients (as in the case of technical drawings), the recipient can actively construct his explanations from visual constituents with which he is already familiar (see Fig. 4). Hegarty (1995) used static drawings of pulley systems and found that eye movements follow the suspected causal links between the parts of the systems, i.e. the perception of the parts is already guided by prior causal knowledge about mechanisms. As long as the spectator is able to analyze the complex system into its constitutive parts, even photographed mechanisms can become ``self-explanatory''. Most people, for instance, are able to understand photos of chain transmissions and other simple devices. Graphs, in contrast, are not compositional in this sense: they can show only a limited number of variables and lack an explicit notation for asymmetric causal relations. They map quantities on to symmetric spatial relations (Kosslyn, 1989). However, this does not rule out that the intended interpretation is often an asymmetrical causal one. This can most easily be induced with symbolic labels: if one axis is labeled ``number of cigarettes'' and the other ``age of death'', most people will immediately try to interpret the shown data causally. It is common practice to assign causes in this way to the x-axis and effects to the y-axis (Tufte, 1983). Gattis and Holyoak (1996) found that deviations from this rule can lead to different causal interpretations, e.g. whether an effect is considered as strong or weak. They argue that the assignment of causes to


the x-axis and effects to the y-axis is more than a culturally bound convention because subjects from different cultures with different reading orders show the same patterns of interpretation. This is one example of how data can be arranged in accordance with prior causal knowledge. Another important case is the representation of predictions in tables, graphs, drawings, maps, and animations. These media can represent fictitious and factual data alike. Only indexical media like photos, traffic videos, and printouts of seismograms and electrocardiograms guarantee, if properly working, that the shown events really happened. The resulting representations are causally linked to the observed episodes. Such recording and measurement devices display ``raw data'' and thus produce probably the most theory-free visualizations available.

The visualization of conceptual knowledge. Abstraction from irrelevant details is crucial in all forms of media production and reception, but some media make this simpler than others. In a flight simulator, for example, we have built-in space in the pictorial display, built-in time in the system dynamics, and built-in causation in the interactive manipulations of the system. Texts, which represent the other extreme, need not say anything about these fundamental aspects of reality. Pictures and diagrams lie between these extremes. The limited abstractness of icons becomes obvious if one tries to find depictions for the most important words of an ordinary language (see the attempts by Ogden (1932) and Neurath (1936) to depict the 850 words of Basic English). It is very easy to find icons for concrete nouns (like ``apple'', ``bird'', and ``door'') but it is nearly impossible to visualize epistemic (e.g. ``belief'', ``experience''), argumentative (e.g. ``agreement'', ``argument'', ``because'', ``but''), logical (e.g. ``all'', ``every'', ``no'', ``or'', ``if''), and causal (e.g. ``cause'', ``effect'', ``impulse'') expressions.
Thus, there seem to be no natural and easy ways of pictorially communicating about beliefs, doubts, agreements, disagreements, non sequiturs, etc. But all these concepts are essential for discussions about causal problems, our knowledge about the problems, and the state of the debate. Whereas verbal arguments may question whether any criteria for causation have been fulfilled, visualizations are not even able to formulate arguments concerning the concept of causation (18–21, see also Section 2.3). Even diagrams, which are more abstract than iconic pictures, still need symbols to express knowledge about concepts and other abstract entities.

3.4. Inference patterns and their visual support

The representation of data and theoretical premises is one of the major argumentative functions of visualizations; the other is to support and trigger argument-specific inference patterns. In the domain of causal cognition, various forms of reasoning have been distinguished. It is a common assumption, for instance, that associationistic and perceptual inferences have to be distinguished from more reflective forms of reasoning (Lober & Shanks, 2000; Shanks, 1991). We share this assumption and distinguish between automatic and rapid causal interpretations of perceptual cues, which require no special support, and more complex operations such as generalizations, comparisons, counterfactual reasoning, and causal explanations.


Generalizations. Linguistically, generalizations can be expressed in plural constructions or with quantifiers like ``all'', ``most'', or ``many''. Visualizations lack such means. Nevertheless, they are often interpreted as general messages. Pictograms of women and men, for instance, abstract from the individuality of persons and stand for women and men in general. The generality of such interpretations is a matter of circumstance, convention, and expectation. Within the list of media discussed here, realistic movies are the most concrete visual representations. As they show particulars, all intended generalizations have to be inferred by the viewers themselves. Whereas a verbal example can be generalized by a single phrase

Fig. 7. Neurath's visual argument for preventive health care. The captions mean ``RICKETS IS CURABLE, Rickets not treated, Rickets treated''. Juxtapositions serve two functions here. Read from left to right, the two picture columns compare two different conditions. Read from top to bottom, a temporal order is imposed. Both children are drawn identically at the beginning but look different in the end. From Müller, 1991, p. 75.


(``as is always the case''), pictorial codes use schematizations, sequences and juxtapositions of cases, and representations of multiple objects to support generalization (Messaris, 1997). Generalizations can thus be induced in several ways. Some are more conventional and symbolic in nature, others more iconic and visual. Dr. Snow's map in Fig. 1, for instance, shows all single cases, whereas the pie charts in the map in Fig. 6 aggregate multiple observations. It is assumed that the user knows the meaning of the sizes of pie segments. Similar implicit generalizations are often used in data graphs, where means and other aggregates are presented as single quantities. More explicit forms of visual generalization presuppose that the recipient actively scans the display for multiple items. In Fig. 6, both forms of generalization are used in combination.

Comparisons are similar to generalizations in that they require the representation of several objects or events. Comparatives are one way to express contrasts linguistically. Visualizations lack such grammatical constructions, but there are several visual means to invite recipients to perform the necessary comparisons themselves: tables, superpositions of line graphs, juxtapositions of drawings and photos, split screens within movies and animations, sequences of episodes, etc. All these visual ``comparatives'' exploit the disposition of our cognitive system for actively pursuing comparisons (4–8). The aspect in which the comparison is made depends on context. Spatial juxtapositions of pictures, for example, can denote temporal sequences (as in comics) as well as simultaneous conditions of an experiment. Conventions concerning the arrangements from left to right or top to bottom for temporal order are used by the recipient to determine the intended before-after comparison or comparison of different conditions (see Fig. 7 for a combination of both forms of comparison).
Such conventions seem to depend on the reading order of a culture (Gattis & Holyoak, 1996). In one of the few experiments addressing the effects of various representation formats, Ward and Jenkins (1965) found that tabular summaries lead to causal ratings in accordance with the comparative argument (4): 75% of the subjects who received data in a table followed a contingency rule which presupposes a comparison between cells, whereas only 17% of subjects receiving the raw data on a trial-by-trial basis followed this pattern. In the serial trial-by-trial condition the participants had to rely on their memories to make the relevant comparisons. In this condition the subjects rather answered in accordance with argument (2): they simply counted the co-occurrences of the putative cause with the putative effect. 6

Mental simulations. Counterfactuals do not compare observations, unlike the other comparative forms of evidence discussed so far. They are non-inductive and compare a factual episode with a fictional one that is constructed on the basis of prior knowledge. Several theorists have stressed that such mental simulations are essential for causal reasoning (Mackie, 1974; Wells & Gavanski, 1989). Cognitive artifacts can be used to extend and refine such thought experiments and mental

6 It would be interesting to know whether the presentation of the trials in a list would have shown a similar strategy. The results of Wasserman (1990) indicate that the differences between trial-by-trial presentations and tables are much greater than between lists and tables.
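The contrast between the contingency rule (argument 4) and mere co-occurrence counting (argument 2) can be made concrete in a small computation over a 2 × 2 table; the cell counts below are hypothetical and serve only to show that the same co-occurrence count can go with very different contingencies:

```python
# Contingency (Delta-P) rule over a 2x2 table, contrasted with simple
# co-occurrence counting. Cell layout:
#                  effect    no effect
# cause present      a          b
# cause absent       c          d

def delta_p(a, b, c, d):
    """P(effect | cause) - P(effect | no cause): requires comparing cells."""
    return a / (a + b) - c / (c + d)

def co_occurrence(a, b, c, d):
    """Merely counting joint occurrences of cause and effect (cell a)."""
    return a

# Identical co-occurrence counts (a = 30), yet different contingencies:
print(delta_p(30, 10, 30, 10))   # 0.0 -> no covariation
print(delta_p(30, 10, 10, 30))   # 0.5 -> strong covariation
```

A tabular display externalizes exactly the cell comparison that `delta_p` performs, whereas a trial-by-trial presentation forces the subject to hold the cells in memory.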


manipulations in dramatic ways. The question ``what would have happened if…'' is often especially difficult to answer, because it is nearly impossible to have a general view of the consequences of fictitious scenarios. Pencil and paper have been shown to be helpful in reasoning tasks that presuppose the construction of several possible situations (Bauer & Johnson-Laird, 1993). Again, this crucial argumentative function depends on the semiotic properties of media, i.e. whether they are symbolic, iconic, or indexical. Verbal codes can easily mark the important distinction between fact and fiction by labeling them ``fact'' and ``fiction'' or by using the subjunctive, whereas visual codes provide no such cues. Consider the pictograms in Fig. 8. They certainly have a general and factual intention. They warn many persons about situations that happened and can happen again. Nevertheless, they can be used as a substitute for a singular counterfactual argument. Assume that the warning sign was clearly and visibly attached to a drill, the worker ignored it, and the depicted accident took place. In such cases, a silent tip of the finger on the right-hand part of the warning sign can be as convincing as a spoken counterfactual argument: ``That would not have happened if you had protected your hair''. This context-dependent switch in meaning is typical for pictures. Depending on prior knowledge and circumstances, they are even more open to multiple interpretations than texts (Messaris, 1994). Even though pictures without additional comments cannot carry explicit counterfactuals (8), it is natural to think about fictitious events with the help of pictures. Computer simulations, for instance, are used more and more in the courtroom to discuss the question as to whether an actual car accident would have been avoided had the drivers behaved differently (Joseph, 1998). Indexical photos and movies do not provide such help. There are only two ways to produce fictions with such representations.
Either one can manipulate the involved media or one can manipulate reality itself. Both methods are quite common. Television and cinema are full of staged, retouched, faked, or digitally manipulated photos and films (Mitchell, 1994). But in argumentative contexts these methods of producing fictions are not helpful. If photos or films are manipulated, the

Fig. 8. Neurath's warning pictogram ``Protect your hair from the spindle''. From Müller, 1991, p. 75.


media are treated like drawings and paintings and lose their characteristic indexicality. If reality is manipulated, the construction of the fictitious scenario is not done within the medium, and the main advantage of manipulating representations instead of the represented is lost. In any case, independent additional evidence is required to back up the plausibility of the depicted scenario. As recent discussions of manipulations in the mass media show, people are increasingly aware of these problems (Messaris, 1997; Mitchell, 1994).

Causal explanations. As our content analysis of newspaper texts indicated, most causal claims are founded on specific causal knowledge about mechanisms (9) and plausible alternative explanations (17). Causal diagrams visualize such knowledge by simple arrows and transform, as already mentioned, complex inferences into easy perceptual tasks (22–25). Each path (or combination of paths) leading to an effect can be considered as a causal explanation of this effect. The diagram is, therefore, literally a search space for explanations. Computer simulations are another example of cognitive tools that visualize causal theories and allow for operation in large search spaces for explanations. Typically, the search space as such is not visualized. By running a simulation one can show that particular values of variables lead to stable or unstable systems, that some influences are negligible and others crucial within a particular model, and so on. Theories and explanations can thus be tested and compared one after another. Simulations of car accidents, climate change, etc. often produce displays that look quite realistic, though what is really visualized by such a simulation is a simplified and idealized mathematical model of the represented system. If the underlying mathematical model is wrong or seriously incomplete, all the vividness and interactivity is worthless.
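The idea that a causal diagram is literally a search space for explanations can be illustrated by enumerating the directed paths that lead into an effect. The graph below is a hypothetical example (node names invented for illustration), and the sketch assumes the diagram is acyclic:

```python
# Treating a causal diagram as a search space for explanations:
# each directed path ending in an effect is a candidate explanation.
# Edges point from cause to effect; the graph must be acyclic.

causal_graph = {
    "smoking": ["tar deposits"],
    "tar deposits": ["lung damage"],
    "air pollution": ["lung damage"],
    "lung damage": [],
}

def paths_to(effect, graph):
    """Enumerate all root-to-effect paths for the given effect node."""
    parents = {node: [] for node in graph}
    for cause, effects in graph.items():
        for e in effects:
            parents[e].append(cause)

    def walk(node):
        if not parents[node]:          # a root cause: path starts here
            yield [node]
        for p in parents[node]:        # otherwise extend each parent's paths
            for path in walk(p):
                yield path + [node]

    return list(walk(effect))

for path in paths_to("lung damage", causal_graph):
    print(" -> ".join(path))
# smoking -> tar deposits -> lung damage
# air pollution -> lung damage
```

Reading the same explanations off the drawn diagram requires only following the arrows, which is exactly the perceptual shortcut the text describes.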
Computer simulations remain cognitive tools for exploring the consequences of theories even when they look like observational media. Only in cases where actual data are used to run a simulation are theoretical and observational knowledge visualized simultaneously.

4. Conclusions and outlook

As omnipresent parts of our cultural background, media are usually taken for granted, although nearly no serious cognitive activity takes place without them. Media provide cultural forms of information (e.g. measurements, statistics, formulas, predictions, explanations, theories, criteria, counterfactuals, thought experiments) unavailable in natural environments, reduce the informational richness (e.g. tactile and audible cues, temporal and interactional aspects) of the real world, and selectively emphasize information (e.g. by highlighting certain aspects, putting distant things together, etc.). These roles of media, of course, have not gone completely unnoticed. Several authors have stressed the importance of language (Hilton, 1995; Semin, Rubini & Fiedler, 1995), diagrammatic representations (White, 1993), and dynamic media (Shanks, 1991) in causal cognition. One can build on this work, but a broader and more systematic framework is required for a general analysis of media. To provide such a framework, we conceptualized media uses as

Fig. 9. A summary of our analysis of visual causal arguments. Verbal and visual causal arguments (pros as well as cons) are based on the same ideas and concepts in the center of the diagram. On the left-hand side, the diagram shows how media-specific representations constrain the types of knowledge in an argument's premises. Indexical media, like photos and films, for instance, are restricted to observations, whereas symbolic texts can express all types of concrete and abstract knowledge. On the right-hand side, the diagram shows how visualizations support argument-specific inferences. A before-after comparison (6), for instance, can be supported by two juxtaposed pictures. See text for further explanations.


argumentations and analyzed in detail how visual causal arguments represent the premises of an argument and support argument-specific inference patterns by special arrangements of the premises. The richness of and humans' free access to abundant forms of media and argument patterns make a general analysis of human causal cognition both fascinating and arduous. It seems, however, that a limited number of basic ideas underlie this manifold: spatio-temporal contiguity, regularity, covariation, and manipulability, to name just some of them. Fig. 9 summarizes the way in which these ideas relate to the main themes of this paper: the constraining formats of knowledge representations and the externalization of inferences. Philosophy has mainly analyzed these ideas on the basis of linguistic intuitions and formal languages. Language is also central to our approach, but with two major differences. Firstly, we do not rely on linguistic intuitions or normative logical or statistical accounts but on a systematic taxonomy of ordinary language arguments. Secondly, we do not consider language the only medium of externalized thought. Our analysis provides evidence that causal thinking is not necessarily linguistic (Mackie, 1974), although a direct inference from external to internal forms of thinking remains problematic (Scaife & Rogers, 1996). Nevertheless, our analysis revealed many reasons why argumentative standards can most easily be fulfilled by written language (Fleming, 1996; Messaris, 1994). We were not able to detect non-linguistic visual counterparts for arguments questioning the sufficiency of pro-arguments (18–21) or for arguments explicitly qualifying causal claims (22–27). Only diagrams and simulations using explicit notations are able to fulfill some of these essential requirements of critical thinking.
The simplest and most practical way to overcome these limitations of visualization is to use language as well, as is typically the case in lecture rooms, laboratories, and courtrooms (Joseph, 1998). Another possibility is to combine visual codes, as in thematic maps, modern info-graphics, and multimedia applications. In commercials, computer animations and ``real'' movies are increasingly being combined (Messaris, 1997). Inserted computer animations are frequently employed to visualize mechanisms of hair protection and the like that are not perceptible to the naked eye, while realistic photos are used as a background. In spite of the overwhelming possibilities offered by new technologies, we found no cases where such combinations of codes clearly transcended the expressive power of verbalization. We were only able to detect one fundamental quality not available in verbalization: phenomenal causality can be described but not experienced (Michotte, 1963). Only dynamic visualizations provide this special quality. Typically, these prototypical causal relations like pushes and pulls are so obvious that there is no need to argue about them. In advertisements these forms (e.g. someone wipes a table with a special cloth and everything is clean immediately) correspond to causal claims (``look, this product is really effective'') rather than to complete arguments. No positive reasons for believing this claim are provided as long as no further data or explanations are offered as to why the product is so effective. Nevertheless, seeing is believing, and in this respect, phenomenal causality may be more impressive than lengthy arguments. We have not said much about the


perceived strength of arguments, but our analysis opens the way for empirical questions about persuasive aspects:

• To what degree are visual arguments more convincing, easier to understand, and easier to remember than purely linguistic ones?
• Which types of visualization are more convincing than others, and which factors contribute to the perceived strength of a visual argument?
• Which combinations of verbal and visual codes are the most convincing?

At present we have no complete answers to these questions. Normatively speaking, the argumentative strength of a visualization mainly depends on the quality of the underlying data and theories and not on the visual format. However, in most cases the question as to whether the data or causal claims are scientifically dubious cannot be evaluated, because the mass media often cite scientific visualizations out of context. Therefore, in practice, pragmatic knowledge about the credibility of the source and the involvement of the producers probably contributes more to the convincingness of a visualization than scientific correctness. Research on processes of persuasion has much to say about these issues (see McGuire, 1985, for a review), but here we have been focusing on the representational limitations and the inferential roles of visual causal arguments. These aspects of our analysis are testable. The empirical content of the cells in our media-argument matrix in Table 2, for example, can be tested as follows:

• If an argument pattern can be visualized without restrictions within a medium, then subjects should be able to translate this argument, without the use of additional symbolic comments, into a visual one.
• If an argument pattern cannot be visualized within a medium, then subjects should be unable to translate the verbal version into a visual one.
• If operations such as generalizations and comparisons are especially supported in one medium and not in another, then overall performance on related tasks should be better (or equal, but not worse) when this potential is fully used.

Many studies have already addressed the question as to which form of presentation is the most suitable for each respective task. This work cannot be reviewed here (see MacDonald-Ross, 1977; Meyer, 1996). We only note that many of these studies face the problem of confounding content with form. If one wants to attribute effects to forms of representation alone, it does not make sense to compare animations based on speculative theories with movies based on observations. In all media comparisons, a principled account of the notoriously vague expression ``the same information expressed in different ways'' is required. Our distinction of the varieties of causal arguments provides a preliminary answer to this question.

Acknowledgements

We would like to thank Davor Bodrozic and Petra Reinhard for their help on the content analysis of the text corpora, as well as Steffen Ballstaedt, Jürgen Buder, Ralf
Decker, Stephan Schwan, and the anonymous reviewers for their helpful comments. This research received support from Deutsche Forschungsgemeinschaft Grant He1305/9-1.

References

Ahn, W., Kalish, C. W., Medin, D. L., & Gelman, S. A. (1995). The role of covariation versus mechanism information in causal attribution. Cognition, 54, 299–352.
Anderson, J. R., & Sheu, C. F. (1995). Causal inferences as perceptual judgments. Memory and Cognition, 23 (4), 510–524.
Anderson, J. R. (1978). Arguments concerning representations for mental imagery. Psychological Review, 85 (4), 249–277.
Bauer, M. I., & Johnson-Laird, P. N. (1993). How diagrams can improve reasoning. Psychological Science, 4 (6), 372–378.
Cheng, P. W. (1993). Separating causal laws from causal facts: pressing the limits of statistical relevance. In D. L. Medin, The psychology of learning and motivation, vol. 30 (pp. 215–264). San Diego: Academic Press.
Clark, A. (1997). Being there: putting brain, body, and world together again. Cambridge, MA: MIT Press.
Cohen, J. (1960). A coefficient of agreement for nominal scales. Educational and Psychological Measurement, 20, 37–46.
Crosby, A. W. (1997). The measure of reality: quantification and western society. Cambridge: Cambridge University Press.
Donald, M. (1991). Origins of the modern mind: three stages in the evolution of culture and cognition. Cambridge, MA: Harvard University Press.
Ducasse, C. J. (1993). On the nature and the observability of the causal relation. In E. Sosa, & M. Tooley, Causation (pp. 125–136). Oxford: Oxford University Press. Original work published 1926.
Eells, E. (1991). Probabilistic causality. Cambridge: Cambridge University Press.
Einhorn, H. J., & Hogarth, R. M. (1986). Judging probable cause. Psychological Bulletin, 99 (1), 3–19.
European Corpus Initiative (1996). Multilingual corpus 1 (CD-ROM). Edinburgh: Human Communication Research Centre (Distributor).
Ferguson, E. L., & Hegarty, M. (1995). Learning with real machines or diagrams: application of knowledge to real-world problems. Cognition and Instruction, 13 (1), 129–160.
Ferguson, E. S. (1992). Engineering and the mind's eye. Cambridge, MA: MIT Press.
Fleiss, J. L. (1981). Statistical methods for rates and proportions. New York: Wiley.
Fleming, D. (1996). Can pictures be arguments? Argumentation and Advocacy, 33, 11–22.
Gattis, M., & Holyoak, K. J. (1996). Mapping conceptual to spatial relations in visual reasoning. Journal of Experimental Psychology: Learning, Memory, and Cognition, 22 (1), 231–239.
Gibson, J. J. (1986). The ecological approach to visual perception. Hillsdale, NJ: Lawrence Erlbaum.
Gombrich, E. H. (1982). The visual image: its place in communication. In E. H. Gombrich, The image and the eye: further studies in the psychology of pictorial representation (pp. 137–161). Ithaca, NY: Cornell University Press.
Hart, H. L. A., & Honoré, T. (1985). Causation in the law (2nd ed.). Oxford: Clarendon.
Hegarty, M. (1995). Mental animation: inferring motion from static displays of mechanical systems. In B. Chandrasekaran, J. Glasgow, & N. H. Narayanan, Diagrammatic reasoning: cognitive and computational perspectives (pp. 535–575). Menlo Park, CA: AAAI Press/MIT Press.
Heider, F., & Simmel, M. (1944). An experimental study of apparent behavior. The American Journal of Psychology, 52, 243–259.
Hilton, D. J. (1995). Logic and language in causal explanation. In D. Sperber, D. Premack, & A. J. Premack, Causal cognition: a multidisciplinary debate (pp. 495–529). Oxford: Clarendon.
Huff, D. (1991). How to lie with statistics. London: Penguin. Original work published 1954.
Hume, D. (1978). A treatise of human nature (L. A. Selby-Bigge, Ed., 2nd ed.). Oxford: Clarendon. Original work published 1739.
Hutchins, E. (1995). How a cockpit remembers its speed. Cognitive Science, 19, 265–288.
Joseph, G. P. (1998). Modern visual evidence. New York: Law Journal Seminars-Press.
Kjørup, S. (1978). Pictorial speech acts. Erkenntnis, 12, 55–71.
Kosslyn, S. M. (1989). Understanding charts and graphs. Applied Cognitive Psychology, 3, 185–226.
Kuhn, D. (1991). The skills of argument. Cambridge: Cambridge University Press.
Lake, R. A., & Pickering, B. A. (1998). Argumentation, the visual, and the possibility of refutation: an exploration. Argumentation, 12, 79–93.
Larkin, J. H., & Simon, H. A. (1987). Why a diagram is (sometimes) worth ten thousand words. Cognitive Science, 11, 65–99.
Levesque, H. J., & Brachman, R. J. (1987). A fundamental tradeoff in knowledge representation and reasoning (revised version). In H. J. Levesque, & R. J. Brachman, Readings in knowledge representation (pp. 41–70). Los Altos: Kaufmann.
Lewis, D. (1973). Causation. Journal of Philosophy, 70, 556–567.
Lober, K., & Shanks, D. R. (1999). Experimental falsification of Cheng's (1997) Power PC theory of causal induction. Psychological Review, in press.
MacDonald-Ross, M. (1977). How numbers are shown. AV Communication Review, 25 (4), 359–409.
Mackie, J. L. (1974). The cement of the universe. Oxford: Oxford University Press.
McGuire, W. J. (1985). Attitudes and attitude change. In G. Lindzey, & E. Aronson, Handbook of social psychology (3rd ed., vol. 2, pp. 233–346). Hillsdale, NJ: Erlbaum.
Messaris, P. (1994). Visual ``literacy'': image, mind, and reality. Boulder, CO: Westview Press.
Messaris, P. (1997). Visual persuasion: the role of images in advertising. Thousand Oaks, CA: Sage Publications.
Meyer, J.-A. (1996). Visualisierung im Management. Wiesbaden: Deutscher Universitäts-Verlag.
Michotte, A. (1963). The perception of causality. New York: Basic Books.
Mill, J. S. (1979). A system of logic ratiocinative and inductive (Collected works VII and VIII). Toronto: University of Toronto Press.
Mitchell, W. J. (1994). The reconfigured eye: visual truth in the post-photographic era. Cambridge, MA: MIT Press.
Monmonier, M. (1991). How to lie with maps. Chicago: University of Chicago Press.
Müller, K. H. (1991). Symbole, Statistik, Computer, Design. Otto Neuraths Bildpädagogik im Computerzeitalter. Vienna: Hölder-Pichler-Tempsky.
Neurath, O. (1936). International picture language. London: Kegan Paul.
Neurath, O. (1991). Gesammelte bildpädagogische Schriften. Vienna: Hölder-Pichler-Tempsky.
Oestermeier, U. (1997). Begriffliche und empirische Fragen der Kausalkognition (Conceptual and empirical questions of causal cognition). Kognitionswissenschaft, 6 (2), 70–85.
Ogden, C. K. (1932). Basic English. London: Kegan Paul.
Palmer, S. E. (1978). Fundamental aspects of cognitive representation. In E. Rosch, & B. B. Lloyd, Cognition and categorization (pp. 259–303). Hillsdale, NJ: Erlbaum.
Rumelhart, D. E. (1989). Towards a microstructural account of human reasoning. In S. Vosniadou, & A. Ortony, Similarity and analogical reasoning (pp. 298–312). Cambridge: Cambridge University Press.
Sauer, K. P., & Müller, J. K. (1987). Fernstudium Naturwissenschaften: Evolution der Pflanzen- und Tierwelt. Studienbrief 2: Ursachen und Mechanismen der Evolution. Tübingen: Deutsches Institut für Fernstudien an der Universität Tübingen.
Scaife, M., & Rogers, Y. (1996). External cognition: how do graphical representations work? International Journal of Human-Computer Studies, 45, 185–214.
Schlottmann, A., & Anderson, N. H. (1993). An information integration approach to phenomenal causality. Memory and Cognition, 21 (6), 785–801.
Semin, G. R., Rubini, M., & Fiedler, K. (1995). The answer is in the question: the effect of verb causality on locus of explanation. Personality and Social Psychology Bulletin, 21 (8), 834–841.
Shanks, D. R., Pearson, S. M., & Dickinson, A. (1989). Temporal contiguity and the judgement of causality by human subjects. Quarterly Journal of Experimental Psychology, 41B (2), 139–159.
Shanks, D. R. (1991). On similarities between causal judgements in experienced and described situations. Psychological Science, 2 (5), 341–350.
Shultz, T. R. (1982). Rules of causal attribution. Monographs of the Society for Research in Child Development, 47, 1–51.
Stenning, K., & Oberlander, J. (1995). A cognitive theory of graphical and linguistic reasoning: logic and implementation. Cognitive Science, 19 (1), 97–140.
Storms, M. D. (1973). Videotape and the attribution process: reversing actors' and observers' points of view. Journal of Personality and Social Psychology, 27, 165–175.
Strawson, P. F. (1985). Causation and explanation. In B. Vermazen, & M. B. Hintikka, Essays on Davidson: actions and events (pp. 115–135). Oxford: Clarendon.
Thagard, P. (1999). How scientists explain disease. Princeton, NJ: Princeton University Press.
Toulmin, S. (1958). The uses of argument. Cambridge: Cambridge University Press.
Tufte, E. R. (1983). The visual display of quantitative information. Cheshire, CT: Graphics Press.
Tufte, E. R. (1997). Visual explanations: images and quantities, evidence and narrative. Cheshire, CT: Graphics Press.
Walton, D. N. (1989). Informal logic: a handbook for critical argumentation. Cambridge: Cambridge University Press.
Ward, W. C., & Jenkins, H. M. (1965). The display of information and the judgement of contingency. Canadian Journal of Psychology, 19, 231–241.
Wasserman, E. A. (1990). Attribution of causality to common and distinctive elements of compound stimuli. Psychological Science, 1, 298–302.
Webster's new encyclopedic dictionary (1993). New York: Black Dog and Leventhal Publishers.
Weddle, P. (1978). Argument: a guide to critical thinking. New York: McGraw-Hill.
Weiner, B. (1995). Judgements of responsibility: a foundation for a theory of social conduct. New York/London: Guilford Press.
Wells, G. L., & Gavanski, I. (1989). Mental simulation of causality. Journal of Personality and Social Psychology, 56 (2), 161–169.
White, B. Y. (1993). ThinkerTools: causal models, conceptual change, and science education. Cognition and Instruction, 10 (1), 1–100.
Worth, S. (1975). Pictures can't say ain't. Versus, 12, 85–108.
von Wright, G. H. (1971). Explanation and understanding. London: Routledge and Kegan Paul.
Zhang, J., & Norman, D. A. (1994). Representations in distributed cognitive tasks. Cognitive Science, 18 (1), 87–122.
Zhang, J. (1996). A representational analysis of relational information displays. International Journal of Human-Computer Studies, 45, 59–74.