Journal of School Psychology, Vol. 30, pp. 215-221, 1992
© 1992 The Journal of School Psychology, Inc. Pergamon Press Ltd. Printed in the USA.

CRITIQUES OF SCHOOL PSYCHOLOGICAL MATERIALS
Cecil R. Reynolds, Associate Editor

Ex-Huming an Old Issue

Robert T. Brown and Lee A. Jackson
University of North Carolina at Wilmington
People frequently infer inaccurate conclusions from particular experiences, demonstrating errors in inductive reasoning. Here we review research on several of these errors, relying in particular on Gilovich's (1991) How We Know What Isn't So. The errors include seeing patterns or relationships where none exist, neglecting statistical regression, overgeneralizing unrepresentative data, and drawing conclusions on the basis of incomplete decision matrices. These problems are exacerbated by social psychological phenomena such as the "false consensus effect," through which our associations with like-minded people lead us to exaggerate the extent to which others share our beliefs. Such errors in reasoning are the kind of problems that methods courses teach professionals to avoid. We discuss the role of these errors in clinical research.
    The intersection at Racine and Oriole Drives is a hazard. I know that the Department of Transportation count of 6,000 cars a day is incorrect. I've counted 100 cars in five minutes.

    I am writing in hope of dispelling the far-flung myth that law enforcement officers support gun control. Take it from someone who knows. The police administration on Capitol Hill do not speak for the majority of rank and file police. Gun bans and other harsh anti-gun laws directed at lawful gun owners ultimately do nothing to reduce violent crime.

    Recent letters in the Wilmington, NC Morning Star
Do these writers know what they are talking about? Well, maybe. But if their conclusions are correct, it is not because of the quality of their reasoning. The first writer claimed that her five-minute sample was more valid than the Department of Transportation's longer one. The second writer both argued that his opinion alone was more valid than that of the "police administration," whoever they are, and made unsubstantiated claims that police do not support gun control and that such control is ineffective. These two letters illustrate the kind of common errors in reasoning that cognitive and social psychologists have been studying for the last two decades or so. Recently, we (Brown & Jackson, 1990) summarized some of the mistakes that people frequently make in decision making. As reported by Dawes (1988), for example, people reach irrational decisions by not considering all information available and by honoring sunk costs, misunderstanding probability, and using their own biased experience instead of more reliable actuarial evidence.

Address correspondence and reprint requests to Robert T. Brown, Department of Psychology, University of North Carolina at Wilmington, 601 South College Rd., Wilmington, NC 28403-3297.
Here we address another aspect of the same general issue of how people use, or misuse, information to come to conclusions. In his engagingly written How We Know What Isn't So, Gilovich (1991) has summarized research on problems that we have in going from experience to a conclusion, or belief. The title is taken from a quotation attributed to the U.S. "country philosopher," Artemus Ward: "It ain't so much the things we don't know that get us into trouble. It's the things we know that just ain't so."¹ A more recent and acerbic version comes, not surprisingly, from H. L. Mencken: "The most costly of all follies is to believe passionately in the palpably not true. It is the chief occupation of mankind." As opposed to decision making, Gilovich is mainly interested in the role of psychological factors in the errors people make in evaluating information to come to some general belief about how the world works. Social factors play what may seem to be a surprisingly large role in such reasoning. Cognitive processes operate in the context of others' responses to an individual's expressed opinions. Typically, people come to conclusions from data they have available, express them, and then receive feedback from others on their perceived validity. In a number of situations, the data themselves will be interpersonal, coming from acquaintances. Furthermore, social feedback is likely to strengthen conclusions, whether valid or invalid, because acquaintances do not provide a random sample of the varying perspectives of people in general. Not only do our own roles dictate that we spend much time with people of similar life-styles and values, but we tend to pick our friends on the basis of such similarities. Thus, if psychologists spend much of their time with other psychologists, their reasoning, particularly on psychological issues, will likely be confirmed. A detailed review of social cognition in general is in Fiske and Taylor (1991).

Gilovich's book is of value to those doing clinical work for at least three reasons. The obvious one is that knowing about the kinds of problems that lead to errors in inferences is itself useful in helping to evaluate one's own reasoning. The second is that clients or their families may bring erroneous reasoning and attributions concerning a problem into a clinical setting. Finally, much formal clinical research has suffered from many of the same errors that people make in their everyday lives. All too many psychologists, psychiatrists, and psychoanalysts have contracted what we call "Mencken's syndrome."

¹As an aside, those trying to hunt down the original source for a particular thought may also have problems, in this case of attribution. Different books of quotations attribute exact or highly similar versions of this saying not only to Ward but to Josh Billings, Kin Hubbard, and even Will Rogers. Going by publication date, the credit indeed appears to be to Ward.
THE PHILOSOPHICAL PROBLEM OF INDUCTION

Many reasoning problems are fallacies in induction, the kind of reasoning involved when we use observed evidence to draw conclusions about the unobserved. Thus, the writer of the first letter above drew a conclusion about traffic in general on the basis of her observations. A relatively early philosophical view was that induction as well as deduction could lead to certainty of knowledge, as in Francis Bacon's position that a crucial experiment would prove both which of two competing theories was wrong and which was right (e.g., Brown & Reynolds, 1984). Already questioned, the certainty of induction ended in 1748 with Hume's Enquiry Concerning Human Understanding, from which modern discussion of induction takes off (Black, 1967). In the Enquiry, Hume (1777 edition, reprinted in Selby-Bigge, 1975, pp. 25-26) stated: "All the objects of human reason or enquiry may naturally be divided into two kinds, to wit, Relations of Ideas and Matters of Fact." Relations of ideas, such as geometry and algebra, are true "without dependence on what is anywhere existent in the universe." On the other hand, ". . . the contrary of every matter of fact is still possible. . . . That the sun will not rise tomorrow is no less intelligible a proposition, and implies no more contradiction, than the affirmation, that it will rise." Hume argued that the bases upon which we anchor our feelings of certainty are themselves vacuous, but that vacuity does not prevent feelings of certainty. Philosophers, according to Black (1967), have yet to resolve the problem that Hume raised. One way out of the dilemma, of course, is to "boldly face life without certainty and make the best of it" (Jones, 1969, p. 349). This is the approach that most scientists take and that Jones states Hume himself would have taken. Even at its very best, then, the accuracy of induction is a sometime thing, however certain our feelings may be.
FALLIBILITIES IN HUMAN REASONING

Unfortunately, recent research in psychology indicates that in routine reasoning, induction is rarely anywhere near its best. Some examples of the cognitive, motivational, and social fallibilities to which it is subject follow.

People frequently read meaning into what are actually chance sequences or patterns. As Rorschach realized, ambiguity can be a rich source of meaning and interpretation. People seem intent, as Gilovich says, on making something out of nothing. Indeed, we are programmed to detect patterns such as the "face of Jesus" in a billboard advertising spaghetti or blasphemous "subliminal" messages on rock records played backwards. In his own research, Gilovich has demonstrated that streaks of successful shots in basketball, attributed to a "hot hand," actually occur at no greater than chance level. Many players and sportscasters have not only inferred that streaks are real, but have developed elaborate explanations of them. When told that the streaks are chance, such people frequently reject the data and develop elaborate explanations of the research findings!
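The chance-streak point is easy to check for oneself. The following sketch is ours, not Gilovich's: it assumes a hypothetical 50% shooter taking 20 independent shots per game and simply counts how often a "streak" of four or more consecutive makes appears by chance alone.

    import random

    def longest_run(shots):
        # Length of the longest run of made shots (True values) in a sequence.
        best = run = 0
        for made in shots:
            run = run + 1 if made else 0
            best = max(best, run)
        return best

    random.seed(1)
    n_games, n_shots, p_make = 10_000, 20, 0.5  # hypothetical values, not Gilovich's data
    games_with_streak = sum(
        longest_run([random.random() < p_make for _ in range(n_shots)]) >= 4
        for _ in range(n_games)
    )
    print(f"Proportion of games with a 'hot' streak of 4+ makes: {games_with_streak / n_games:.0%}")
    # Under these assumptions, roughly half of all games contain such a streak by chance alone.

Under these invented assumptions, nearly every "hot hand" a fan remembers could have been produced by a perfectly memoryless shooter.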
Statistical regression may also lead to causal inferences where none are warranted. Regression is a difficult phenomenon to convey to students even in formal methods courses, so we should not be surprised at its general misunderstanding. Thus, students who have above average scores on an initial GRE testing frequently expect to do better a second time because of a practice effect. They have difficulty understanding that owing to regression, they likely will do worse in the absence of some active preparation. Furthermore, to borrow one of Gilovich's examples, parents whose child does exceptionally well in school one year may have unrealistic expectations concerning subsequent years. Clinicians may find his treatment of regression useful in explaining to parents and clients why some of their expectations may indeed be unwarranted. In addition, many supposedly effective clinical interventions can be explained more simply in terms of regression.
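Readers who want a concrete demonstration for students or clients can simulate the effect directly. The sketch below uses made-up numbers (a stable "true ability" plus independent testing error on each occasion, in a loosely test-like metric) and shows that examinees selected for high scores on a first testing average closer to the mean on the second, with no change in ability at all.

    import random

    random.seed(2)
    N = 100_000
    # Assumed model: observed score = stable true ability + independent error at each testing.
    true_ability = [random.gauss(500, 80) for _ in range(N)]
    test1 = [t + random.gauss(0, 60) for t in true_ability]
    test2 = [t + random.gauss(0, 60) for t in true_ability]

    mean1 = sum(test1) / N
    high = [i for i in range(N) if test1[i] > mean1 + 100]  # well above average the first time

    avg1 = sum(test1[i] for i in high) / len(high)
    avg2 = sum(test2[i] for i in high) / len(high)
    print(f"High scorers, first testing:  {avg1:.0f}")
    print(f"Same people, second testing:  {avg2:.0f} (closer to the overall mean of about 500)")
    # Nothing about their ability changed; selecting on an extreme first score guarantees regression.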
A related tendency is for people to overgeneralize incomplete and unrepresentative data. They also tend to see what they want to see and even reinterpret information that is contrary to their own positions. Gilovich aptly quotes a psychologist who, in a slip of the tongue, said, "I'll see it when I believe it." Gilovich summarizes one study in which supporters and opponents of the death penalty read two articles, one presenting evidence in favor of the death penalty and one against it. The results were clear: Proponents of both sides found the study favoring their own position to have been sound and well-conducted, whereas they located many flaws in the contrary one. Thus, both groups emerged from the study with their positions on the death penalty strengthened.

People tend to draw conclusions on the basis of an incomplete decision matrix that contains one or two cells instead of the requisite four. Consider as an example the common belief that people who adopt children are more likely later to conceive one themselves than are those who did not adopt. Since people concentrate on information that is consistent with their beliefs, they are likely to focus on the cell showing an apparently large number of couples who have adopted and subsequently conceived. When information on all four cells is available, however, one sees that the conception rate of those who have adopted is not above base rate. Reliance on one-cell data, as Gilovich suggests, may help to explain some people's belief in precognition: People are far more likely to remember occasions where they felt that something was going to happen and it did happen than the other possible, and more frequent, combinations of events.
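A small worked example may make the four-cell point concrete; the counts below are invented purely for illustration and are not from any study of adoption.

    # All four cells of the decision matrix, with invented counts.
    adopted     = {"conceived": 80,  "not_conceived": 920}
    not_adopted = {"conceived": 800, "not_conceived": 9200}

    def conception_rate(group):
        # Conditional rate of conception within a group.
        return group["conceived"] / (group["conceived"] + group["not_conceived"])

    print(f"Rate after adopting:    {conception_rate(adopted):.1%}")      # 8.0%
    print(f"Rate without adopting:  {conception_rate(not_adopted):.1%}")  # 8.0%
    # The 80 "adopted, then conceived" couples are the memorable cell, but the conditional
    # rates are identical; attending to one cell alone is what makes adoption look causal.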
Impressions about the effectiveness of selection procedures may also be biased favorably or unfavorably by the nature of the data available for evaluation. If those we select perform well, we may judge our selection procedures as highly effective. The problem, of course, is that we have no way to assess how well those who were not selected would have performed. Similarly, the impression that SATs are bleak if not miserable predictors of college grades rests in part on restricted-range effects that are induced by the fact that people within a given college tend not to have SATs that vary a great deal. As many who read this article will know, when students in universities with a wide range of SAT scores are considered, the correlation with grades is surprisingly high.
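The restricted-range effect is likewise easy to demonstrate with invented data. In the sketch below, "SAT" and grade variables are given a moderate underlying correlation, and the correlation computed within one narrow band of SAT scores (one hypothetical college) is compared with the full-range value.

    import random

    def pearson_r(xs, ys):
        # Pearson correlation computed from raw sums.
        n = len(xs)
        mx, my = sum(xs) / n, sum(ys) / n
        cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
        vx = sum((x - mx) ** 2 for x in xs)
        vy = sum((y - my) ** 2 for y in ys)
        return cov / (vx * vy) ** 0.5

    random.seed(3)
    N = 50_000
    sat = [random.gauss(1000, 200) for _ in range(N)]                       # invented applicant pool
    gpa = [2.8 + 0.0015 * (s - 1000) + random.gauss(0, 0.45) for s in sat]  # moderate true relation

    full_r = pearson_r(sat, gpa)
    # "One college": keep only students whose SATs fall in a narrow band.
    band = [(s, g) for s, g in zip(sat, gpa) if 1150 <= s <= 1300]
    band_r = pearson_r([s for s, _ in band], [g for _, g in band])

    print(f"Correlation over the full range of SATs:   {full_r:.2f}")
    print(f"Correlation within one narrow SAT band:    {band_r:.2f} (much lower)")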
Biasing factors may also occur in the representation of information in secondary sources. Gilovich tellingly documents how those reporting Watson and Rayner's classic study of "Little Albert" have exaggerated the intensity of the conditioned response and its generalization to other stimuli. Distortions in retelling may occur to increase the entertainment value of a story, to increase its impact or plausibility, or even out of self-interest. Gilovich is aware of the resulting paradox: If we can trust neither our own individual experience nor the statements of others, what do we do? Among other things, he suggests that people consider the source of information and beware of the use of personal testimonials. The value of a general skepticism regarding claims should also be considered.

People tend to exaggerate the degree of social support that they have for their positions. The "false consensus effect" is the tendency for "people's own beliefs, values, and habits to bias their estimates of how widely such views and habits are shared by others" (Gilovich, 1991, p. 113). Since the people with whom we associate and the material we read will likely share our viewpoints, we expose ourselves to a biased sample of information. The resulting apparent degree of agreement may lead us to exaggerate the extent to which others share our beliefs.

IMPLICATIONS OF FALLACIES IN REASONING FOR PROFESSIONALS
Reading Gilovich from a professional standpoint may leave one with two discomfiting feelings. The first is that many problems in everyday reasoning (conclusions based on case histories, post hoc ergo propter hoc reasoning, small and/or unrepresentative samples, biased interpretations, and refusal to consider alternative interpretations) are those about which even undergraduate methods books warn. Indeed, methods courses deal largely with ways to avoid these problems. Thus, they should not be surprising to those with any background in psychology. The second is the distressing realization that clinical observations have suffered from these problems to the extent that many of their conclusions have been as flawed as those of untrained persons in their everyday reasoning.

Consider a few well-known sins of the past: using prefrontal lobotomies to treat various behavior disorders, describing XYY males as having a "criminal chromosome," blaming mothers for their children's schizophrenia and autism (schizophrenogenic and refrigerator mothers, respectively), and attributing Tourette's syndrome to one or another problem in psychosexual development. All these treatments or theories, which may now seem antiquated and even quaint, (1) claimed scientific support, (2) were based on studies that could serve as a case book of fallacies, including exaggerated perception of social support, (3) have been discredited and now are viewed as embarrassments by many in the field, and (4) hurt many children and adults, patients and parents. When conducting their own studies, researchers in a variety of areas either have seemingly forgotten their methods courses or have all too clearly revealed that they never had any. Furthermore, it doesn't take a paleontologist to see that we are not dealing with a fossilized historical record (these studies having largely been conducted from the 1940s through 1960s) or a pessimist to suggest that many of these problems still trouble clinical research.

Clinical research, of course, has improved greatly; some of the errors described above would not likely now occur. One of the great contributions of experimental psychology may be its palliative effects on applied research. But we still routinely ignore problems of sampling and design. Even now, we have controversies over the real nature of what yesterday was MBD (minimal brain damage or dysfunction), today is ADHD, but tomorrow may be MDD (motivational deficit disorder) (Barkley, 1989). Attempts are still being made to specify a single cause for learning disabilities, in spite of the fact that diagnosis, which in some sense must precede etiology, is at best uncertain (Ysseldyke & Stevens, 1986). A related potential problem is the tendency for psychologists to congregate in relatively small, narrow-interest associations and conferences where they are likely to be interacting only with those of like mind. Those are the conditions, of course, that foster false consensus effects.

Change in knowledge is virtually a defining characteristic of science, but one emerges from a consideration of the problems that Gilovich describes with the feeling that less change would occur if research were better conducted and interpreted in the first place. Another virtually defining characteristic of science, after all, is its methods, which should help prevent it from making the same errors that people commit in their everyday lives. In other words, science should do better. Sadly, Artemus Ward and even H. L. Mencken may have been all too accurate. So may Wittgenstein: "If there were a verb meaning 'to believe falsely,' it would not have any significant first person, present indicative."
REFERENCES

Barkley, R. A. (1989). Attention deficit-hyperactivity disorder. In E. J. Mash & R. A. Barkley (Eds.), Treatment of childhood disorders (pp. 39-72). New York: Guilford.
Black, M. (1967). Induction. In P. Edwards (Ed.), Encyclopedia of philosophy (Vol. 4, pp. 169-181). New York: Macmillan.
Brown, R. T., & Jackson, L. A. (1990). Decisions! decisions! decisions! Journal of School Psychology, 28, 79-85.
Brown, R. T., & Reynolds, C. R. (1984). Crucial experiments in psychology. In R. J. Corsini (Ed.), Encyclopedia of psychology (Vol. 1, pp. 329-331). New York: Wiley.
Dawes, R. (1988). Rational choice in an uncertain world. San Diego: Harcourt Brace Jovanovich.
Fiske, S. T., & Taylor, S. E. (1991). Social cognition (2nd ed.). New York: McGraw-Hill.
Gilovich, T. (1991). How we know what isn't so. New York: Free Press.
Jones, W. T. (1969). Hobbes to Hume (2nd ed.). New York: Harcourt Brace Jovanovich.
Selby-Bigge, L. A. (Ed.). (1975). Hume's enquiries concerning human understanding and concerning the principles of morals (3rd ed.). Oxford, England: Clarendon Press.
Ysseldyke, J. E., & Stevens, L. J. (1986). Specific learning deficits: The learning disabled. In R. T. Brown & C. R. Reynolds (Eds.), Psychological perspectives on childhood exceptionality. New York: Wiley.