Attributing mind to animals: The role of intuition

Paul S. Silverman

Department of Psychology, University of Montana, Missoula, MT 59812, USA

J. Social Biol. Struct. 1983 6, 231-247

Cognitive hypotheses and tests of them do not provide for the criterial properties by which presence and absence of mind can be distinguished. This is because the Morganian form of parsimony does not permit a disconfirmation of the null hypothesis that a phenomenon is not mental. Nevertheless, mental and non-mental events can intuitively be distinguished. These intuitive judgments are based on biases that powerful actions directed towards oneself are mindful, that unlikely events have intentional origins, and that certain similarities between human and animal behavior and physiognomies imply mental states in animals. The latter bias is often applied in cognitive research on animals and leads to unresolvable dilemmas concerning the existence of mental states. For example, evidence for negative feedback control or symbolicity in animals has served to support the argument that they are intentional or aware. But machines and physiological processes which also meet criteria for feedback control and symbolicity are not credited with mental qualities. An analysis of the use of analogy in science suggests that it is misapplied when one reasons that when certain non-human behaviors resemble human behavior, the non-human must be 'mindful'. The intuitive tendency to judge animals as mindful can be justified in circumstances requiring conservative ethical decisions on animal treatment. But in the context of purely empirical judgments, animal behavior can only be characterized as 'mind-like' and it must be recognized that other behavior, normally considered as 'mindless', can be similarly characterized.

Introduction

In The Bridge of San Luis Rey, Thornton Wilder recounts an investigation by an 18th century Peruvian monk who witnesses the collapse of a high bridge and the fatal plunge of the people crossing it. Unwilling to believe that their destinies were a matter of chance, he asks: 'Why did this happen to those five? If there were any plan in the universe at all, if there were any pattern in human life, surely it could be discovered mysteriously latent in those lives so suddenly cut off.' Attempting to prove the existence of God, the monk Brother Juniper researches the life histories of the victims for a common pattern.

One modern form of the investigation of a pattern underlying behavior is the attempt to identify the means by which organisms themselves control their destinies, often phrased in terms of 'intention' or 'awareness', both generally subsumed under the concept of 'mind'. The deus ex machina is now thought to reside in the machine. The study of mental processes involves a form of hypothesis statement reminiscent of that used by Brother Juniper.



The null hypothesis may be that an organism is not purposive or is unaware. The alternative approach of assuming that the organism (or object or event) is aware of a particular event or has some specific internal purpose 'in mind' is not an accepted practice. Brother Juniper's form of hypothesis testing, however, would not have withstood the scrutiny of good scientific practice: 'He thought he saw in the same accident the wicked visited by destruction and the good called early to heaven.' One might wonder how the intuitive judgment of evidence crept into the test of the monk's null hypothesis that God did not exist. His tautological conclusions could be written off to 'bad design', but I suggest that difficulties arose because (1) the hypothesis itself was not subject to a clear test, and (2) despite the correct form of the hypothesis, the monk's intuitive bias was that God did exist.

The application of cognitive theory to animal studies seems to be subject to a similar difficulty, and controversies over 'sufficient evidence' and even 'testability' are not rare. I shall argue in this paper that cognitive hypotheses and tests of them do not provide for the criterial properties by which presence of mind (mindfulness) can be distinguished from absence of mind (mindlessness). This is because the Morganian form of parsimony provides an inadequate framework for phrasing a testable null assumption and, consequently, judgments of sufficient evidence are largely subjective. Nevertheless, 'mental' phenomena can be distinguished from 'non-mental' ones at an intuitive level, and thus one attributes a chimpanzee's behavior to a 'cognitive system' (though the functional components are unknown), but characterizes the workings of hearts, livers, radios, thermostats, mousetraps and computers as non-cognitive, though they are systems (Haugeland, 1978).

By posing questions such as 'Are animals aware of X? Do they have intentions to do X?' one faces the problem of searching for evidence that an action is mindful rather than mindless. However, theories and methodologies fail to provide rules for making such decisions. To infer mind, one is forced to fall back on intuitive judgments. An intuitive judgment is a judgment which appears to be true, which one feels compelled to make, but for which there is no procedure for resolving doubt or defending certainty. Knowledge based on intuition cannot be submitted to objective tests and can only be accepted (or rejected) as immediately apparent. For this reason, it is often difficult to resolve disputes over the existence or non-existence of particular mental states (e.g., concepts, feelings) in animals. To illustrate the inherent limits of cognitive research in animals, the negative feedback model for intentional behavior and the symbolic capacity criterion for awareness in apes will be discussed. I then examine the role of intuition in the inference of mind and offer a suggestion for its application to the ethics of animal research. If cognitive research can inform ethical practice at all, its role is limited to the identification of necessary, but not sufficient, conditions for mindfulness.

While a variety of topics touch on the problem of inferring mental from cognitive processes, and both from behavior, this essay is not concerned with a number of them. The mind-body problem is implicit here but is addressed only in the context of the practical problem of operationalizing the concept of animal mind. There are, undoubtedly, innumerable ways of attempting to define and operationalize mind.
The arguments in this essay focus only on the inference of intentionality and awareness from negative feedback control and symbol use in nonhuman animals. The problems intrinsic to these examples are sufficiently general that they can be applied to other organisms, definitions of mind, and behavioral criteria. Finally, I do not address the arguments that animals, or any other entities, can or cannot 'have' minds. I do maintain that there is no empirical method of determining which is the case. We do, however, intuit mind. My intention is to explore the inevitable conflict which results and suggest a resolution.


Morganian parsimony

William of Occam's principle of parsimony was that an explanation should not invoke inessential assumptions ('it is vain to do with more what can be done with fewer'). Since C. Lloyd Morgan's reaction against casual reports of animal thinking (1894), parsimony has taken on a particular interpretation in psychology, 'more' being considered to encompass the qualities of mind, and 'fewer' to include such factors as sensation and physiological functions. Extending the psychological scale to its logical limit at the lower end, one would consider physical phenomena to be simpler than mental ones. There is, then, a firm historical basis for taking the simpler explanation of an observation to be one which does not attribute intentionality or awareness to the activity observed. Consequently, scientific practice has it that the preliminary hypothesis to be tested is that mind is not operative in a particular case. The common point of departure for both behaviorist and cognitivist is to hypothesize mindlessness (or non-intention or non-awareness) until evidence proves the hypothesis wrong (of course, they disagree as to the nature of the required evidence). A recent series of commentaries on animal cognition research illustrates the position well: among over 50 reviews of a set of articles, only one argued for the opposite approach (Menzel & Johnson, 1978; see also Menzel & Everett, 1978). The position that the researcher's goal should be to disprove the hypothesis of mindlessness continues to receive a great deal of support from the current philosophical stance in which the issue is phrased 'How can we know that other minds exist?' rather than 'How can we know that some objects and events have no minds?' (Shorter, 1967).†

Popper's proposal that the concept of simplicity corresponds to the 'degree of falsifiability' (1959: 140) suggests why the Morganian interpretation is so widely accepted. It is presumed easier to falsify the 'non-mental' than the 'mental' hypothesis. Consider a stone tumbling down a hillside. Beginning with a presumption that the stone has intentions, one can satisfactorily attribute its behavior to a desire to change positions. To test this proposal, place the rock back at its point of origin and obstruct its earlier path. It is found that the rock either stops at the barrier or follows a new path leading to a new resting point. Given a 'mindful universe' cosmology, a change of intention may be attributed to the rock. Because such reasoning can account for virtually any behavior, the judgment of mindful activity is part of a logically closed system and independent of the outcome of any test.

In Popper's view, statement of the null hypothesis and the form of hypothesis testing are intimately related. One would expect that if the initial assumption of mind is non-falsifiable, the alternative assumption of non-mind would provide for falsifiability. This is not the case. There are no criteria by which mindless processes can be ruled out as an explanation for a behavior. The cognitivist response to this dilemma has been to consider certain organisms as mindful and then to forge ahead to identify the particular conceptual system which the animal (or human) possesses. The decision of which organism possesses a mind and which does not is based, in large part, on intuitively derived global similarities between animal and human behavior or physiognomy.
I shall return to this role of intuition, but first will consider the limits implicit to the cognitivist position by means of two substantive examples.

†This viewpoint, that at base the world is mindless, finds its roots in the writings of Democritus, who proposed a world theory based on depersonalized physical 'atoms'. Ironically, the concepts of the 'statistical' atom and indeterminacy suggest to some authors that consciousness is implicit in the organization attributed to the universe (Eddington, 1928; Schrödinger, 1945). Parallels with the ideas of Eastern mysticism have also been recognized (Capra, 1975).

Limits of the cognitivist position

From Descartes' pineal gland as mechanical pump to Koffka's 'forces of cohesion' (1935: 126) and modern analogies to computing systems, models of mind are drawn from sources considered to be mindless, and the spark of 'awareness' or 'intention' must be placed outside the model. Given this lacuna, issues such as animal awareness or computer intelligence remain controversial, and not only scientific hypotheses but conclusions as well must rely on the intuitive dispositions of judges. In fact, one is still in the situation that G. J. Romanes (1882) described when he suggested that attributing thoughts and intentions comes down to the following common-sense analogy: Had I been in the animal's shoes and acted as it did under those circumstances, I would have intended or been aware of X. Therefore, the animal intended or was aware of X (see also Bishop, 1980; Campbell & Blake, 1977; Hook, 1960; Nagel, 1974). Adopting Popper's negative phraseology, if it acts as I do, then it could not be mindless. Though one can experimentally vary the 'circumstances' and make predictions, the arbitrary judgment that an observation constitutes evidence remains the modus operandi. What does being in someone's shoes really mean? How similar to me must it act before I conclude that its behavior is mindful? Romanes concluded that variability of adaptive action produced by learning was 'objective evidence for mind', but also admitted that this criterion was neither completely inclusive of mind nor exclusive of non-mind.

The arbitrary nature of judging the presence of mind from human-like behavior is exemplified below by the shortcomings of the negative feedback/intentionality analogy and the proposition that symbolic ability (symbolicity) in a non-human animal is evidence for awareness. Since awareness and intentionality are two properties limited to objects credited with minds, it follows that behavioral evidence for either could serve as evidence for the existence of mental processes in an animal. To the extent that behavioral observations are inadequate to establish or refute awareness or intentionality, attribution of mindfulness to animals (based on those observations) is ill-founded.

Negative feedback and intentionality

The negative feedback control model was developed to construct self-regulatory machines and was extended to explain the self-governed behavior of animals (Rosenblueth, Wiener & Bigelow, 1943; Wiener, 1948). Application of the feedback model to animal behavior makes the programmatic assumption that one should be able to reconstruct the innards of the animal-as-black-box and find a control system inside. Once done, the system would serve as a representation of intentional behavior. In fact, all modern explanations of intentionality or 'volition', as it is also called, have used the thesis of negative feedback either implicitly or explicitly (see Kimble & Perlmuter, 1970).

Briefly, a negative feedback loop is a system which maintains or achieves a goal, represented by a reference signal, by comparing it with the state of an input signal. When a difference between the two states is registered as an 'error', a compensatory action is initiated to reduce the difference. When the comparator registers no error, no action occurs.

Three relevant features of an object which is suspected to be operating as a negative feedback control system are observable: (1) the environmental input condition, (2) the object's behavioral output and (3) the resulting state of the environment-object relation. When the object is indeed such a system, these features are respectively characterized as a disturbance, a compensatory action and a controlled quantity. For example, someone watching an airplane maintain a level attitude observes the wind buffeting the aircraft, the movement of the flaps, and/or the actual orientation of the plane. The observer of a social interaction among primates watches one animal approach the other, the latter move away, and the distance (or probability of aggression) remain stable. Finally, an obstacle is observed in the path of a novel object moving away from point A; the object proceeds around the obstacle, continuing to increase its distance from A.
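As an illustration, the loop just described can be written out in a few lines. The Python sketch below is a minimal illustration, not drawn from this paper or from the cybernetics sources it cites; the proportional gain, the step loop and all the names are assumptions made for the example. It exhibits the three observables discussed above: a steady disturbance, a compensatory action, and a quantity held near its reference.

# A minimal negative feedback loop, assuming simple proportional
# compensation; all names and constants are illustrative.
def feedback_step(reference, perceived_input, gain=0.5):
    """Compare reference and input; return a compensatory action."""
    error = reference - perceived_input    # the comparator registers an 'error'
    return gain * error                    # no error, no action; else compensate

quantity = 20.0          # the controlled quantity, initially at its reference
reference = 20.0         # the goal represented by the reference signal
disturbance = -1.0       # a continuing steady disturbance
for _ in range(20):
    action = feedback_step(reference, quantity)
    quantity += disturbance + action       # the action opposes the disturbance
print(round(quantity, 2))  # settles near 18.0: held at a stable offset below goal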


A test which determines whether a phenomenon is goal-controlled makes use of such evidence. There are a number of ways in which a test for negative feedback can be implemented. The observer may break the circuit, obstructing input by delaying or eliminating it, and expect a feedback-controlled system to fail to produce the compensatory behavior hypothesized to maintain a controlled quantity. William Powers (1973) has concisely summarized an alternative direct 'test for the controlled quantity' which also involves all three observable features listed above (a toy version of such a test is sketched at the end of this section). It consists in 'applying a known disturbance to the quantity thought (or known) to be controlled and observing in detail the subsequent behavior of that quantity under the influence of the continuing steady disturbance and the behaving system's output. . . . If every disturbance acting on the quantity is nearly cancelled by an equal and opposite effect of the organism's actions on the same quantity, that quantity is a controlled quantity by definition and the organism is organized as a control system relative to that quantity, also by definition' (pp. 233-234).

There are two types of problems in applying the negative feedback model as a criterion for the identification of intentionality in a system. On the theoretical level, the inference of negative feedback control is not coextensive with the identification of intentionality (Dreyfus, 1972; Taylor, 1950). On a practical level, the hierarchical complexity of negative feedback systems postulated to represent mental processes suggests that tests (such as Powers') may often lead to false negative conclusions.

While negative feedback control is a necessary component of the definition of a social object possessing the mental property of intentionality, it is not sufficient. The feedback model characterizes home thermostats and homeostatic biological processes as well as animal behavior. The observer who wants to establish a mental property in a feedback system must go on to propose other criteria. Indeed, such proposals have been made: (a) representation over time and transformation of input and goals; (b) hierarchical organization and ability to learn and reorganize goals; (c) the ability to lie; (d) the maintenance of specific invariant control states such as concepts of permanent objects, the self, other minds; and (e) awareness (Piaget, 1976; Powers, 1973; von Glasersfeld, 1979; von Glasersfeld & Silverman, 1976). When one carefully considers these criteria, however, either their distinctly mental properties become ambiguous or the means by which they can be inferred from behavior appear inadequate: (a) Tests for 'representation' and 'transformation' can be successfully applied on the machine level (computer memories and logical manipulations), the biological level (DNA transcriptions), and the behavioral level (representation of the sun's declination and rate of movement in bees) without necessarily implicating mind. (b) Ashby (1952) demonstrated quite early in the search for a control model of the brain that a hierarchically arranged adaptive device did not require the concept of mind.
(c) Turing's (1950) proposed test for machine intelligence, in which a machine 'liar', competing against an honest human, would have to fool an interrogator into judging it to be a human liar, has been approximated by Weizenbaum's program ELIZA (1976), which, to some extent, convinced naive interrogators of its humanness. But it must be rejected as an adequate test because of the abridged context in which it operated. The myriad cases of insects deceiving conspecifics, predators and prey are generally not interpreted as evidence for mind. Even observations of deceit by apes (Woodruff & Premack, 1979; Menzel, 1973) are subject to a variety of interpretations regarding the processes which produced them. (d) Attempts to attribute particular control states to machines and animals remain ambiguous.


The modern versions of machine intelligence such as pattern recognition (Uhr, 1973), object recognition and manipulation (Winograd, 1972) and chess playing (Berliner, 1977) have failed to evoke the belief that these machines have minds. While evidence for a self-concept or concept of another's mind would support the construct of mind, the existing methodologies are inadequate. Gallup's mirror-recognition technique (1977), in which one's body is identified with a mirror image, does not require identification of an invisible (or indivisible) 'self' within that body. Observations by Zazzo (1979) demonstrate this point: human infants (26-30 months) pass the Gallup test and then search behind a free-standing mirror. A study purporting to demonstrate that chimpanzees have an implicit theory of mind (Premack & Woodruff, 1978a), though ingeniously designed, is inconclusive since response associations may account for performance (Savage-Rumbaugh, Rumbaugh & Boysen, 1978a; Groves, 1978). (e) As will be shown later, awareness cannot be inferred from behavior.

As far back as Tolman's propositions for mental processes underlying animal behavior, and as recently as arguments for 'pongolinguistics', disputes have raged over the definition of 'sufficient evidence'. They are not limited to the non-human domain, but are current in debates over the ages at which young children form adult-like concepts. If the history of the field of cognitive research can serve as a guide, these arguments are not temporary phenomena to be resolved by certain pieces of evidence or theoretical developments, but are intrinsic to a discipline in which clear behavioral criteria for mental phenomena are lacking.

Leaving aside the issue of the coincidence between intentionality and negative feedback control, the problem of establishing the existence of a controlled quantity in a previously unanalyzed system is a thorny one because of the high a priori possibility of false negative conclusions. To avoid the problem of prejudicial judgment, consider a hypothetical case in which the observer does not know the nature of the object to which he applies the test for the controlled quantity. A newly-arrived Martian stumbles across me, implements a test by manipulating a possible disturbance, and concludes that I am not goal-controlled. In reaching 'his' conclusion, he may have misconceived the possible sensors I possess, my repertoire of compensatory behaviors, or, perhaps most likely, the potential goals governing my action during the test. The effectiveness of the test is limited by the extent to which the observer organizes his experience with sufficient similarity to the subject. Related arguments have been suggested in Braithwaite's property of 'variancy' (1953), Kelly's 'commonality corollary' (1963) and Maturana's 'domain of interaction' (1970). Human observers of animal behavior may believe that one can lessen these difficulties by 'getting to know' the subject prior to such tests. But it is not possible to know whether negative results imply absence of animal competence or a lack of sufficient experiential intersection. Given the room for false negative findings, the failure of an animal to succeed at a particular task involving high-level control (and most cognitive tasks are of this nature) is not sufficient proof that it does not have the capacity to succeed at a different task of similar complexity. Success, on the other hand, is insufficient to demonstrate 'mindful' intentionality.
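Powers' disturbance test can likewise be rendered as a short procedure. The sketch below is an assumption-laden illustration, not Powers' own formulation: the dynamics are the same proportional controller used in the earlier fragment, and the cancellation score is an invented summary statistic. Its point is the over-inclusiveness argued above: a thermostat-like rule passes the test and an inert object fails it, and nothing in either verdict distinguishes mindful from mindless control.

# A sketch of a 'test for the controlled quantity' (Python): apply a known
# steady disturbance and ask how much of it the system's output cancels.
# The scoring and the dynamics are illustrative assumptions, not Powers' text.
def controlled_quantity_test(output_rule, disturbance=-1.0, steps=200):
    quantity = 20.0
    baseline = quantity
    for _ in range(steps):
        action = output_rule(quantity)     # the behaving system's output
        quantity += disturbance + action   # disturbance and action both act
    drift = abs(quantity - baseline)
    # 1.0 = disturbance nearly cancelled; 0.0 = disturbance fully accumulated
    return 1.0 - min(drift / abs(disturbance * steps), 1.0)

thermostat = lambda q: 0.9 * (20.0 - q)    # opposes departures from 20 degrees
rock = lambda q: 0.0                       # no compensatory output at all

print(controlled_quantity_test(thermostat))  # ~0.99: a controlled quantity
print(controlled_quantity_test(rock))        # 0.0: no control system detected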
Symbolicity and awareness

Though most definitions of awareness refer to personal experience, attempts have been made to infer its presence and character from behavior explicable only by the internal manipulation of a 'representational plane'. Representational abilities can be identified in any organism in which a response to an event occurs after a delay or in which a behavior is governed by a 'concept' which has no perceptual correspondent. But the equating of such criteria with a 'mental image' or 'awareness' is not without controversy. The most conservative criterion appears to be symbolic representation in the form of language use (Griffin, 1977, 1978; Piaget, 1976).


Linguistic communication has also been proposed to circumstantiate animal thought and rationality (Bennett, 1964; Davidson, 1975). The nature of evidence for awareness can be examined by considering the demonstration of symbolic representation. As with the negative feedback/intentionality example, demonstration of symbolic capacity does not provide evidence sufficient to infer awareness.

Savage-Rumbaugh, Rumbaugh and Boysen (1978b) have shown that young chimpanzees are able to transmit the name of a food or drink to a conspecific, who in turn uses the information to request the contents of a container, each animal then receiving a share of the reward. Their claim is that the animals understand the names as symbols and know that symbols are tools for communication. In a parody of this study, Epstein, Lanza and Skinner (1980) presented a comparable demonstration of 'successful' communication between two pigeons. Serious analogies have been drawn between language use, trained woodpecker behavior (Chauvin-Muckensturm, 1974), and herring-gull shell dropping (Beck, 1980). Nevertheless, one is tempted to find fault with the analogies rather than to accept the conclusion that certain avian behaviors correspond to symbol use (Premack, 1978).

What behavioral criteria, then, establish symbolicity? Operationalizing Peirce's (1958) assertion that a symbol has no link to its object apart from the meaning assigned by an interpreting mind, von Glasersfeld (1976) suggests that a sign which is a symbol 'must be semantically tied to a representation that is independent of the perceptual signals available at any time (not only at the time and place of the sign's use)' (p. 222). Thus, only a hypothetical bee 'which communicated about distance, direction, food sources, etc. without actually coming from, or going to, a specific location' (p. 222) could be said to use symbols. The Savage-Rumbaugh et al. study does not satisfy this definition since the referents occurred in the context of actual behavior on their objects. To establish that an object or action is used symbolically in the absence of the item to which it refers, it must occur as an element related to other symbols. How else would one know that a button press, hand gesture or dance element (of a bee) which corresponds to a referent continues to represent that referent when it is neither present nor about to be acted upon?

Primatologists' definitions of 'symbol' are remarkably consistent with von Glasersfeld's (Bronowski & Bellugi, 1970; Fouts, 1974; Premack & Woodruff, 1978b; Rumbaugh & Gill, 1976). But despite this apparent agreement, only a few efforts to establish a symbolic capacity in apes have attempted to adopt behavioral criteria consistent with the above definition. Premack (1976) examined his animals' abilities to judge similarities and differences between two or more linguistic items, respond to symbolic interrogations with 'yes/no', and produce the appropriate symbolic response to a 'what is' interrogation. Reports of spontaneous word juxtapositions (Fouts, 1974; Rumbaugh & Gill, 1976) and ape-human conversations (Rumbaugh & Gill, 1976; Patterson, 1978) are other cases in point. On the other hand, requests for tools (Savage-Rumbaugh et al., 1978c), object naming (Gill & Rumbaugh, 1974) and requests for food (Savage-Rumbaugh, Rumbaugh & Boysen, 1978b) are not in themselves adequate evidence for symbolicity.
One would expect that demonstrations of symbolicity would be controversial only to the extent that methodological problems exist, and that when such temporary problems as iconic gesturing, the 'overinterpretative', 'fragmentary' and 'anecdotal' nature of reports, unintentional cuing, simple repetition, perceptual matching, unskilled trainers, etc. are resolved, a clear picture of the animals' abilities will issue forth (Gill & Rumbaugh, 1974; Savage-Rumbaugh, Rumbaugh & Boysen, 1978c; Sebeok & Sebeok, 1980; Seidenberg & Petitto, 1979; Terrace, Petitto, Sanders & Bever, 1979). I believe, however, that demonstrations of symbolicity pose a non-methodological problem as well: that of inferring awareness from symbolic communication.


Applying the definition of symbolicity beyond the field of pongolinguistics, a chess-playing computer program which uses a 'plausible move heuristic' to represent checkmate as a goal, matches this to one of several representations of end-game move sequences and selects the best sequence as a correct 'description' of checkmate, also satisfies the proposed criterion. The program would probably not be judged to be aware. But the operations, after all, are formally similar to those applied in judging the synonymity of two sentences, one of Premack's tasks.

There are two arguments against the claim that a computer program whose behavior fits the definition of symbolicity must also be considered aware. First, one cannot state that the chess program 'intended' a message for a receiver, presumably the opponent. Intentionality, in combination with symbolicity, then, becomes the criterion for 'awareness'. It was seen earlier that behavioral tests for negative feedback are not sufficient to establish evidence for intentionality. The same argument can be extended to cover the case of one animal communicating symbolically with another. Second, one might add to the definition of symbolicity the additional feature of creativity or, in linguistic terms, 'productivity' or 'openness' (Hockett, 1960; von Glasersfeld, 1976). A computer program which never selects the same end game twice or which develops a new proof of a calculus theorem could reasonably be characterized as creative, but still might not be considered aware. And, to add to the difficulty, spontaneous production of novel, yet rule-governed, sentences and word combinations is precisely what is typically criticized as anecdotal evidence in linguistic studies. We shall never know whether the first utterances of 'waterbird', 'cryhurt fruit' and 'coke which is orange' (Fouts, 1974; Rumbaugh & Gill, 1976) were fortuitous juxtapositions or creative linguistic combinations meant to produce new meanings.

There is, however, one condition in which both awareness and intentionality are easily attributed in a symbolic context, and that is during a conversation. In this setting, the condition of awareness is an intuitive judgment made by one interlocutor with respect to another. Precisely because reported ape-human conversations are based on long histories of shared 'experiences', interactive habits, and private traditions of interpretation, they provoke 'great expectations' (Sebeok & Sebeok, 1980) in the human involved and suggest awareness to him, but remain ambiguous to the uninitiated observer. Given the primitive nature of the grammars and lexicons, and the rich interpretations by human researchers, the contextual analyses recommended to ameliorate this weakness (Seidenberg & Petitto, 1979) may fail to provide convincing proof of awareness to the 'outsider'. To understand a discourse (whether produced by man, animal, or machine), the listener must share the speaker's knowledge about lexical truth (word meanings) and practical possibility (world meanings) (Miller, 1977). This suggests that a complex linguistic system and thematic range must be established before an observer can judge the semanticity of an ape-human conversation. The skeptical observer may find him/herself in the position of having to converse in the 'foreign' language before he/she is convinced that it exists!

The identification of behavior controlled by a negative feedback system and of symbolic ability can be used to characterize an organism as 'cognitive'. Defined in this way, many biological and machine systems are cognitive.
This view has led to the insight that all biological entities are knowledgeable, but it does not resolve the problem of distinguishing mindful from mindless systems (Goodwin, 1978; Lorenz, 1973). The use of negative feedback control to demonstrate intentionality and of symbolicity to show awareness is inadequate since feedback and symbolicity are overinclusive and not coextensive with mental properties. On the one hand, objects which one intuitively would not consider as mindful meet the negative feedback and symbolicity criteria. On the other hand, one's intuitions suggest that some objects are mindful, despite the inability to prove them so. Early expectations that methodological innovations would resolve the dispute over animal mind (Hediger, 1947) were mistaken.


There appear to be no adequate non-intuitive criteria which justify the attribution of mental states to other organisms. Nevertheless, people do attribute mental origins to the behavior of animals. Though the contradiction may be a permanent feature of the epistemological landscape (Campbell, 1969), the practical and ethical problems of interpreting and applying research results call for a resolution.

Intuiting mind

In making judgments, humans are generally subject to a number of biases which render them either reluctant or unable to recognize purely physical and statistical explanations for events (Shweder, 1977; Tversky & Kahneman, 1974) and to search for disconfirming evidence for the explanations they do propose (Mynatt, Doherty & Tweney, 1977). As a working hypothesis, one can speculate that mindfulness is a readily available explanation attributed in the absence of objective evidence or the presence of ambiguous circumstances. Guthrie (1980) has developed a theory of religion on this premise, as have several theorists of magical thinking (Evans-Pritchard, 1937; Malinowski, 1954; Parsons, 1958), the latter adding the condition of danger. The inherent inadequacy of objective evidence, discussed earlier, supports this speculation, as do a number of direct studies. Jones (1979) has shown that there is a tendency to presume that the acts of other people reflect mental dispositions, even when faced with disconfirmatory evidence, and that there is a corresponding tendency to underestimate the influence of non-psychological constraints on their behavior.

The predisposition to use intentionality, awareness and mindfulness as a means of understanding an action is so strong that the actions need not be those of a person or animal. Natural disasters, sports events, gambling, exams, lotteries and illness inspire magical thinking as a result of uncertainty, danger, lack of knowledge or belief in a 'just world' (Felson & Gmelch, 1979; Lewis, 1963; Rubin & Peplau, 1973; Wortman, 1976). Intentions and awareness are often attributed to filmed neutral objects exhibiting patterned movement (Bassili, 1976; Heider & Simmel, 1944; Silverman, 1982). The tendency appears to be primitive, originating in the childhood belief that all activity is intentional (Laurendeau & Pinard, 1962; Piaget, 1929).

Reviewing literature on the attribution of intent, Maselli and Altrocchi (1969) have summarized the specific circumstances under which people intuitively attribute specific intentions to others: personalism, acts which seem to be directed towards oneself; hedonic relevance, acts which have important effects on oneself; power, acts which suggest that the actor is more powerful than oneself; intimacy, emotional ties with the actor. All of these are easily engendered in a conversation. As noted earlier, however, an observer of a conversational dyad would not be subject to the same intuitions as the members of the dyad. There are two additional intuitively compelling circumstances which can be described. Fritz Heider (1958) suggested that 'unlikelihood' (in the sense of requiring a large number of physical coincidences) leads to the attribution of intention. As Brother Juniper's quest reflects, both the unlikely 'patternedness' of the universe on the large scale and the randomness of particular events within it have served as traditional arguments for the existence of a god acting as the intentional creator or sustainer. Another circumstance, perhaps the most important for this discussion since it is crucial to the Romanes analogy, is the occurrence of similarity, in behavior, outcomes, or physical characteristics, between an object already 'known' to be intentional or aware (e.g. oneself) and a novel object. Based on these similarities, the object (or event) in question can be judged as a new case.


Using this method of judging mindfulness, certainty of judgment varies with the degree of similarity and with the extent to which all previously known examples of the observed behaviors or outcomes are believed to require (and not simply permit) a mindful origin. Because it is this form of reasoning that is operative in the negative feedback and symbolicity tests for intentionality and awareness, the role of analogy as an intuitive strategy is considered in some detail in a later section.

Intuition in scientific judgment is not unique to the attribution of mind from behavioral evidence. David Hume found that all causal attributions are based on no more than observations of past conjunctions among objects, one of which subsequently changes. The concepts of causality, change, induction and deduction are anchored in judgments of sameness and difference (von Glasersfeld, 1974). Though these judgments are ultimately intuitive, they are necessary in any enterprise purporting to go beyond the particular and are clearly essential to the practice of science. However, the fact that some intuitive judgments are indispensable to the practice of science does not validate the use of others in theory construction. The apparent human predisposition to intuit mind under some circumstances is no more essential to science than is the predisposition to interpret some events as evidence for the existence of a deity. The intuitive means of judging the existence of mind do not invoke a workable form of parsimony. Rather, they reflect an assumption that is the mirror image of the Morganian view that 'simple' is equal to 'depersonal', and are inconsistent with the Popperian position that 'simple' corresponds to what is most easily falsifiable.

Intuition in animal research

The adoption of the Morganian form of parsimony has not produced satisfactorily testable criteria by which mindfulness can be attributed to animals. On the other hand, intuitive judgments not only permit, but seem to demand, that some objects be considered mindful. Perhaps for this reason, the behaviorist rejection of mental processes has been unsatisfying. What role, then, does and should intuition play in research aimed at identifying mental processes in animals?

While the 'similarity' argument for mind is probably the most accepted and widespread intuitive factor in animal research, other circumstances undoubtedly play a role. Menzel and Everett (1978), for example, draw on the factors of relative physical power and hedonic relevance in the following suggestion: 'at least on occasion . . . we should proceed on the assumption that our subjects are at least as complicated and unpredictable as we ourselves are . . . . [This is useful] . . . especially when you don't know your subject well and he is stronger than you are and there is no cage wire between the two of you' (p. 14). Hebb (1946) implied that both intimacy and practical necessity contributed to the attribution of 'frankly anthropomorphic concepts of emotion and attitude' (p. 88) to chimpanzees at the Yerkes Laboratories, noting that 'the recognition of identity really begins only after several weeks of observation, and that the truth of one's conviction, that the identity is real, increases thereafter for months' (p. 105). The wistful suggestion by Savage-Rumbaugh, Rumbaugh & Boysen (1978d) that evidence for the cognitive abilities of their chimpanzees be presented in visual form rather than as verbal reports sounds a similar note, as do the beliefs of pongolinguistic teams that their studies have yielded valid evidence while the others have not. Unfortunately, the familiarity required for the interpretation of evidence also breeds the intuitive certainty that an animal is aware of 'X' or intends 'Y'. Given the lack of 'a positive description of cognitive mechanisms or processes common to the defining observation . . .' (Honig, 1978: 11), it is doubtful that firsthand intuitive experience will resolve what is a theoretical difficulty.


The use of analogy

Intuitive similarities play a commonly accepted, but subtle, role in scientific judgments of animal mental processes and merit particular consideration. In scientific reasoning, the identification of similarities between phenomena typically occurs as the application of an analogy to form a hypothesis. Hull (1974) provides a concise definition of its use: 'the behavior of a poorly understood system is assimilated to a well understood paradigm system and the principles that govern the behavior of the paradigm system can be extrapolated to the poorly known system' (p. 105). A gas is hypothesized to be composed of particles with behavior analogous to that of billiard balls. Observations of the behavior of the gas are compared to predictions based on the physical principles explaining the motion of billiard balls and, if the predictions hold, the behavior of the gas has been explained by those same principles. Analogies of this sort should not be applied to the judgment of animal mind from behavioral evidence, because neither negative feedback nor symbolicity composes or causes mindfulness, and because analogical reasoning is legitimately used in hypothesis formation but not in hypothesis confirmation.

Hesse (1966) suggests that valid analogies rely on independent similarity and causal relations. In the analogy 'elastic balls/bouncing :: gas molecules/pressure', elastic balls and gas molecules must be viewed as similar entities, as must bouncing and pressure. For this analogy to work, there must also be a causal relation between elastic balls and bouncing which can be attributed to the relation between gas molecules and pressure. Without this causal relation, one would have no way of specifying the nature of the hypothetical term 'gas molecules'. When this analysis is applied to the analogy for animal mind, the causality relation is missing. Putting Romanes' analogy in more formal terms, and applying it to intentionality: 'human intentionality/negative feedback control :: animal intentionality/negative feedback control', where animal intentionality is the hypothetical term. The argument fails because there is no necessary co-occurrence or substantive causal relation between negative feedback control and intentionality. A similarly flawed argument characterizes the symbolicity/awareness analogy.

Hesse also discusses the general role of analogy in scientific judgment. The function of analogy is to produce a new term (e.g. gas molecules) which can serve as a testable hypothesis enabling further prediction. In the analogy which proposes a mental state in animals, the subjective experience characterizing one's own behavior serves as the well-understood paradigm. Similar behaviors (animal or human) are then thought to be characterized by their own subjective experiences. The catch is that the analogy is untestable, since the subjectivity paradigm is not decomposable into a system whose parts can be used to make behavioral predictions. What the cognitive scientist tends to do is create behavioral tests from which cognitive system analogies can be justifiably demonstrated. He then continues to reason along the line that because he, a mindful creature, produces such systematic behavior, evidence for the systems in question is also evidence for mind. This unjustifiable jump is well hidden by the fact that the behavioral material selected for study is typically an animal rather than a machine, an organ, or a species.
An animal is considered to have the potential for mind, while the others are not likely candidates. It may be for this reason that the modern mind-body problem is more often addressed in terms of machine thinking than animal thinking, though virtually the same arguments about animal intellect (dating to the 17th century) are employed in debates over computer intellect (Gunderson, 1971). Controversies over animal mentality grow in this fertile soil. For Griffin (1976) the orientation behavior of a bee, bat or pigeon suffices to suggest mindfulness. For Skinner (1957) the verbal behavior of other people does not.

A note on intuition in human research

Though the specific arguments presented here have concerned the inference of animal mind, the principles can be easily applied to the inference of mental processes in other humans. An adequate consideration is far beyond the scope of this paper, but the disconcerting conclusion implied by the preceding discussion is that there can be no objective behavioral tests to buttress the claim for other minds, though intuitive belief is undeniably useful. The lack of an empirical 'anchor' is a primary factor in the perennial performance/competence controversies, with the ontogenetic border disputes of human cognition recalling the phylogenetic skirmishes. Tests of even the most easily definable Piagetian concepts provide an example. Ages of acquisition of conservation, of object permanence, of social decentering and the effects of training on the rate of development are subjects of continuing debate (Brainerd, 1978; Gelman, 1978).

Some practical implications

The lack of clear criteria for the presence of mind presents boundary problems that are not trivial. Because the boundaries we set are arbitrary, evidence and methods acceptable for demonstrating mental properties for one species are insufficient for another. While Trevarthen and Hubley's (1978) claim that a ten-month-old infant possesses a 'faculty of intersubjectivity' seems reasonably based on the observation that she 'repeatedly looked up at her mother's face when receiving an object, pausing as if to acknowledge receipt', Margaret Evans's (1879) case for 'sagacity in a cat' is scarcely supported by the observation that a cat which heard a hungry child sobbing stole bread for it. Perhaps the manufacture of tools and their transportation from one site to another suggests 'mental imagery, foresight, and premeditation' (Beck, 1980: 239) in early hominids and modern apes. But the transportation of objects (decorative and architectural) to nests, and tool construction and use by birds (Jones & Kamil, 1973), need not imply similar mental states. Similarly, we are more apt to accept rich contextual analyses based on intimacy as evidence for infants' concepts (Bretherington & Beeghly-Smith, 1981) than for animal concepts, though in either case such analyses exclude objective confirmation by an 'uninvolved' observer.

Stich (1979) suggests that two factors are involved in attributing a 'belief' to an animal: the structure of the belief (akin to 'cognitive system') and the content (subjective experience) must be specifiable. The former is, in principle, describable; the latter is not. We can only attribute a belief with 'content' to an animal if 'we can assume the subject to have a broad network of related beliefs that is largely isomorphic with our own' (p. 22). But this assumption is not testable. Stich concludes that we can do little more than argue that an animal has 'belief-like states'.

Should we adopt the phraseology 'belief-like states' or 'mind-like states' in describing the topic of cognitive studies (animal, machine and human)? If we do, terms like 'cognition' and 'cognitive system' would remain distinguishable from 'evolution', 'biological system' and 'information processing system' only for the reasons that their behaviors occur on different time scales and have different physical manifestations and origins. We could go on to observe that some cognitive systems are more similar to ours than are others, though relative similarity could not be considered as corresponding to degrees of awareness, intentionality or other attributes of mind. The classificatory differences between cognitive and other systems may prove useful in practice, but admission of a conventional distinction (which is drawn at different points by different theorists) limits the use of cognitive theory. Fodor (1975: 53) has acknowledged this limitation:


'the states of the organism postulated in theories of cognition would not count as states of the organism for purposes of, say, a theory of legal or moral responsibility. But so what? What matters is that they should count as states of the organism for some useful purpose. In particular, what matters is that they should count as states of the organism for purposes of constructing psychological theories that are true.'

Fodor's 'so what' is unpalatable when one considers that in practice theory and morality are not always separable, the treatment of animals in cognitive research being a case in point. What is considered a 'true' theory is as much a function of the uses it serves as it is a function of the tests which it survives. If the attribution of animal mind is treated as a function of the observer's intentions and anticipated conduct, rather than as a property intrinsic to the animal, the question of animal mind need have no single resolution. Instead of asking 'Do animals have minds?' one might ask 'Under what circumstances is it valid to attribute minds to animals?' The response to the latter question is a function of what one plans to do with the results. If it is to save one's neck in a dangerous situation involving a possible opponent, it is wise to intuit that the opponent is aware. If one is only observing such a conflict or given a report of it and is unable to offer help, the 'opponent' may justifiably be characterized as cognitive but non-aware and mindless, or at most as mind-like. In practice people do make these situation-dependent judgments. As Dennett (1978) points out, engaged in a game with a chess-playing machine which one hopes to beat, one easily makes the temporary pragmatic decision to treat it 'rather like an intelligent human opponent' (p. 5). Since the observer's intentions and conduct toward the organism are key elements in the decision, ethical considerations become relevant in judging whether or not those intentions and conduct require attribution of mind.

At recent conferences on animal mind and the ethics of animal experimentation (reported by Cherfas, 1980; Solomon, 1982), proposals were made that animal mind should be scientifically evaluated: that suffering should be avoided in animals that 'feel', that captivity and boredom are not desirable for animals that may be 'aware' of a loss of freedom, and that killing something which 'divines a personality' is unethical. Under a relativistic epistemological approach, this line of reasoning would be reversed. Because it cannot be determined empirically whether an animal feels, is aware or infers a personality, it is impossible to use these mental properties as absolute criteria determining the morality of various experimental manipulations.† Instead, there are two options: one in which cognitive theory does not inform practice, another in which it does.

The first is to consider that because it is possible that any object or event characterized as a cognitive system may be mindful, all are. One could then weigh the ethical 'gains' produced by a potential experimental result against an absolute ethical 'loss' involved in its implementation. On this basis, the loss involved in experimentation on certain computer programs or a rat would be no less than that for a chimpanzee or a human (unless other factors such as species extinction or moral anthropocentrism are introduced). In practice, however, moral considerations of relative value are rare (Bowd, 1980).
Indeed, the potential value of any experiment is seldom predictable, and the question of what constitutes death or discomfort for another cognitive system raises the original dilemma. A second option is to propose that any animal (or object or event) which demonstrates behavioral evidence for negative feedback control, symbolicity, or any other cognitive system property claimed as a necessary property of mindfulness, may have mental experiences.

†Intuitive factors in demarcating the constructs of 'living' and 'non-living' lead to a similar conclusion. Ethical rather than scientific judgments apply to the points at which embryonic life begins and death occurs.


Those described by cognitive systems relatively similar to our own (i.e. possessing more system properties analogous to human ones) would be judged more likely to be mindful. This approach adopts the empirical-intuitive mix described earlier as typical of current research. Since sufficient evidence for mindfulness is not possible, we fall back on the necessary system properties to make subjective probabilistic judgments. These judgments, however, would be applied only in circumstances requiring resolution of a moral issue. An animal found to have a cognitive system closely resembling ours which governs its behavior would be considered more human-like and thus more likely to possess mental states than an animal whose behavior is described by a more exotic system. It could not be claimed, however, that the more similar animal has a mind (or that a less similar one has none), but only that, since it is more likely to have one, experiments involving what we would experience as pain, confinement, and death are less justifiable. The fundamental weakness of this approach is its reliance on the peculiar notion of 'subjective probability', in which an increase in the quantity of necessary conditions (cognitive properties) is judged as an increase in the likelihood that an undetectable sufficient condition (mindfulness) is present.

The trick of either procedure is to apply them only to cases demanding 'ethical engineering', and to avoid them where empirically justifiable conclusions are required. Thus, for ethical purposes, a chimpanzee may be as aware and intentional as we are (or close to it), but in the context of purely empirical judgments, the most that can be offered is that its behavior reflects mind-like or belief-like states. Once again extending the argument to other humans, I can conclude that you are 'mindful' in the ethical context, but empirically I can only offer the observation that you are mind-like.

Conclusion

A lesson might, after all, be drawn from Brother Juniper. While his question was formulated so as to be empirically testable, it rose out of an intuitive belief in heavenly justice. His consequent discovery of a pattern was treated as decisive evidence, and nature's behavior was found to be preordained and intentional. The psychologist studying animal cognition is confronted with an intuitive belief that mind exists in a variety of objects other than himself, and searches for evidence in animal behavior. Finding that the behavior is analogous to his own and therefore cognitive (e.g. feedback-controlled or symbolic), he treats this as support for the existence of mindfulness. The approach is conservative and reasonable when the issue at hand is ethical.

Brother Juniper's true 'creator', Thornton Wilder, approached the problem differently. Aware of the futility of proving God's existence, he sought the nature of the pattern itself. Wilder reached the conclusion that the pattern underlying the coincidence of collapse and five deaths was allegorical. The fallen bridge stood for the bond between the living and the dead. Outside of an ethical context which necessitates a decisive judgment, it would be wise to conclude that feedback-controlled and symbolic behaviors can 'stand for' mind in animals, but do not permit the conclusion that mind is really there.

References

Ashby, W. R. (1952). Design for a Brain. London: Chapman & Hall.
Bassili, J. N. (1976). J. pers. soc. Psychol. 33, 680-685.
Beck, B. (1980). Animal Tool Behavior. New York: Garland.
Bennett, J. (1964). Rationality. London: Routledge & Kegan Paul.
Berliner, H. J. (1977). In (P. N. Johnson-Laird & P. C. Wason, Eds) Thinking. Cambridge: Cambridge University Press.


Bishop, J. (1980). Mind 89, 1-16.
Bowd, A. D. (1980). Am. Psychol. 35, 224-225.
Brainerd, C. J. (1978). Piaget's Theory of Intelligence. Englewood Cliffs, NJ: Prentice-Hall.
Braithwaite, R. B. (1953). Scientific Explanation. Cambridge: Cambridge University Press.
Bretherington, I. & Beeghly-Smith, M. (1981). Talking about internal states: The acquisition of an explicit theory of mind. Unpublished manuscript.
Bronowski, J. & Bellugi, U. (1970). Science 168, 669-673.
Campbell, D. T. (1969). In (T. Mischel, Ed.) Human Action. New York: Academic Press.
Campbell, D. T. & Blake, R. (1977). Am. Scient. 65, 146.
Capra, F. (1975). The Tao of Physics. Berkeley, CA: Shambala.
Chauvin-Muckensturm, B. (1974). Revue Comportement Anim. 9, 185-207.
Cherfas, J. (1980). New Scient. 85, 1002-1003.
Davidson, D. (1975). In (S. Guttenplan, Ed.) Mind and Language. Oxford: Oxford University Press.
Dennett, D. C. (1978). Brainstorms: Philosophical Essays on Mind and Psychology. New York: Bradford Books.
Dreyfus, H. L. (1972). What Computers Can't Do. New York: Harper & Row.
Eddington, A. S. (1928). The Nature of the Physical World. New York: Macmillan.
Epstein, R., Lanza, R. P. & Skinner, B. F. (1980). Science 207, 543-545.
Evans, M. (1879). Nature 20, 220.
Evans-Pritchard, E. E. (1937). Witchcraft, Oracles and Magic Among the Azande. Oxford: Oxford University Press.
Felson, R. B. & Gmelch, G. (1979). Current Anthro. 20, 587-589.
Fodor, J. A. (1975). The Language of Thought. New York: Thomas Y. Crowell.
Fouts, R. S. (1974). J. Hum. Ev. 3, 475-482.
Gallup, G. (1977). Am. Psychol. 32, 329-338.
Gelman, R. (1978). Ann. Rev. Psych. 29, 297-332.
Gill, T. V. & Rumbaugh, D. M. (1974). J. Hum. Ev. 3, 483-492.
Goodwin, B. C. (1978). J. social biol. Struct. 1, 117-125.
Griffin, D. R. (1976). The Question of Animal Awareness. New York: Rockefeller University Press.
Griffin, D. R. (1977). Am. Scient. 65, 146-148.
Griffin, D. R. (1978). Behav. Brain Sci. 4, 527-538.
Groves, C. P. (1978). Behav. Brain Sci. 4, 575-576.
Gunderson, K. (1971). Mentality and Machines. Garden City, NY: Anchor Books.
Guthrie, S. (1980). Curr. Anthrop. 21, 181-194.
Haugeland, J. (1978). Behav. Brain Sci. 2, 215-260.
Hebb, D. O. (1946). Psychol. Rev. 53, 88-106.
Hediger, H. (1947). Behavior 1, 130-137.
Heider, F. (1958). The Psychology of Interpersonal Relations. New York: John Wiley.
Heider, F. & Simmel, M. (1944). Am. J. Psychol. 57, 243-259.
Hesse, M. B. (1966). Models and Analogies in Science. Notre Dame, Indiana: University of Notre Dame Press.
Hockett, C. F. (1960). In (W. E. Lanyon & W. N. Tavolga, Eds) Animal Sounds and Communication. Washington: American Institute of Biological Sciences.
Honig, W. K. (1978). In (S. H. Hulse, H. Fowler & W. K. Honig, Eds) Cognitive Processes in Animal Behavior. Hillsdale, NJ: Lawrence Erlbaum.
Hook, S. (1960). In (S. Hook, Ed.) Dimensions of Mind. New York: New York University Press.
Hull, D. (1974). Philosophy of Biological Science. Englewood Cliffs, NJ: Prentice-Hall.
Jones, E. E. (1979). Am. Psychol. 34, 107-117.
Jones, T. B. & Kamil, A. C. (1973). Science 180, 1076-1078.
Kelly, G. A. (1963). A Theory of Personality. New York: Norton.
Kimble, G. A. & Perlmuter, L. C. (1970). Psychol. Rev. 77, 361-384.
Koffka, K. (1935). Principles of Gestalt Psychology. New York: Harcourt, Brace.
Laurendeau, M. & Pinard, A. (1962). Causal Thinking in the Child. New York: International University Press.
Lewis, L. (1963). Am. J. Socio. 69, 7-12.
Lorenz, K. (1973). Behind the Mirror. New York: Methuen.
Malinowski, B. (1954). Magic, Science and Religion. Garden City, NY: Doubleday.
Garden City, NY: Doubleday. Maselli, M. D. & Altrocchi, J. (1969). Psychol. Bull 71,445-454.

Maturana, H. R. (1970). Biological Computer Laboratory Project, Report No. 90, University of Illinois.
Menzel, E. W. Jr. (1973). In (E. W. Menzel, Jr., Ed.) Symposia of the Fourth International Congress of Primatology, Vol. I: Precultural Primate Behavior. Basel: Karger.
Menzel, E. W. Jr. & Everett, J. W. (1978). Cognitive aspects of foraging behavior. Paper presented at the Animal Behavior Society, Seattle, Washington.
Menzel, E. W. Jr. & Johnson, M. K. (1978). Behav. Brain Sci. 4, 586-587.
Miller, G. A. (1977). In (P. N. Johnson-Laird & P. C. Wason, Eds.) Thinking. Cambridge: Cambridge University Press.
Morgan, C. L. (1894). An Introduction to Comparative Psychology. London: Scott.
Mynatt, C. R., Doherty, M. E. & Tweney, R. D. (1977). Q. Jl. exp. Psychol. 29, 85-95.
Nagel, T. (1974). Philos. Rev. 83, 435-450.
Parsons, T. (1958). In (W. Lessa & E. Vogt, Eds.) Reader in Comparative Religion. Evanston, IL: Row, Peterson.
Patterson, F. (1978). In (F. C. C. Peng, Ed.) Sign Language Acquisition in Man and Ape: New Dimensions in Comparative Psycholinguistics. Boulder, CO: Westview.
Peirce, C. S. (1958). The Collected Papers of Charles Sanders Peirce, Vol. 8. (A. Burks, Ed.). Cambridge, MA: Harvard University Press.
Piaget, J. (1929). The Child's Conception of the World. London: Routledge & Kegan Paul.
Piaget, J. (1976). The Grasp of Consciousness. Cambridge, MA: Harvard University Press.
Popper, K. R. (1959). The Logic of Scientific Discovery. London: Hutchinson.
Powers, W. T. (1973). Behavior: The Control of Perception. Chicago: Aldine.
Premack, D. (1976). Intelligence in Ape and Man. Hillsdale, NJ: Lawrence Erlbaum.
Premack, D. (1978). In (S. H. Hulse, H. Fowler & W. K. Honig, Eds.) Cognitive Processes in Animal Behavior. Hillsdale, NJ: Lawrence Erlbaum.
Premack, D. & Woodruff, G. (1978a). Behav. Brain Sci. 4, 515-526.
Premack, D. & Woodruff, G. (1978b). Behav. Brain Sci. 4, 616-629.
Romanes, G. J. (1882). Animal Intelligence. London: Kegan Paul.
Rosenblueth, A., Wiener, N. & Bigelow, J. (1943). Philos. Sci. 10, 18-24.
Rubin, Z. & Peplau, A. (1973). Belief in a just world and reactions to another's lot: A study of participants in the national draft lottery. J. soc. Iss. 29, 73-93.
Rumbaugh, D. M. & Gill, T. V. (1976). In (S. R. Harnad, H. D. Steklis & J. Lancaster, Eds.) Origins and Evolution of Language and Speech (Annals of the New York Academy of Sciences).
Savage-Rumbaugh, E. S., Rumbaugh, D. M. & Boysen, S. (1978a). Behav. Brain Sci. 4, 555-557.
Savage-Rumbaugh, E. S., Rumbaugh, D. M. & Boysen, S. (1978b). Science 201, 641-644.
Savage-Rumbaugh, E. S., Rumbaugh, D. M. & Boysen, S. (1978c). Behav. Brain Sci. 4, 539-554.
Savage-Rumbaugh, E. S., Rumbaugh, D. M. & Boysen, S. (1978d). Behav. Brain Sci. 4, 614-616.
Sebeok, J. U. & Sebeok, T. A. (1980). In (T. A. Sebeok & J. U. Sebeok, Eds.) Speaking of Apes. New York: Plenum.
Seidenberg, M. S. & Petitto, L. A. (1979). Cognition 7, 177-215.
Shorter, J. M. (1967). In (P. Edwards, Ed.) The Encyclopedia of Philosophy, Vol. 6. New York: Macmillan & The Free Press.
Schrödinger, E. (1945). What is Life? Cambridge: Cambridge University Press.
Shweder, R. A. (1977). Curr. Anthrop. 18, 637-648.
Silverman, P. (1982). Inference of intentionality and awareness in a novel object. Paper presented at the Rocky Mountain Psychological Association, Albuquerque.
Skinner, B. F. (1957). Verbal Behavior. New York: Appleton-Century-Crofts.
Solomon, R. C. (1982). Psych. Today 16 (3), 36-45.
Stich, S. P. (1979). Austral. J. Philos. 57, 15-28.
Taylor, R. (1950). Philos. Sci. 17, 310-317.
Terrace, H. S., Petitto, L. A., Sanders, R. J. & Bever, T. G. (1979). Science 206, 891-902.
Trevarthen, C. & Hubley, P. (1978). In (A. Lock, Ed.) Action, Gesture and Symbol: The Emergence of Language. London: Academic Press.
Turing, A. M. (1950). Mind 59, 433-460.
Tversky, A. & Kahneman, D. (1974). Science 185, 1124-1131.

Uhr, L. M. (1973). Pattern Recognition, Learning, and Thought. Englewood Cliffs, NJ: Prentice-Hall.
von Glasersfeld, E. (1974). Semiotica 12, 129-144.
von Glasersfeld, E. (1976). In (S. R. Harnad, H. D. Steklis & J. Lancaster, Eds.) Origins and Evolution of Language and Speech (Annals of the New York Academy of Sciences).
von Glasersfeld, E. (1979). In (M. N. Ozer, Ed.) A Cybernetic Approach to the Assessment of Children: Toward a More Human Use of Human Beings. Boulder, CO: Westview Press.
von Glasersfeld, E. & Silverman, P. (1976). Commun. Assoc. Comput. Mach. 19, 566-587.
Weizenbaum, J. (1976). Computer Power and Human Reason. San Francisco: Freeman.
Wiener, N. (1948). Cybernetics: Control and Communication in the Animal and the Machine. New York: Wiley.
Winograd, T. (1972). Cog. Psychol. 3, 1-191.
Woodruff, G. & Premack, D. (1979). Cognition 7, 333-362.
Wortman, C. B. (1976). In (J. H. Harvey, W. J. Ickes & R. F. Kidd, Eds.) New Directions in Attribution Research (Vol. 1). Hillsdale, NJ: Lawrence Erlbaum.
Zazzo, R. (1979). Revue Psychol. appl. 29, 235-246.