The influence of postreliance detection on the deceptive efficacy of dishonest signals of intent


Evolution and Human Behavior 23 (2002) 103 – 121

Understanding facial clues to deceit as the outcome of signaling tradeoffs

Paul W. Andrews

Department of Biology, University of New Mexico, Castetter Hall, Albuquerque, NM 87131-1091, USA

Received 15 January 2001; received in revised form 8 March 2001; accepted 11 July 2001

Abstract

Evolutionary communication theory posits that signalers and receivers are in a coevolutionary arms race. Receivers attempt to predict the behavior of signalers, and signalers attempt to manipulate the behavior of receivers (often through the use of dishonest signals of intent). This has led to the perception that deceitful signalers prefer perfectly deceptive signals. However, it is often easy for receivers to determine that a signal of intent was dishonest after relying on it to their detriment. Even the best deceivers may then acquire a reputation for being dishonest. For instance, in Prisoner's Dilemma (PD)-like social situations, predictable defectors make better social partners than unpredictable defectors. When opportunities to engage in social interaction depend on one's reputation for predictability, those who are better at concealing their defecting intentions may suffer the most from the reputations they acquire. Deceivers then face a tradeoff between the short-term benefits of successful deception and the long-term costs to their reputations. A mathematical model is developed and it is shown that the tradeoff often favors signalers who produce imperfectly deceptive signals over perfectly honest or perfectly deceptive ones. Implications for understanding human facial expressions and sociopathy are drawn. © 2002 Elsevier Science Inc. All rights reserved.

Keywords: Cooperation; Deception; Defection; Dishonesty; Facial expressions; Honesty; Intent; Signal; Sociopathy

E-mail address: [email protected] (P.W. Andrews).


1. Introduction

In communication systems, receivers are under selection to predict the behavior of signalers, and signalers are under selection to manipulate receivers (often deceptively) through their signals (Krebs & Dawkins, 1984). If deception is to be a component of communication systems without destabilizing them, then frequency-dependent forces must limit deception as it becomes more prevalent. Such forces could include an increased ability to detect and discriminate against dishonest signals in response to repeated exposure to them, the threat of punishment, receivers exhibiting increasing reliance on costly signals, and a loss of reputation (Frank, 1988; Getty, 1998; Grafen, 1990; Semple & McComb, 1996; van Rhijn & Vodegel, 1980; Zahavi, 1975, 1977).

These limiting forces require the ability of receivers to detect deceptive behavior either before relying on it (prereliance detection) or after relying on it (postreliance detection). It is often very easy for receivers to determine that a signal was dishonest after they have relied on it to their detriment. For instance, everyone in Czechoslovakia "felt" the postreliance effects of Hitler's lie to Chamberlain about not intending to attack that country. Obviously, prereliance detection would be the better way to limit exposure to deception. Deception does appear to have been frequent enough in ancestral human environments that people evolved cognitive adaptations for prereliance detection (Andrews, 2001; Cosmides & Tooby, 1992). However, the operation of accurate prereliance deception detection mechanisms may be costly (Humphrey, 1976).
For instance, while people often perform deception detection tasks quickly and efficiently relative to similar nondeception detection tasks, they are more likely to look for clues that a signaler is trying to deceive when the signaler has an apparent incentive to deceive (Andrews, 2001; Cosmides & Tooby, 1992; Fein, 1996; Fein, Hilton, & Miller, 1990; Hilton, Fein, & Miller, 1993). Postreliance detection may exert powerful influences on signaling systems if dishonest signalers are punished. For instance, badge size is an indicator of dominance and status in male house sparrows (Møller, 1987), but large badges may be relatively inexpensive to produce (Gonzalez et al., 1999), and there is evidence that the honesty of the signaling system is maintained because males with falsely inflated badges are punished with higher rates of aggression by other males (Møller, 1987). In the face of limiting forces, most communication models tend to optimize the decision of whether or not to produce a dishonest signal. However, in principle, there is nothing preventing selection from optimizing the efficacy of a dishonest signal, or the probability that it will deceive. In the face of limiting forces, it may be profitable to produce imperfectly deceptive signals—signals that are similar enough to honest signals that they sometimes deceive, but that are distinct enough that receivers can sometimes distinguish them. The issue may be important because, at least among human beings, people seem to vary in their success at deceiving others about their intentions (e.g., Ford, 1996; Frank, 1988; Mealey, 1995; Wilson, Near, & Miller, 1996). In this paper, a model for the evolution of imperfectly deceptive signals driven by postreliance detection is described. The model applies to dishonest signals that purport to convey information about future behavior where the function of such signals is to exploit one
or more receivers (e.g., ‘‘Ron, you can trust me not to run off with your wife.’’), but it would not apply to signals about past behavior (e.g., ‘‘No, Ron, I have never slept with your wife!’’). Through postreliance detection, individuals acquire a reputation for the efficacy of their dishonest signals. Since bad deceivers should be preferred over good deceivers as social partners, the best deceivers may suffer the most from the reputations that they acquire. Thus, a tradeoff may exist between the short-term benefits of deception, and the long-term benefits of acquiring or maintaining a reputation as a predictable social partner. Signalers facing such a tradeoff often prefer imperfectly deceptive signals to perfectly honest or perfectly deceptive ones. Perfectly deceptive signalers lose because they acquire reputations as effective deceivers and are disfavored as social partners. Conversely, perfectly honest signalers lose because they never acquire the benefits of exploiting others and because they are too predictable and thus easily exploited themselves. Before the model, however, I first discuss some important terminology.

2. Terminology

In communication systems, receivers evolve mechanisms for detecting the behaviors (or other traits) of actors, and for responding to them, because they contain otherwise unobservable information that receivers can use to make decisions about how to interact with them (e.g., what behaviors the actor is preparing to engage in, the actor's level of need, the actor's genetic quality, etc.). On the other hand, signalers are under selection to produce signals (a behavior, or some other trait, that is produced for the purpose of eliciting a response from receivers). Signals evolve for the benefit of signalers, and they may or may not also benefit receivers. A signal is honest if it contains accurate information about the signaler's internal state, and it is dishonest or deceptive if it contains inaccurate information. Honest signals are usually produced to elicit responses that will also benefit receivers, and dishonest signals are produced to elicit responses that benefit signalers (often at the expense of receivers). Dishonest signals may differ from honest signals in more or less subtle ways, and receivers should often be selected to distinguish them (Krebs & Dawkins, 1984). Moreover, an actor's behavior may provide information to receivers even if its function does not involve eliciting responses from them. In such instances, the actor's behavior is called a cue. Since cues cannot be deceptive, they are more likely than signals to contain accurate information about an actor, and receivers should be selected to distinguish them from signals (Andrews, 2001).

Throughout the paper, I refer to receivers making "inferences" about the internal states of signalers. The term is not meant to imply that receivers necessarily make conscious inferences about the internal states of signalers (although in some communication systems receivers may make conscious inferences). Rather, "inference" refers to the computational process by which the nervous systems of receivers go about determining whether an observed behavior contains reliable information about the actor that can be used to craft a response. It is also meant to imply that receivers can make incorrect assessments about a signal's veracity.


Similarly, I often refer to individuals as having "intentions" and producing "signals of intent." This is not to imply that individuals necessarily have conscious intentions or that they consciously advertise intentions to receivers. However, part of the function of the nervous system is to assess the situational context and plan future behavior. Thus, "intention" refers to the end result of the computational process by which the nervous system goes about planning future behavior. "Signal of intent" refers to a signal whose function is to convey accurate or inaccurate information about the signaler's future behavior to a receiver.

3. The model

Assume a population in which individuals interact in pairs. Individuals interact in the iterated Prisoner's Dilemma (PD; see Fig. 1) with a constant probability, w, that the game will continue from the current round to the next. Individuals interact with a particular partner for only a single round, such that they search for a different partner on every round. A round is composed of several phases. In the search phase, individuals search for potential partners and encounters are random. In the decision phase, a potential partner is chosen with probability f; the probability that the potential partner will not be chosen is 1 − f. f is allowed to vary over the course of the game (i.e., on the i-th round of the game, the probability that the potential partner is chosen is f_i). Its value reflects, in part, the degree to which accurate information is known about d (the probability that the potential partner will defect) and l (the probability that one can detect the potential partner's intent to defect; see Fig. 2).

Fig. 1. The PD payoff matrix. See text for details.
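The round structure can be illustrated with a small Monte Carlo sketch. This is not code from the paper: the function and variable names are mine, and it folds in the assessment and payoff rules described later in this section (detect-and-defect, with payoffs b − c, b, −c, and 0 as in Fig. 1).

```python
# Illustrative sketch of one interaction (signaling, assessment,
# interaction, and payoff phases). Hypothetical helper; the payoff and
# assessment rules follow the model description in the text.
import random

def play_round(d_e, l_e, d_p, l_p, b=2.0, c=1.0):
    """One round between ego and partner, both already chosen as partners.
    d_*: probability of intending to defect; l_*: probability that the
    other player detects that player's dishonest signal of intent."""
    intend_d_e = random.random() < d_e               # signaling phase
    intend_d_p = random.random() < d_p
    detect_e = intend_d_p and random.random() < l_p  # ego detects partner
    detect_p = intend_d_e and random.random() < l_e  # partner detects ego
    play_d_e = intend_d_e or detect_e                # assessment phase
    play_d_p = intend_d_p or detect_p
    if not play_d_e and not play_d_p:
        return (b - c, b - c)    # mutual cooperation
    if play_d_e and not play_d_p:
        return (b, -c)           # ego exploits partner
    if not play_d_e and play_d_p:
        return (-c, b)           # partner exploits ego
    return (0.0, 0.0)            # mutual defection
```

At the deterministic corners the payoffs follow Fig. 1: two sure cooperators earn b − c each, and an undetectable sure defector earns b against a cooperator.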


Fig. 2. Tree diagram of the decision variables to be optimized.

In the signaling phase, individuals decide what tactics they intend to play. In the PD, individuals may choose to either cooperate (C) or defect (D). If an individual intends to play C, he produces an honest signal of cooperative intent that receivers can detect with certainty. If instead he intends to play D, he produces a signal that mimics the signal of cooperative intent but contains elements such that the dishonest signal differs from the honest one on a continuum of salience (or detectability). Receivers have probability l of detecting the difference and associating it with the intent to defect. When l = 0, the signal is perfectly deceptive and receivers are always deceived; when l = 1, the signal is fully detectable and receivers are never deceived. Thus, l is a measure of the deceptive efficacy of a dishonest signal and is the variable to be optimized.

Note that the ability to detect deceptions after the fact could provide individuals with the means to estimate l and d. By observing a potential partner's interactions with others, a third-party observer can determine: (1) how often the partner defects out of the total number of interactions observed; and (2) how often he can detect the partner's signal of the intention to defect out of the total number of defections observed. If the observational sample size is large enough, the first quantity approaches d, and the second quantity approaches l. Even if third-party observation is not possible, individuals within the community in which the potential partner operates can pool information (i.e., gossip) about their experiences to estimate l and d.
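The counting procedure just described is easy to make concrete. Below is a minimal sketch (the function name and record format are hypothetical, not from the paper): each observed round is a pair (defected, detected), whether gathered by one observer or pooled via gossip.

```python
# Estimate a potential partner's d (defection rate) and l (probability that
# his dishonest signal of intent is detected) from observed interactions.

def estimate_d_and_l(records):
    """records: iterable of (defected, detected) booleans, one per observed
    round. Returns (d_hat, l_hat); with enough observations these approach
    the partner's true d and l."""
    records = list(records)
    n = len(records)
    # detection outcomes for the defection rounds only
    detected_flags = [detected for defected, detected in records if defected]
    d_hat = len(detected_flags) / n if n else 0.0
    l_hat = (sum(detected_flags) / len(detected_flags)
             if detected_flags else 0.0)
    return d_hat, l_hat

# Example: 10 observed rounds, 4 defections, 1 of them detected.
records = [(False, False)] * 6 + [(True, False)] * 3 + [(True, True)]
d_hat, l_hat = estimate_d_and_l(records)   # d_hat = 0.4, l_hat = 0.25
```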


In the assessment phase, if an individual ascertains that his partner intends to defect, he also will intend to play D, regardless of his original intention. Otherwise, he sticks with his original intention. In the interaction phase, each player plays his intended tactic. In the payoff phase, payoffs are accrued. The payoff for mutual cooperation is b − c; the payoff for defecting when one's partner cooperates is b; the payoff for cooperating when one's partner defects is −c; and the payoff for mutual defection is zero. If a player is not chosen as a social partner, he receives the payoff of Q for that round.

Take two individuals, ego and partner, who are involved in potential social interaction. Ego defects with probability d_e, produces a dishonest signal that partner perceives as deceptive with probability l_e, and is chosen for social interaction by partner with probability f_e, which is an arbitrary function of d_e and l_e. Similarly, partner defects with probability d_p, produces a dishonest signal that ego perceives as deceptive with probability l_p, and is chosen for social interaction by ego with probability f_p. Define r_{e\p} as the probability of a particular outcome between ego and partner, where e is the move of ego and p is the move of partner. The probabilities of the outcomes between ego and partner are then

$$r_{C\backslash C} = (1 - d_e)(1 - d_p), \qquad r_{D\backslash C} = d_e(1 - d_p)(1 - l_e), \qquad r_{C\backslash D} = d_p(1 - d_e)(1 - l_p),$$
$$r_{D\backslash D} = d_e d_p + d_e(1 - d_p)\,l_e + d_p(1 - d_e)\,l_p.$$

The evolutionarily stable strategy (ESS) condition (i.e., the condition at which no other strategy can invade; Maynard Smith, 1982) is at l*, d*, and f*. The procedure for finding the ESS relationship between l* and d* involves determining the payoff of ego in interaction with the ESS population, assuming that he is a rare mutant with attributes l_e = l̂, d_e = d*, and f_e = f(l̂, d*, i). Then, the payoff function is maximized with respect to l̂. At the ESS, l̂ = l*, and this can be solved to give the ESS condition for l*. A similar procedure gives the ESS condition for d*. The two conditions can then be combined to give the ESS relationship between l* and d*.

The l̂ mutant interacts with partners playing the ESS strategy on every round. The payoff for the mutant is [Eq. (1)]:

$$W_{\hat{l}} = \sum_{i=1}^{\infty} w^{i-1} \hat{f}_i f^{*}_i \left[(b - c)\,r_{C\backslash C} + b\, r_{D\backslash C} - c\, r_{C\backslash D}\right] + \sum_{i=1}^{\infty} w^{i-1}\left(1 - \hat{f}_i f^{*}_i\right) Q \tag{1}$$

where f̂_i = f(l̂, d*, i), f*_i = f(l*, d*, i), and the probabilities of the different outcomes are r_{C\C} = (1 − d*)², r_{D\C} = d*(1 − d*)(1 − l̂), r_{C\D} = d*(1 − d*)(1 − l*), and r_{D\D} = (d*)² + d*(1 − d*)(l* + l̂). At the ESS, l̂ = l*, ∂W_l̂/∂l̂ = 0, and f*_i does not change over the course of the game (i.e., f* = f(l*, d*)) because everyone has the same l* and d*. Note that at the ESS all the terms in Eq. (1) except w^{i−1} are constant with respect to the summation. Applying the ESS conditions yields:

$$\left.\hat{f}\left[(b - c)\frac{\partial r_{C\backslash C}}{\partial \hat{l}} + b\,\frac{\partial r_{D\backslash C}}{\partial \hat{l}} - c\,\frac{\partial r_{C\backslash D}}{\partial \hat{l}}\right] + \left[(b - c)\,r_{C\backslash C} + b\, r_{D\backslash C} - c\, r_{C\backslash D} - Q\right]\frac{\partial \hat{f}}{\partial \hat{l}}\;\right|_{\hat{l}=l^{*}} = 0.$$


Plugging in the appropriate values for the probabilities and their partial derivatives and then simplifying gives the ESS condition for l*:

$$(b - c)(1 - d^{*})^{2} + b\,d^{*}(1 - d^{*})(1 - l^{*}) - c\,d^{*}(1 - d^{*})(1 - l^{*}) - Q = b\,d^{*}(1 - d^{*})\left[\frac{\hat{f}(\hat{l}, d^{*})}{\partial \hat{f}/\partial \hat{l}}\right]_{\hat{l}=l^{*}} \tag{2}$$

Now, introduce another rare mutant, l_e = l*, d_e = d̂, and f_e = f̂(l*, d̂, i), into the ESS population; the probabilities of the different outcomes become r_{C\C} = (1 − d̂)(1 − d*), r_{D\C} = d̂(1 − d*)(1 − l*), r_{C\D} = d*(1 − d̂)(1 − l*), and r_{D\D} = d̂d* + d̂(1 − d*)l* + d*(1 − d̂)l*. Through a similar analysis we find that the ESS condition for d* is given by:

$$(b - c)(1 - d^{*})^{2} + b\,d^{*}(1 - d^{*})(1 - l^{*}) - c\,d^{*}(1 - d^{*})(1 - l^{*}) - Q = -\left[c(1 - d^{*}l^{*}) - b\,l^{*}(1 - d^{*})\right]\left[\frac{\hat{f}(l^{*}, \hat{d})}{\partial \hat{f}/\partial \hat{d}}\right]_{\hat{d}=d^{*}} \tag{3}$$

(the leading minus sign offsets the negative derivative ∂f̂/∂d̂, since f decreases with d).

Setting Eqs. (2) and (3) equal to each other yields the ESS relationship between l* and d*:

$$b\,d^{*}(1 - d^{*})\left[\frac{\hat{f}(\hat{l}, d^{*})}{\partial \hat{f}/\partial \hat{l}}\right]_{\hat{l}=l^{*}} = -\left[c(1 - d^{*}l^{*}) - b\,l^{*}(1 - d^{*})\right]\left[\frac{\hat{f}(l^{*}, \hat{d})}{\partial \hat{f}/\partial \hat{d}}\right]_{\hat{d}=d^{*}} \tag{4}$$

The solution to Eq. (4) depends on how f is related to l and d. f is a probability, and because it models the effects of reputation on partner choice, it is negatively related to d (because defection is not attractive) and positively related to l (because predictability is attractive). Perhaps the simplest possible relationship is f = l(1 − d). Plugging this into Eq. (4) and then solving for l* yields the following (where r = b/c):

$$l^{*} = \frac{1}{r + d^{*}}. \tag{5}$$
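Eq. (5) can be checked numerically from the per-round payoff. The sketch below uses illustrative parameter values (not taken from the paper): b = 2, c = 1 (so r = 2) and d* = 0.5, for which Eq. (5) gives l* = 0.4; the outside-option payoff Q = 0.2 is then fixed by Eq. (2). Because the discount factor w^{i−1} multiplies every term of Eq. (1) equally at the ESS, maximizing the per-round payoff suffices.

```python
# Numerical check that l* = 1/(r + d*) is a best response, assuming the
# reputation function f = l(1 - d). Parameter values are illustrative.

def round_payoff(l_hat, d_hat, l_star, d_star, b=2.0, c=1.0, Q=0.2):
    """Expected single-round payoff of a rare (l_hat, d_hat) mutant in a
    population at (l_star, d_star)."""
    r_cc = (1 - d_hat) * (1 - d_star)
    r_dc = d_hat * (1 - d_star) * (1 - l_hat)   # mutant defects, undetected
    r_cd = d_star * (1 - d_hat) * (1 - l_star)  # partner defects, undetected
    pi = (b - c) * r_cc + b * r_dc - c * r_cd   # mutual defection pays zero
    p = (l_hat * (1 - d_hat)) * (l_star * (1 - d_star))  # both agree to interact
    return p * pi + (1 - p) * Q

b, c, d_star = 2.0, 1.0, 0.5
l_star = 1 / (b / c + d_star)        # Eq. (5): l* = 0.4

w_ess = round_payoff(l_star, d_star, l_star, d_star)
# No mutant deviating in l or in d does better than the ESS strategy:
assert all(round_payoff(l, d_star, l_star, d_star) <= w_ess + 1e-12
           for l in (0.0, 0.2, 0.6, 1.0))
assert all(round_payoff(l_star, d, l_star, d_star) <= w_ess + 1e-12
           for d in (0.0, 0.25, 0.75, 1.0))
```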

Calculated solutions for Eq. (5) are represented in Fig. 3 for different values of d* and r. A tradeoff between successful deception and reputation for predictability exists when l* is between zero and one. From the graph it is clear that the value of l* tends to decrease with increasing d* and increasing r.

Fig. 3. Values of l* calculated from Eq. (5) for different values of r and d*.

Similar ESS calculations have been made for f = l²(1 − d²) (where f increases with l at an increasing rate and decreases with d at an increasing rate) and f = l^0.5(1 − d^0.5) (where f increases with l at a decreasing rate and decreases with d at a decreasing rate) to test the generality of these conclusions. The analyses reveal that l* appears always to decrease with increasing r. Moreover, l* decreases with increasing d* except when two conditions are both met: (1) f decreases with d at an increasing rate; and (2) the value of d* is relatively low. Under these conditions, the detrimental effects of defection on one's chances of being chosen as a partner are small enough to reverse the normal negative relationship between l* and d*.
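These alternative forms of f can be explored with a generic numeric solver for Eq. (4). The sketch below is a hypothetical helper (finite-difference derivatives plus bisection; not the paper's own code); as a sanity check, it recovers the closed form of Eq. (5) when f = l(1 − d).

```python
# Generic numeric solver for the ESS relation, Eq. (4):
#   b d(1-d) f/f_l = -[c(1 - d l) - b l (1 - d)] f/f_d
# for a given reputation function f(l, d). Hypothetical sketch.

def ess_l_numeric(f, d, b, c, h=1e-6, lo=1e-3, hi=1 - 1e-3):
    def df_dl(l): return (f(l + h, d) - f(l - h, d)) / (2 * h)
    def df_dd(l): return (f(l, d + h) - f(l, d - h)) / (2 * h)

    def g(l):  # the zero of g is the ESS value l* at this d
        lhs = b * d * (1 - d) * f(l, d) / df_dl(l)
        rhs = -(c * (1 - d * l) - b * l * (1 - d)) * f(l, d) / df_dd(l)
        return lhs - rhs

    for _ in range(200):   # bisection; assumes g changes sign on [lo, hi]
        mid = (lo + hi) / 2
        if g(lo) * g(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return (lo + hi) / 2

f = lambda l, d: l * (1 - d)
# Recovers Eq. (5), l* = 1/(r + d), for r = 2 and r = 3 at d = 0.5:
assert abs(ess_l_numeric(f, 0.5, 2.0, 1.0) - 1 / 2.5) < 1e-6
assert abs(ess_l_numeric(f, 0.5, 3.0, 1.0) - 1 / 3.5) < 1e-6
```

The same solver can be pointed at f = l²(1 − d²) or f = l^0.5(1 − d^0.5) to reproduce the robustness checks described in the text.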

4. Discussion

Frequency-dependent selective forces that limit the spread of deception are commonly considered in evolutionary arguments and models about communication systems. For instance, many have argued that the threat of punishment or social exclusion influences the decision of whether or not to produce a signal that attempts to deceive (e.g., Frank, 1988; Møller, 1987). However, since current models only optimize the decision of whether or not to produce a dishonest signal, they do not explain imperfectly deceptive signals. Instead, imperfectly deceptive signals are usually thought to be the result of a lag in the coevolutionary arms race between signalers and receivers (Krebs & Dawkins, 1984). In this race, signalers are under selection to at least occasionally exploit the behavior of receivers
through the use of dishonest signals of intent. Conversely, receivers are under selection to distinguish between honest and dishonest signals of intent for the purpose of accurately predicting the behavior of signalers. In response, deceitful signalers are under counterselection to devise better signals, and so the arms race view has led to the perception that deceitful signalers prefer perfectly deceptive signals. However, signalers can incur costs when receivers detect deceit and these costs can influence signal design. One aspect of receiver detection that can have important consequences is whether the deception is detected prereliance or postreliance. Both forms of detection may be costly to the signaler, but they may exert opposing selective forces on signal design. For instance, when the receiver discovers the deception prior to reliance, the signaler may lose the opportunity to interact with and exploit the receiver as well as acquire a reputation as a deceiver that may detrimentally impact on future interactions. In the absence of constraints on the efficacy of dishonest signals, minimization of the prereliance costs of detection favors eliminating the risk of detection by producing perfectly deceptive signals. Conversely, when the receiver discovers postreliance that a signal was dishonest, the signaler has already exploited the receiver and the possible costs of detection are limited to forms of punishment (such as loss of reputation as a good social partner). The effects on signal design of reducing postreliance costs may depend upon the ease with which receivers can discover that the signal was dishonest after relying on it, which in turn may depend on the purpose of the lie. 
For instance, if the purpose of the lie is to deceive about a fact (e.g., the quality of some object that is part of a proposed trade) or one's future intentions in order to exploit the receiver, then the receiver will often "feel" the detrimental effects of relying on a dishonest signal, and it will often be very difficult for a signaler to prevent the receiver from discovering the deception. If signalers cannot prevent receivers from detecting their deceptions postreliance, the model suggests that signalers may allow receivers the chance of detecting the deceit prereliance so that they do not acquire reputations as effective deceivers that make them disfavored social partners.¹ On the other hand, sometimes the purpose of a dishonest signal is not to exploit the receiver, but to avoid punishment. For instance, people often lie about their past behavior to avoid being punished for committing an exploitive act.² In this situation, the receiver is less likely to discover the deception because relying on the signal is not likely to immediately expose the receiver to further costs. If the receiver cannot "feel" the deception after relying on it, then selection for minimizing the postreliance costs of deception should favor perfectly deceptive signals. For this reason, signals that attempt to deceive about past behavior should generally be more successful than signals that attempt to deceive about future behavior.

¹ The assumption that receivers are able to "feel" the detrimental effects of being exploited may not always be true. Consider a man who suspects that his friend and his wife are attracted to each other. He must go away on a brief trip, so he confronts his friend, who deceitfully promises that he will not sleep with the wife. The man may have difficulty detecting the deceit merely from the fact that the friend did sleep with his wife while he was away. If so, the deceiver should produce fewer prereliance clues to the deceit. However, men may have postreliance adaptations for detecting their mates' infidelities. For instance, a man might, in principle, be able to detect an infidelity by how closely the face of any resultant offspring resembled his own.

² A deceptive signal whose purpose is to make the receiver vulnerable to exploitation is different from a deceptive signal whose purpose is to hide the identity of someone who has already carried out an exploitive act. In the former case, the receiver is more likely to discover the deception postreliance because he is likely to "feel" the costs of exploitation. In the latter case, the purpose of the signal is not to exploit, but to hide the identity of the exploiter.

The greater ease of discovering dishonest signals of intent postreliance is incorporated into the model by assuming that defections (and thus deceptions) are always discovered after reliance. Under these conditions, the model shows that, for a wide range of parameter values, an individual facing a tradeoff between the benefits of successfully lying about intentional state and maintaining a socially desirable reputation may prefer the production of an imperfectly deceptive signal to a perfectly honest signal or a perfectly deceptive signal. This may have implications for understanding imperfectly deceptive signals in species (such as Homo sapiens) in which individuals invest in reputations for being predictable (Frank, 1988; Gurven, Allen-Arave, Hill, & Hurtado, 2000).

4.1. Human facial expressions

Human facial expressions are probably the best example of a signaling system in which imperfect deception occurs. For instance, sometimes people are able to correctly infer that others are lying from their facial expressions (Ekman et al., 1999; Ekman & O'Sullivan, 1991; Frank & Ekman, 1997), suggesting that people may produce facial clues of deceit. People could produce such clues by failing to produce a facial expression where one is expected, by producing an expression where none is expected, by producing a mixed expression where a pure expression is expected, by producing an expression of unexpected intensity, or a combination of these factors.
The evolutionary study of facial expressions dates back to Darwin (1872/1998), making it one of the oldest domains of research falling under the banner of evolutionary psychology. While it has been an active area of research since the late 1960s, there are many basic questions about human facial expressions that remain unanswered. Perhaps the most important such question is the sort of information that receivers glean from the expressions of others. The emotions view (EV) proposes that facial expressions evolved to display the type and intensity of felt emotion (Darwin, 1872/1998; Ekman, 1992, 1998). The behavioral ecology view (BEV) proposes that facial expressions serve two functions (Fridlund, 1994). First, they sometimes prepare the face for some imminent activity or event (e.g., wincing in anticipation of a blow to the face or baring the teeth in preparation for biting). Second, facial expressions often function as signals that are designed to manipulate or influence the behavior of receivers by conveying information about the signaler’s intentions, needs, or other internal states. For their part, receivers must try to make accurate inferences about the actual internal states of signalers to avoid making maladaptive responses that leave them vulnerable to exploitation by deceptive signalers. The difference between the two views is sometimes subtle. For instance, the BEV states that facial expressions often convey information about the signaler’s likely future
actions (Fridlund, 1994), whereas the EV proposes that they convey information about emotion. However, emotions do serve the function of motivating behavior (Buss, 1999; Thornhill & Thornhill, 1989; Tooby & Cosmides, 1990; Trivers, 1971), and so displays of emotion could provide receivers with information about the signaler’s future actions. One crucial prediction of the EV is that particular facial expressions are specific to particular emotional states, at least among honest signalers. Conversely, the signaling aspect of the BEV predicts that particular facial expressions are not specific to particular emotions, but are specific to particular intentions or other internal states (Fridlund, 1994). For example, people may cry when they are happy or when they are sad, suggesting that the facial behavior involved in crying is indicative of a willingness to receive attention or succor. Similarly, Fridlund (1994, p. 136) argues that the facial display assumed to express anger by the EV is actually not unique to the state of anger: [T]he display comprised of a knit brow, pursed lips or retracted lip corners with bared teeth, and fixed gaze (i.e., the ‘‘about to attack’’ face), is held in emotions theory to express anger. For the behavioral ecologist, making an ‘‘about to strike’’ face functions to repel an interactant, and acting this way can occur for many reasons and amid circumstances that could connote many emotions. Indeed, we make this face not only when we are angry, but when we are helpless, frightened (‘‘defensive’’), frustrated, exasperated, bored, or engaging in a fit of bravado.

How people detect deceit from the facial expressions of others plays a central role in the debate between advocates of these two views. The EV explains imperfectly deceptive facial expressions as the telltale clues of emotions that are difficult to mask (Darwin, 1872/1998; Ekman, 1998). The EV argues that it is often advantageous to conceal one's emotional arousal from others, and that the best way to do this is to conceal the facial expressions of one's true emotional state (Ekman, 1992). People can conceal their facial expressions with the use of physical barriers (e.g., by placing the hands in front of the face), by attempting to put on a false facial expression, or by controlling one's facial expressions without falsifying. However, the EV proposes that the capacity for deceptively controlling facial expressions is overlaid upon a nervous system that is designed to produce more extreme facial expressions as the intensity of affective input increases. As facial expressions evolved as signals, it presumably was more important to convey information about the intensity of felt emotion by producing more intense facial expressions than to deceive. Thus, the wiring of the nervous system that links the intensity of facial expressions to felt emotion acts as a constraint on the ability to produce deceptive facial expressions, such that it is more difficult to prevent leaking facial clues to one's true emotional state as the intensity of felt emotion increases (Ekman, 1992). Conversely, the BEV proposes that human facial expressions evolved as signals to elicit responses from receivers that favored the signaler. Often, this is accomplished through the use of dishonest signals. Thus, the BEV assumes that the nervous system is not as constrained in its ability to deceive as the EV suggests. Yet, signalers face tradeoffs when they attempt to deceive, and understanding the nature of these tradeoffs is crucial to
understanding why receivers are sometimes able to correctly infer deceit from the facial expressions of others. These tradeoffs do operate as constraints on the nervous system, and so the BEV would seem similar to the EV in this respect. However, the EV proposes that the nervous system is wired in such a way that the signaler must leak clues of his true emotional state when he experiences intense emotions. Conversely, the BEV proposes a much more flexible wiring of the nervous system that pays attention to the costs and benefits of deception across different situations, including those involving intense emotions. Several BEV hypotheses are proposed later in the paper. Another important unresolved issue is whether the facial expressions from which receivers infer deceit are signals or cues or both. Because this is an unresolved issue, I will use the more general phrases ‘‘facial clues of deceit’’ or ‘‘imperfectly deceptive facial expressions’’ to refer to the facial expressions from which receivers infer deceit. Resolving this issue is important because it can help rule out certain hypotheses for why people produce facial clues of deceit. The literature on deceit indicates that many of the clues to deceit are not unique to deception (Ekman, 1992; Ford, 1996; Fridlund, 1994). For instance, people may become physiologically aroused when they attempt to deceive (e.g., dilation of pupils), but it is clear that people become physiologically aroused in nondeceptive situations as well. However, this raises the question of how people detect deception from cues that are not specific to deception. If a particular facial clue of deceit is not specific to the general state of dishonesty, but rather a particular internal state, then the facial expression would be produced when the actor is honestly conveying that internal state as well as when he is attempting to conceal it. 
In that event, the facial clue has the potential to reveal the precise internal state that the actor is attempting to conceal. For instance, the present model suggests that the components of a signal designed to convey purely cooperative intent are distinct from the components designed to convey purely defecting intent. Even if the dynamics of the situation favor pursuing the defecting option, the signaler may prefer neither the perfectly honest signal (which conveys the intent to defect) nor the perfectly deceptive signal (which conveys the intent to cooperate), due to the tradeoff between the benefits of successful deception and the long-term costs to reputation. Consequently, the signaler may produce a mixed (or composite) signal that contains elements of both the cooperative and defecting signals. The relative degree to which the mixed signal is biased towards the signal of defection influences the probability that the receiver will detect the intention to defect.

Finally, the more general literature on deceit indicates that people vary nonrandomly in their success at deceiving others about their intentions (Ford, 1996; Frank, 1988; Mealey, 1995; Wilson et al., 1996). For instance, sociopaths engage in uncooperative behavior more frequently than nonsociopaths and are more effective at deceiving others about their uncooperative intentions (Mealey, 1995). This is consistent with the present model's prediction that there should be an inverse relationship between the presence of clues to deceit and the frequency of engaging in uncooperative behavior [see Eq. (5)]. Several other game theoretical models have explored the evolution of sociopathic behavior (Dugatkin, 1992; Dugatkin & Wilson, 1991; Harpending & Sobus, 1987), yet none have allowed the deceptive efficacy of dishonest signals to be optimized in the face of tradeoffs. The results and assumptions of the present model suggest that the suite of traits that
differentiate sociopaths from the rest of the human population might best be understood in terms of the tradeoff between the benefits of deception and its detrimental effects on reputation. The dishonest signals of sociopaths may be more efficacious because they place less value on the potential damage to their reputations for being too unpredictable. Indeed, sociopaths do not seem to care about the social expectations of others (Mealey, 1995). Moreover, sociopaths may migrate more to reduce the detrimental impact of the bad reputations that they acquire (Dugatkin, 1992). The present model favors imperfectly deceptive signals because it assumes a closed system in which defectors cannot escape the detrimental effects of their reputations.

One might suppose that sociopaths would also be more effective than nonsociopaths in deceiving others about their intentions with their facial expressions. The EV could potentially account for such a pattern of variation by supposing that sociopaths have learned to control their facial expressions better than nonsociopathic people because they engage in defecting behavior more frequently. However, any substantial variation would suggest that people's nervous systems are not as constrained in their ability to deceive as the EV presumes.

4.2. Hypotheses for imperfectly deceptive facial expressions

There are at least six distinct hypotheses for why people are sometimes able to correctly infer deceit from the facial expressions of others. These hypotheses and their most important predictions are presented in Table 1. Many of these hypotheses are compatible with each other, and it is possible that several of them may be required to fully explain human facial expressions.

4.2.1. The evolutionary lag hypothesis (ELH)
The first hypothesis proposes that dishonest signalers are somewhat behind in the communications coevolutionary arms race.
In other words, receivers may be under strong selection to distinguish between honest and dishonest signalers, and dishonest signalers may not yet have caught up to honest signalers. Under the ELH, the differences between honest and dishonest facial expressions are cues, not signals. The hypothesis is compatible with some variation between individuals in the ability to deceive, and it suggests that selection may act on this variation to increase the deceptive efficacy of dishonest signalers. The ELH does not make any clear predictions; it is best supported after a systematic failure of all alternative hypotheses.

4.2.2. The learned skill hypothesis (LSH)
It is also possible that people produce imperfectly deceptive facial expressions (that either mask felt emotion or conceal true intentions) because producing a successfully deceptive facial expression requires skills that must be learned. Under this hypothesis, the differences between honest and dishonest facial expressions are cues. It explains variation in the ability to deceive as the result of variation in experience and skill level. The idea that learning may play a role in the production of imperfectly deceptive facial expressions is consistent with many other hypotheses and, as described above, is an explicit part of the emotions hypothesis. However, if the LSH is a complete explanation for imperfectly deceptive signals, then within-subject variation should depend only on skill level. For instance, if facial clues to deceit are modulated by the costs and benefits of the situation, this would suggest that tradeoff hypotheses are operating, even if learning is also involved.

Table 1
Predictions for each of the six hypotheses for imperfectly deceptive facial expressions

ELH
1. Facial clues of deceit are cues.
2. This hypothesis is best supported only when all others have been systematically rejected.

LSH
1. Facial clues of deceit are cues.
2. Complements many other hypotheses. If the LSH is a complete explanation, within-subject variation in the ability to deceive should not covary with situational costs and benefits and should depend only on experience.

EH
1. Facial clues of deceit are cues.
2. The presence of facial cues to deceit is positively correlated with the intensity of felt emotion.

SSH
1. Facial clues of deceit are cues.
2. If signalers know that receivers' perception of the incentive to deceive increases while the actual incentive is held constant, the presence of facial clues of deceit should decrease.

PIH
1. Facial clues of deceit are cues, and deceit is probably inferred by cross-referencing the cue with situational context.
2. If signalers know that receivers' perception of the incentive to deceive increases while the actual incentive is held constant, the presence of facial clues of deceit should increase.

ICH
1. Facial clues of deceit are signals.
2. If signalers know that receivers' perception of the incentive to deceive is held constant, the presence of facial cues of deceit should negatively covary with the actual incentive.

4.2.3. The emotions hypothesis (EH)
As noted above, the EH explains facial clues to deceit as the telltale signs of emotions that are difficult to conceal (Ekman, 1992, 1998). The differences between honest and dishonest facial expressions are then cues. The EH can only account for variation in the ability to deceive by treading a fine line: the nervous system is constrained in its ability to deceive, but it is not so constrained that people cannot learn to control their facial expressions with effort (Ekman, 1992). Since the ability to control facial expressions of emotion should require some restructuring of the nervous system, people who learn to exhibit greater control over their facial expressions presumably do so at the expense of accurately conveying the intensity of felt emotion.

The EH proposes that the facial cues of deceit are not unique to deception because the clues that are leaked are also present with honest displays of emotion. Observers then detect
deceit from the failure to completely control facial expressions. For instance, advocates of the EH argue that an honest smile of enjoyment involves contraction of the orbicularis oculi muscle that surrounds the eye, and that this muscle does not contract in a false smile of enjoyment (Ekman, 1992). However, Fridlund (1994) makes a strong case that contraction of this muscle really functions to protect the eye (e.g., against the buildup of ocular pressure when laughing). In any event, EH advocates propose that cues of deceit take the form of brief, involuntary facial expressions that are difficult to detect. Consistent with this view, the ability to correctly infer deceit is correlated with the ability to detect facial expressions in pictures that are quickly flashed on a screen (Ekman & O'Sullivan, 1991; Frank & Ekman, 1997).

One prediction of the EH is that more facial clues of deceit will be produced as the intensity of felt emotion increases. Advocates of the EH take the reasonable view that emotional arousal will increase as the incentive to deceive increases (Ekman, 1992; Frank & Ekman, 1997). A series of experiments by Frank and Ekman (1997) suggests that people are better able to detect deceit from facial expressions when actors have a greater incentive to deceive. However, as will be explained, the results are also consistent with many of the remaining hypotheses.

4.2.4. The sufficient signal hypothesis (SSH)
The remaining hypotheses invoke tradeoffs, and thus they are all consistent with the BEV. The first BEV hypothesis makes use of the fact (noted above) that the cognitive effort needed to correctly discern the difference between honest and deceptive signals may be costly to receivers. Moreover, whether the welfare of receivers depends on correctly detecting deceit may influence the proportion of cognitive resources they devote to the problem. Thus, receivers may vary in their motivation and efficacy in detecting deceit (see also Ford, 1996).
If producing a more perfectly deceptive signal is physiologically or energetically expensive to a signaler, the signaler might modulate the signal so that it is just similar enough to the signals produced by honest signalers to meet the level of scrutiny that receivers are likely to subject the signal to. The SSH suggests that receiver scrutiny drives the degree to which dishonest signals differ from honest signals. Since receiver scrutiny is likely to increase as the signaler's apparent incentive to deceive increases (Andrews, 2001; Cosmides & Tooby, 1992; Fein, 1996; Fein et al., 1990; Hilton et al., 1993), the SSH makes two predictions. If signalers know that receivers' perception of the incentive to deceive is held constant, they will know that receiver scrutiny should not increase, and the presence of facial clues of deceit should not covary with the signaler's actual incentive to deceive. However, if signalers know that receivers' perception of the incentive to deceive increases while their actual incentive to deceive is held constant, the presence of facial clues of deceit should decrease.

4.2.5. The physiological interference hypothesis (PIH)
Like polygraphs, people may use cues of physiological arousal to detect lies (Fridlund, 1994). However, physiological arousal is not a fail-safe way to detect deceit because people may become physiologically aroused even when they are not deceiving (Ekman, 1992; Ford, 1996; Fridlund, 1994). Physiological arousal merely indicates that the individual is preparing
for some activity or event. One reason why honest signalers might become physiologically aroused is that they must prepare for other activities while they are producing the signal (e.g., the mitigation of humiliation after making a risky disclosure, the need to appease the receiver, etc.). Honest signalers might also become physiologically aroused when they could be erroneously perceived as dishonest (Ekman, 1992; Ford, 1996; Fridlund, 1994). Thus, even the honest signaler may need to physiologically prepare for an escape attempt or possible aggression. This is particularly likely when it is clear to observers that the signaler has an apparent incentive to deceive, because signalers are much more likely to be treated with suspicion under those conditions (Andrews, 2001; Fein, 1996; Fein et al., 1990; Hilton et al., 1993). Physiological arousal will be a reliable cue to deception only when the situation is such that greater arousal is likely to be evoked if the signaler is deceiving. If people use cues of physiological arousal when attempting to make accurate inferences about whether others are attempting to deceive, they may use them in conjunction with information about the situational context.

Cues of physiological arousal need not involve facial muscles (e.g., pupil dilation, piloerection, fidgetiness, etc.). However, if they do, they may interfere with facial expressions. The face has a limited capacity for muscular activity, and signalers may face a tradeoff between putting on effectively deceptive facial expressions and physiologically preparing for alternative actions. Even if people do not detect cues of physiological arousal directly, they may be able to detect how the actor's facial expressions differ from expectations. Physiological interference with facial expressions may then be another possible explanation for facial clues of deceit.
The PIH predicts that people will produce more facial clues to deceit when they have to prepare for alternative actions at the time they produce their deceptive facial expressions. Since receiver scrutiny is likely to increase when the signaler has an apparent incentive to deceive, signalers may have a greater need to physiologically prepare for the possibility of getting caught when observers perceive them to have a high incentive to deceive. This may mean that signalers are more likely to get caught by receivers when they appear to have a high incentive to deceive, and so the results of Frank and Ekman's (1997) study are also consistent with the PIH.

The greater likelihood of having to physiologically prepare for other actions as receiver scrutiny increases yields a prediction identical to that of the SSH: if signalers know that receivers' perception of the incentive to deceive is kept constant, then neither receiver scrutiny nor the presence of facial clues to deceit should covary with the signaler's actual incentive to deceive. In contrast, if signalers know that receivers' perception of the incentive to deceive increases while their actual incentive to deceive is held constant, then receiver scrutiny and the presence of facial clues to deceit should both increase. This last prediction is different from that of the SSH.

4.2.6. The internal conflict hypothesis (ICH)
Fridlund (1994) suggests that people produce facial clues of deception because they are ambivalent about successfully deceiving the receiver, not because they are experiencing emotions that are difficult to mask. Consistent with this reasoning, when the effects of internal
conflict are controlled, one study suggests that people do not produce facial clues of deceit. Bavelas, Black, Chovil, and Mullett (1990) showed that subjects who had to lie in order to keep secret the surprise birthday party of a hypothetical friend (in reality, one of the experimenters) were completely successful in their lies and showed no indication of producing clues of deceit. In this situation, subjects should feel little or no ambivalence about deceiving because the deception is for the benefit of the hypothetical friend. However, because subjects had to pretend that the experimenter was a close friend, and the surprise party was not real, they may not have experienced emotional involvement in the situation, and so the results may also be consistent with the EH. In any event, the study does not bear on signals of intent whose function is to exploit receivers.

In the model, the incentive to deceive is the value of concealing that one intends to defect against an otherwise cooperative partner (i.e., to get b instead of zero). Signalers might feel ambivalent about successfully deceiving receivers because they must balance the short-term incentive to deceive against the long-term costs to reputation from postreliance detection. A signaler facing such a tradeoff will often prefer neither a perfectly deceptive nor a perfectly honest signal, but should produce a more effectively deceptive facial expression as the incentive to deceive increases [i.e., l* ∝ 1/r or 1/b; see Eq. (5) and Fig. 3]. This is the only hypothesis suggesting that the difference between honest and dishonest signals is itself a signal.

However, the effects of receiver scrutiny could cloud the predicted negative relationship between the incentive to deceive and detection by receivers. As the apparent incentive to deceive increases, receivers may spend more effort to detect the clues of deception that are available.
Even if the presence of such clues negatively covaries with deceiver incentive, the overall relationship between incentive and receiver detection could be positive, negative, zero, or even curvilinear! Thus, fewer clues of deceit may have been produced in the high-incentive condition of the Frank and Ekman (1997) study, but the greater probability of detection may have been driven by enhanced receiver scrutiny. Indeed, a subsequent study found that more motivated receivers were better at detecting deceit when judging high-stake lies (Ekman et al., 1999).

Moreover, differential receiver motivation to detect deceit could interact with the proposed tradeoff facing the signaler in ways that the present model does not address. For instance, if the signaler is not likely to get any benefit from deceiving, or if no one is likely to be harmed by the deceit, then receivers may not care whether the signaler deceives, and the signaler may not incur any harm to his reputation for deceiving. The signaler may then produce fewer clues of deception under these conditions because he experiences little conflict over it. The present model deals with both of these issues by assuming that receiver scrutiny is constant.

Thus, the ICH predicts that if signalers know that receivers' perception of the incentive to deceive is held constant, the presence of facial cues to deception and the probability of detection will negatively covary with the actual incentive to deceive. In contrast, both the SSH and the PIH predict no relationship between the presence of facial clues to deception (and the probability of detection) and the signaler's actual incentive to deceive under these conditions.
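The qualitative logic of this tradeoff can be sketched numerically. The toy payoff below is not Eq. (5) of the present model: the functional forms, the parameter names (l for leakage, b, r, d), and the assumption that reputation damage shrinks as the signal leaks more are all hypothetical choices made for illustration. Under those assumptions, the best signal is an interior mix (neither perfectly honest nor perfectly deceptive), and the optimal leakage falls as the incentive to deceive rises:

```python
# Toy sketch only; NOT Eq. (5) of the present model. All functional
# forms below are hypothetical choices made for illustration.
#   l : leakage of the defecting intent (0 = perfectly deceptive signal,
#       1 = perfectly honest signal of defection)
#   b : short-term benefit of a successful deception
#   r : long-term reputation cost when a deception is detected
#   d : probability that a successful deception is detected after reliance
# Assumed payoff: deception succeeds with probability (1 - l); reputation
# damage is scaled by how unpredictable the defector turns out to have
# been, modeled here as r * (1 - l).

def payoff(l, b, r, d):
    return (1 - l) * b - d * r * (1 - l) ** 2

def optimal_leakage(b, r, d, steps=10001):
    """Grid-search the leakage level that maximizes the toy payoff."""
    grid = [i / (steps - 1) for i in range(steps)]
    return max(grid, key=lambda l: payoff(l, b, r, d))

# Interior optimum: the mixed signal beats both pure strategies
# (analytically, l* = 1 - b / (2 * d * r) when that value lies in [0, 1]).
l_star = optimal_leakage(b=1.0, r=2.0, d=0.5)              # 0.5
# Leakage falls as the incentive to deceive rises, the direction
# the ICH predicts for signalers facing this tradeoff.
l_low_incentive = optimal_leakage(b=0.5, r=2.0, d=0.5)     # 0.75
l_high_incentive = optimal_leakage(b=1.5, r=2.0, d=0.5)    # 0.25
```

Raising the benefit b moves the optimal leakage down, while raising the reputation cost r or the postreliance detection probability d moves it up; the interior optimum itself is the point of contact with the model's claim that imperfectly deceptive signals can beat both perfectly honest and perfectly deceptive ones.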
5. Concluding remarks

The emotions hypothesis has been the reigning evolutionary hypothesis for imperfectly deceptive facial expressions since Darwin (1872/1998). However, many other hypotheses could potentially account for them as well. Existing studies have yet to provide the crucial tests for distinguishing among the possibilities. For instance, many studies do not pay much attention to the type of lie (e.g., whether it is about past behavior, future intentions, or a fact), the purpose of the lie (e.g., to exploit, to hide the identity of the exploiter, or to keep a secret for someone's benefit), or the relative importance of the costs of prereliance and postreliance detection. These may all be important factors to consider in making predictions and designing experiments.

Moreover, there are several other problems with existing studies on detecting deceit from facial expressions that are worth mentioning briefly. Subjects are rarely (if ever) given the option of attempting to deceive; instead, they are often instructed to deceive (e.g., DePaulo, Lanier, & Davis, 1983; Ekman & O'Sullivan, 1991; Frank & Ekman, 1997). Subjects who are instructed to deceive may be more likely to be ambivalent over deception, and this could influence their facial expressions (Fridlund, 1994). On the other hand, subjects who are instructed to deceive may feel less concerned about deceiving because they perceive it to be part of the experimental condition. Finally, subjects are often asked to deceive about things that are not obviously relevant to the human ancestral environments in which deceit and deceit detection evolved (e.g., lying to an interviewer that a gory movie they had just viewed was very pleasant; Ekman & O'Sullivan, 1991; but see Frank & Ekman, 1997 [crime scenario]).
Often, the incentive to deceive also does not seem very compelling from an evolutionary perspective (e.g., subjects are told that their ability to deceive on a particular topic is related to their career success; DePaulo et al., 1983; Ekman & O’Sullivan, 1991; but see Frank & Ekman, 1997 [monetary incentive]).

Acknowledgments

Thanks to Eric Charnov, Astrid Kodric-Brown, Steve Gangestad, Randy Thornhill, William LaRue, and Pam Keil for comments and conversations that helped influence earlier drafts of the manuscript. Eric Charnov, Carla Wofsy, and Jack Ellison graciously provided mathematical advice. Thanks as well to Martin Daly, Margo Wilson, Alan Fridlund, and two anonymous reviewers whose comments greatly improved the paper.

References

Andrews, P. W. (2001). The psychology of social chess and the evolution of attribution mechanisms: explaining the fundamental attribution error. Evolution and Human Behavior, 22, 11–29.
Bavelas, J. B., Black, A., Chovil, N., & Mullett, J. (1990). Truth, lies, and equivocations: the effects of conflicting goals on discourse. Journal of Language and Social Psychology, 9, 135–161.
Buss, D. M. (1999). The evolution of happiness. American Psychologist, 55, 15–23.
Cosmides, L., & Tooby, J. (1992). Cognitive adaptations for social exchange. In: J. H. Barkow, L. Cosmides, & J. Tooby (Eds.), The adapted mind: evolutionary psychology and the generation of culture (pp. 163–228). Oxford: Oxford University Press.
Darwin, C. (1872/1998). The expression of the emotions in man and animals. Oxford: Oxford University Press.
DePaulo, B. M., Lanier, K., & Davis, T. (1983). Detecting the deceit of the motivated liar. Journal of Personality and Social Psychology, 45, 1096–1103.
Dugatkin, L. A. (1992). The evolution of the "con artist". Ethology and Sociobiology, 13, 3–18.
Dugatkin, L. A., & Wilson, D. S. (1991). Rover: a strategy for exploiting cooperators in a patchy environment. American Naturalist, 138, 687–701.
Ekman, P. (1992). Telling lies: clues to deceit in the marketplace, politics, and marriage. New York: W.W. Norton and Company.
Ekman, P. (1998). Afterword. In: C. Darwin, The expression of the emotions in man and animals (pp. 363–393). Oxford: Oxford University Press.
Ekman, P., & O'Sullivan, M. (1991). Who can catch a liar? American Psychologist, 46, 913–920.
Ekman, P., O'Sullivan, M., & Frank, M. G. (1999). A few can catch a liar. Psychological Science, 10, 263–266.
Fein, S. (1996). Effects of suspicion on attributional thinking and the correspondence bias. Journal of Personality and Social Psychology, 70, 1164–1184.
Fein, S., Hilton, J. L., & Miller, D. T. (1990). Suspicion of ulterior motivation and the correspondence bias. Journal of Personality and Social Psychology, 58, 753–764.
Ford, C. V. (1996). Lies! Lies!! Lies!!! The psychology of deceit. Washington, DC: American Psychiatric Press.
Frank, M. G., & Ekman, P. (1997). The ability to detect deceit generalizes across different types of high-stake lies. Journal of Personality and Social Psychology, 72, 1429–1439.
Frank, R. (1988).
Passions within reason: the strategic role of the emotions. New York: Norton.
Fridlund, A. J. (1994). Human facial expressions: an evolutionary view. San Diego, CA: Academic Press.
Getty, T. (1998). Handicap signaling: when fecundity and viability do not add up. Animal Behaviour, 56, 127–130.
Gonzalez, G., Sorci, G., Møller, A. P., Ninni, P., Haussy, C., & DeLope, F. (1999). Immunocompetence and condition-dependent sexual advertisement in male house sparrows (Passer domesticus). Journal of Animal Ecology, 68, 1225–1234.
Grafen, A. (1990). Biological signals as handicaps. Journal of Theoretical Biology, 144, 517–546.
Gurven, M., Allen-Arave, W., Hill, M., & Hurtado, M. (2000). It's a wonderful life: signaling generosity among the Ache of Paraguay. Evolution and Human Behavior, 21, 263–282.
Harpending, H., & Sobus, J. (1987). Sociopathy as an adaptation. Ethology and Sociobiology, 8, 63s–72s.
Hilton, J. L., Fein, S., & Miller, D. T. (1993). Suspicion and dispositional inference. Personality and Social Psychology Bulletin, 19, 501–512.
Humphrey, N. K. (1976). The social function of intellect. In: P. P. G. Bateson, & R. A. Hinde (Eds.), Growing points in ethology (pp. 303–317). Cambridge: Cambridge University Press.
Krebs, J. R., & Dawkins, R. (1984). Animal signals: mind-reading and manipulation. In: J. R. Krebs, & N. B. Davies (Eds.), Behavioral ecology: an evolutionary approach (2nd ed., pp. 380–402). Oxford: Blackwell.
Maynard Smith, J. (1982). Evolution and the theory of games. Cambridge: Cambridge University Press.
Mealey, L. (1995). The sociobiology of sociopathy: an integrated evolutionary model. Behavioral and Brain Sciences, 18, 523–541.
Møller, A. P. (1987). Social control of deception among status signalling house sparrows Passer domesticus. Behavioral Ecology and Sociobiology, 20, 307–311.
Semple, S., & McComb, K. (1996). Behavioural deception. TREE, 11, 434–437.
Thornhill, R., & Thornhill, N. W. (1989).
The evolution of psychological pain. In: R. Bell, & N. Bell (Eds.), Sociobiology and the social sciences (pp. 73–103). Lubbock, TX: Texas Tech University.
Tooby, J., & Cosmides, L. (1990). The past explains the present: emotional adaptations and the structure of ancestral environments. Ethology and Sociobiology, 11, 375–424.
Trivers, R. L. (1971). The evolution of reciprocal altruism. Quarterly Review of Biology, 46, 35–57.
van Rhijn, J. G., & Vodegel, R. (1980). Being honest about one's intentions: an evolutionary stable strategy for animal conflicts. Journal of Theoretical Biology, 85, 623–641.
Wilson, D. S., Near, D., & Miller, R. R. (1996). Machiavellianism: a synthesis of the evolutionary and psychological literatures. Psychological Bulletin, 119, 285–299.
Zahavi, A. (1975). Mate selection – a selection for a handicap. Journal of Theoretical Biology, 53, 205–214.
Zahavi, A. (1977). The cost of honesty (further remarks on the handicap principle). Journal of Theoretical Biology, 67, 603–605.