The mechanics of trust: A framework for research and design


Int. J. Human-Computer Studies 62 (2005) 381–422 www.elsevier.com/locate/ijhcs

Jens Riegelsberger, M. Angela Sasse, John D. McCarthy

Department of Computer Science, University College London, Gower Street, London WC1E 6BT, UK

Received 12 August 2004; received in revised form 18 December 2004; accepted 5 January 2005

Communicated by S. Wiedenbeck

Abstract

With an increasing number of technologies supporting transactions over distance and replacing traditional forms of interaction, designing for trust in mediated interactions has become a key concern for researchers in human computer interaction (HCI). While much of this research focuses on increasing users' trust, we present a framework that shifts the perspective towards factors that support trustworthy behavior. In a second step, we analyze how the presence of these factors can be signalled. We argue that it is essential to take a systemic perspective for enabling well-placed trust and trustworthy behavior in the long term. For our analysis we draw on relevant research from sociology, economics, and psychology, as well as HCI. We identify contextual properties (motivation based on temporal, social, and institutional embeddedness) and the actor's intrinsic properties (ability, and motivation based on internalized norms and benevolence) that form the basis of trustworthy behavior. Our analysis provides a frame of reference for the design of studies on trust in technology-mediated interactions, as well as a guide for identifying trust requirements in design processes. We demonstrate the application of the framework in three scenarios: call centre interactions, B2C e-commerce, and voice-enabled on-line gaming.

© 2005 Elsevier Ltd. All rights reserved.

Keywords: Trust; Social capital; Dis-embedding; Interpersonal cues; Human computer interaction; Computer mediated communication; Computer supported collaborative work; Decision-making; Game theory; E-commerce

Corresponding author. Tel.: +44 207 679 3643; fax: +44 207 387 1397.

E-mail addresses: [email protected] (J. Riegelsberger), [email protected] (M.A. Sasse), [email protected] (J.D. McCarthy).

1071-5819/$ - see front matter © 2005 Elsevier Ltd. All rights reserved. doi:10.1016/j.ijhcs.2005.01.001


1. Introduction

In the past, interactions between individuals who never met face-to-face used to be rare. Today, an ever-increasing number of first-time encounters are mediated by technology: people find business partners in on-line discussion fora and dating partners on Yahoo! Personals. In many such encounters, actors do not expect to ever meet 'in real life': people buy and sell goods from each other on eBay or spend hours playing against each other on Xbox-live without ever communicating face-to-face. These interactions involve different types and levels of risk, and they are only possible if users trust each other, and the systems they use to meet, communicate and transact—as well as the organizations that provide them. Yet, in many recent applications, this essential quality has proved difficult to attain. The widely reported 'lack of trust' in e-commerce (Egger, 2001; Consumer Web Watch, 2002; Grabner-Kraeuter and Kaluscha, 2003) demonstrates that insufficient trust can lead to users "staying away" from a technology altogether.

There is also a less topical—but more far-reaching—argument for making trust a core concern of systems design. Any technical system that is brought into an organization can only work efficiently as part of the larger socio-technical system—i.e. the organization and its human actors (Checkland, 1999). Organizations are more productive if they have social capital1 (Putnam, 2000). Some authors claim that reported failures of systems to yield the expected productivity gains in organizations (Landauer, 1996) partially stem from a reduction in opportunities to build social capital (Resnick, 2002). Trust can be formed as a by-product of informal exchanges, but if new technologies make many such exchanges obsolete through automation, trust might not be available when it is needed. Many studies show the economic benefits of high-trust interactions: trust enables exchanges that could otherwise not take place, reduces the need for costly control structures, and makes social systems more adaptable (Uslaner, 2002).

We find similar considerations in the fields of sociology and public policy: the drop in indicators of social capital seen in modern societies in recent years has been attributed—among other factors—to the transformations of social interactions brought about by advances in communication technologies (Putnam, 2000). Interactions that used to be based on long-established personal relationships and face-to-face interaction are now conducted over distance or with automated systems—a process known as dis-embedding (Giddens, 1990). According to this view, by conducting more interactions over distance or with computers rather than with humans, we deprive ourselves of opportunities for trust building.

If we are to realize the potential of new technologies for enabling new forms of interactions without these undesirable consequences, trust and the conditions that affect it must become a core concern of systems development. Virtual organizations,

1 "…social capital inheres in the structure of relations between actors and among actors." (Coleman, 1988, p. S98) Social capital becomes manifest in obligations and expectations, information channels, and social norms. Trust is an important factor of social capital (Coleman, 1988; Fukuyama, 1995; Glaeser et al., 2000).


on-line gaming or dating, e-government services, and ambient services are only possible if users can trust these technologies, and the people they interact with through these technologies. For such technologies the role of systems designers and researchers is thus not solely one of increasing the functionality and usability of the systems that are used to transact or communicate, but to design them in such a way that they support trustworthy action and—based on that—well-placed trust. Designers must be aware of their role as social engineers when creating on-line markets, meeting places, and environments. The design of these systems will shape how people behave—and it will impact the level of trust and trustworthy behavior.

User trust in e-commerce is a relatively well-researched area within the wider human computer interaction (HCI) trust debate. A large part of this work is dedicated to establishing guidelines for increasing the perceived trustworthiness of technology or that of the actors it represents (e.g. Nielsen, 1999; Sapient, 1999; Egger, 2001). Many of the current e-commerce trust design guidelines are based on surveys and interviews with users, capturing the interface elements they currently interpret as signifiers of trustworthiness. This approach offers important practical guidance for designers who want to improve the interface of a specific e-commerce vendor. However, if guidelines for trustworthy design are followed by untrustworthy actors to increase perceived trustworthiness, the guidelines may lose their value. Exemplary for this risk are 'phishing' sites2 that are often constructed in conformance with established guidelines for trustworthy e-commerce interface design. If—through interactions with such sites or media reports—users experience that they cannot rely on their trust perceptions, trust in the technologies and application domains may be lost, or the result may be a system burdened with costly regulation and control structures.

Hence, we argue that the scope of e-commerce trust research needs to be widened towards researching users' ability to differentiate trustworthy from less trustworthy vendors (Fogg, 2003a). Users' perceptions of trustworthiness may be inaccurate; they may, for instance, transfer interpretations from other contexts without considering their reliability. Studies investigating user trust should thus incorporate vendors' actual trustworthiness as an independent variable. None of the recent e-commerce trust studies reviewed by Grabner-Kraeuter and Kaluscha (2003) did so, however. An additional problem of design guidelines that are based on self-reports is that they are limited to current signifiers of trustworthiness in a specific domain. These may not be stable over time, and they are not necessarily transferable to other domains or technologies.

Several researchers have recognized the need for models of trust and credibility in technology-mediated interactions that are independent from specific technologies and domains (Table 1). Rather than identifying specific interface elements that are perceived as signals for trustworthiness, these models deconstruct perceived trustworthiness into its subcomponents, such as ease of use, reputation, honesty, fairness, etc. Hence they provide guidance for researchers and practitioners that is applicable across a wider range of technologies and contexts. Such models mostly treat trustworthiness as an attribute

2 See http://www.antiphishing.org/


Table 1

1. Tan and Thoen (2000) presented a generic model of trust in e-commerce.
2. McKnight and Chervany (2000, 2001) developed a domain-free model of trust in technically mediated interactions.
3. Corritore et al. (2003) developed a high-level model of trust perceptions of informational and transactional websites.
4. Fogg (2003a) introduced a model of computer credibility and applied it to websites.

of the trusted party and focus on the perception of trustworthiness on the side of the trusting party. Hence, these models create a focus on well-placed trust.

The framework presented in this paper aims to complement these models of trust by taking a novel approach that focuses on the perspective of the trusted actor. Rather than treating trustworthiness as a relatively stable attribute of the trusted actor, it asks for the factors that lead the trusted actor to act in a trustworthy manner in a specific situation. Hence we are not only advocating designing for well-placed trust, but also for trustworthy behavior. In taking this approach, we aim to identify the 'mechanics of trust' that underlie many of the on-line and offline approaches to problems of trust and trustworthiness. Our goal is to illustrate why and how they work. We are not claiming that taking this perspective prevents untrustworthy behavior or misplaced trust. Human behavior, after all, may be influenced but cannot be singularly determined by the design of a system. Knowledge about the factors that enable trustworthy behavior helps designers to fully explore the available design space for trust in technology-mediated situations.

Our analysis offers guidance to both researchers and practitioners, by exposing salient features of trust in current socio-technical systems, and thus provides a basis for extrapolation to new technologies and contexts of use. By incorporating the trusted actor's perspective, it takes a wider view than existing models and allows accommodating existing classifications of different types of trust. The perspective of the framework is a descriptive one, since it categorizes the factors that support the emergence of trustworthy behavior, based on empirical evidence. Consequently it allows deducing under which circumstances it would be wise to place trust.

In Section 2.1 we first lay the terminological and structural foundations for the framework. We then consider how trustworthiness can be signalled (2.2), and introduce and illustrate a range of trust-warranting properties (2.3 and 2.4). To complete the discussion of the mechanics of trust, we link our analysis to existing categories of trust (2.5). Section 3 illustrates how our framework informs research (3.1) and how it can be applied by practitioners in the analysis of scenarios such as telephone banking, e-commerce, and on-line gaming (3.2). Finally, Section 4 summarizes our findings and presents design heuristics that build on the mechanics of trust we identified.

2. Mechanics of trust

In this section we lay the foundation for a framework of trust in technology-mediated interactions. We start by introducing the structural conditions that define trust-requiring situations.


2.1. The basic model

2.1.1. Trust-requiring situations

Many researchers state that trust is only required in situations that are characterized by risk (Deutsch, 1958; Mayer et al., 1995; Corritore et al., 2003). Unfortunately, risk itself is a concept that has widely differing definitions and connotations in fields such as economics, sociology, risk management theory, and in everyday use (Adams, 1995). At times risk is understood to express the probability of an adverse outcome. In this understanding it is sometimes used synonymously with uncertainty (e.g. Grabner-Kraeuter and Kaluscha, 2003). In other contexts risk is applied to describe the compound of probability and the magnitude of the adverse outcomes (Adams, 1995). In our view, the latter interpretation applies to trust: trust will only be required if there are things at stake and if there is the possibility of adverse outcomes. In many everyday or habitual interactions, even though there is a theoretical possibility of adverse outcomes, they will not be considered and trust will be replaced by an expectation of continuity—it ceases to be a topic of concern (Luhmann, 1979).

The terms uncertainty and risk also require further delineation. Adams (1995), following Knight (1921), uses the term risk to describe adverse events with a known probability. Uncertainty, on the other hand, describes situations in which adverse events are possible, but for which no probabilities are known. Luhmann (1979) sees trust as particularly relevant in situations that are characterized by uncertainty. He points out that individuals will often not have the time or ability to work out probability expectations due to the complexity and interdependencies of social systems that determine future outcomes (Luhmann, 1979). In situations of uncertainty, trust allows short-cutting probability calculations and thus reduces complexity (Luhmann, 1979). Uncertainty, and thus the need for trust, stems from the lack of detailed knowledge about others' abilities and motivation to act as promised (Deutsch, 1958). If we had accurate insight into the trusted actor's reasoning or functioning and that of the actors he relies on, trust would not be an issue (Giddens, 1990).

2.1.2. The basic situation

The abstract situation on which we build our framework is based on an analysis of the Trust Game (Dasgupta, 1988; Berg et al., 2003) by Bacharach and Gambetta (2003). We develop the framework from the sequential interaction between two actors, the trustor (trusting actor) and the trustee (trusted actor). Fig. 1 shows a model of a prototypical trust-requiring situation.

Fig. 1. The basic interaction between trustor and trustee.

We have two actors (trustor and trustee) about to engage in an exchange. For now we can assume that they are human; in Section 2.1.5 we discuss how the framework applies to non-human actors. Both actors can realize some gain by conducting the exchange. Our exchange may depict a first encounter, or a 'snapshot' of an established relationship consisting of many subsequent exchanges. Prior to the exchange, trustor and trustee perceive signals (1) from each other and the context. The trustor's level of trust will be influenced—among other factors such as her disposition to trust (McKnight and Chervany, 2000; Egger, 2001; Corritore et al.,

2003)—by the signals perceived. Trust in the terms of this framework is understood as an internal state of the trustor regarding the expected behavior of the trustee in the given situation. This understanding reflects the commonly used definition of trust as an attitude of positive expectation that one's vulnerabilities will not be exploited.3 It emphasizes that trust is an internal state of the trustor with cognitive and affective components rather than an observable behavior (McKnight and Chervany, 2000; Corritore et al., 2003). Depending on her level of trust and other factors (e.g. the availability of outside options) the trustor will either engage in trusting action (2a) or withdraw from the situation (2b). Trusting action is defined as a behavior that increases the vulnerability of the trustor. Hence, trust is a necessary but not a sufficient condition for trusting action. Trusting action puts the good at stake in the hands of the trustee. The reason for a trustor to engage in trusting action is that she can realize a gain if the trustee fulfils his part of the exchange (3a). However, the trustee may also lack the motivation to fulfil and decide to exploit the trustor's vulnerability, or he might simply not have the ability. Both possibilities result in non-fulfillment (3b).

2.1.3. Separation in time and space

The signals (1) exchanged prior to trusting action are of key interest for trust research, as they form the basis of the trustor's trust. In the case of repeated encounters the signals perceived may identify the trustee and thus allow the trustor to relate the present situation to previous encounters with the same trustee. The signals perceived from the trustee and the context also allow inferences about the trustee's ability and motivation. However, these inferences are bound to be imprecise for two reasons. Firstly, they depend on the trustor's ability to interpret the signals. Secondly, untrustworthy trustees will also try to emit signals to appear trustworthy (2.2.1).

3 This definition closely mirrors those proposed by Corritore et al. (2003), McAllister (1995), Mayer et al. (1995), and Rousseau et al. (1998).
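The incentive structure of the basic situation in Fig. 1 can be made concrete with a small numerical sketch. The payoff values below are illustrative assumptions chosen by us, not values from the paper; they only encode the ordering discussed in Sections 2.1.2 and 2.1.4: both actors gain from a completed exchange, the trustee's one-shot optimum is to accept trusting action and then not fulfil, and the trustor is then worse off than if she had withdrawn.

```python
# Illustrative one-shot payoffs (trustor, trustee); the numbers are assumptions
# chosen only to reproduce the ordering described in Sections 2.1.2 and 2.1.4.
PAYOFFS = {
    "withdrawal":      (0, 0),   # 2b: no exchange takes place
    "fulfillment":     (1, 1),   # 2a then 3a: both realize a gain
    "non_fulfillment": (-1, 2),  # 2a then 3b: trustee keeps the whole surplus
}

def best_trustee_response() -> str:
    """Given trusting action, a purely self-interested trustee compares 3a and 3b."""
    return max(["fulfillment", "non_fulfillment"], key=lambda o: PAYOFFS[o][1])

def trustor_should_trust(expected_trustee_choice: str) -> bool:
    """Trusting action is only rational if it beats the outside option (withdrawal)."""
    return PAYOFFS[expected_trustee_choice][0] > PAYOFFS["withdrawal"][0]

if __name__ == "__main__":
    choice = best_trustee_response()             # -> "non_fulfillment"
    print(choice, trustor_should_trust(choice))  # -> non_fulfillment False
```

In this stripped-down reading, the trustee's best reply is non-fulfillment, so the trustor should withdraw and the mutually beneficial exchange is lost. The trust-warranting properties introduced in Section 2.3 are what change this calculation.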


If trustor and trustee are separated in space, their interactions will be mediated (e.g. by mail, email, telephone) and some of the signals that are present in face-to-face encounters may not be available or may become distorted. This effect is captured in communication theory by models of channel reduction (Döring, 1998) such as social presence (Short et al., 1976), presence (Lombard and Ditton, 1997; Biocca et al., 2003), and media richness (Daft and Lengel, 1986; Rice, 1992). This loss of information is often considered to increase uncertainty, and to result in lower trust (e.g. Handy, 1995; Shneiderman, 2000). However, this view has to be balanced with the fact that technology also offers the chance to supply information that otherwise would not be easily available in a face-to-face situation (e.g. reputation rating scores; Section 2.3.1.2). In many situations, mediation may also increase the delay between trusting action (2a) and fulfillment (3a), for example if an exchange relies on the postal system. This separation in time prolongs the period of uncertainty for the trustor. Thus, temporal as well as spatial separation of trusting action and fulfillment can increase uncertainty and thus the need for trust (Giddens, 1990; Brynjolfsson and Smith, 2000; Fig. 2).

Fig. 2. Effects of separation in time and space.

2.1.4. Trust-warranting properties

The trustor's trusting action (2a) gives the trustee an incentive not to fulfil (3b, e.g. keeping a customer's money but not delivering a good). In the absence of any other motivating factors, being trusted and then refusing to fulfil (3b) is the best outcome for the trustee. Hence, in the absence of other such factors, the trustor should not engage in trusting action. However, in most real-world situations, we observe trusting actions and fulfillment in spite of situational incentives to the contrary: vendors deliver goods after receiving payment, banks return money, individuals do not sell their friends' phone numbers to direct marketers. The reason for this behavior is that in many cases, trustees' actions will be motivated by contextual and intrinsic properties whose effects outweigh the immediate gain from non-fulfillment. An example of a contextual property is the existence of law enforcement agencies and the associated fear of being punished for non-fulfillment (see 2.3.1.3). An example of

an intrinsic property is the pangs of conscience a trustee would have to deal with if he did not fulfil (see 2.3.2.2). Expressed in the terms of the framework, a trustee will only fulfil if he has the ability and if the motivation provided by the contextual and intrinsic properties outweighs the motivation to realize the gain from non-fulfillment (Bacharach and Gambetta, 2001). We define ability as an intrinsic property, but most of our discussion is focused on properties that relate to motivation. Bacharach and Gambetta (2001) refer to properties that can cause fulfillment as trust-warranting properties. This is because trust in the trustee is only warranted if these properties are present and if they motivate the trustee sufficiently. Identifying and reliably signalling trust-warranting properties is the key concern if we want to develop systems that foster trustworthy behavior and well-placed trust.

2.1.5. Actors

Initially we used the framework to describe interactions between human actors; now we widen the scope to other types of actors. The wide range of literature on institutional trust (e.g. Zucker, 1986; Shapiro, 1987; Lahno, 2002) and trust between firms in sociology (e.g. Raub and Weesie, 2000a) and organizational behavior research (e.g. Macauley, 1963; Ring and Van de Ven, 1994; Rousseau et al., 1998) advocates also applying the framework to groups, organizations, and institutions in the role of trustors and trustees.

The question of human trust in technology is also widely researched. Ratnasingam and Pavlou (2004), for instance, use the term technology trust in B2B e-commerce transactions, to differentiate it from trust in the trading partner. Similarly, Lee and Turban (2001) address trust in technology by integrating trustworthiness of the Internet shopping medium as one dimension in their model of trust in e-commerce. Trust in technology is of particular importance for delegating to or relying on decision aids or software agents (Muir, 1987; Milewski and Lewis, 1997; Dzindolet et al., 2003). However, applying the term trust to describe attitudes towards technology has also been criticized (Solomon and Flores, 2001; Corritore et al., 2003). The argument is that technology is not a moral actor, characterized by free will, and therefore the concepts of motivation and trustworthiness do not apply (at least for today's technology; for an outlook see Norman, 2004). In addressing these concerns we incorporate trust in technology in our framework, but we restrict its applicability for technological trustees to the property ability rather than motivation. In many cases trust in technology will be linked to trust in the socio-technical systems of which this technology is part. The full framework can be used to analyze these systems. These theoretical considerations notwithstanding, human trustors are known to treat technological artifacts in similar ways as they treat human actors (Reeves and Nass, 1996; Fogg, 2003a). Hence, while technological artifacts do not possess motivation, users may perceive them in such a way, and consequently researchers and designers should reflect this in their approach (Corritore et al., 2003).

Finally, there is also a body of literature on technology in the role of the trustor. Automated trust management systems or software agents often hold trust scores about trustees (e.g. Witkowski et al., 2000; Jensen et al., 2004; Pitt, 2005). Our


discussion does not cover such agent systems,4 but the framework presented here may be helpful to the development of agent systems by highlighting existing approaches to maintaining trust in socio-technical systems.

2.1.6. Types of vulnerability and fulfillment

Vulnerability, trusting action, and fulfillment do not only apply to financial goods, but to anything of value to the actors: money, time, personal information, or psychological gratification, e.g. realized in the form of entertainment or sociability. A single trusting action can result in several vulnerabilities. Ordering a product on-line may lead to a loss of privacy, illicit access to credit card information, or non-delivery of the goods ordered (Egger, 2001). For each of these vulnerabilities, several forms of non-fulfillment are possible: a product may not arrive at all, or it may arrive late or damaged. Different vulnerabilities or levels of non-fulfillment can be analyzed with several instances of the framework, each analyzing one particular combination to determine the configuration of trust-warranting properties that are relevant for this instance.

2.1.7. Scope

In many cases the outcome of an exchange depends on factors over which the trustee has only limited control. Raub and Weesie (2000a) introduce the term parametric risk to capture such occasions of non-fulfillment. In the context of e-commerce, Grabner-Kraeuter and Kaluscha (2003) use the term system-dependent uncertainty, which was first coined by Hirshleifer and Riley (1979). An ordered good may be lost due to the failure of the postal service, not because the vendor failed to fulfil. An analysis of parametric risk or system-dependent uncertainty can be performed with the framework by making the sources of parametric risk the focal point, i.e. the trustee, in a new instance of the framework. In the above example, a new instance of the framework would be created with the postal service in the role of the trustee. Any real-world situation that requires trust is likely to contain several such embedded trust relationships. It is also likely to be embedded in several layers of higher-level relationships with actors that guard trust in the present situation. These guardians of trust are themselves trustees that need to be trusted (Shapiro, 1987): contractual arrangements are guarded by a legal system, but the legal system itself requires trust.

The use of the framework to analyze embedded situations can be exemplified with a bank clerk interaction. We can use the framework to analyze the exchange with the bank clerk in the role of the trustee. However, if we 'zoom out', we can analyze the interaction of the trustor with the organization as the trustee. Alternatively, by 'zooming in', if the bank's on-line banking system mediates the exchange, we can analyze it in the role of the trustee and see the underlying Internet technology in the role of the mediating channel. As we change the focal point of the framework, different trust-warranting properties come into play, and they will be signalled in

4 We include simulation studies in economics or sociology in our discussion. In these studies software agents are used as proxies for human or organizational decision-makers (2.3.1.2).


different ways. However, for practical reasons it will often be sufficient to limit the considerations to the most salient trustees.

2.1.8. Structure

The framework is a template for asynchronous and asymmetric exchanges—the trustor and trustee act sequentially and under different situational incentives. This models many real-life trust situations: a sale, acting on advice, lending money to someone, etc. It also applies to relationships in which we speak of mutual trust, such as work teams or life partners. In such relationships we can identify specific exchanges, often overlapping or succeeding each other quickly, in which one actor is the trustor and the other actor is the trustee. The framework also applies to problems of trust in advice (e.g. Zimmerman and Kurapati, 2002) or when delegating important tasks to others (e.g. Milewski and Lewis, 1997). A trustor, when delegating, needs to rely on cues from the trustee and the context (1) to assess his ability and motivation. She makes herself vulnerable by not executing the task herself, but by relying on the trustee (2a). The trustee has a situational incentive not to do the delegated task (3b; e.g. saving time or effort), but is likely to be motivated by contextual and intrinsic properties to perform within the limits of his ability (3a). In the case of acting on advice, the advice itself can be interpreted as a signal (1), following advice equals trusting action (2a), and the realization whether advice was good or bad is captured by fulfillment (3a) and non-fulfillment (3b), respectively.

2.1.9. Re-stating the trust problem

After explaining its higher-level components, we now link the framework to the terminology of existing trustor-centric models: a trustee who lacks trust-warranting properties, i.e. the motivation (based on contextual or intrinsic properties) or ability to fulfil in a specific situation, is untrustworthy in that situation. Uncertainty and hence the need for trust arises from the fact that trust-warranting properties are only indirectly observable via signals, which may be unreliable (2.2). As an example, the existence of a law enforcement agency may lead the trustor to believe that the trustee's actions will be guided by fear of being punished for non-fulfillment, but she cannot be sure. Similarly, after chatting to the trustee for a while, the trustor might think that the trustee is the type of person who would feel uncomfortable about not fulfilling, but again she cannot be certain. Hence, the trustor's trust has to be based on perceived trustworthiness (Corritore et al., 2003), rather than on the trustee's actual trustworthiness. Note that in this use the term trustworthiness is not understood as a stable attribute of a trustee, but as a configuration of trust-warranting properties in a specific situation. The use of the terms trust and trustworthiness in the framework is intentionally kept very broad at this stage to accommodate a wide range of trust-warranting properties and types of trust. Many researchers define trust with only a subset of the trust-warranting properties introduced in the framework. Section 2.4 links these definitions to the framework components.
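For readers who want to apply the framework as an analysis tool, the structure described in Sections 2.1.6–2.1.9 can be captured in a simple data representation. The sketch below is our own illustration and not part of the paper (names such as FrameworkInstance are hypothetical); it shows how one trusting action can be decomposed into vulnerabilities and trust-warranting properties, and how 'zooming' (2.1.7) creates new, embedded instances with a different actor in the trustee role.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class FrameworkInstance:
    """One application of the framework: a single trustor-trustee pairing
    analyzed for one vulnerability (see 2.1.6 and 2.1.7). Hypothetical structure."""
    trustor: str
    trustee: str
    vulnerability: str                                               # what trusting action (2a) puts at stake
    contextual_properties: List[str] = field(default_factory=list)   # temporal/social/institutional embeddedness
    intrinsic_properties: List[str] = field(default_factory=list)    # ability, internalized norms, benevolence
    embedded: List["FrameworkInstance"] = field(default_factory=list)  # 'zoomed-in' instances (parametric risk)

# Example: an on-line purchase, with the postal service as a source of
# parametric risk analyzed in its own embedded instance.
purchase = FrameworkInstance(
    trustor="customer",
    trustee="on-line vendor",
    vulnerability="payment made before delivery",
    contextual_properties=["reputation system", "consumer protection law"],
    intrinsic_properties=["ability to ship", "internalized business norms"],
    embedded=[
        FrameworkInstance(
            trustor="customer",
            trustee="postal service",
            vulnerability="loss or delay of the parcel",
            contextual_properties=["institutional embeddedness (carrier contract)"],
        )
    ],
)
```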


2.2. Signalling trustworthiness

Analyzing technology as the channel for exchanges (rather than as the trustee, see 2.1.5), we can identify several functions. It can (1) transmit signals prior to trusting action (e.g. on an on-line vendor's site), (2) be the channel for trusting action (e.g. when entering personal information in a web form), and (3) be used for fulfillment (e.g. emailing ordered software). We focus our discussion on the signalling function of technology (1).

Signals from the trustee and the context allow the trustor to form expectations of behavior. As many mediated transactions are relatively novel, the observed lack of trust can often partially be explained by a lack of experience and inappropriate assumptions about baseline probabilities (Riegelsberger and Sasse, 2001). In first-time or one-time interactions, the signalling of trust-warranting intrinsic and contextual properties is particularly important because no previous experience with a trustee is available. In repeated exchanges, it becomes important to signal identity, as this allows the trustor to extrapolate from knowledge about the trustee that was accumulated in previous encounters. To act on cues of trust-warranting properties or identity, users need to trust the channel and the channel provider to transmit them reliably and without bias.

2.2.1. Mimicry

A key problem of trust is that of mimicry (Bacharach and Gambetta, 2003). Non-trustworthy actors may aim to appear trustworthy in order to obtain the benefits of being trusted. Trustees who do not act under the influence of trust-warranting properties will aim to emit signals of the presence of these properties in order to persuade trustors to engage in trusting action and then refuse fulfillment. Burglars may wear couriers' uniforms, people may look you in the eyes when they lie to you, and an 'on-line bank' may be hosted from a teenager's bedroom.

Mimicry has a cost for untrustworthy actors (Luhmann, 1979; Goffman, 1959). To stay within the above examples: fake uniforms have to be bought, a steady gaze while lying has to be trained, and fake websites have to be designed. However, successful mimicry can have substantial benefits for untrustworthy trustees, as they will be treated like trustworthy ones. The presence of 'phishing sites' illustrates this fact. Hence, mimicry will occur if the cost of emitting a signal for being trustworthy is smaller than the expected value of being treated as trustworthy. Bacharach and Gambetta (2003) give a detailed account of the conditions of mimicry in situations that are covered by a Trust Game structure. In the view of many consumers who abstain from e-commerce, moving transactions on-line reduces the cost of mimicry (Riegelsberger and Sasse, 2001).

The presence of mimicry implies that knowledge of trust-warranting properties and their signals is not sufficient for designing a system that results in high levels of trust and trustworthy behavior. Rather, designers also need to take into account the reliability of such signals and their cost structure. Good signals are cheap and easy to emit for trustworthy actors and costly or difficult for untrustworthy ones.
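The condition for mimicry stated above can be written as a simple inequality. The sketch below is our own illustration; the numbers are arbitrary assumptions used only to contrast a cheap-to-copy signal with a costly one.

```python
def mimicry_pays(signal_cost: float,
                 prob_signal_believed: float,
                 gain_from_non_fulfillment: float) -> bool:
    """Mimicry occurs if the cost of faking the signal is smaller than the
    expected value of being (wrongly) treated as trustworthy (Section 2.2.1)."""
    return signal_cost < prob_signal_believed * gain_from_non_fulfillment

# A cheap-to-copy symbol (e.g. a pasted trust seal) invites mimicry...
print(mimicry_pays(signal_cost=1, prob_signal_believed=0.5, gain_from_non_fulfillment=100))   # True
# ...while a signal that is costly for untrustworthy actors to produce does not.
print(mimicry_pays(signal_cost=80, prob_signal_believed=0.5, gain_from_non_fulfillment=100))  # False
```

This is the sense in which good signals are cheap for trustworthy actors and costly for untrustworthy ones: they push the left-hand side of the inequality above the expected gain from deception.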


2.2.2. Symbols and symptoms of trustworthiness

Drawing on a distinction from semiotics, we can identify two broad categories of signals: symbols and symptoms (see Fig. 3; Riegelsberger et al., 2003).

Fig. 3. Ability and motivation as foundation of trustworthiness. Trustworthiness can be signalled via symbols and symptoms.

Symbols of trustworthiness. Symbols have an arbitrarily assigned meaning. They have been specifically created to signify the presence of trust-warranting properties. Examples of symbols for such properties are e-commerce trust seals or uniforms. Symbols can be protected by either making them very costly or by sanctioning their misuse. To keep the cost of emitting symbols low for trustworthy actors, they are often protected by sanctions. Symbols are a common way of signalling trustworthiness, but their usability is limited. Because they are created for specific settings, the trustor has to know about their existence and how to decode them. At the same time, trustees need to invest in emitting them and in getting them known (Bacharach and Gambetta, 2003).

Symptoms of trustworthiness. Symptoms are not specifically created to signal trust-warranting properties; rather, they are given off as a by-product of trustworthy actions. For example, the existence of a large number of customer reviews for a product sold on an e-commerce site can be seen as a symptom of a large customer base. Professionalism in site design and concern for usability have also been mentioned as symptoms of a professionally run, and hence trustworthy, company (Riegelsberger and Sasse, 2001; 3.2.2). Symptoms come at no cost to trustworthy actors, but need to be mimicked at a cost by untrustworthy ones. Trust can also be based on the absence of symptoms of untrustworthiness (e.g. nervousness, scruffy looks, a poorly kept shop or an ineptly designed web interface). Interpersonal cues (e.g. eye-gaze) are often considered to be symptomatic of emotional states and thus thought to give insight into people's trustworthiness (Baron and Byrne, 2004). However, it has been shown that people overestimate their abilities to read

interpersonal cues (Horn et al., 2002). These cues can be used strategically in face-to-face situations, and even more so in mediated communication that allows the sender more control over the cues he gives off (e.g. in pre-recorded video or in an avatar representation; Garau et al., 2003). Nonetheless, people seem to be predisposed to take them into account in their trust formation (Reeves and Nass, 1996; Fogg, 2003a). Interpersonal cues are discussed in more depth in Section 2.3.2.4.

2.3. Trust-warranting properties

Above we have established the terminology to describe trust-requiring situations. This enables us to introduce in more detail the factors that support trustworthy behavior. In the interest of creating a parsimonious framework (Sober, 1981) we delineate the main classes of trust-warranting properties, rather than describing all specific motivational factors or attributes of ability. We distinguish between contextual and intrinsic properties. Intrinsic properties can preliminarily be defined as relatively stable attributes of a trustee that lead him to behave in a trustworthy manner. Most established models of trust focus on intrinsic properties. Contextual properties, on the other hand, are attributes of the context or the situation that provide motivation for trustworthy behavior. While introducing the properties, we discuss evidence for their effect taken from empirical studies, sociological and economic theories, and business practices.

2.3.1. Contextual properties

A rational self-interested actor faced with the basic trust-requiring situation described in Section 2.1.2 would maximize his benefit and decide not to fulfil. However, there are several factors that can induce such an actor to behave in a trustworthy manner (Diekmann and Lindenberg, 2001). The framework refers to such factors as contextual properties. Their effect allows trustors to make themselves vulnerable, even if they know very little about the personal attributes of the trustee. Consequently, contextual properties do not fully capture human trust, and many researchers contest the use of the term trustworthy to describe actors who only fulfil because they are coerced by contextual properties. Hence, in our framework they are complemented by a discussion of intrinsic properties, which will be introduced in Section 2.3.2. This distinction is closely matched by Tan and Thoen's (2000) dichotomy of control trust and party trust, where control trust captures trust in enforcement systems, and party trust is based on attributes of the trustee. In many cases such control systems will be of a socio-technical nature, relying on security technologies (Ratnasingam and Pavlou, 2004). These can be analyzed individually with a new instance of the framework that puts them into the role of the trustee (see 2.1.7).

Raub and Weesie (2000b) identified three categories of factors that can lead rational self-interested actors to fulfil. We incorporate them as contextual trust-warranting properties into our framework. They are temporal, social and institutional embeddedness (Fig. 4).

Fig. 4. Contextual trust-warranting properties give the trustee incentives for fulfillment.

Contextual properties often work by providing negative incentives for non-fulfillment. If non-fulfillment cannot be attributed to trustees' actions, no negative consequences need to be expected by the trustee. Thus, an important condition for the effect of these contextual properties is that the outcome of a situation can be traced to the action (or inaction) of trustees, i.e. that parametric risk (2.1.7) is low and that identities are stable. It should be noted that incentives do not fully determine trustworthy behavior: two trustees acting under the same contextual properties may exhibit different behaviors. This is due to differences in trustees' ability to act rationally (Simon, 1982), in evaluating the likelihood of outcomes of actions (e.g. the likelihood of being convicted of a crime), and differences in intrinsic properties (2.3.2). Hence, while contextual properties give additional information about the likelihood of trustworthy behavior in a given situation, they do not allow fully predicting it.

2.3.1.1. Temporal embeddedness. Axelrod (1980) called the prospect of further interactions the "shadow of the future", as it gives trustees an incentive to fulfil in a present encounter. This effect has been shown empirically in several studies with iterated Prisoner's Dilemmas (Axelrod, 1980; Sally, 1995; Kollock, 1998) and Trust Games (e.g. Bohnet et al., 2001). If trustees have reason to believe that they will interact again with a given trustor in a situation where they are recognizable (i.e. have stable identities), fulfillment can become preferable for trustees. While a trustee could realize an immediate gain from non-fulfillment, he also knows that in the case of non-fulfillment, the trustor would not place trust in future encounters. Non-fulfillment in the present encounter thus leads to the loss of the gains that could be


realized in future exchanges (Friedman, 1977).5 Additionally, if future encounters with reversed roles can be expected, non-fulfillment carries the danger of retaliation. Temporal embeddedness, i.e. information about likely future encounters, creates an incentive for the trustee to act in a trustworthy manner. Hence, a trustor can interpret temporal embeddedness as a signal for the likelihood of trustworthy behavior in the present situation.

5 In the known last round of an experimental Trust Game or Prisoner's Dilemma the shadow of the future is diminished and non-fulfillment is commonly observed. By way of backward induction one could then also expect non-fulfillment in the penultimate round, and so on up to the first round. However, empirically such complete backward induction is not observed and cooperation commonly only erodes in the last few rounds (Poundstone, 1993).

While individuals may not explicitly consider the incentive-setting function of temporal embeddedness in everyday interactions, it helps explain several known signals of trustworthiness. In the context of business, for instance, a trustee's initial investments in a relationship can create incentives for trustworthy behavior. The high cost of attracting a customer and closing a first transaction often puts a high premium on customer loyalty and subsequent transactions (Reichheld and Schefter, 2000). Initial investments in a relationship may thus be interpreted as signals for trustworthiness in a given situation. Barriers to entry and exit and a shortage of other trustors are structural conditions that support trustworthy behavior by putting a premium on repeated transactions (Porter, 1985). They can thus be interpreted as signals for general levels of trustworthiness in a market. In the case of interactions between work colleagues, the continuation of exchanges—often with reversed roles—is institutionally assured (2.3.1.3). One can thus expect a higher level of cooperative behavior from an unknown new work colleague than from an unknown person on the street. In other situations, shared group membership (e.g. in a sports club) or geographical proximity (e.g. in a neighborhood) can be an indicator for the likelihood of future encounters. If trustors share group membership with others across a range of social settings, future interactions with reversed roles become more likely (Resnick, 2002).

In a more trustor-centric view of temporal embeddedness, repeated interactions also allow the trustor to accumulate knowledge about the trustee, and thus to make better predictions about his future behavior. Hence, assuming stability of attributes, repeated interactions can decrease uncertainty. By extrapolating from past behavior, trust in future encounters can be won (Luhmann, 1979). System designers can capitalize on the shadow of the future: stable identities and information about the likelihood of future encounters (e.g. shared group membership, investment in initial transactions) are the key design implementations to enable this contextual trust-warranting property.

2.3.1.2. Social embeddedness. Social embeddedness allows for the exchange of information about a trustee's performance among trustors. This information is known as reputation and it is included in many models of trust, such as the ones by Corritore et al. (2003) and Fogg (2003a). From the perspective of these models


reputation is historic information about trustees' attributes such as honesty, reliability, or dependability (Sapient, 1999; McKnight and Chervany, 2000; Corritore et al., 2003; see Section 2.3.2). On the assumption of the stability of such attributes across time and context, they can form the basis of trust in present encounters. However, reputation also has a second function, as it provides an incentive to fulfil even if a trustee does not expect a future interaction with a specific trustor. If the trustor is socially embedded, the trustee's interest in future interactions with any trustor who might gain access to reputation information creates an incentive for fulfillment in the present encounter (Raub and Weesie, 2000a). Reputation can thus act as a 'hostage' in the hands of socially well-embedded trustors (Raub et al., 2000a). Glaeser et al. (2000) demonstrated the incentive-setting effect of social embeddedness in an experimental study with undergraduates. They found that individuals with a higher degree of social embeddedness (measured in this study by the number of friends, involvement in volunteer activities, family connections, and having a sexual partner) elicited more trustworthy behavior from trustees.

However, trust based on reputation alone is vulnerable to strategic misuse, as inherently untrustworthy actors can build up a good reputation to 'cash in' by not fulfilling in the final transaction. Anecdotal evidence of such behavior exists for on-line auction sites (Lee, 2002); it also commonly emerges in game-theoretic laboratory experiments, where rates of non-fulfillment considerably increase towards the end of the experimental sessions, when the reputations gained within the game cease to be of further value (Sally, 1995; Bos et al., 2002; Bohnet et al., 2003).

The incentive-setting aspect of reputation has been studied extensively from the perspective of game theory (e.g. Keser, 2002; Bohnet et al., 2003; Dellarocas and Resnick, 2003; Baurmann and Leist, 2004; Ely et al., 2004; Bolton et al., 2004). This research relied on simulations with agents and on empirical lab studies with human subjects. In a simulation study, Huberman and Fang (2004) found that public past information (i.e. reputation) led to more rapid fluctuations in the perceived trustworthiness of actors than private past information (i.e. temporal embeddedness alone). Factors that influence the effect of reputation are identifiability and traceability, the social connectedness of the trustor (Glaeser et al., 2000), the topology of the social network (Granovetter, 1973), the cost of capturing and disseminating reliable past information (McCarthy and Riegelsberger, 2004), and the degree to which such information itself can be trusted to be truthful (Bacharach and Gambetta, 2003).

The Internet allows for the cheap dissemination of reputation information across a large but loosely knit network. Hence, reputation systems are increasingly receiving attention in the HCI community (e.g. at the MIT Reputation Systems Symposium: Dellarocas and Resnick, 2003; at the Symposium on 'Trust and Community on the Internet': Baurmann and Leist, 2004; or at the iTrust Trust Management Conference Series: Jensen et al., 2004). However, eliciting reputation information from trustors poses a social dilemma in itself: trustors may have no personal benefit from sharing this information, but usually incur a cost (e.g. time spent entering feedback) from making it public. Approaches to solving this problem include showing the personal efficacy of feedback and collecting implicit feedback by recording actors' behavior


(for a detailed discussion see McCarthy and Riegelsberger, 2004). In summary, reputation mechanisms—like the expectation of future encounters—require stable identities and the traceability of actions. Additionally, reputation information needs to be accurate, easy to disseminate, and easy to elicit.

2.3.1.3. Institutional embeddedness. Raub and Weesie (2000a) define institutions as "…constraints for human action that result from human action itself and structure the incentives for transactions" (p. 22). Institutions often take the form of organizations that influence the behavior of individuals or other organizations. Examples of institutions are law enforcement agencies, judicial systems, trade organizations, or companies, among others. Trusted Third Parties (TTP) or escrow services are examples of institutional approaches that have been very popular in on-line environments. Institutions are often embedded in wider networks of trust in which one institution acts as guardian of trust for another one (Shapiro, 1987). As an example, appropriate behavior of a company might be safeguarded by trade organizations. Trade organizations, in turn, are answerable to governmental consumer protection agencies. As indicated in Section 2.1.7, the framework can be used to focus on the trust relationships at various levels of these embedded networks. The institution that acts as a contextual trust-warranting property at one level can be analyzed in the place of the trustee at the next level.

As most everyday interactions are conducted within a web of institutional embeddedness (Rousseau et al., 1998; McKnight and Chervany, 2000), the effect of institutions often does not come to mind as long as a situation conforms to a template of standard interactions. In their trustor-centric model McKnight and Chervany (2000) refer to this as the trustor's perception of situational normality. When new technologies transform the way in which people interact, their templates of situational normality may not apply any more. This can lead to a perception of increased vulnerability until a sufficient number of successful transactions establish a new perception of situational normality. Additionally, if technology increases the spatial distance between the actors, the trustee may be based in a different society or culture, and consequently the trustor may be less familiar with the institutions that govern his behavior (Jarvenpaa and Tractinsky, 1999; Brynjolfsson and Smith, 2000).

Organizations as a special case of institutions should be discussed in more depth: they structure the behavior of employees through job roles, salary, career opportunities, disciplinary actions, etc. Trust in an organization and its processes and control structures allows the trustor to make herself vulnerable towards a representative of a company even if she knows very little about the personal attributes of that particular representative (Raub et al., 2000a). Many sociologists state that cooperation in everyday interactions is increasingly based on trust in institutions such as companies, or their brands, rather than on trust in individuals and their personal attributes (Zucker, 1986; Giddens, 1990; Putnam, 2000). This process of replacing personal trust with institutional trust vastly increases the efficiency of everyday trust decisions, as one does not have to rely on observing an individual's behavior over time, but can aggregate trust in a corporation or a brand.
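Taken together, the contextual properties discussed in 2.3.1.1–2.3.1.3 can be read as terms in a single incentive calculation for a self-interested trustee: fulfillment pays when the expected value of future exchanges protected by temporal and social embeddedness, plus expected institutional sanctions, outweighs the one-off gain from non-fulfillment. The sketch below is our own hedged illustration; the function name, the geometric continuation model, and the numbers are assumptions, not results from the literature. The intrinsic properties introduced in 2.3.2, and the qualifications in the following paragraphs, are deliberately left out of this calculation.

```python
def fulfillment_pays(one_off_gain: float,
                     per_period_gain: float,
                     continuation_prob: float,
                     sanction: float,
                     detection_prob: float) -> bool:
    """Stylized incentive check for a self-interested trustee (Sections 2.3.1.1-2.3.1.3).

    Temporal/social embeddedness: non-fulfillment forfeits the discounted stream of
    future exchanges with this trustor and with trustors who learn the reputation.
    Institutional embeddedness: non-fulfillment risks a sanction if the act is traced.
    """
    # Expected value of the continuing relationship ("shadow of the future"):
    # per_period_gain * (p + p^2 + ...) = per_period_gain * p / (1 - p)
    future_value = per_period_gain * continuation_prob / (1 - continuation_prob)
    expected_sanction = detection_prob * sanction
    return future_value + expected_sanction > one_off_gain

# One-off cheating gain of 10: a likely-repeated relationship (p = 0.9) is enough...
print(fulfillment_pays(one_off_gain=10, per_period_gain=2, continuation_prob=0.9,
                       sanction=0, detection_prob=0.0))   # True  (18 > 10)
# ...whereas in a near-anonymous, one-shot setting only a credible sanction remains.
print(fulfillment_pays(one_off_gain=10, per_period_gain=2, continuation_prob=0.1,
                       sanction=50, detection_prob=0.1))  # False (0.22 + 5 < 10)
```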


However, many researchers also maintain that institutional trust is almost always complemented with at least rudimentary forms of trust in personal attributes (see Giddens’ (1990) concept of re-embedding, or Lahno (2002)) and that relying on institutional trust alone can be problematic (see 2.4). For institutional embeddedness to have an effect on trustees’ behavior, sanctions in the case of non-fulfillment need to pose a credible threat. Hence, beyond stable identities and low parametric risk, clear definitions of non-fulfillment and fulfillment are needed, and the cost of investigation has to be in balance with the cost of nonfulfillment (Macauley, 1963; Diekmann and Lindenberg, 2000). Due to these constraints, some researchers state that institutional trust can only support trustworthy action if there is some level of mutual understanding, shared norms, and goodwill between the actors (Macauley, 1963; Fukuyama, 1999). In the terms of Tan and Thoen (2000), there needs to be control trust and party trust. In the absence of any shared norms, pre-specifying all eventualities and closing all potential loopholes would be very costly. Norms are discussed in Section 2.3.2.2 as an intrinsic property. Finally, in the view of trustor-centric models, institutions can also signal personal attributes such as ability or honesty (McKnight and Chervany, 2000). Many institutions select their members carefully and thus give valuable information about the members’ intrinsic properties (2.3.2). To summarize, institutions support trustworthy action by providing incentives and sanctions for behavior. Most everyday actions are embedded in a web of institutions. For institutions to have an effect on a trustee’s behavior, his actions must be traceable and the cost of investigation and sanctioning must be low compared to the cost of non-fulfillment. Companies are a special form of institutions and they or their brands can bundle institutional trust and thus replace—to some extent—the need for trust in personal attributes of employees. 2.3.1.4. Contextual properties: signals and incentives. In this section we introduced contextual properties. They provide incentives for trustworthy behavior of the trustee. Because incentives can increase the likelihood of trustworthy behavior they can be interpreted as signals for trustworthiness. In addition, contextual properties can signal intrinsic properties of the trustee. Repeated interactions (temporal embeddedness), for instance, allow gaining knowledge about a trustee’s personality traits; reputation information (social embeddedness) can be understood as giving information about stable attributes; membership in an organization (institutional embeddedness) can signal specific professional qualifications. Fig. 5 summarizes both signalling and incentive functions of contextual properties. 2.3.2. Intrinsic properties While contextual properties can motivate rational self-interested trustees to fulfil (see Fig. 5), they do not fully explain how actors behave empirically (Fukuyama, 1999; Riegelsberger et al., 2003). In everyday terms, one would not describe someone as trustworthy when he is cooperating only because he is coerced by contextual properties to do so. In an excellent review article, Fehr and Fischbacher (2003)

ARTICLE IN PRESS J. Riegelsberger et al. / Int. J. Human-Computer Studies 62 (2005) 381–422

Fig. 5. Contextual trust-warranting properties signal incentives and attributes.

A common way of accounting for such behavior within models of rational self-interested actors is to assign seemingly altruistic acts non-monetary utility (Bacharach and Gambetta, 2001; Lahno, 2002). A trustee may not maximize his monetary gain by fulfilling, but he may experience intrinsic gratification from acting in accordance with his ethical beliefs, which he might value more than other types of gain. We define intrinsic properties as relatively stable attributes of a trustee. They include factors that provide intrinsic motivation for fulfillment (Deci, 1992) as well as the ability to fulfil. Most established models of trust focus on the trustor's perceptions of these attributes. However, many of the attributes listed in existing models are behavioral (e.g. responsiveness, reliability in McKnight and Chervany's (2000) model), which makes them difficult to separate from the behavioral consequences of contextual properties. The approach taken here is to provide a framework for many of the finer-grained sets of attributes that are listed by existing models. This approach results in a parsimonious framework with only three types of intrinsic properties: ability (2.3.2.1), internalized norms (2.3.2.2), and benevolence (2.3.2.3; Fig. 6). These properties reflect the components of trustworthiness defined by Mayer et al. (1995): ability, integrity, and benevolence. They are also easily mapped to Tan and Thoen's (2000) concept of party trust, which covers attributes of a trustee (as opposed to control trust, which refers to the context). Corritore et al. (2003) define the perception of trustworthiness as being dependent on the attributes honesty, expertise, predictability, and reputation. Honesty is captured by the intrinsic property internalized norms and expertise can be linked to the property ability.


Fig. 6. Intrinsic trust-warranting properties impact the trustee's fulfillment independent from the presence of contextual incentives.

Predictability, on the other hand, is a behavioral outcome that may be traced back to several properties (e.g. ability and institutional embeddedness in the case of a health professional). Reputation has been discussed in terms of social embeddedness; it can give information about other attributes and it can also act as a hostage in the hands of the trustor. Fogg (2003a) introduces expertise and trustworthiness as key dimensions of computer credibility. Again, expertise is captured by the property ability. Trustworthiness, defined by Fogg (2003a) as an actor being unbiased, fair, and honest, is captured by the property internalized norms. Many of the attributes listed by McKnight and Chervany (2000) can also be seen as arising from specific internalized norms. Their mapping will be discussed in more detail in Section 2.3.2.2.

2.3.2.1. Ability

Ability, as the counterpart to motivation in Deutsch's (1958) classic definition of trustworthiness, stands somewhat outside the discussion of this framework, which is mainly focused on a trustee's motivation to act in a trustworthy manner. This focus reflects much of the game-theory inspired work on trust in mediated interactions (Rocco, 1998; Jensen et al., 2000; Bos et al., 2002; Olson et al., 2002; Davis et al., 2002). However, the more salient concern in everyday trust decisions will often be whether an actor is able to fulfil. Mayer et al. (1995) define ability for human and institutional actors as a "... group of skills, competencies, and characteristics that enable a party to have influence within some specific domain" (p. 717). McKnight and Chervany (2000) use the sub-constructs competence, expertise, and dynamism to capture ability in their model. Fogg (2003a) and Corritore et al. (2003) introduce expertise, which we see as a specific case of ability. Ability has domain-specific (e.g. technical knowledge) and general components (e.g. intelligence).


Signals for a trustee's ability are given through contextual properties (previous encounters, reputation, or institutional certification), but also directly through interpersonal cues (2.3.2.4) and by observing behavior. The intrinsic property ability also applies to technical trustees. Ratnasingam and Pavlou (2004) list the following dimensions as relevant for trust in technology: confidentiality, integrity (accuracy, reliability), authentication, non-repudiation, access-control, and availability. Dzindolet et al. (2003) also include error rates as a measure of a technology's trustworthiness. They note that a system that explains causes of errors is perceived as more trustworthy. Ratnasingam and Pavlou (2004) include a further dimension of technology trust: best business practice. However, in keeping with a systemic view, we argue that processes involving human or organizational actors are best analyzed with a new instance of the framework that puts them into the role of trustee. In many cases trust in technology is guarded (Shapiro, 1987) by the socio-technical system it is embedded in (see 2.1.7).

2.3.2.2. Internalized norms

Internalized norms give trustees an intrinsic motivation (Deci, 1992) to fulfil, so that they may favour fulfillment over non-fulfillment even in the absence of contextual properties. Granovetter's (1985) classic example of the economist, who—against all economic rationality—leaves a tip in a roadside restaurant far from home6 illustrates the effect of internalized norms. In many cases, norm compliance will be internalized to such an extent that it becomes habitual (Fukuyama, 1999). However, while behavior will be influenced by internalized norms, it will not be completely governed by them. The prospect of realizing a very large gain by non-fulfillment may, for instance, outweigh the effect of internalized norms (Granovetter, 1985).

Inducing trustees to internalize social norms is a difficult and lengthy process. The foundation is laid in individuals' socialization, in which they are "culturally embossed" with basic social norms of their culture (Brosig et al., 2002). However, social norms often differ across groups and cultures, they have to evolve over time, and triggering them may depend on the trustor's signalling of group membership (Bohnet et al., 2001; Fukuyama, 1999). Not all norms are desirable per se, as strong in-group reciprocity may come at the cost of hostility or aggression towards non-members (Fukuyama, 1999). Thus, when encouraging the evolution and internalization of specific norms, one has to be aware of the potential for such undesirable consequences.

Mayer et al. capture the effect of internalized norms in their construct of integrity: "... the trustee adheres to a set of principles that the trustor finds acceptable" (Mayer et al., 1995, p. 719). Sitkin and Roth (1993) capture a similar notion when they note value congruence as a condition for trust among employees within an organization. Most models of trust focus on the content of these principles, values, or norms and on their behavioral consequences.

6 This situation is not embedded temporally (he will not visit the restaurant again), socially (the waiter cannot tell relevant others about his behavior), or institutionally (there is no formal way of enforcing tipping).


McKnight and Chervany (2000) use the term integrity to capture a trustee's honesty, credibility, reliability, and dependability. They also include further attributes that can be traced back to internalized norms. These include good morals, good will, benevolent caring, responsiveness, and openness, among others. In the domain of advertising and marketing, it is considered essential for brands to develop a trustworthy personality (Aaker, 1996; Kotler, 2002), i.e. to reliably exhibit some of the norms and attributes listed above. Similarly, firms publish their standards and values in mission statements and corporate philosophies. Such statements may be interpreted as purely symbolic, as they may not reflect any real intrinsic attributes ingrained in the companies' culture. However, if they are carefully crafted and ingrained in the corporate culture, they can form part of employees' sets of internalized norms (Aaker, 1996).

The desire to act in accordance with internalized norms is an important intrinsic property that ensures trustworthy action beyond the effect of contextual properties. Many researchers (Macauley, 1963; Sitkin and Roth, 1993; Fukuyama, 1999) note that a minimum of shared norms is necessary for contextual properties such as contract law and institutional control to have an effect. Designers can capitalize on this by strengthening a sense of group identity, and allowing actors to exchange information about group membership.

2.3.2.3. Benevolence

The intrinsic property benevolence captures the trustee's intrinsic gratification from the trustor's well-being and his company. Hence, it is different from the expectation of future returns that is the source of motivation arising from the contextual property temporal embeddedness. The capacity for benevolence is an attribute of the trustee, but the specific level of benevolence in a trust-requiring situation is an attribute of the relationship between trustor and trustee. A trustee may act benevolently towards one trustor, but not towards another one. Granovetter (1985), in discussing the effect of social ties on trustworthiness, notes that "(...) continuing economic relations often become overlaid with social content that carries strong expectations of trust and abstention from opportunism" (p. 490). This indicates that benevolence is typically an attribute of long-established relationships. Strong feelings of benevolence only evolve over repeated episodes of trusting and fulfilling (McAllister, 1995). In benevolent relationships, actors do not expect immediate or equal returns. Exchanges that are based on benevolence are also said to be characterized by knowledge-based or identification-based trust (Rousseau et al., 1998; Koehn, 2003).

It should be noted that benevolence defined in this way is different from McKnight and Chervany's (2000) use of the term to summarize trustee attributes such as good morals or good will. These were covered in our discussion of internalized norms (2.3.2.2). Benevolence between organizations will often take the form of benevolent relationships between their representatives, whose job-related interactions have been overlaid by social relations (Macauley, 1963; Granovetter, 1985). Benevolence can also be used to describe the behavior of organizations towards consumers. Many companies aim to be good corporate citizens; they donate money to cultural events or humanitarian efforts; they aim to exceed customers' expectations, or to enrich their lives.


Such behavior may increase trust, because it signals an actor's willingness to forego situational temptations and to derive gratification from the good of others. That said, actors can clearly aim to appear benevolent for strategic reasons, and much apparently benevolent behavior can be explained by the desire to be perceived as trustworthy as a means of attracting business, improving government relations, etc. However, the strategic use of signals of benevolence carries the danger of destroying trust when they are perceived as manipulative (Riegelsberger and Sasse, 2002).

Benevolence can be encouraged by creating opportunities for the formation of long-lasting relationships between trustor and trustee, in which they get to know each other and are likely to derive gratification from each other's company. Examples are repeated interactions, and possibilities to express vulnerability (e.g. through self-disclosure), good intentions (e.g. free offers, presents), and liking (e.g. interpersonal cues).

2.3.2.4. Interpersonal cues and intrinsic properties

Interpersonal cues play a special role in signalling and triggering intrinsic trust-warranting properties in interactions between humans. As mediating face-to-face interactions or replacing them with human-computer interaction often leads to the loss of some interpersonal cues (Döring, 1998), their role merits a brief discussion.

The presence of intrinsic trust-warranting properties is widely believed to manifest itself through interpersonal cues (Goffman, 1959; Bacharach and Gambetta, 2003). The symptomatic nature (2.2.2) of interpersonal cues is also supported by empirical studies. Swerts et al. (2004), for instance, showed that speakers emit para- and nonverbal cues of certainty that can be decoded by trustors to make inferences about a speaker's ability. Information about a trustee's socio-demographic background, and thus about the set of norms that are likely to guide his behavior, can be inferred from interpersonal cues (Baron and Byrne, 2004). Regarding benevolence, nonverbal cues have been shown to be good signals for internal states and affect (Harper et al., 1978; Hinton, 1993). Following these findings, the 'lack of trust' in on-line environments can be partially attributed to a lack of possibilities for assessing trustees' intrinsic trust-warranting properties. While interpersonal cues can give information about intrinsic properties, they are also subject to individuals' impression management (2.2.1).

Interpersonal cues are not only important because they carry information about the trustee, but also because they can trigger involuntary visceral reactions. These reactions may have an effect on trustees' and on trustors' behavior. Regarding the trustee, several studies showed that the social presence (Short et al., 1976) or social proximity (Davis et al., 2002) afforded by interpersonal cues can call into action culturally embossed norms. By reducing perceived social distance, interpersonal cues can induce human trustees to fulfil. Visual identification (Bohnet and Frey, 1999), a photo of the interaction partner (Olson et al., 2002), and even a synthetic voice (Davis et al., 2002) have been shown to increase cooperation in social dilemma studies.


However, interpersonal cues can also lead to immediate visceral responses of human trustors. Such cues can create some level of affective trust, even if there is no rational basis for such trust attributions. This was amply demonstrated by Reeves and Nass (1996) in their studies with computers as social actors. Even a synthetic animated character that exhibited only very simplistic interpersonal cues was found to increase trust (Rickenberg and Reeves, 2000). Further evidence for immediate affective responses to interpersonal cues comes from the field of neuroscience. O'Doherty et al. (2003) found that just seeing a photo of a smile results in immediate activation of a brain region that is associated with gratification.7 Hence, representations of humans—real or synthetic—can be expected to induce some level of affective trust, independent of whether this is warranted by underlying properties or not.

2.4. Forms of trust

In the introduction we stated that there is an abundance of trust constructs, categories and concepts. In this section we show how the different types of trust that were identified by other researchers can be accommodated by the framework. Each type of trust relates to a belief about a specific configuration of trust-warranting properties. Subsequently, types of trust differ in the way trustworthiness is signalled and perceived, in their stability over time, how wide their scope is, and what types of vulnerabilities are normally covered by them (Corritore et al., 2003).

Our fundamental distinction between contextual and intrinsic properties is reflected in the discussion of other researchers. Trust based on contextual properties is also called reliance or assurance-based trust (Lahno, 2002; Yamagishi and Yamagishi, 1994). Describing a similar concept, Rousseau et al. (1998) and Koehn (2003) use the terms calculus-based trust or calculative trust, respectively. Other terms that have been coined are guarded trust (Brenkert, 1998; Corritore et al., 2003), deterrence-based trust (Lewis and Weigert, 1985; Lewicki and Bunker, 1996), and control trust (Tan and Thoen, 2000). Institutional trust (Lahno, 2002; Zucker, 1986) specifically captures the effect of the contextual property institutional embeddedness. Rousseau et al. (1998) see it as a backdrop that envelops and safeguards our everyday interactions, thus closely matching McKnight's (McKnight and Chervany, 2000) concept of situational normality. Technology trust (Ratnasingam and Pavlou, 2004) and Internet trust (Lee and Knox, 2001) are largely based on the ability of technological systems used to carry out interactions. However, as they are embedded in socio-technical systems, technology trust commonly also entails other types of trust in non-technical actors that safeguard technology.

Types of trust that are mainly based on intrinsic properties of the trustee are relational (Rousseau et al., 1998), party (Tan and Thoen, 2000), partner (Ratnasingam and Pavlou, 2004), knowledge-based (Koehn, 2003), or respect-based trust (Koehn, 2003). Relational trust develops over time and is founded on a history of successful exchanges. It has a higher bandwidth (Rousseau et al., 1998; Corritore et al., 2003), i.e. it ensures risk-taking across a wider range of situations.

7 For a more detailed investigation into the processing of facial trustworthiness in the field of neuroscience see Winston et al. (2002).


Table 2
Different types of trust linked to the stages in a relationship

Author                       Early             Medium            Mature
Lewicki and Bunker (1996)    Deterrence-based  Knowledge-based   Identification-based
Rousseau et al. (1998)       Calculus-based                      Relational
Corritore et al. (2003)      Basic/guarded                       Extended
Meyerson et al. (1996)       Swift
Koehn (2003)                 Calculative                         Knowledge-based, respect-based

Some of the authors describing different types of trust link these to different stages in a relationship (Table 2). However, it is important to note that the relevance of individual trust-warranting properties is also determined by other factors than just the level of acquaintance. Tan and Thoen (2000), for instance, hold that most trust relationships are founded on party trust (i.e. intrinsic properties) and control trust (i.e. contextual properties). Time may shift this balance, but it does not fully explain the configuration of trust-warranting properties in a given situation.

It is important to emphasize that identification-based or relational trust is not the optimal form of trust in all situations (Rousseau et al., 1998). While these forms are certainly the most stable ones and while they may be preferable from a moral or ethical perspective (Koehn, 2003), they come at a high cost in terms of time requirements and lower flexibility. Aiming for relational levels of trust in most exchanges would be very costly, and would tie the trust to an individual rather than to a role. It would be very time-consuming to aim for close relationships with sales assistants, bank clerks, or airline pilots. Indeed, as indicated in Section 2.3.1.3, trust in organizations, brands, and institutions allows for much of the flexibility and complexity of today's life (Giddens, 1990). Most exchanges are conducted on the level of calculus- or knowledge-based trust, often assured by institutional embeddedness (Rousseau et al., 1998).

Many researchers differentiate cognitive and affective/emotional trust (Corritore et al., 2003; Lewis and Weigert, 1985; McAllister, 1995; Rocco et al., 2000). Cognitive trust is based on "good rational reasons why the object of trust merits trust" (Lewis and Weigert, 1985, p. 972).8 It is thus based on evaluating the trustee's incentive structures and ability. It reflects the economic understanding of trust as a rational choice. Affective trust, on the other hand, is based on immediate affective reactions, on attractiveness, aesthetics, and signals of benevolence. Often trusting action will result from a mix of affective and cognitive trust (Zajonc, 1980; Corritore et al., 2003).

8 While the distinction between affective and cognitive trust is widely used and helpful in clarifying the mechanics of trust, it is important to note that cognition and affect are not distinct systems of decision-making. Findings in neuroscience indicate that a clear distinction between rational and affective reasoning cannot be upheld (Damasio, 1994; see also Norman, 2004; O'Doherty et al., 2001; Zajonc, 1980).


The amount invested in advertising, in its current form largely concerned with building affective trust, indicates the power of affective elements for everyday consumer decision-making (Aaker, 1996). However, approaches that purely aim to build trust based on affective reactions can easily backfire: trustors might find themselves misplacing trust, and as a consequence withdraw trust from a domain in the long run. As a result, what we commonly observe is trustees signalling the influence of trust-warranting properties and aiming to elicit positive affective reactions.

3. Application

In this section, we discuss how the framework (Fig. 7) can reshape the research agenda, inform the design of studies, and generate hypotheses. We then show its application in three scenarios.

3.1. Research

3.1.1. Generating hypotheses

While the relationships between the variables in the framework are based on empirical findings, more research is necessary to validate the whole framework—and, where necessary, elaborate it. In this process, it can be used to identify and categorize key variables for specific application domains. Current studies in economics focus on the effects of contextual properties such as reputation. HCI research, on the other hand, has largely focused on trustors' perceptions of trustees' characteristics, i.e. intrinsic properties in the domain of e-commerce. By bringing together contextual and intrinsic properties, this framework can be used to investigate their interplay. One question that could be asked at this intersection concerns the effect of visual identification (insight into intrinsic properties) on the effect of contextual properties, e.g. in the form of reputation systems. Another question would be whether and under what conditions the absence of contextual properties (e.g. a reputation system) favours the emergence of self-organizing groups with codified norms of conduct (cf. Cheng et al., 2002). In bringing together the strands of previously unrelated research on on-line trust, the framework can be seen as one step in the process of formulating a theory of trust in technology-mediated transactions.

3.1.2. Setting the focus on well-placed trust and trustworthy behavior

In focusing primarily on the factors that lead to trustees' trustworthy action rather than on the factors that contribute to trustors' trust, this framework diverges from the approach of established on-line trust and credibility models (McKnight and Chervany, 2000; Tan and Thoen, 2000; Corritore et al., 2003; Fogg, 2003a) and guidelines for trustworthy design (Sapient, 1999; Nielsen et al., 2000; Shneiderman, 2000; Egger, 2001). Design guidelines in particular tend to be focused on interface elements that increase perceived trustworthiness. The danger of this approach is that these design elements lose their significance once they are identified and used by untrustworthy actors.

Fig. 7. The complete framework.

On-line trust models, by contrast, focus on underlying attributes that contribute to perceived trustworthiness. They are thus likely to be more long-lived and independent from specific application domains. Our approach in establishing this framework was to go back one step further and ask which factors lead trustees to act in a trustworthy manner. This opens up a wider research and design space, as it helps not only to address how trustworthy actors can signal their trustworthiness, but how to design a system so that it encourages trustworthy behavior. This approach also allowed us to show how known signifiers of trustworthiness relate to contextual and intrinsic properties.

3.1.3. Researching trust rather than reliance

By categorizing incentives for trustworthy action into contextual and intrinsic properties, we pinpointed a terminological ambiguity that has not been discussed widely in HCI research: the distinction between reliance and trust. It is often claimed in the security literature (e.g. Schunter et al., 1999), but also in HCI research (e.g. Sapient, 1999), that security, control, and enforcement structures, as well as institutional embedding, increase trust. Based on our analysis above, we argue that such assurance mechanisms can increase observable trusting action by increasing one component of trust—reliance (Lahno, 2002). However, reliance is largely independent from the intrinsic properties of the trustee, and cooperation based on it is likely to cease when contextual properties are not in place. The philosopher Onora O'Neill (2002) even argues that we might be losing chances to build trust by increasingly relying on assurance mechanisms. Drawing on empirical studies, Sitkin and Roth (1993) drew similar conclusions for legalistic approaches (i.e. those based on institutional embeddedness) in the field of organizational behavior: "(...) adopting legalistic mechanisms may not only fail to restore trust, but may lead to an escalating spiral of formality and distance that increases distrust" (p. 385).


This insight suggests that we also need to focus on intrinsic properties for trustworthy behavior. In the context of interactions that are mediated by technology, this is particularly important, because contextual properties lend themselves more easily to automation and technical solutions than intrinsic ones, and thus may seem more appropriate to HCI researchers and practitioners. Email makes it easy to trace and record past statements and behaviors (temporal embeddedness), the power of reputation information can be amplified through electronic networks (social embeddedness), and software applications can check a multitude of certificates in seconds (institutional embeddedness). However, intrinsic properties are at least as important, even if they do not scale as well as approaches based on contextual properties. In many everyday situations, trust and fulfillment rely—to varying degrees—on contextual and intrinsic trust-warranting properties. Thus, research needs to address how the design of digital environments can support the evolution and internalization of norms and the growth of benevolent relationships. If researchers disregard intrinsic properties, and try to ensure cooperation through external incentive mechanisms only, they run the danger of replacing trust with reliance on assurance mechanisms (Yamagishi and Yamagishi, 1994; Lahno, 2002; O'Neill, 2002). Pure reliance can be costly and inflexible, as contextual properties rely on a priori definitions of fulfillment and non-fulfillment, continued interactions, identification, traceability, etc. Furthermore, fulfillment based only on the effect of contextual properties is bound to break down in their absence.

3.2. Scenarios

To illustrate the usefulness of the framework for analyzing trust in mediated interactions, we apply it to three scenarios. First we take a brief look at how technology can transform the effect of trust-warranting properties by comparing local bank branch transactions to call centre transactions. We then use the framework to discuss research on trust in e-commerce. Finally, we apply it to voice-enabled on-line gaming to generate design ideas.

3.2.1. Branch to call centre: the transformation of trust through technology

Below we use the hypothetical scenario of a rural community in which the local bank branch was replaced by call centre access. This scenario is discussed to illustrate how the systemic framework can be used to structure the analysis of the impact of technology on trust relationships. While some of the effects and trends described are supported by anecdotal and empirical evidence (Putnam, 2000; Economist, 2004), we want to emphasize that this section is provided as an example of how the framework can be applied to structure empirical phenomena, rather than as a report on existing phenomena.


In a very small town, the local bank branch manager was likely to be a member of the same local community as the customer, sharing the same network of friends and neighbors. Both could be expected to be members of these communities for the foreseeable future (temporal and social embeddedness). Furthermore, the bank branch manager, based on previous encounters, had a good knowledge of the customer's integrity, ethics and ability to work (intrinsic properties). Based on the effect of these contextual and intrinsic properties, the bank manager (in the role of the trustor) might have been willing to override the bank's lending rules and give the customer an unusually high short-term credit in the case of an unforeseen emergency. Conversely, the customer (in the role of the trustor) would have been very likely to trust the branch manager not to provide him with an inadequate or overpriced credit.

Now imagine the local branch has been closed, but customers are given access to a 24-hour call centre that processes customers' calls abroad. The only way in which the call centre employee can establish the customer's creditworthiness is by transaction history and by applying baseline probabilities established by the bank's rating system. Conversely, for the customer to establish the employee's trustworthiness, she has to rely on the institutional assurance provided by the bank's job roles. It is unlikely that she will interact with a given call centre employee in the future, and they do not share any social network; if the call centre is placed in another country she does not even know under which laws and levels of law enforcement the employee is operating. The bank's brand is now the main signal for the contextual property institutional embeddedness. Thus, branding becomes paramount: the waiting loop plays the bank's auditory logo, and the employee gives his organizational affiliation in addition to his name. In terms of intrinsic properties, again the customer has to rely on the bank's selection process for employees and on the paraverbal cues she can pick up in the voice of the employee (confidence, doubt, etc.).

This example illustrates how the framework could be used to analyze the transformation of trust relationships among actors brought about by new technology. Clearly, the framework could also be used for a more in-depth analysis, focusing on individual vulnerabilities, technologies, and further institutional trustees. Technologies such as call centres are introduced to increase the efficiency of transactions. However, they might also transform the configuration of trust-warranting properties, which can result in unexpected and/or undesirable consequences (Landauer, 1996; Putnam, 2000; Resnick, 2002).

3.2.2. E-commerce

In this scenario we illustrate how the framework can be used to structure and analyze existing trust design guidelines for B2C e-commerce. The framework acts as a theoretical backdrop and is used to explain why certain design choices have been found to signal trustworthiness. More importantly, however, it shows their limitations in terms of being reliable trust cues. It further shows areas that could be addressed by trust research and design solutions.

In applying the framework to B2C e-commerce we can differentiate between vulnerabilities that are related to the Internet technology as trustee (e.g. reliability of data transmission, interception; see Lee et al., 2001; Ratnasingam and Pavlou, 2004), and those that are related to the vendor's actions.


We chose to apply the framework to the vulnerabilities that are related to the vendor in the role of the trustee.

Looking at temporal embeddedness (see Fig. 8), vendors can signal trustworthiness in different ways: the first is to indicate that they are in the market for the long term and that they are interested in a continued business relationship with the customer. Signals of this are visible investment in the business and the site. Many researchers investigating the topic identified professionalism of the website as a core indicator of trustworthiness (Shneiderman, 2000; Nielsen et al., 2000; Egger, 2001; Fogg, 2003b; Riegelsberger and Sasse, 2003). Professionalism signals—among other things—an interest in continued business and thus susceptibility to the effects of temporal embeddedness. Professionalism includes absence of technical failures, absence of mistakes, breadth of product palette, aesthetic design, and information about physical assets. Corritore et al. (2003) name perceived ease of use as an important factor for the perception of trustworthiness. This is supported by findings reported by Egger (2001) and Sapient (1999). While this factor, as a key part of Davis' (1989) Technology Acceptance Model (TAM), is likely to influence use of an e-commerce site directly, it can also be seen as a further signal for an investment in a site, and thus for an on-line vendor's desire to be in business for an extended period of time.

Vendors can also show a continued interest in a relationship with a specific client by investing in the first transaction. This can take the form of investment in market communications, trial offers (e.g. Amazon's first-time visitor's voucher), trial purchases, or loyalty schemes (Egger, 2001). Similarly, if a vendor has a high street presence or global brand, the likelihood of repeat purchases increases and so does the vendor's interest in good fulfillment (Jarvenpaa and Tractinsky, 1999). This aspect favours big, well-known players in markets that are perceived as risky (cf. Gatti et al., 2004).

Social embeddedness, i.e. reputation, is an important factor for purchase decisions. In our initial interviews in the earlier days of e-commerce (Riegelsberger and Sasse, 2001), users said that they paid much attention to their friends' and families' recommendations. Similarly, media coverage or consumer reports were taken into account. From a customer's perspective, reputation was not considered an incentive for trustworthy behavior, but as information about the vendor's intrinsic properties (e.g. ability). Taking a systemic view, reputation provides incentives for trustees to act in a trustworthy manner. The Internet itself can be used to facilitate the formation and dissemination of reputation information about vendors: services such as Epinions9 or Bizrate10 aim to do just that. However, they suffer from the additional cost users face when entering feedback on a different site after they received fulfillment. A more usable system would be integrated, e.g., in the user's browser and could record repeat purchases implicitly (McCarthy and Riegelsberger, 2004), thus reducing the cost of contributing reputation information (see the sketch below).

9 www.epinions.com
10 www.bizrate.com
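As a rough sketch of the browser-integrated, implicit reputation recording mentioned above: repeat purchases observed at the same vendor are treated as implicit positive signals and aggregated into a simple score. The class, identifiers and scoring rule below are our own illustrative assumptions, not part of the cited proposal.

```python
from collections import defaultdict
from dataclasses import dataclass, field

@dataclass
class ImplicitReputation:
    """Aggregates implicit reputation signals from observed purchases.

    A repeat purchase from the same vendor is treated as an implicit
    positive signal (the customer came back); a single purchase is neutral.
    """
    purchases: dict = field(default_factory=lambda: defaultdict(int))

    def record_purchase(self, vendor_id: str) -> None:
        # Called by a (hypothetical) browser component whenever a
        # completed checkout at `vendor_id` is observed.
        self.purchases[vendor_id] += 1

    def score(self, vendor_id: str) -> int:
        # Every purchase beyond the first counts as one implicit
        # positive signal for the vendor.
        return max(self.purchases[vendor_id] - 1, 0)

if __name__ == "__main__":
    rep = ImplicitReputation()
    for vendor in ["books-r-us", "books-r-us", "books-r-us", "cheap-gadgets"]:
        rep.record_purchase(vendor)
    print(rep.score("books-r-us"))     # 2 implicit positive signals
    print(rep.score("cheap-gadgets"))  # 0 -- only a single purchase so far
```

The point of the sketch is that the reputation information is a by-product of ordinary behavior, so the trustor incurs no extra cost for contributing it.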


Institutional embeddedness can be signalled by giving a company's physical location, such as an office address (Shneiderman, 2000; Egger, 2001). However, such information can also indicate a lack of institutional assurance, e.g. in the case of impressive headquarters on the Cayman Islands. This example shows how a signal that can be perceived as an indicator of trustworthiness can become a signifier of untrustworthiness depending on the configuration of other contextual properties. Taking a systemic view helps practitioners to anticipate such effects for their specific application scenario. In the absence of a clear legal framework, when dealing with vendors in obscure locations, or when conditions of fulfillment are hard to specify, the possibility of legal recourse is a poor incentive for trustworthy action and thus a poor signal for trustworthiness. In such situations trust seal programs (e.g. Sapient, 1999) have been suggested as ways of institutionally assuring trustworthiness. Such programs work by establishing rules of conduct (e.g. with regard to security technology or privacy policies) and checking their members' performance against these rules. Complying members are awarded trust seals: small icons they can display on their site. These seals are commonly linked to the certifying body's site to enable checking their veracity. The disadvantage of many seal programs is that the certifying organizations are not well known and thus have no trust they could transfer (Riegelsberger and Sasse, 2003). This approach by itself cannot build trust but just moves the problem of trust one step further. Trust seals given by well-known organizations that 'sublet' their trust by endorsing unknown vendors are more promising, as their own reputation is at stake. Amazon's zShops go beyond giving seal-based endorsements by hosting independent vendors and enforcing codes of conduct.

Vendors' intrinsic properties can also be inferred from site elements. Professionalism in site design can be seen as a symptom for competence or ability to fulfil. Additional information about a company's intrinsic properties can be gathered from mission statements, privacy policies, disclosure of terms and conditions, and costs (e.g. shipping; Egger, 2001). However, these signals are symbolic, and can easily be mimicked by untrustworthy vendors at a relatively low cost.

Strong benevolence as it is identified in long-standing relationships between humans does not apply to e-commerce. However, with a continued business relationship a form of benevolence between vendor and customer can grow. This can take the shape of strong brand loyalty (Riegelsberger and Sasse, 2002) on the side of the customer and loyalty schemes, more lenient payment conditions, or access to internal information on the side of the vendor. Policies such as "returns with no questions asked" increase the vendor's vulnerability and can indicate that they want a benevolent relationship. As said before, these efforts can also backfire: aiming to appear benevolent without sufficient actions to found these claims can be perceived as manipulative and thus decrease trust (Riegelsberger and Sasse, 2002).

For the extensively researched area of trust in e-commerce, the framework showed how and why identified signifiers of trust do work, but also how their impact can change depending on the situational configuration of contextual properties, and thus where their limitations are. Fig. 8 summarizes our discussion.

Fig. 8. The framework applied to trust in consumer e-commerce. [Figure content: Customer (trustor) and Vendor (trustee). Contextual properties: Temporal (initial investments such as trial offers, physical assets, professionalism; longevity); Social (recommendations, customer feedback, reputation systems); Institutional (brand, trust seal, location/applicability of law enforcement). Intrinsic properties: Ability (professionalism); Internalized norms (policies, e.g. privacy); Benevolence ("no questions asked" return policies). Note in figure: contextual properties provide incentives but they are largely interpreted as signals for intrinsic properties; the effect of contextual properties (e.g. location information) depends on structural factors and has to be evaluated individually.]

3.2.3. Voice-enabled gaming environments

In this section, the framework is used to explore the available design space in a novel application area that is currently suffering from low levels of trust and cooperation: voice-enabled on-line games (Hails, 2003). In many of these games, players are randomly matched to play and talk with others, about whom they know relatively little. Such platforms allow encounters with a relatively high social presence (Short et al., 1976), while they offer few possibilities to pre-select interaction partners. High levels of reported unpleasant encounters and rude behavior are the result (Hails, 2003). Unlike trust in e-commerce, this domain has received relatively little research attention. Hence, rather than presenting and explaining existing findings in the light of the framework, it is used to suggest design and research approaches.

Following the framework, a first pass at overcoming the situation is to create stable identities so that contextual properties can take effect. Stable identities allow users to take note of those who behaved in an unfair or unpleasant manner, and they can avoid future encounters with these actors (temporal embeddedness). The design of the technology can assist by providing block lists, so that players do not have to remember others' IDs. This approach solves the problem for individual players after the fact, i.e. after they had a negative encounter. However, in a very large population of actors, losing the chance for future encounters with a single actor is not a strong incentive for appropriate behavior on the side of the trustee—there will always be someone who has not been annoyed yet. Hence, designers of such systems could build on the effects of social embeddedness and allow users to make their block lists public.


This would give every actor in the system a basic public reputation score (a minimal sketch of this idea is given at the end of this section). The effects of reputation can be improved by including positive or richer feedback that allows players' intrinsic properties (e.g. playing abilities) to be inferred.

Institutional embeddedness can be harnessed in several ways. Users could report misbehavior to a rules enforcement body that would investigate it (e.g. through conversation archives or logs of in-game behavior) to penalize misbehaving actors (e.g. by banning them for a period of time). Obviously, such an approach is costly and open to dispute (Davis et al., 2002). Its applicability is limited to cases of gross misconduct. Another way of harnessing institutional embeddedness would be to allow for the formation of groups or organizations within the gaming community. Such organizations would be allowed to formulate their own rules of conduct and to admit and expel members based on their behavior (as e.g. observed by Cheng et al., 2002). Such evolving organizations could then gain reputations and enforce appropriate behavior of their members. To support this, systems must allow users to form organizations, to discuss and define rules, to regulate membership, and to create reliable signals for membership.

A very different approach would be to focus on exposing intrinsic properties prior to in-game encounters, and to allow players to select partners based on these intrinsic properties. Profiles containing textual personal information (e.g. age, gender, education), photos, or voice statements could be the basis of such choices (Jensen et al., 2002). This additional knowledge about other actors can also be expected to call into action norm-compliant behavior, as it shows other players as individuals who are otherwise only perceived as disembodied voices (see 2.3.2.4).

Finally, by allowing users to not only keep track of people that annoyed them (e.g. with block lists), but also of good gaming encounters, e.g. in the form of buddy lists, designers can enable and encourage the development of benevolent relationships, i.e. friendships, among players. Additional tools to increase the likelihood of re-encounters are presence indicators, the possibility to email or chat, or to invite each other to games across platforms.

We demonstrated several ways of encouraging trustworthy behavior and trust based on the application of the framework in a scenario that is currently suffering from low-trust interactions. Current approaches focus on contextual properties in the form of e.g. control by moderators (institutional embeddedness) or block lists (temporal embeddedness). The framework (see Fig. 9) showed many further options for supporting trust in on-line gaming communities. In our view, and based on empirical evidence in virtual communities (Cheng et al., 2002), giving the players the tools for self-organization and pre-selection, while providing assurance against the grossest incidents of misconduct, is a very promising strategy.
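A minimal sketch of the public block-list idea referred to above, under the assumption that players publish their block lists: a basic reputation signal for any player can then be derived from how many distinct others have blocked them. All identifiers and the flagging threshold are illustrative assumptions, not design decisions from the paper.

```python
from collections import defaultdict

class BlockListRegistry:
    """Derives a basic public reputation signal from players' published block lists."""

    def __init__(self):
        # player_id -> set of player_ids they have blocked
        self.block_lists = defaultdict(set)

    def publish_block(self, blocker: str, blocked: str) -> None:
        # A player adds someone to their block list and makes it public.
        self.block_lists[blocker].add(blocked)

    def times_blocked(self, player: str) -> int:
        # Number of distinct players who have blocked `player`.
        return sum(1 for blocks in self.block_lists.values() if player in blocks)

    def flagged(self, player: str, threshold: int = 3) -> bool:
        # Illustrative rule: warn before matching if a player has been
        # blocked by at least `threshold` distinct others.
        return self.times_blocked(player) >= threshold

registry = BlockListRegistry()
for blocker in ["anna", "ben", "chris"]:
    registry.publish_block(blocker, "rude_player42")
print(registry.times_blocked("rude_player42"))  # 3
print(registry.flagged("rude_player42"))        # True
```

The sketch only turns the individual block lists already discussed into a shared signal; richer positive feedback would require additional mechanisms.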

4. Summary

Trust is an integral part of human interactions. It allows actors to engage in exchanges that leave both parties better off; it reduces the cost of these transactions; and on a societal level, trust correlates positively with productivity, low crime and health (Resnick, 2002). New technologies allow interactions between individuals who know little about each other prior to the encounter.

Fig. 9. The framework applied to trust in on-line games. [Figure content: Player 1 (trustor) and Player 2 (trustee). Contextual properties: Temporal (stable identities, block lists); Social (public block lists, public feedback); Institutional (policing agencies, clans/guilds, group membership signals). Intrinsic properties: Ability (personal profile); Internalized norms (fostered by clans, rich channels for pre-game interaction); Benevolence (tools for friendships: buddy lists, presence indicators, chat). Note in figure: contextual properties provide negative incentives for anti-social behavior; providing the tools for group formation allows for the evolution of cooperative norms.]

Exchanges that have traditionally been conducted face-to-face are now mediated by technology or even executed with technology as a transaction partner. These changes put the responsibility for supporting trust and trustworthy action on the designers of the technical systems involved. This situation has led to a surge of research in on-line trust. Much of this research has focused on creating guidelines for increasing the perceived trustworthiness of trustees such as e-commerce vendors. We argued that a focus on well-placed trust is important for the long-term success of any technology. Existing models of trust in technology-mediated interactions enable such a focus, as they break trust into sub-constructs that are related to attributes of the trustee. They are also mostly technology and domain independent.

To complement these models, we put forward a framework of trust in technology-mediated interaction that focuses on the antecedents of trustees' behavior. It does not take trustworthiness as a relatively stable attribute of the trustee, but shows how the design of technology can influence trustworthy behavior in a specific situation. We wanted to expose the 'mechanics of trust', i.e. show how contextual and intrinsic properties influence the behavior of the trustee. In a second step, we looked at how the presence of these properties can be signalled to build trust. Signals indicate a trustee's trustworthiness in a given situation and thus form the basis for a trustor's trust. However, these signals are subject to mimicry, because non-trustworthy trustees will aim to be perceived as trustworthy. Hence, knowledge about trust-warranting properties cannot be expected to completely solve problems of trust. However, by designing socio-technical systems with knowledge about the underlying mechanics, designers can aim to minimize misplaced trust and untrustworthy behavior.


We identified two types of signals: symbols and symptoms. Symptoms are signals of trustworthiness that are given as a by-product of behavior. They are preferable to symbols, which can be costly to emit, are less reliable, and are subject to mimicry. Different instances of the framework can be used to analyze the different vulnerabilities and trustees that might be relevant in a specific situation. The framework allows analyzing trust in first-time, one-time, and continued interactions. For continued interactions, it allows for an analysis of trust at different stages of the relationship. Contextual properties (temporal, social, and institutional embeddedness) will be of higher importance in first interactions and in one-time encounters, when little information about the intrinsic properties of a trustee is available. Intrinsic properties of the trustee (ability, internalized norms, and benevolence), on the other hand, are more important in continued exchanges and become increasingly relevant as trust matures and trustor and trustee get to know each other better.

By focusing on the trustee's behavior, this framework advocates researching whether trust is well-placed, i.e. treating actual trustworthiness as an independent variable. It further aids researchers in interpreting results from studies within a larger frame of reference. For designers, the benefit of the model is that it provides a domain-independent tool to explore the available design space for systems that enable well-placed trust and encourage trustworthy behavior. To summarize the insight gained from our analysis of the mechanics underlying trust in risky exchanges, we present design heuristics that can be expected to support trustworthy action as they sustain contextual and intrinsic trust-warranting properties (Table 3).

While this overview will be helpful in ensuring the presence of the basic requirements for trustworthy behavior, it should not lead researchers and designers to assume that trust and trustworthy action can be 'designed into a system' by following such heuristics. Firstly, while designers can aim to set appropriate incentives, they cannot fully control trustees' behavior. This will also depend on trustees' susceptibility to incentives, on their ability to act in their own best interest (Simon, 1982), and on the norms they have internalized (2.3.2.2). Secondly, we believe that a system also needs to give room for the evolution of groups and informal institutions (Diekmann et al., 2001). These can develop codes of conduct, signals for membership, and eventually can lead to the formation of intrinsic preferences in the form of internalized norms. A good example of such an approach is role-playing games that allow the formation of guilds with rules and membership signs (Cheng et al., 2002). Thus, by ensuring the presence of the fundamental cornerstones for trustworthy behavior and by leaving room for evolution, a culture of trust can emerge over time.
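Two of the heuristics in Table 3, stable identity and recording outcomes, can be illustrated with a minimal data model: recorded outcomes only work as incentives if they attach to identities that persist across encounters. The sketch below is our own illustration, not part of the framework itself.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class InteractionOutcome:
    trustor_id: str   # stable identity of the trustor
    trustee_id: str   # stable identity of the trustee
    fulfilled: bool   # did the trustee fulfil in this interaction?

@dataclass
class OutcomeLog:
    """Records outcomes against stable identities, so that temporal and
    social embeddedness can act as incentives (cf. Table 3)."""
    outcomes: List[InteractionOutcome] = field(default_factory=list)

    def record(self, trustor_id: str, trustee_id: str, fulfilled: bool) -> None:
        self.outcomes.append(InteractionOutcome(trustor_id, trustee_id, fulfilled))

    def fulfilment_rate(self, trustee_id: str) -> float:
        # Share of recorded interactions in which this trustee fulfilled;
        # only meaningful because `trustee_id` is stable across encounters.
        relevant = [o for o in self.outcomes if o.trustee_id == trustee_id]
        if not relevant:
            raise ValueError(f"no recorded interactions for {trustee_id}")
        return sum(o.fulfilled for o in relevant) / len(relevant)

log = OutcomeLog()
log.record("alice", "vendor_7", True)
log.record("bob", "vendor_7", False)
print(log.fulfilment_rate("vendor_7"))  # 0.5
```

If identities were not stable (e.g. trustees could re-register at will), the same records could not be aggregated and the incentive effect described in the heuristics would disappear.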

5. Future research

As a next step the systemic framework needs to be tested, validated, and elaborated by applying it to the exploration of further scenarios, to structure research approaches, and to generate testable hypotheses.


Table 3
Design heuristics for trust-supporting systems

Stable identity: Stable identities are a key requirement for contextual properties to take an effect. They are also important for signalling intrinsic properties in repeated encounters. (Relevant properties: contextual and intrinsic properties.)

Traceability, accountability: Contextual properties can only provide incentives if parametric risk is low, i.e. if outcomes can be traced to actions. (Relevant properties: temporal, social, institutional embeddedness.)

Group membership: Shared group membership and group size give an indication of the likelihood of re-encounters and thus determine the strength of the contextual property temporal embeddedness. (Relevant properties: temporal, institutional embeddedness.)

Group identity: Furthermore, shared group membership and a strong group identity support the emergence of group-specific norms of cooperation or generalized reciprocity. Group membership can also be an indicator for competence and abilities and for the rules by which a trustee will be bound. (Relevant properties: ability, internalized norms.)

Social presence: Several studies have shown that social presence calls norm-compliant behavior into action. (Relevant properties: internalized norms, benevolence.)

Recording outcomes: Only if trustors have a possibility to record the outcome of interactions can temporal and social embeddedness become incentives for trustworthy action. (Relevant properties: temporal, social embeddedness.)

Validation requires testing the predicted relationships (1) in different application domains, (2) with different methods, and (3) including more framework variables. Further relevant application domains are, for instance, trust in ambient systems or in virtual organizations. The methods used to establish the present framework should be complemented with others such as longitudinal field studies, macro-analyses of the effect of different institutional environments, and experimental studies with behavioral measures. Finally, the framework—in linking several trustor, trustee, and context variables—predicts complex interactions between these variables. Future research should incorporate more of these variables to enable easier integration of new findings.

The framework was kept parsimonious to ensure its applicability across a wide range of application areas. It also abstained from declaring specific norms or attributes of trustworthiness. Rather, it aimed to show how such attributes fit into the greater ecology of trustworthy behavior. There is space for further elaborating the model by adapting it to specific application areas.

We are convinced that what we sometimes see as a 'lack of trust' is not an unavoidable consequence of technology-mediated exchanges. Rather, these examples are often the symptoms of difficulties in adapting traditional ways of trust signalling to new structural conditions.


This should, however, not be interpreted as "in time, the trust problem will go away because people will adapt"—negative trust experiences can cause long-term damage to the technologies and/or application domains involved. The damage will not only be to commercial companies—technology providers and organizations offering innovative services—but may also deprive individuals and society of the benefits a technology or service could offer. Such developments would disenfranchise those who can least afford to take financial risk, and/or lack the in-depth knowledge and technical savvy to distinguish trustworthy actors from untrustworthy ones. Researchers and designers must support the transformation process by providing empirical evidence on the formation of trust via technical channels and by creating well-informed designs. Thus, rather than seeing them as a threat to cooperation and social capital, we welcome the potential of these technologies to enable exchanges with others we would otherwise never have interacted with.

Acknowledgements

We would like to thank our colleagues at UCL, Licia Capra, Ivan Flechais, Hina Keval, and Hendrik Knoche, as well as Richard Boardman at Google for their comments and helpful discussions. We are in particular indebted to the anonymous reviewers and the editor, whose thoughtful and detailed comments helped to substantially improve this paper.

References

Aaker, D.A., 1996. Building Strong Brands. The Free Press, New York.
Adams, J., 1995. Risk. UCL Press, London.
Axelrod, R., 1980. More effective choice in the prisoner's dilemma. Journal of Conflict Resolution 24 (3), 379–403.
Bacharach, M., Gambetta, D., 2001. Trust as type detection. In: Castelfranchi, C., Tan, Y. (Eds.), Trust and Deception in Virtual Societies. Kluwer, Dordrecht, pp. 1–26.
Bacharach, M., Gambetta, D., 2003. Trust in signs. In: Cook, K.S. (Ed.), Trust in Society. Russell Sage, New York, NY.
Baron, R., Byrne, D., 2004. Social Psychology, ninth ed. Allyn & Bacon, Boston, MA.
Baurmann, M., Leist, M., 2004. Trust and community on the Internet: opportunities and restrictions for online cooperation. Special issue of Analyse & Kritik (1/2004).
Berg, J., Dickhaut, J., McKabe, K., 2003. Trust, reciprocity, and social history. Games and Economic Behavior 10, 122–142.
Biocca, F., Harms, C., Burgoon, J.K., 2003. Criteria for a theory and measure of social presence. Presence 12 (5), 456–480.
Bohnet, I., Frey, B.S., 1999. The sound of silence in prisoner's dilemma and dictator games. Journal of Personality and Social Psychology 38, 43–57.
Bohnet, I., Frey, B.S., Huck, S., 2001. More order with less law: on contract enforcement, trust and crowding. American Political Science Review 95, 131–144.
Bohnet, I., Huck, S., Tyran, J.R., 2003. Instinct or incentive to be trustworthy? The role of informational institutions. In: Holler, M.J. (Ed.), Jahrbuch für Neue Politische Ökonomie, pp. 213–221.

Bolton, G.E., Katok, E., Ockenfels, A., 2004. How effective are electronic reputation mechanisms? An experimental investigation. Management Science 50 (11), 1587–1602.
Bos, N., Olson, J.S., Olson, G.M., Wright, Z., Gergle, D., 2002. Rich media helps trust development. CHI2002 Conference Proceedings. ACM Press, New York, NY.
Brenkert, G.G., 1998. Trust, morality and international business. Business Ethics Quarterly 8 (2), 293–317.
Brosig, J., Ockenfels, A., Weimann, J., 2002. The effects of communication media on cooperation. German Economic Review 4 (2), 217–241.
Brynjolfsson, E., Smith, M., 2000. Frictionless commerce? A comparison of internet and conventional retailers. Management Science 46 (4), 563–585.
Checkland, P., 1999. Soft Systems Methodology. A 30-year Retrospective. Wiley, Chichester.
Cheng, L., Farnham, S., Stone, L., 2002. Lessons learned: building and deploying shared virtual environments. In: Schroeder, R. (Ed.), The Social Life of Avatars. Springer, London, pp. 90–111.
Coleman, J., 1988. Social Capital in the Creation of Human Capital. American Journal of Sociology 94, Supplement: Organizations and Institutions: Sociological and Economical Approaches to the Analysis of Social Structure, S95–S120.
Consumer Web Watch, 2002. A Matter of Trust: What Users Want From Web Sites. http://www.consumerwebwatch.org/news/report1.pdf
Corritore, C.L., Kracher, B., Wiedenbeck, S., 2003. On-line trust: concepts, evolving themes, a model. International Journal of Human Computer Studies 58 (6), 737–758.
Daft, R.L., Lengel, R.H., 1986. Organizational Information Requirements, Media Richness and Structural Design. Management Science 32, 554–571.
Damasio, A.R., 1994. Descartes' Error: Emotion, Reason and the Human Brain. Avon, New York, NY.
Dasgupta, P., 1988. Trust as a Commodity. In: Gambetta, D. (Ed.), Trust. Making and Breaking Cooperative Relations. Basil Blackwell, Oxford, pp. 49–71.
Davis, F., 1989. Perceived usefulness, perceived ease of use and user acceptance of information technology. MIS Quarterly 13, 319–340.
Davis, J.P., Farnham, S.D., Jensen, C., 2002. Decreasing online 'bad' behavior. In: Extended Abstracts CHI2002. ACM Press, New York, NY, pp. 718–719.
Deci, E.L., 1992. A History of Motivation in Psychology and Its Relevance for Management. In: Vroom, V.H., Deci, E.L. (Eds.), Management and Motivation, second ed. Penguin Books, London, pp. 9–33.
Dellarocas, C., Resnick, P., 2003. Reputation Systems Symposium. http://www.si.umich.edu/presnick/reputation/symposium/
Deutsch, M., 1958. Trust and suspicion. Journal of Conflict Resolution 2 (3), 265–279.
Diekmann, A., Lindenberg, S., 2001. Sociological aspects of cooperation. In: Smelser, N.J., Baltes, P.B. (Eds.), International Encyclopedia of the Social & Behavioral Sciences. Elsevier, Oxford, pp. 2751–2756.
Döring, N., 1998. Sozialpsychologie des Internet. Hogrefe, Göttingen.
Dzindolet, M.T., Peterson, S.A., Pomranky, R.A., Pierce, L.G., Beck, H.P., 2003. The role of trust in automation reliance. International Journal of Human Computer Studies 58, 697–718.
Economist, 2004. Branching Out. Economist.
Egger, F.N., 2001. Affective Design of E-Commerce User Interfaces: How to maximise perceived trustworthiness. In: Helander, M., Khalid, H.M., Tham (Eds.), Proceedings of CAHD: Conference on Affective Human Factors Design. ACM Press, New York, NY, pp. 317–324.
Ely, J.C., Fudenberg, D., Levine, D.K., 2004. When is Reputation Bad? Harvard Institute of Economic Research Discussion Paper, 2035.
Fehr, E., Fischbacher, U., 2003. The nature of human altruism. Nature 425, 785–791.
Fogg, B.J., 2003a. Persuasive Technology. Using Computers to Change What We Think and Do. Morgan Kaufmann, San Francisco, CA.
Fogg, B.J., 2003b. Prominence-Interpretation Theory: Explaining How People Assess Credibility Online. In: CHI2003 Extended Abstracts. ACM Press, New York, NY, pp. 722–723.
Friedman, J.W., 1977. Oligopoly and the Theory of Games. North-Holland Publishers, Amsterdam.
Fukuyama, F., 1995. Trust. Free Press, New York.

Fukuyama, F., 1999. Social Capital and the Civil Society. In: Second Conference on Second Generation Reforms. IMF, Washington, DC.
Garau, M., Slater, M., Vinayagamoorthy, V., Brogni, A., Steed, A., Sasse, M.A., 2003. The Impact of Avatar Realism and Eye Gaze Control on Perceived Quality of Communication in a Shared Immersive Virtual Environment. In: Proceedings of CHI2003. ACM Press, New York, NY, pp. 529–536.
Gatti, R., Chirmiciu, A., Kattuman, P., Morgan, J., 2004. Price vs. Location: Determinants of demand at an online price comparison site. Trust and Triviality: Where is the Internet going? Conference at University College London, London, UK, 12 November 2004.
Giddens, A., 1990. The consequences of modernity. Stanford University Press, Stanford.
Glaeser, E.L., Laibson, D., Scheinkman, J.A., Soutter, C.L., 2000. Measuring Trust. Quarterly Journal of Economics 65, 811–846.
Goffman, E., 1959. The Presentation of Self in Everyday Life. Doubleday, Garden City.
Grabner-Kraeuter, S., Kaluscha, E.A., 2003. Empirical Research in on-line trust: a review and critical assessment. International Journal of Human Computer Studies 58, 783–812.
Granovetter, M.S., 1973. The Strength of Weak Ties. American Journal of Sociology 78, 1360–1380.
Granovetter, M.S., 1985. Economic Action and Social Structure: The Problem of Embeddedness. American Journal of Sociology 91, 481–510.
Hails, M., 2003. Gamertag: Gecko Dance. A Newbie's Diary. All Xbox GameWatcher: Live Wire. http://www.allxbox.com/news/stories/livewire122602b.asp
Handy, C., 1995. Trust and the virtual organization. Harvard Business Review 73 (3), 40–50.
Harper, R.G., Wiens, A.N., Matarazzo, J.D., 1978. Nonverbal Communication: The State of the Art. Wiley, New York, NY.
Hinton, P.R., 1993. The Psychology of Interpersonal Perception. Routledge, London.
Hirshleifer, J., Riley, J.G., 1979. The analytics of uncertainty and information: an expository survey. Journal of Economic Literature 17 (4), 1374–1421.
Horn, D.B., Olson, J.S., Karasik, L., 2002. The effects of spatial and temporal video distortion on lie detection performance. In: CHI2002 Extended Abstracts. ACM Press, New York, NY, pp. 716–718.
Huberman, B.A., Fang, W., 2004. Dynamics of Reputations. Journal of Statistical Mechanics: Theory and Experiment P04006.
Jarvenpaa, S.L., Tractinsky, N., 1999. Consumer trust in an Internet store: a cross-cultural validation. Journal of Computer Mediated Communication 5 (2).
Jensen, C., Farnham, S.D., Drucker, S.M., Kollock, P., 2000. The effect of communication modality on cooperation in online environments. In: CHI2000 Conference Proceedings. ACM Press, New York, NY, pp. 470–477.
Jensen, C., Davis, J., Farnham, S., 2002. Finding others online: reputation systems for social online spaces. In: Proceedings of CHI2002. ACM Press, New York, NY, pp. 447–454.
Jensen, C.D., Poslad, S., Dimitrakos, T., 2004. Trust Management, Second International Conference, iTrust 2004, Oxford, UK, March 29–April 1, 2004, Proceedings. Lecture Notes in Computer Science, vol. 2995. Springer, Berlin.
Keser, C., 2002. Trust and Reputation Building in E-Commerce. CIRANO Working Paper 2002-75. CIRANO—Centre interuniversitaire de recherche en analyse des organisations, Montreal.
Knight, F.H., 1921. Risk, Uncertainty, and Profit. Hart, Schaffner & Marx, Boston, MA.
Koehn, D., 2003. The nature of and conditions for online trust. Journal of Business Ethics 43, 3–19.
Kollock, P., 1998. Social dilemmas: the anatomy of cooperation. Annual Review of Sociology 24, 183–214.
Kotler, P., 2002. Marketing Management, 11th ed. Prentice Hall, Englewood Cliffs, NJ.
Lahno, B., 2002. Institutional Trust: A Less Demanding Form of Trust? Revista Latinoamericana de Estudios Avanzados (RELEA), Caracas.
Landauer, T.K., 1996. The Trouble with Computers: Usefulness, Usability, and Productivity. MIT Press, Cambridge, MA.
Lee, J., 2002. Making Losers of Auction Winners. New York Times.
Lee, M.K.O., Turban, E., 2001. A trust model for consumer Internet shopping. International Journal of Electronics Commerce 6 (1), 75–91.

Lewicki, R.J., Bunker, B.B., 1996. Developing and Maintaining Trust in Work Relationships. In: Kramer, R.M., Tyler, T.R. (Eds.), Trust in Organizations. Frontiers of Theory and Research. Sage, Thousand Oaks, CA, pp. 114–136.
Lewis, J.D., Weigert, A., 1985. Trust as a social reality. Social Forces 63, 967–985.
Lombard, M., Ditton, T., 1997. At the heart of it all: the concept of presence. Journal of Computer Mediated Communication 3 (2).
Luhmann, N., 1979. Trust and Power. Wiley, Chichester.
Macaulay, S., 1963. Non-contractual relations in business: a preliminary study. American Sociological Review 28 (1), 55–67.
Mayer, R.C., Davis, J.H., Schoorman, F.D., 1995. An integrative model of organizational trust. Academy of Management Review 20 (3), 709–734.
McAllister, D.J., 1995. Affect- and cognition-based trust as foundations for interpersonal cooperation in organizations. Academy of Management Journal 38 (1), 24–59.
McCarthy, J.D., Riegelsberger, J., 2004. The designer's dilemma: approaches to the free rider problem in knowledge sharing systems. In: COOP 2004, Proceedings of the Sixth International Conference on the Design of Cooperative Systems, May 2004, French Riviera, France.
McKnight, D.H., Chervany, N.L., 2000. What is trust? A conceptual analysis and an interdisciplinary model. In: AMCIS 2000—Proceedings of the American Conference on Information Systems, pp. 827–833.
McKnight, D.H., Chervany, N.L., 2001. What trust means in e-commerce customer relationships: an interdisciplinary conceptual typology. International Journal of Electronic Commerce 6 (2), 35–59.
Meyerson, D., Weick, K.E., Kramer, R.M., 1996. Swift trust and temporary groups. In: Kramer, R.M., Tyler, T.M. (Eds.), Trust in Organizations. Frontiers of Theory and Research. Sage, Thousand Oaks, CA, pp. 166–195.
Milewski, A.E., Lewis, S.H., 1997. Delegating to software agents. International Journal of Human Computer Studies 46 (4), 485–500.
Muir, B.M., 1987. Trust between humans and machines, and the design of decision aids. International Journal of Man-Machine Studies 27, 527–539.
Nielsen, J., 1999. Trust or Bust: Communicating Trustworthiness in Web Design. http://www.useit.com/alertbox/990307.html
Nielsen, J., Molich, R., Snyder, S., Farrell, C., 2000. E-Commerce User Experience: Trust. Nielsen Norman Group.
Norman, D.A., 2004. Emotional Design: Why We Love (Or Hate) Everyday Things. Basic Books, New York.
O'Doherty, J., Kringelbach, M.L., Rolls, E.T., Hornak, J., Andrews, C., 2001. Abstract reward and punishment representations in the human orbitofrontal cortex. Nature Neuroscience 4 (1), 95–102.
O'Doherty, J., Winston, J.S., Critchley, H., Perrett, D., Burt, D.M., Dolan, R.J., 2003. Beauty in a smile: the role of medial orbitofrontal cortex in facial attractiveness. Neuropsychologia 41 (2), 147–155.
O'Neill, O., 2002. A Question of Trust. The Reith Lectures 2002. Cambridge University Press, Cambridge.
Olson, J.S., Zheng, J., Bos, N., Olson, G.M., Veinott, E., 2002. Trust without touch: jumpstarting long-distance trust with initial social activities. In: Proceedings of CHI2002. ACM Press, New York, NY, pp. 141–146.
Pitt, J., 2005. The Open Agent Society. Wiley, London.
Porter, M.E., 1985. Competitive Advantage. Free Press, New York.
Poundstone, W., 1993. Prisoner's Dilemma. Anchor Books, New York.
Putnam, R.D., 2000. Bowling Alone: The Collapse and Revival of American Community. Simon & Schuster, New York.
Ratnasingam, P., Pavlou, P.A., 2004. Technology trust in internet-based interorganizational electronic commerce. Journal of Electronic Commerce in Organizations 1 (1), 17–41.
Raub, W., Weesie, J., 2000a. The Management of Durable Relations. Thela Thesis, Amsterdam.
Raub, W., Weesie, J., 2000b. The management of matches: a research program on solidarity in durable social relations. Netherlands' Journal of Social Sciences 36, 71–88.

Reeves, B., Nass, C., 1996. The Media Equation: How People Treat Computers, Television, and New Media Like Real People and Places. CSLI Publications, Stanford.
Reichheld, F.F., Schefter, P., 2000. E-loyalty: your secret weapon on the web. Harvard Business Review 78, 105–113.
Resnick, P., 2002. Beyond bowling together: sociotechnical capital. In: Carroll, J.M. (Ed.), HCI in the New Millennium. Addison-Wesley, Boston, MA, pp. 247–272.
Rice, R.E., 1992. Task analyzability, use of new medium and effectiveness: a multi-site exploration of media richness. Organization Science 3 (4), 475–500.
Rickenberg, R., Reeves, B., 2000. The effects of animated characters on anxiety, task performance, and evaluations of user interfaces. In: Proceedings of CHI2000. ACM Press, New York, NY, pp. 49–56.
Riegelsberger, J., Sasse, M.A., 2001. Trustbuilders and trustbusters: the role of trust cues in interfaces to e-commerce applications. In: Schmid, B., Stanoevska-Slabeva, K., Tschammer, V. (Eds.), Towards the E-Society: E-commerce, E-Business and E-Government. Kluwer, Norwell, pp. 17–30.
Riegelsberger, J., Sasse, M.A., 2002. Face it: photographs don't make websites trustworthy. In: CHI2002 Extended Abstracts. ACM Press, New York, NY, pp. 742–743.
Riegelsberger, J., Sasse, M.A., 2003. Designing e-commerce applications for consumer trust. In: Petrovic, O., Ksela, M., Fallenboeck, M. (Eds.), Trust in the Network Economy. Springer, Wien, pp. 97–110.
Riegelsberger, J., Sasse, M.A., McCarthy, J., 2003. The researcher's dilemma: evaluating trust in computer-mediated communication. International Journal of Human Computer Studies 58 (6), 759–781.
Ring, P.S., Van de Ven, A.H., 1994. Developmental process of cooperative interorganizational relationships. Academy of Management Review 19 (1), 90–118.
Rocco, E., 1998. Trust breaks down in electronic contexts but can be repaired by some initial face-to-face contact. In: CHI98 Conference Proceedings. ACM Press, New York, NY, pp. 496–502.
Rocco, E., Finholt, T.A., Hofer, E.C., Herbsleb, J.D., 2000. Designing as if trust mattered. Collaboratory for Research on Electronic Work (CREW) Technical Report.
Rousseau, D.M., Sitkin, S.B., Burt, R.S., Camerer, C., 1998. Not so different after all: a cross-discipline view of trust. Academy of Management Review 23 (3), 393–404.
Sally, D., 1995. Conversation and cooperation in social dilemmas. A meta-analysis of experiments from 1958 to 1992. Rationality and Society 7 (1), 58–92.
Sapient, 1999. eCommerce Trust. http://www.cheskin.com/think/studies/ecomtrust.html
Schunter, M., Waidner, M., Whinett, D., 1999. The SEMPER framework for secure electronic commerce. Wirtschaftsinformatik 41 (3), 238–247.
Shapiro, S.P., 1987. The social control of impersonal trust. American Journal of Sociology 93 (3), 623–658.
Shneiderman, B., 2000. Designing trust into online experiences. Communications of the ACM 43 (12), 57–59.
Short, J., Williams, E., Christie, B., 1976. The Social Psychology of Telecommunications. Wiley, London.
Simon, H.A., 1982. Models of Bounded Rationality, Vols. 1 and 2. MIT Press, Cambridge, MA.
Sitkin, S.B., Roth, N.L., 1993. Explaining the limited effectiveness of legalistic "remedies" for trust/distrust. Organization Science 4 (3), 367–392.
Sober, E., 1981. The principle of parsimony. British Journal for the Philosophy of Science 32, 145–156.
Solomon, R.C., Flores, F., 2001. Building Trust in Business, Politics, Relationships, and Life. Oxford University Press, New York.
Swerts, M., Krahmer, E., Barkhuysen, P., van de Laar, L., 2004. Audiovisual cues to uncertainty. In: Proceedings of the ISCA Workshop on Error Handling in Spoken Dialogue Systems, Chateau-D'Oex, Switzerland, 2003.
Tan, Y., Thoen, W., 2000. Toward a generic model of trust for electronic commerce. International Journal of Electronics Commerce 5, 61–74.
Uslaner, E.M., 2002. The Moral Foundations of Trust. Cambridge University Press, Cambridge.
Winston, J.S., Strange, B., O'Doherty, J., Dolan, R.J., 2002. Automatic and intentional brain responses during evaluation of trustworthiness of faces. Nature Neuroscience 5 (3), 277–283.

Witkowski, M., Artikis, A., Pitt, J., 2000. Experiments in building trust in a society of objective-trust based agents. In: Falcone, R., Singh, M., Tan, Y.H. (Eds.), Trust in CyberSociety: Integrating Human and Machine Perspective. Springer, Berlin, pp. 111–132.
Yamagishi, T., Yamagishi, M., 1994. Trust and commitment in the United States and Japan. Motivation and Emotion 18, 129–166.
Zajonc, R.B., 1980. Feeling and thinking: Preferences need no inferences. American Psychologist 35, 151–175.
Zimmerman, J., Kurapati, K., 2002. Exposing profiles to build trust in a recommender. In: CHI2002 Extended Abstracts. ACM, New York, pp. 608–609.
Zucker, L.G., 1986. Production of trust: Institutional sources of economic structure, 1840–1920. Research in Organizational Behavior 8, 53–111.