All in due time: The development of trust in computer-mediated and face-to-face teams ☆

Organizational Behavior and Human Decision Processes 99 (2006) 16–33
www.elsevier.com/locate/obhdp
doi:10.1016/j.obhdp.2005.08.001

Jeanne M. Wilson a,*, Susan G. Straus b, Bill McEvily c,1

a The College of William and Mary, School of Business Administration, P.O. Box 8795, Williamsburg, VA 23187-8795, USA
b RAND, 201 N. Craig St., Suite 202, Pittsburgh, PA 15213, USA
c Tepper School of Business, Carnegie Mellon University, 5000 Forbes Avenue, Pittsburgh, PA 15213, USA

Received 21 March 2002; available online 19 September 2005

Abstract

This study examines the development of trust and cooperation in computer-mediated and face-to-face teams. Fifty-two three-person teams worked on a mixed-motive task over a 3-week period using computer-mediated or face-to-face interaction. Results showed that trust started lower in computer-mediated teams but increased to levels comparable to those in face-to-face teams over time. Furthermore, this pattern of results also held for teams that switched from face-to-face to electronic media and vice versa. Content analysis showed that high levels of inflammatory remarks were associated with slow trust development in computer-mediated teams. The results challenge prevailing assumptions about how trust develops in distributed teams and suggest modifications to established theories of computer-mediated communication.
© 2005 Published by Elsevier Inc.

Keywords: Trust; Computer-mediated communication; Distributed groups

☆ We thank Joe Walther, Sara Kiesler, Kurt Dirks, Ray Reagans, Denise Rousseau, Dave Harrison and three anonymous reviewers for their helpful comments on the paper. A preliminary version of this paper was presented at the Academy of Management conference in Chicago, August, 1999.
* Corresponding author. E-mail address: [email protected] (J.M. Wilson).
1 Present address: Joseph L. Rotman School of Management, University of Toronto, 105 St. George Street, Toronto, Ontario, Canada M5S 3E6.

Introduction

Most forms of team work involve interdependence; team members must rely on each other in various ways to accomplish personal and team goals. With the advent of distributed teams, trust becomes a more salient issue (Lawler, 1992; Mayer, Davis, & Schoorman, 1995). Trust is essential to the loose coupling that allows distributed groups to work, but it is tight coupling (dense relations and the ability to observe each other) that is often


assumed to be the necessary ingredient for trust to develop (Burt & Knez, 1996; Coleman, 1990). This represents a critical paradox for distributed group work. If members of distributed groups are going to engage in cooperative activities, they must either trust each other or be able to monitor each other (Ouchi, 1981). When members are working in different locations and interacting primarily by telephone or computer, most traditional forms of monitoring and control are not feasible. Team members cannot, for instance, observe the amount of effort others are expending or overhear what team members say when they are interacting with others. This "behavioral invisibility" is likely to be associated with added risks such as cheating, neglect of others' interests, and mis-anticipation of others' actions (Sheppard & Sherman, 1998), which can undermine the development of trust.

Lack of trust among team members is problematic because it is typically associated with added costs that can translate into decreased team effectiveness.


When members of a team do not trust each other, they are likely to expend additional time and effort monitoring one another, backing up or duplicating each others' work, and documenting problems (Ashforth & Lee, 1990). Team members engaging in monitoring and defensive behavior have fewer resources to devote to the primary team task, which can result in productivity losses (McAllister, 1995). Lack of trust can also adversely affect members' satisfaction with the team and their willingness to continue working with the team (Golembiewski & McConkie, 1975). Similarly, without a foundation of trust, members are unlikely to openly share information about problems, which can interfere with their ability to learn from team experiences (Zucker, Darby, Brewer, & Peng, 1996).

Although research on groups has increasingly substantiated the notion that trust plays a critical role in influencing group effectiveness, there has been relatively little systematic investigation of the determinants of trust in groups, particularly in distributed groups where the development of trust may be more challenging. In fact, trust has been identified as the defining issue in understanding the effectiveness of distributed groups (Handy, 1995; Poole, 1999).

Trust is presumed to be easier to generate and sustain when people are spatially clustered (Lewicki & Bunker, 1996) because co-location permits greater knowledge of others and aids in the formation of collective identity. For instance, Burt and Knez (1996) showed that trust was highest among contacts that met face-to-face on a daily basis compared to those that met less often. Similar findings have been observed in the context of a prisoner's dilemma game where the greatest levels of trust occurred among participants in close proximity, especially when they could see each other (Wichman, 1970). This suggests that visual isolation inhibits the development of trust. Being able to detect and interpret behavioral clues that reveal intentions, referred to as translucence, plays an important role in developing trust and cooperation (Orbell & Dawes, 1991). Similarly, "telltale signs" such as facial expressions and voice tone reveal intentions and make cooperation possible (Frank, 1993, p. 165). When individuals are spatially dispersed, the social information upon which interpersonal trust is based is less readily available (Zucker, 1986).

The purpose of this study is to test theoretical arguments about the impact of decreased social information in distributed (computer-mediated) teams on the development of trust. The prevailing assumption in the literature is that reduced social information changes the fundamental communication process and constrains relational development in distributed groups (a perspective labeled "Cues-Filtered-Out" by Culnan & Markus, 1987). We concur that social information is critical to the development of trust in groups.


However, drawing on Walther's (1992) Social Information Processing theory (which is distinct from the social information processing theory articulated by Salancik & Pfeffer, 1978), we argue that trust in distributed groups develops in the same way as it does in co-located groups, with one important exception. It takes longer for trust to develop in computer-mediated groups because it requires more time for members of those groups to exchange social information. Whereas our predictions differ from those of the dominant perspective, they begin with the same underlying assumption that there is initially less social information in electronically mediated communication. Because time is a conduit for information (Harrison, Price, Gavin, & Florey, 2002), examining patterns of social information and trust in distributed teams over time is critical.

Nevertheless, past research on trust in distributed groups has been inconclusive because the research designs used were not well-suited to assess directly the effect of electronic communication media on trust development. To understand whether the development of trust in distributed teams differs from that in face-to-face teams, studies must: (1) accurately measure trust; (2) measure trust over time; and (3) examine teams communicating via different media. Although other studies have incorporated one or more of these design features, we are unaware of any other research that combines all three in a single study. For instance, some studies compare different media over time, but examine relational outcomes other than trust (e.g., Walther, 1996, who studied receptivity). Other studies on trust in groups have relied on one-time meetings (e.g., Alge, Weithoff, & Klein, 2003; Moore, Kurtzberg, Thompson, & Morris, 1999; Naquin & Paulson, 2003), whereas still others have examined groups in only one medium (e.g., Jarvenpaa, Knoll, & Leidner, 1998; Langfred, 2004; Paul & McDaniel, 2004; Weisband & Iacono, 1997). Although trust in groups has been empirically linked to important outcomes like performance (Dirks, 2000), the factors influencing the development of trust in groups are relatively unexplored. Consequently, there is little evidence about how trust development differs in computer-mediated versus face-to-face teams or how such interactions change over time.

We also go beyond the existing literature by making predictions about the effects of combinations and order of media use over time on the development of trust in distributed teams. That is, does it make a difference for the development of trust if teams begin working face-to-face and then switch to electronically mediated modes, or vice versa? Although some have argued that relational development can be accelerated by having distributed teams start with a face-to-face meeting, this idea is rooted in conventional wisdom (Ohara-Devereaux & Johansen, 1994; Solomon, 2001) and academic discussion (Kraut, Galagher, & Egido, 1988) rather than systematic analyses of how changing media affects relational development in teams.



Finally, our study also advances research by investigating the mechanisms theorized to account for the development of trust in distributed teams. Specifically, we conduct a content analysis of team interactions and examine how communication patterns are associated with trust development in distributed teams and face-to-face teams.

Trust defined

Distributed group work increases the tension between the two key components of trust: risk and reliance (Gambetta, 1988; Rousseau, Sitkin, Burt, & Camerer, 1998). Risk means that group members could experience negative outcomes, such as the loss of time or recognition, due to the behavior of other group members (e.g., not following through on commitments, leaking confidential information) (March & Shapira, 1987). Reliance occurs when group members allow their fate to be determined by other members of the group (Zand, 1972), which is a necessary condition of any interdependent group work. Trust gives group members the confidence to take risks and act without concern that other group members will take advantage of them (McAllister, 1995).

For the purposes of this paper we define trust as confident positive expectations about the conduct of another (Lewicki, McAllister, & Bies, 1998). Such positive expectations have been linked to a willingness to be vulnerable to the actions of another party, both from a theoretical (Mayer et al., 1995; Rousseau et al., 1998) and empirical standpoint (Mayer & Davis, 1999). Prior research has emphasized that trust consists of different dimensions, including cognitive trust, or beliefs about others' competence and reliability, and affective trust, which arises from emotional ties among group members and reflects beliefs about reciprocated care and concern (McAllister, 1995). At the same time, the theories of relational development in technology-mediated groups that we draw on suggest that cognitive and affective trust should develop in a similar fashion. Consequently, we treat these forms of trust as the same in the development of our hypotheses and empirically investigate the effects of communication media on each form of trust.

Relational development in technology-mediated groups

The dominant perspective on relational development in distributed groups (i.e., Cues-Filtered-Out) argues that limited social information in electronic communication alters the nature of the interaction and constrains the development of interpersonal relationships. For example, Sproull and Kiesler (1986) note that in comparison to face-to-face interaction, computer-mediated communication reduces "social context cues," such as non-verbal information and status cues. Consequently, participants feel more anonymous and are focused more on themselves and less on others

(Kiesler, Siegel, & McGuire, 1984). When these social context cues are missing, it leads to increased depersonalization, lower cohesiveness, and less social conformity (Kiesler, Zubrow, Moses, & Geller, 1985; Short, Williams, & Christie, 1976). Such lower levels of social control are often associated with lower levels of interpersonal trust (Rousseau et al., 1998). This perspective suggests that computer-mediated groups should exhibit a lower incidence of behaviors associated with both cognitive- and affect-based trust, such as sharing and attending to personal information. Therefore, according to this viewpoint, groups interacting face-to-face should exhibit higher levels of trust than groups interacting electronically.

Although this has been the dominant perspective in both the academic (Fulk, Schmidt, & Schwarz, 1992; Walther, Anderson, & Park, 1994) and practitioner (Handy, 1995) literatures on distributed groups, research support for this perspective has been mixed. Consistent with the dominant perspective, a large number of empirical studies have found that interaction in computer-mediated groups is more task-oriented and less personal than interaction in face-to-face groups (Connolly, Jessup, & Valacich, 1990; Hiltz, Johnson, & Turoff, 1986; Rice, 1984). Additional support for the dominant view can be found in studies reporting evidence for more uninhibited communication and more equal status effects in computer-mediated groups (Dubrovsky, Kiesler, & Sethna, 1991; Kiesler et al., 1985; Siegel, Dubrovsky, Kiesler, & McGuire, 1986), with these effects attributed to the anonymity and self-focused attention associated with reduced social information.

More recent research has been less supportive, however. For instance, some studies have found a higher proportion of task-oriented communication and positive interpersonal acts in computer-mediated groups (Straus, 1997; Walther, 1996). Studies have also failed to support reduced inhibitions as the cause of apparently equal participation effects in computer-mediated communication. For instance, Straus (1996) found that equal participation was attributed to a ceiling effect on the amount of communication in electronic discussions; she further found that individual differences in extraversion predicted levels of participation in computer-mediated conferencing. In addition, most of the research supporting the dominant view has been based on one-time interactions in laboratory settings. Field studies, which often involve interaction over longer periods of time, have shown more positive outcomes for the computer-mediated groups in terms of affective reactions and group process variables (Abel, 1990; Dennis, Heminger, Nunamaker, & Vogel, 1990; Eveland & Bikson, 1989; Parks & Roberts, 1998).


We suggest that one reason for these mixed results is that the dominant perspective does not explicitly address the role of time in the development of technology-mediated relationships. Examining patterns of interaction over time is fundamental to understanding both groups (McGrath, 1984) and trust (Tyler & Kramer, 1996). Because much of the research in support of the dominant perspective has relied on one-time meetings of groups, there is little evidence about how such interactions change over time. By examining the development of trust in teams from a longitudinal perspective, we use time as a mechanism to address the tensions and contradictions in research on computer-mediated communication (Poole & Van de Ven, 1989).

One approach that does incorporate the role of time in the relational development of distributed groups is Walther's (1992) Social Information Processing theory. This social information processing model assumes that people are driven to develop social relationships, regardless of their mode of interaction (e.g., computer-mediated or face-to-face). Walther argues that the process of developing social relationships is the same across media; in all cases, people test their assumptions about others over time, refine their impressions, and adjust their relational communication. However, because it takes longer to type versus speak, and because the visual cues associated with face-to-face contact are absent (Frank, 1993; Wichman, 1970), computer-mediated groups operate at different rates of social information exchange (Walther, 1995, 1996). Computer-mediated groups can take up to four times as long to exchange the same number of messages as face-to-face groups (Dubrovsky et al., 1991; Weisband, 1992).

Indeed, a number of studies have supported the prediction that computer-mediated communication affects the rate of information exchange. In studies of status effects and communication media, computer-mediated communication reduced information exchange by both high and low status members (Hollingshead, 1996), rather than simply filtering the cues about status. Straus (1996) found similar results for extraversion in computer-mediated versus face-to-face groups. Other research on computer-mediated groups has also demonstrated improvements in interpersonal relationships over time that are consistent with the gradual acquisition of social information. In one lab experiment with undergraduates, face-to-face groups had higher initial levels of cohesiveness than groups using a group decision support system, but the computer-mediated groups' cohesiveness increased to exceed that of the face-to-face groups in the final 2 weeks (Chidambaram, Bostrom, & Wynne, 1991). In a longitudinal study by Walther and Burgoon (1992), computer-mediated groups had increased levels of intimacy and affection, reduced dominance, and more social communication over time. Walther (1993) found similar time effects for the development of interpersonal impressions. Walther's social information processing perspective is also consistent with theories about group development.


According to models of group development, trust develops over time when communication in the group becomes more mature and task-oriented (Bales, 1953; Tuckman, 1965). Specifically, trust develops when groups move beyond early stages of development in which members feel uncertain or anxious and even argumentative and critical (Chang, Bordia, & Duck, 2003). In the context of distributed groups, where the rate of social information exchange is attenuated, it may take longer for groups to move through stages of uncertainty and conflict to achieve trust.2

We draw on these findings about the different rate of social information exchange in computer-mediated teams to formulate predictions about the development of trust in such teams. Because information exchange is a key determinant of trust (Williams, 2001), we argue that it will take longer for trust to develop in computer-mediated teams. Because members of face-to-face teams use nonverbal cues to develop interpersonal relationships, trust should develop more quickly in face-to-face teams.3 The greater amounts of social information associated with visual cues in face-to-face communication are thought to reduce discomfort, increase predictability, and raise the level of affection for others (Berger & Calabrese, 1975; Lawrence & Mongeau, 1996). However, in computer-mediated teams, which have only text-based cues, relational development occurs more gradually, requiring more exchanges to reach the levels achieved in face-to-face teams. Thus, we expect trust to develop in computer-mediated teams, but more gradually than it does in face-to-face teams. For these reasons, we predict:

Hypothesis 1. There will be an interaction of time and media such that (a) initial levels of trust will be higher in face-to-face versus computer-mediated teams, and (b) time will neutralize the effects of communication media.

Combinations and order of media use

The previous predictions pertain to teams that interact via one mode of communication or the other (i.e., in face-to-face or computer-mediated arrangements). In practice, work teams routinely use multiple modes of communication media to accomplish their tasks (Griffith, Sawyer, & Neale, 2003; Mankin, Cohen, & Bikson, 1996). Yet past research designs have made it difficult to pinpoint the effects of changes in communication media.4

2 The importance of time in changing the predictors of group dynamics has been amply demonstrated in research on face-to-face groups. Several studies have demonstrated that other potentially divisive influences (such as surface-level diversity; Harrison et al., 2002; Pelled, Eisenhardt, & Xin, 1999) become less potent over time.
3 Trust does not increase indefinitely: it increases asymptotically up to a plateau (Herrig, 1930).



We believe that examining changes in media order can increase the generalizability of our findings and provide further insights into the robustness of our predictions. By extending the basic premises of Walther's Social Information Processing theory, we derive predictions about the effect of changing communication media on the level of trust within teams. As noted previously, face-to-face contact simply increases the rate of acquisition of social information and therefore should result in an increase in the development of trust. In teams that start with face-to-face interaction, all other things being equal, trust levels should be higher at the end of the initial group meeting compared to teams that start electronically. Switching to electronic interaction will cause the rate at which trust develops to decrease, but will not cause an absolute decline in the level of trust. In teams that start with computer-mediated interaction, trust levels should start low, but increase with face-to-face contact. This prediction is consistent with mounting evidence that, over time, communication partners do disclose and seek individuating information on-line, and that people do form close, personal relationships with others at a distance (Parks & Roberts, 1998; Walther, 1996).

As other research has demonstrated with group performance (Hollingshead, McGrath, & O'Connor, 1993), differences between groups using face-to-face versus computer-mediated communication diminish over time. Increased experience with the other group members and the technology should decrease ambiguity, increase predictability, and generally reduce information exchange requirements. For these reasons, we expect that both computer-mediated and face-to-face teams will develop comparable levels of trust over time. Accordingly, we predict:

Hypothesis 2. Team members that meet face-to-face following electronic interaction will experience an increase in the rate of trust development, whereas team members that meet electronically after interacting face-to-face will experience a decrease in the rate of trust development but not in the level of trust.

4 Previous work has examined groups that either started with face-to-face contact or computer-mediated communication. However, in several of these cases, the order of media communication was either randomly assigned (Weisband, 1992) or controlled in a Latin Square design (Dubrovsky et al., 1991), making it impossible to assess the influence of the initial media condition. Other experiments using a series of face-to-face and computer-mediated conditions have changed the membership of the groups from one task to the next, so that it is not possible to determine the effect of a particular starting condition on a particular group (McGrath, 1993; Weisband, Schneider, & Connolly, 1995). In other cases, the members of the distributed groups were often students drawn from the same class (Galegher & Kraut, 1994; McGrath, 1993; Weisband, 1992), with the result that members of computer-mediated groups had some prior information about each other and an expectation that they would see each other on a regular basis. Thus, while previous studies have provided insights into the effects of changes in communication media on relational development in groups, each is limited from a research design standpoint.

We examine the development of trust in a context where trust is critical: with team members who were previously unacquainted (and who therefore lack the familiarity that would lead to confidence or predictable behavior), performing a task without established role expectations, and with little ability to impose sanctions on one another. This constellation of conditions is becoming increasingly prevalent, heightening the importance of trust in interactions (Seligman, 1998). These conditions are also increasingly common in new forms of groups that span organizational boundaries (Goodman & Wilson, 2000) and ad hoc project teams that lack a shared history (Fulk & DeSanctis, 1995).

To test our predictions, we examine: (1) trust and cooperation in teams; (2) changes in trust over time; (3) comparisons of different communication media; (4) comparisons of different combinations and orderings of media; and (5) specific interaction patterns that contribute to any differential development of trust over time.

Method

Participants

One hundred and fifty-six undergraduate students from a large urban university participated in this study for course credit. Team members were randomly selected from a pool of participants (who had indicated that they were available to meet at particular times) to form fifty-two three-person teams. Each team met three times over a period of 3 weeks. Before teams met, participants were asked to indicate whether they knew any of the other members of the assigned team. Members were replaced in three teams because some of the randomly assigned members were previously acquainted. Participants ranged in age from 17 to 45, with a mean age of 20. Forty-seven percent of the sample was female. The majority of participants were business majors.

Media manipulation

Teams were randomly assigned to one of four conditions: (FFF) a series of three face-to-face meetings, (FEE) an initial face-to-face meeting followed by two electronic meetings, (EEE) a series of three electronic meetings, and (EFF) an initial electronic meeting followed by two face-to-face meetings. At the end of each meeting, teams were informed about the medium for their next meeting.


All electronic meetings were conducted using a synchronous, text-only chat tool. Teams participating in electronic meetings logged onto a chat website created for that meeting. Each member connected to the site from a different remote location.

Task

Teams completed a mixed-motive task involving the two critical elements for trust: risk and reliance. The risk in the task came from individual team members turning over personal resources for team disposal. The reliance came from an advantage accruing to the use of combined (as opposed to individual) resources.

Each team was asked to make stock selection decisions. The individual or team with the portfolio that earned the most money at the end of the experiment received $300. Each team member was given a fixed number of tokens at the beginning of the experiment. The tokens were used to "purchase" stocks from the current NYSE and NASDAQ lists. Each week, teams were asked to collectively identify three stocks, from among 12 possible stock alternatives, for further research. The choices of stocks differed each week. Teams could structure the research task in one of two ways: each member could research one of the three selected stocks, or the three team members could individually research all three stocks.

The following week, after team members shared their research results on the three selected stock alternatives, the members were given an opportunity to spend their tokens among the three stocks. If all team members spent their tokens on the same single stock, the team was able to purchase an additional share of the stock, which was divided equally among the team members' individual portfolios. The token spending decision was private, and it was possible for team members to defect (ostensibly agree to devote tokens to the team stock pick, but actually spend tokens on a different stock or stocks). There was no way for team members to identify defections unless the defecting member(s) revealed his or her behavior. The incentives for defecting included being able to follow a strong personal preference in the face of team disagreement and the possibility of winning $300 personally (without having to share it with the team). The incentives for cooperating included the ability to purchase an additional share of stock (thus increasing the chances of earning more money than the other teams), as well as the comfort and confidence associated with making a group decision (Sniezek, 1992). Regardless of whether members decided to invest individually or as a team, they were limited to investing in the reduced list of three stocks initially selected by the team.


Team members were able to track the performance of their stock selections by following the increases or decreases in their stocks' prices in the actual market. This task was specifically chosen so that task performance would not influence trust levels. In short-term stock selection tasks, performance is essentially random (Moore, Kurtzberg, Fox, & Bazerman, 1999). As with many mixed-motive studies (e.g., prisoner's dilemma), we were interested in the behavior rather than the financial performance of the participants.

Procedures

Participants were told that the purpose of the experiment was to examine the effects of technology on group decision-making. They were introduced to each other by first names only to help prevent contact between experimental sessions. All participants were informed that their interactions would be recorded. Face-to-face sessions were audio-taped and chat sessions were recorded electronically. Teams met for one hour per week for 3 sequential weeks. At the end of each session, participants completed questionnaires measuring trust and related constructs (see Table 1 for a summary of the experimental procedures).

Measures

McAllister's (1995) measure of trust, which was originally designed for use with dyads, was adapted for groups. Three scales from the original measure were included: cognitive trust, affective trust, and monitoring/defensiveness. Samples of the original and adapted items appear in Appendix A. Four items were omitted because they were not relevant for student teams (e.g., "We would both feel a sense of loss if one of us was transferred and we could no longer work together."). On the adapted measure, participants rated teams rather than individuals (a referent-shift consensus model, according to Chan, 1998) on a 5-point Likert scale. Confirmatory factor analyses of the team-adapted items showed that they conformed to the original structure suggested by McAllister (1995). We used structural equation modeling, representing each of the three constructs (affective trust, cognitive trust, and monitoring/defensiveness) as latent variables with the items as indicators of each latent variable. Goodness-of-fit indices ranged from .80 to .86 across the three time periods. Adjusted goodness-of-fit indices ranged from .72 to .81. Bentler's Comparative Fit Index (CFI) ranged from .79 at Time 1 to .89 at Time 3. In each of the time periods, the coefficient for each item on its appropriate factor was significant.

Table 1
Experimental procedure

Week 1
• Complete consent form
• Complete pre-survey
• Instructions
• Receive list of stocks
• Discuss stocks
• Collectively narrow list to three for future research
• Collectively agree on strategy for research
• Agree on team name
• Complete Survey 1 (Cognitive Trust 1, Affective Trust 1)
• Learn of medium for next meeting
(Conduct research in intervening week)

Week 2
• Instructions
• Discuss research
• Agree on purchase strategy
• Individually (privately) record token spending decision (Cooperation 2)
• Receive new list of stocks
• Discuss stocks
• Collectively narrow list to three for future research
• Collectively agree on strategy for research
• Complete Survey 2 (Cognitive Trust 2, Affective Trust 2)
• Learn of medium for next meeting
(Conduct research in intervening week)

Week 3
• Instructions
• Discuss research
• Agree on purchase strategy
• Individually (privately) record token spending decision (Cooperation 3)
• Receive new list of stocks
• Discuss stocks
• Agree on purchase strategy
• Complete Survey 3 (Cognitive Trust 3, Affective Trust 3)
• Receive debriefing

Analysis of the loadings and the coefficient α levels indicated that the cognitive trust scale could be improved by removing one of the items, raising the average α level across time periods from .81 to .86. This item was dropped for the remainder of the analyses. The construct reliability (.56) (Werts, Linn, & Jöreskog, 1974) and scale reliability (.49) for monitoring/defensiveness were low, so this scale was omitted from the analysis. With these changes, the range of adjusted goodness-of-fit indices improved to .77 to .82, and Bentler's Comparative Fit Indices improved to .89 to .93.

Within-group agreement was used to evaluate the appropriateness of aggregating individuals' trust scores to represent a group trust score. We used the rWG measure (James, Demaree, & Wolf, 1984), based on a uniform null distribution, as an index of within-group agreement. In all cases in all teams, rWG values exceeded .77, indicating a reasonable level of agreement among team members about the trust levels within the teams at each of the three time periods.

Cooperation was measured as a behavioral outcome of trust. Although we have defined trust as an underlying psychological state, cooperation is among the most proximal behavioral manifestations of trust (Rousseau et al., 1998). We believe that examining this construct provides another test of the theory and can enhance the generalizability of the results. Cooperation was measured by the choice to devote tokens to team versus individual stock picks. Cooperation was measured at two points in time: in the second meeting and in the last meeting. Cooperation was a binomial measure at the individual level: each team member either chose to cooperate with the team in pooling tokens or defected and used their tokens for possible individual gain. At the team level, the measure represented the mean strategy for each team. For example, if two of the three team members chose to cooperate (1) and one member chose to defect (0), the team's cooperation score for that period would be .67. Across all teams, the mean level of cooperation was .79 (SD = .41) at Time 2 and .83 (SD = .37) at Time 3.
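To make the aggregation and scoring steps described above concrete, the short sketch below computes a single-item rWG value against a uniform null distribution for a 5-point scale, along with a team-level cooperation score. This is not the authors' analysis code; the ratings, token choices, and function names are invented for illustration.

```python
# Minimal sketch (not the authors' code): within-group agreement (r_WG) for one
# trust item rated on a 5-point scale, plus a team-level cooperation score.
# The ratings and token choices below are made up for illustration.

import statistics

def r_wg(ratings, n_options=5):
    """Single-item r_WG (James, Demaree, & Wolf, 1984) against a uniform null distribution."""
    observed_var = statistics.variance(ratings)      # sample variance of members' ratings
    expected_var = (n_options ** 2 - 1) / 12         # variance of a uniform null for a 5-point scale
    return 1 - (observed_var / expected_var)

def team_cooperation(choices):
    """Mean of members' binary choices (1 = pooled tokens with the team, 0 = defected)."""
    return sum(choices) / len(choices)

if __name__ == "__main__":
    trust_ratings = [4, 4, 5]   # three members' ratings of one trust item
    token_choices = [1, 1, 0]   # two members cooperated, one defected
    print(f"r_WG = {r_wg(trust_ratings):.2f}")                          # high values support aggregation
    print(f"team cooperation = {team_cooperation(token_choices):.2f}")  # .67, as in the example above
```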


At the end of the experiment we also measured the interpersonal trust between individual members of each team using an adaptation of Johnson-George and Swap's (1982) Specific Interpersonal Trust scale. We used this scale to determine whether team members detected differences in the trustworthiness of their fellow team members, which is a test of the depersonalization effect predicted by the dominant paradigm. If members of computer-mediated teams experience depersonalization, it would inhibit their ability to perceive their fellow team members as distinct individuals. We selected three of the original Johnson-George and Swap items that we thought would be relevant for our sample and created an additional item in the same format as the original scale that pertained directly to our task. The three items taken from the original scale were: "I would rely on ______ to mail an important letter for me if I couldn't get to the post office," "If ______ couldn't meet with us as planned I would believe that something important had come up," and "If ______ promised to do me a favor, he/she would follow through." To these three items we added a fourth that read "I would be willing to give ______ all of my tokens to invest." Individuals rated the items on 5-point Likert response scales ranging from strongly disagree to strongly agree. The coefficient α for this scale was .77 (across all team members rated). We totaled the four items to create an individual trust score from each focal team member for each specific other team member, and then examined the difference between a focal team member's trust of Teammate 1 and trust of Teammate 2 to see whether participants were perceiving their fellow team members as distinct individuals (M = 2.13, SD = 2.60).

All participants also were asked to indicate what, if any, interactions they had with other team members outside the context of the experimental sessions. This check was designed to ensure that all interactions influencing the results of the experiment were observed and recorded.

Out of 156 participants, 12 indicated that they had interacted with team members outside the context of the experiment. However, in six of those cases, no other members of their team indicated that they had interacted with other members of the team outside the experiment. There were three teams in which more than one member indicated outside interactions, and these teams did not differ significantly from the other teams in their conditions on any dependent variables. Therefore, all teams were retained in the analyses.

The final questionnaire also included a measure of how well team members felt they knew each other as a result of working together. This was a single-item measure ("After 3 weeks of working together, how well do you feel you know your fellow team members?") (M = 2.98, SD = .89). This was included as an additional test of the depersonalization effect predicted by the Cues-Filtered-Out perspective. Participants were also asked to indicate their best guess about the purpose of the experiment. The modal response concerned stock decision making. None of the responses specifically referred to trust.

Results

We conducted three separate analyses to test our hypotheses. First, we used univariate repeated measures analysis of variance to analyze differences among conditions in cognitive trust over time. Next, we repeated this analysis for affective trust. We then used generalized estimating equations to evaluate differences among conditions in cooperation over time. Although a multivariate repeated measures analysis of variance would ordinarily have been appropriate (since there was a correlation between the different trust response variables, as shown in Table 2), in this case there were significant departures from the assumption of a multivariate normal distribution, as shown by Box's M test (Box's M = 256.09, p < .01). Because the multivariate comparison is not very robust to departures from its assumptions, we performed separate univariate repeated measures tests. The univariate tests require the variance-covariance matrices to be equivalent across the conditions, and this assumption was confirmed with Mauchly's test of sphericity for affective trust (Mauchly's W = .92, p > .05). Although the test results for cognitive trust departed from expectations, the Huynh-Feldt correction (.98) to the degrees of freedom did not change the outcome of the tests. All hypothesis tests were performed at the group level. Means, standard deviations, and correlations among the key dependent variables at each time period are shown in Table 2.

Repeated measures analysis of variance showed that cognitive trust in the teams increased over time (F(2, 49) = 8.48, p < .001; η² = .15), although there was a significant interaction effect of media and time (F(6, 98) = 3.69, p < .01; η² = .19). The profile plot for the changes in cognitive trust over time in each condition is shown in Fig. 1. Cognitive trust increased significantly for both the EEE and EFF teams, but not for the FFF and FEE teams, as shown by the contrasts and effect sizes in Table 3. Thus, cognitive trust in both the EEE and EFF teams increased to the levels of cognitive trust in the FEE and FFF teams, which remained relatively constant across the three meetings.
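As an illustration of the kind of mixed-design analysis reported here, the following sketch shows how a 4 (media condition, between-teams) × 3 (time, within-teams) repeated measures ANOVA with a sphericity check could be run in Python. It is not the authors' code; it assumes the pingouin package, and the small long-format data set and column names are hypothetical.

```python
# Minimal sketch (not the authors' analysis code): a 4 (condition) x 3 (time)
# mixed-design ANOVA on team-level cognitive trust, assuming the pingouin package.
# The team scores below are invented for illustration.

import pandas as pd
import pingouin as pg

scores = {  # hypothetical team means at Times 1-3, two teams per condition
    "EEE": [(3.6, 3.9, 4.0), (3.7, 3.9, 4.1)],
    "EFF": [(3.5, 4.2, 4.1), (3.7, 4.2, 4.0)],
    "FEE": [(4.1, 4.0, 4.0), (4.1, 3.9, 4.0)],
    "FFF": [(4.0, 4.2, 4.2), (4.1, 4.1, 4.2)],
}
rows, team_id = [], 0
for cond, teams in scores.items():
    for team_scores in teams:
        team_id += 1
        for time, score in zip(["T1", "T2", "T3"], team_scores):
            rows.append({"team": team_id, "condition": cond, "time": time, "cog_trust": score})
df = pd.DataFrame(rows)

# Mauchly's test of sphericity for the within-teams factor (time).
print(pg.sphericity(df, dv="cog_trust", subject="team", within="time"))

# Mixed ANOVA; correction=True applies a sphericity correction to the within-teams effects.
aov = pg.mixed_anova(data=df, dv="cog_trust", within="time", subject="team",
                     between="condition", correction=True, effsize="np2")
print(aov.round(3))
```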

Fig. 1. Mean levels of cognitive trust over time by condition. EEE teams met electronically three times. EFF teams met electronically once, followed by two face-to-face meetings. FEE teams met face-to-face once, followed by two electronic meetings. FFF teams met face-to-face three times.

Table 2
Descriptive statistics for key trust variables

Variable                     M      SD     1       2       3       4       5       6       7
1. Affective Trust Time 1    4.29   .59    .82
2. Affective Trust Time 2    4.35   .54    .45**   .85
3. Affective Trust Time 3    4.33   .61    .30**   .51**   .86
4. Cognitive Trust Time 1    3.95   .45    .54**   .34**   .18     .88
5. Cognitive Trust Time 2    4.14   .45    .35**   .71**   .41**   .45**   .84
6. Cognitive Trust Time 3    4.15   .49    .33**   .47**   .66**   .40**   .62**   .87
7. Cooperation Time 2         .80   .39    .08     .18*    .11     .24*    .18*    .09     —
8. Cooperation Time 3         .83   .40    .14     .06     .09     .18     .21*    .10     .26*

Note. Reliabilities appear on the diagonal. Dashes are included for single-item measures.
* p < .05. ** p < .01.

Table 3
Means and standard deviations for the trust measures by condition and time

                   Time 1         Time 2         Time 3         T2 contrast(a)      T3 contrast(b)
Measure            M      SD      M      SD      M      SD      t        d          t        d

Cognitive trust
  EEE              3.65   .51     3.89   .52     4.04   .64     1.86†    .49        2.48*    .66
  EFF              3.59   .62     4.19   .52     4.06   .68     7.16**   .98        .88      .24
  FEE              4.11   .49     3.97   .44     4.01   .51     −1.34    .39        .21      .06
  FFF              4.07   .50     4.18   .62     4.18   .62     .88      .23        .64      .17

Affective trust
  EEE              4.20   .64     4.17   .64     4.38   .54     −.25     .07        1.89*    .50
  EFF              4.16   .70     4.54   .45     4.42   .61     2.81*    .78        .43      .12
  FEE              4.47   .44     4.26   .49     4.22   .49     −1.80†   .52        −2.14†   .64
  FFF              4.34   .53     4.42   .50     4.27   .78     1.14     .23        −1.08    .29

Cooperation
  EEE              —      —       .48    .52     .72    .46     —        —          1.55†    .42
  EFF              —      —       .76    .43     .82    .37     —        —          .56      .16
  FEE              —      —       .97    .00     .82    .48     —        —          −1.17    .35
  FFF              —      —       1.00   .00     .97    .00     —        —          −1.00    .22

a The T2 contrast reports whether the value at Time 2 was significantly different from the value at Time 1.
b The T3 contrast reports whether the value at Time 3 was significantly different from the average of the prior values.
* p < .05. ** p < .01. † p < .10.

Although there was a significant difference between conditions at the outset, after three meetings, there was no significant difference between the conditions in their level of cognitive trust, as Hypothesis 1 predicted. The level of trust in computer-mediated teams did not persistently remain below that of face-to-face teams.

Changes in communication media affected the development of cognitive trust. There was an interaction between media order and time (F(4, 49) = 9.03, p < .001; η² = .29). In particular, switching from computer-mediated communication to face-to-face communication (EFF) resulted in an increase in cognitive trust (as shown in Table 3). There was no corresponding decrease in cognitive trust when switching from face-to-face interaction to computer-mediated interaction (FEE). However, as noted above, by Time 3 there was no difference in the levels of cognitive trust between the EFF and FEE teams. The increase in cognitive trust following a change to face-to-face contact, and the lack of decline in trust following a change to computer-mediated communication, support Hypothesis 2.

Similar overall patterns were observed for affective trust. There was a significant interaction between time and media conditions (F(6, 98) = 3.09, p < .01; η² = .16).

Fig. 2. Mean levels of affective trust over time by condition.

The profile plot for affective trust by condition over time shows that the largest changes occurred in the conditions that started with computer-mediated communication (EEE and EFF) (see Fig. 2). The levels of affective trust in these conditions increased to the levels for the face-to-face conditions by the third meeting. This pattern also supports Hypothesis 1: over time, teams interacting electronically achieved the same levels of affective trust as teams interacting face-to-face.

Changing communication media also made a difference in the patterns of affective trust development. There was a significant interaction between media order and time (F(4, 49) = 4.99, p < .01; η² = .19), such that switching from computer-mediated to face-to-face communication (EFF) resulted in a significant increase in affective trust. Switching to computer-mediated interaction (FEE) slightly decreased affective trust, which partially supports the predictions from the dominant paradigm. However, by the end of the task there was no difference between the teams with different media orders, which contradicts the predictions from the dominant paradigm and supports Hypothesis 2.

We also examined the results for cooperation, a behavioral indicator of trust. Cooperation was measured on a categorical scale; therefore, analysis of this variable is based on generalized estimating equations. Generalized estimating equations are used to model data where the dependent variables are correlated, and this approach can accommodate a variety of distributions, in this case binomial (Liang & Zeger, 1986). Analysis of cooperation revealed patterns over time and media conditions similar to those for the trust measures. Correlations between trust and cooperation may have been attenuated by the relatively high base rates of cooperation in some conditions. Although there were differences between conditions in the likelihood that team members would cooperate (B = 3.14, Z = 3.17, p < .001; 95% CI ± 1.94), there was also a significant interaction of time and media (B = 1.26, Z = 2.38, p < .01; 95% CI ± 1.04). This can be seen in the profile plot for cooperation over time among the different conditions (see Fig. 3).
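For readers who want a concrete picture of the kind of cooperation model described above, the sketch below fits a binomial generalized estimating equation with teams as the clustering unit using statsmodels. It is a minimal illustration, not the authors' code; the data frame and variable names are invented.

```python
# Minimal sketch (not the authors' code): a binomial GEE for repeated cooperation
# decisions, with teams as the clustering unit, using statsmodels. Data are invented.

import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# One row per team member per measurement occasion (Time 2 and Time 3);
# 'coop' is 1 if the member pooled tokens with the team, 0 if they defected.
df = pd.DataFrame({
    "team":      [1] * 6 + [2] * 6 + [3] * 6 + [4] * 6,
    "condition": ["EEE"] * 12 + ["FFF"] * 12,
    "time":      [2, 2, 2, 3, 3, 3] * 4,
    "coop":      [0, 1, 0, 1, 1, 0,   1, 0, 1, 1, 1, 1,
                  1, 1, 0, 1, 1, 1,   0, 1, 1, 1, 0, 1],
})

model = smf.gee(
    "coop ~ C(condition) * time",          # media condition, time, and their interaction
    groups="team",                         # observations are correlated within teams
    data=df,
    family=sm.families.Binomial(),         # binary outcome
    cov_struct=sm.cov_struct.Exchangeable(),
)
result = model.fit()
print(result.summary())
```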


At Time 2, teams working face-to-face were much less likely to defect and much more likely to cooperate and pool their tokens. However, by Time 3, there were no significant differences among conditions in the likelihood that teams would pool their tokens or defect, consistent with Hypothesis 1. This result was due largely to an increase in the cooperation rates among computer-mediated teams (EEE). Changes in communication media did not result in significant changes in the pattern of cooperation. Switching from face-to-face interaction to computer-mediated interaction (FEE) did not result in significant decreases in cooperation rates. These results, combined with the patterns noted above, provide support for Hypothesis 2.

Alternative explanations

We also conducted additional analyses to explicitly test the core assertion of the dominant paradigm that members of computer-mediated teams experience deindividuation, manifested in the inability to perceive fellow team members as distinct individuals. Contrary to this prediction, we found no significant differences between conditions in the magnitude of the perceived differences between fellow team members on trustworthiness using Johnson-George and Swap's Specific Interpersonal Trust scale (F(3, 153) = 2.11). Members of face-to-face and computer-mediated teams were equally likely to detect differences in the trustworthiness of their fellow team members. Moreover, there was no difference between conditions in the extent to which participants felt they knew their fellow team members (F(3, 153) = 1.42; M(EEE) = 3.02, SD = .92; M(EFF) = 2.97, SD = .84; M(FEE) = 2.74, SD = .89; M(FFF) = 3.16, SD = .82). Our power to detect a small effect was .54, with a larger sample than the average reported in a recent meta-analysis of media effects (Baltes, Dickson, Sherman, Bauer, & LaGanke, 2002). If depersonalization had occurred, we would have expected members of computer-mediated teams to report that they were less familiar with their fellow team members in comparison to face-to-face teams.

Although we did not expect performance in this task to influence trust, we did investigate this possibility, as it could represent another explanation for the findings. More specifically, it could be the case that when performance is high (in this case, when the value of the team's stock portfolio increases), team members are more willing to rely on and trust each other, but when performance is low, they are less inclined to do so. To investigate this possibility, we analyzed the effect of individual or team performance (depending on whether teams pooled their tokens) from the first stock purchase decision on later levels of trust. We examined the partial correlation coefficients for trust at Time 3 with performance on the first stock purchase, controlling for trust at Time 1 and Time 2.5 In no case was performance significantly correlated with later trust (cognitive or affective) or cooperation.

5 We tested for performance effects between Time 1 and Time 3 because the first stocks were actually selected at the beginning of Meeting 2, and participants would not have had an opportunity to react to the performance of their first stock selection until Meeting 3.
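A partial correlation of the kind used in this performance check could be computed as in the sketch below, which assumes the pingouin package; the simulated data and column names are purely illustrative and not the study's data.

```python
# Minimal sketch (not the authors' code): partial correlation of first-period stock
# performance with Time 3 trust, controlling for Time 1 and Time 2 trust.
# Assumes the pingouin package; the data are simulated for illustration.

import numpy as np
import pandas as pd
import pingouin as pg

rng = np.random.default_rng(1)
n = 52  # one row per team
df = pd.DataFrame({
    "perf1":    rng.normal(0, 1, n),        # return on the first stock purchase
    "trust_t1": rng.normal(4.0, 0.5, n),
    "trust_t2": rng.normal(4.1, 0.5, n),
    "trust_t3": rng.normal(4.1, 0.5, n),
})

# Does early performance predict later trust once prior trust is held constant?
print(pg.partial_corr(data=df, x="perf1", y="trust_t3", covar=["trust_t1", "trust_t2"]))
```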

Fig. 3. Mean levels of cooperation over time by condition.

Process explanations

We also examined the process data from audio tapes and electronic records to investigate whether interaction patterns in the teams explained changes in the development of trust over time. We conducted content analyses of the transcripts from the meetings of 12 EEE teams, 11 FFF teams, 11 EFF teams, and 11 FEE teams over three time periods (equipment problems resulted in the loss of recordings for several teams).

Because both Cues-Filtered-Out and Social Information Processing theories suggest that exchange of social information explains relational outcomes in groups, we first examined patterns of social information exchange. Task and non-task comments were coded in a manner consistent with the Time-by-Event-by-Member (TEMPO) process coding system (Bales, 1950; Futoran, Kelly, & McGrath, 1989). Comments were coded as "task" if they pertained to the team's task or process (examples include: "They are losing ground in the youth market. Nike I mean."). Comments were coded as "non-task" if they revealed personal information, digressed from the task, involved humor, or dealt with topics not related to the team's task (examples include: "I'm so bright my dad calls me son." and "that's my sister's birthday. She's even stranger than you Jeff…").

Transcripts from pilot sessions (conducted before the actual experiment to test the task, procedures, and instructions) were used to train coders. Two coders coded 20% of the transcripts in each condition to establish inter-rater reliability: 10% at the outset of the coding and 10% at the midpoint of the coding. Systematic disagreements between coders were identified and corrected during training. Unitizing reliability was calculated using Guetzkow's U (Folger, Hewes, & Poole, 1984; Guetzkow, 1950). Backchanneling (non-substantive interruptions) was not counted as a speaking turn so that meaningful statements could be coded as coherent wholes (Weingart, 1997).
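Inter-rater agreement statistics of the kind reported in the next paragraph can be illustrated with the following sketch, which computes raw percent agreement and Cohen's kappa for two coders' task/non-task labels using scikit-learn. The labels are invented, and this is not the coding tool the authors used.

```python
# Minimal sketch (not the authors' code): inter-rater agreement for task / non-task
# coding of speaking turns, using Cohen's kappa from scikit-learn. Labels are invented.

from sklearn.metrics import cohen_kappa_score

coder_a = ["task", "task", "nontask", "task", "nontask", "task", "task", "nontask", "task", "task"]
coder_b = ["task", "task", "nontask", "task", "task",    "task", "task", "nontask", "task", "nontask"]

kappa = cohen_kappa_score(coder_a, coder_b)   # chance-corrected agreement between the two coders
agreement = sum(a == b for a, b in zip(coder_a, coder_b)) / len(coder_a)  # raw percent agreement
print(f"raw agreement = {agreement:.2f}, Cohen's kappa = {kappa:.2f}")
```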


Since transcripts were unitized at the level of the speaking turn, unitizing reliability was essentially perfect. Inter-rater reliability for content coding was calculated using Cohen's κ (Cohen, 1960). Inter-rater reliability was .71 at the start and .83 at the midpoint for coding task and non-task comments. Agreement levels above .60 are considered "substantial" (Landis & Koch, 1977).

In contrast to the predictions of both Walther's Social Information Processing and Cues-Filtered-Out perspectives, the relationship between social (or non-task) information and trust was negative (affective trust, r = −.26, p < .05; cognitive trust, r = −.28, p < .05, across conditions and time periods). To account for these results, we developed finer-grained coding categories for non-task comments, including disclosure and inflammatory comments. Disclosure has been suggested as a means to reduce social uncertainty in computer-mediated interactions (Tidwell & Walther, 2002). We coded both self-disclosure ("Sorry, don't ask me I'm not creative like that.") and personal questions ("Anyone here a Monty Python fan?") in this category. In addition, computer-mediated communication sometimes is characterized by unusually personal remarks that are associated with interpersonal affect (Walther, 1997). We tested such inflammatory comments as a possible predictor of trust, including teasing ("300 bucks [reference to team incentive] will buy a lot of women"), antagonistic comments ("cut the music [ ] have you made up your mind yet so we could move along here?"), and use of offensive words ("Okay, the feminist in me might have to come out and kick your ass right now."). The procedures described above were followed to code the transcripts again. Inter-rater reliability was .94 for coding antagonistic comments, .60 for coding teasing and humor, and .75 for coding offensive words at the outset, and .98, .68, and .90, respectively, at the midpoint of coding.

Because the mean levels of the separate categories of inflammatory remarks were highly correlated within each team meeting, an overall index for these remarks was formed by adding the occurrences of these remarks within any speaking turn and aggregating them by team and time period. The resulting index of inflammatory remarks was negatively correlated with affective trust (r = −.35, p < .001), cognitive trust (r = −.38, p < .001), and cooperation (r = −.20, p < .10) across conditions and time periods. Disclosure was not correlated with trust (affective trust, r = .09; cognitive trust, r = −.01). Excluding inflammatory remarks from the non-task category revealed that general social information was not significantly related to either affective trust (r = −.08) or cognitive trust (r = −.16). This pattern of results suggests that inflammatory remarks impede the development of trust in teams.

The profile plot for inflammatory remarks by condition across time periods is shown in Fig. 4. The pattern of inflammatory remarks closely mirrors the patterns for trust outcomes.

Fig. 4. Mean levels of inflammatory remarks over time by condition.

The initial level of inflammatory remarks was high in computer-mediated teams (EEE and EFF), corresponding to low levels of trust. Conversely, the initial level of inflammatory remarks was low or non-existent in the face-to-face teams (FFF and FEE). Overall, the level of inflammatory remarks decreased over time in all teams, as trust increased (F(2, 42) = 3.04, p < .05; η² = .12). When there was a change in medium, a change in the overall level of inflammatory remarks preceded a change in trust. In the EFF teams, inflammatory remarks decreased significantly over time, particularly in the second meeting, corresponding to an increase in trust (F(2, 8) = 12.47, p < .01; η² = .58). In the FEE teams, inflammatory remarks increased significantly in the second meeting, preceding a decrease in trust indicators (F(2, 8) = 2.60, p < .10; η² = .21). As with trust, there was no significant difference between conditions in the level of inflammatory remarks by Time 3 (F = .762).

We also tested whether inflammatory remarks mediated the relationship between communication medium and trust. The dependent variable in the mediation analyses was trust at Time 1, as differences between media conditions were not significant at Times 2 and 3. Including Time 1 inflammatory remarks as a mediator between communication medium and cognitive trust decreased the coefficient for medium from .65 to .46, which remained statistically significant (t(43) = 3.06, p < .01), with the coefficient for inflammatory remarks also significant (β = −.37, t(43) = −2.46, p < .05), indicating partial mediation. For affective trust, including Time 1 inflammatory remarks as a mediator decreased the coefficient for medium from .43 to .19, which was no longer significant (t(43) = 1.07), but the coefficient for inflammatory remarks was significant (β = −.46, t(43) = −2.58, p < .05), indicating full mediation.
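The mediation test described above follows a standard three-regression sequence (in the spirit of Baron and Kenny's approach): regress trust on medium, regress the mediator on medium, and then regress trust on both. The sketch below illustrates that sequence with ordinary least squares models in statsmodels; the simulated data, effect sizes, and variable names are invented and do not reproduce the reported coefficients.

```python
# Minimal sketch (not the authors' code) of a three-step mediation test:
# communication medium -> Time 1 inflammatory remarks -> Time 1 trust.
# Data and variable names are simulated for illustration only.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 45  # roughly the number of teams with usable transcripts
medium = rng.integers(0, 2, n)                              # 1 = started face-to-face, 0 = started electronically
inflam = 0.05 - 0.03 * medium + rng.normal(0, 0.02, n)      # electronic starts -> more inflammatory remarks
trust1 = 3.8 + 0.4 * medium - 3.0 * inflam + rng.normal(0, 0.3, n)
df = pd.DataFrame({"medium": medium, "inflam": inflam, "trust1": trust1})

total    = smf.ols("trust1 ~ medium", df).fit()             # step 1: total effect of medium on trust
a_path   = smf.ols("inflam ~ medium", df).fit()             # step 2: effect of medium on the mediator
mediated = smf.ols("trust1 ~ medium + inflam", df).fit()    # step 3: direct effect, controlling for the mediator

print("total effect of medium:    ", round(total.params["medium"], 2))
print("direct effect of medium:   ", round(mediated.params["medium"], 2))
print("effect of inflam. remarks: ", round(mediated.params["inflam"], 2))
```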


To illustrate the change in tone from one meeting to the next, we have excerpted some typical non-task comments from a team in the EFF condition in Appendix B. In their first (electronic) meeting, there were 13 instances of antagonistic remarks out of 112 speaking turns. One team member's use of capital letters ("YOU need to tell me what YOU like") is characteristic of the antagonism in this meeting. In their second and third (face-to-face) meetings, there were no antagonistic remarks, offensive words, or sarcasm, reflecting a dramatic change in tone. Since supportive and predictable communication is thought to increase trust (Gibson & Manuel, 2003), it is not surprising that the extreme nature of the inflammatory comments in the early electronic meetings reduced trust in the teams.
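The time effects reported above (e.g., the overall decline in inflammatory remarks, F(2, 42) = 3.04) can be illustrated with a within-team repeated-measures ANOVA. This is only a sketch on simulated data with assumed variable names; the authors' models also involved media condition, which is omitted here.

```python
import numpy as np
import pandas as pd
from statsmodels.stats.anova import AnovaRM

rng = np.random.default_rng(1)
teams = range(1, 23)  # hypothetical team IDs

# Long-format table: one row per team per time period, with the
# team-level inflammatory-remarks index as the dependent variable.
rows = []
for team in teams:
    base = rng.normal(0.08, 0.02)
    for t, drop in zip([1, 2, 3], [0.0, 0.03, 0.05]):  # remarks decline over time
        rows.append({"team": team, "time": t,
                     "inflammatory": max(base - drop + rng.normal(0, 0.01), 0)})
df = pd.DataFrame(rows)

# Repeated-measures ANOVA with time as the within-team factor.
res = AnovaRM(data=df, depvar="inflammatory", subject="team", within=["time"]).fit()
print(res)
```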

Discussion

Implications of the research

The results of this study hold important implications for research and theories about computer-mediated communication. The changes in trust over time that we observed support arguments for incorporating a temporal dimension into theories of relational development in computer-mediated groups (McGrath, 1993; Walther, 1995). We found that computer-mediated teams had lower levels of cognitive and affective trust at the end of the initial session than did teams working face-to-face, consistent with Cues-Filtered-Out. However, within three meetings, the trust levels of the computer-mediated teams increased to levels comparable to those of face-to-face teams, consistent with Social Information Processing (Walther, 1992). Furthermore, we observed the same pattern of findings for cooperation, which is a behavioral outcome of trust.

The results of this study also underscored the importance of examining the specific communication mechanisms that are predicted by theories of relational development in computer-mediated groups. Examination of the communication processes that contributed to the patterns of trust development revealed that the degree of inflammatory remarks had a temporary influence on the development of trust. This finding, however, could not be explained by either the Cues-Filtered-Out (which emphasizes communication content, but not temporal dynamics) or Walther's Social Information Processing (which stresses temporal dynamics, but not changes in communication content) perspectives alone. Instead, an integrated theory of technology-mediated communication is called for that considers how the content of communication, particularly the degree of inflammatory remarks over time, influences relational development.

The pattern of trust development in this study has important implications for what has been the dominant view of relational development in computer-mediated groups: Cues-Filtered-Out. Overall, we found little support for the prediction that teams with face-to-face interaction would ultimately exhibit higher levels of trust than teams interacting electronically. Although this pattern occurred in the first meeting, over time, trust in computer-mediated teams rose to levels that met or exceeded the levels of trust in the face-to-face teams.


At the same time, some of the temporary changes in trust that accompanied changes in the communication medium did support the dominant perspective's predictions. In particular, the decrease in affective trust following a switch to computer-mediated interaction was consistent with a Cues-Filtered-Out interpretation. However, the results for cognitive trust and cooperation following a change in communication medium were more consistent with our predictions based on Walther's Social Information Processing theory. Cognitive trust and cooperation decelerated, but did not significantly decrease, with a change to computer-mediated interaction. Members of computer-mediated teams also reported that they felt they knew their fellow team members at least as well after three meetings as did members of face-to-face teams. This finding is inconsistent with the deindividuation effects predicted by the dominant perspective. It also raises questions about whether past findings of differences between computer-mediated and face-to-face groups were simply an artifact of the short-term nature of much of this research (Martins, Gilson, & Maynard, 2004; Walther, 1995).

Although the effects of time on trust development do not support Cues-Filtered-Out, the content of initial interactions in computer-mediated teams and its association with trust are consistent with this theory. That is, members of teams that met electronically at the outset were significantly more likely to use offensive language and tease or antagonize each other. Not surprisingly, these behaviors were associated with lower levels of cognitive and affective trust and lower levels of cooperation. However, over time, these communication behaviors diminished, and the behaviors and outcomes of face-to-face and computer-mediated teams converged, as predicted by Walther's Social Information Processing Theory. At the same time, Walther's theory does not completely account for these findings since it would predict that the opportunity to learn about group members through increasing amounts of social information over time would predict trust in computer-mediated groups. In fact, we found that general social information was not associated with cognitive or affective trust; instead, it was the decrease in inflammatory communication that influenced trust development.

This suggests that a unified theory of technology-mediated groups that integrates Cues-Filtered-Out and Walther's Social Information Processing perspectives might better explain relational development. Technology-mediated communication does reduce inhibitions in early interactions (consistent with Cues-Filtered-Out), but over time communication patterns in technology-mediated teams come to approximate those of face-to-face teams (consistent with Social Information Processing). Contrary to the predictions of Walther's Social Information Processing, there are differences in the initial communication processes of computer-mediated and face-to-face groups. However, with time, members of technology-mediated teams develop fully individuated views of other team members and behave accordingly.



This study also suggests that pessimistic conclusions about the viability of electronic groups (Baltes et al., 2002) may be limited by reliance on one-shot studies. When the electronic teams were observed over time in this study, it became clear that both the content and the results of their interactions improved. The longitudinal nature of this study helps address a recurring problem in small group research: the fact that "findings" may be statistically robust but dynamically fleeting (McGrath, 1993). By examining group phenomena in static experiments, researchers are particularly susceptible to dynamic type I errors (when a phenomenon occurs early in the life of groups but does not last over time) (McGrath, Arrow, Gruenfeld, Hollingshead, & O'Connor, 1993).

In addition to addressing a theoretical debate in the computer-mediated communication literature, this study makes several important contributions to the literature on trust in organizations. First, it examines trust development over time. Despite the fact that most models of trust include time as a critical factor (Lewicki & Bunker, 1996), there are few longitudinal studies of trust (Tyler & DeGoey, 1996). Incorporating a temporal dimension into the design of this study allows us to begin to investigate the dynamic properties of trust. For instance, the finding that trust increases with a switch to face-to-face interaction, but does not decline with a shift away from such modes of communication, suggests that trust may exhibit "stickiness" under certain circumstances (Szulanski, 2000). From this perspective, the social information acquired through face-to-face interaction may remain with individuals even as they move to more distributed forms of organization. We recognize that the results reported here are only suggestive of this interpretation, which requires further examination, but it underscores the importance of studying trust over time.

Second, this study tests trust development at the group level of analysis. Although trust is often cited as a critical determinant of group success (Golembiewski & McConkie, 1975), it is rarely measured at the group level (see Dirks, 1999; Langfred, 2004; Simons & Peterson, 1998 for exceptions). Finally, this study includes both affective and cognitive measures of trust that show some convergence with each other and with cooperation, a behavioral measure of trust. Thus, the levels of cognitive and affective trust were associated with important changes in team behavior and decision-making. There was no evidence in this sample that cognitive and affective trust developed differently over time. This finding is consistent with recent theorizing that, rather than developing in discrete stages, affect may actually influence all stages and types of trust (Williams, 2001).

We also may not have observed differences in the development of cognitive and affective trust because social information is used to make judgments about both aspects of trust. Although we did not observe different developmental patterns for cognitive and affective trust in our study, in circumstances where different information is available about remote partners (such as work history or reputation), cognitive trust may develop faster than affective trust.

The process results also have interesting implications for the application of group development theories to computer-mediated groups. Integrative models of group development suggest that groups progress through predictable stages, starting with an initial stage of polite and tentative behavior, progressing to a second stage of uncertainty and argumentative or critical behavior, followed by a third stage characterized by more mature communication and the development of trust (Tuckman, 1965; Wheelan, 1994). The computer-mediated teams in our sample seemed to shortcut the polite and tentative stage and spend more time in the argumentative and critical stage. Bypassing stage 1 or spending more time in stage 2 may have delayed these teams from getting to the third stage where trust develops (Chang et al., 2003).

The results of this study also have practical implications for managers. In short-term teams that meet only once or twice, there are significant trust advantages for having teams meet face-to-face at least once. However, if teams are going to be meeting over a longer period of time, there does not appear to be a significant ongoing advantage for face-to-face meetings. Managers may also be able to minimize the temporary depersonalization in distributed teams by encouraging team members to share individuating information. Indeed, there is some evidence that encouraging early disclosure of personal information (Moore, Kurtzberg, Fox, et al., 1999) or even pictures (Walther, Slovacek, & Tidwell, 2001) helps establish affect-based rapport in computer-mediated environments.

Limitations and future research

The results of this study need to be interpreted in light of the team task. The task structure used in this experiment was easy for teams to modify in the later stages of their work (changing the ways they divided the research or how they cooperated in pooling tokens). For other group tasks, early decisions about the structure may be difficult to change. For instance, once a distributed product development team allocates tasks and responsibilities, these decisions may become entrained or more difficult to renegotiate. If this is the case, it may be that organizational teams that start electronically (with lower levels of trust) set up their tasks with low levels of interdependence. If virtual teams design their tasks with low levels of interdependence as a result of their initially low levels of trust, the lower levels of interdependence may limit interaction, which in turn may restrict the opportunity for trust development (creating a self-fulfilling prophecy).


In fact, there is some evidence that the costs of coordinating at a distance already cause members of virtual teams to attempt to limit their task interdependence (Galegher & Kraut, 1992). Accordingly, an important area for future research is how the malleability of group task structure and the level of interdependence influence the development of trust in distributed groups.

This study also examined the development of trust in groups without any prior familiarity. As such, it reveals what happens to trust as members of teams gradually become more familiar with each other. Because familiarity often predicts trust (Gulati, 1995; Wilson, 2001), the results may be different when teams are composed of members who are previously acquainted. We know, for instance, that prior interpersonal contact creates an initial trust and performance advantage for face-to-face teams (Harrison, Mohammed, McGrath, Florey, & Vanderstoep, 2003), and this advantage may extend to teams in all conditions. Teams operating in an organizational context also may differ in other respects. Members are more likely to be concerned about missed opportunities that come from choosing to rely on other members of the team (Rousseau, 1995). Team members' behavior also may be constrained by consideration of the reputational consequences of their actions. There may be important institutional substitutes for trust, such as strong role prescriptions (Meyerson, Weick, & Kramer, 1996) or highly structured environments (Dirks & Ferrin, 2001), that reduce the influence of communication media on the development of trust within the team. Members in teams with highly specialized roles may also need to rely on each other's expertise to a greater extent than in teams with more homogeneous expertise. Taken together, these observations suggest that field studies are a fruitful area for future research on the development of trust in distributed groups.

While cognizant of the limits of this study, we also note that the findings reported here are generalizable to a variety of group situations. The results certainly apply to teams that are being formed with increasing frequency across various functional or geographic boundaries in organizations where the members do not know each other at the outset of the team experience. The results also apply to mixed-motive situations, a circumstance where interpersonal trust is particularly important (Ferrin & Dirks, 2003). In this study, team members had incentives for acting in both their individual and collective interests. These circumstances characterize the reward structure in many work teams.


For example, members of cross-functional product development teams have incentives to act in the best interests of their functional department as well as incentives to maximize the team's outcomes (which may overlap minimally with their functional interests). Even members of permanent work teams have to choose between maximizing their individual career interests and acting in the best interests of the team. We also expect that the negative effect of inflammatory remarks on initial levels of trust will generalize to organizational teams. Intact work teams with fully identified members and a complete range of reputational consequences have also demonstrated a tendency, early in their development, to communicate remarks electronically that they typically would not say face-to-face (Alonzo & Aiken, 2004).

Although this study did examine patterns of trust over time, it did not explicitly investigate mechanisms of social control. The teams in this study had only a few monitoring and control mechanisms at their disposal. We need more research on the relationship between trust and social controls. Some research indicates that behavioral control mechanisms used in traditional teams can have a negative effect on trust in virtual teams (Piccoli & Ives, 2003), but other research indicates that control is positively related to trust (Crisp & Jarvenpaa, 2000). It would be interesting to understand what forms of control have positive, negative, and curvilinear relationships with trust. Our study also focused on process-based trust, but some theories predict that in distributed circumstances, interpersonal trust will be supplemented or replaced by institution-based trust (relying on formal structures) (Zucker, 1986). This raises questions about what types of formal structures are realistic and effective at producing trust in distributed groups. In addition, how do these structures interact with, substitute for, or complement social interaction in the production of trust?

Conclusion

Overall, the results of this study provide important insights that begin to unravel the paradox of trust development in distributed groups. By taking a temporal view of trust, we begin to see how trust develops in distributed groups. Over time, teams working electronically eventually develop levels of trust comparable to those of face-to-face teams. A key implication of this finding is that the communication medium alters the rate at which trust develops, but does not produce fundamentally different levels of trust in computer-mediated versus face-to-face teams. We expect that understanding the dynamics of trust in technology-mediated groups will only become more important. Although organization scholars have long suggested that trust is an element that makes work in organizations possible (Barnard, 1938), distributed forms of organization have made the development of trust more challenging. This study confirms that the development of trust in distributed teams is not impossible; it just takes longer.



Appendix A. Sample items from McAllister's (1995) trust measure and modifications for short-term groups

Affect-based trust
• Original item: We have a sharing relationship. We can both freely share our ideas and feelings
  Modified item: I can freely share my ideas and feelings in this group
• Original item: If I shared my problems with this person, I know (s)he would respond constructively and caringly
  Modified item: If I shared my concerns with this group, I know that they would respond constructively and caringly

Cognition-based trust
• Original item: I can rely on this person not to make my job more difficult by careless work
  Modified item: I can rely on the other group members not to make my decisions more difficult by careless work
• Original item: Given this person's track record, I see no reason to doubt his/her competence for the job
  Modified item: Given my experience with this group, I see no reason to doubt the members' competence for the task

Monitoring/defensiveness
• Original item: I have sometimes found it necessary to work around this person to get things done the way I would like them to be done*
  Modified item: I have sometimes found it necessary to disregard the other group members' recommendations to make a good decision*

Note. Items indicated by a * are reverse scored.

Appendix B. Transcript of selected comments from an EFF team

First meeting (computer-mediated) in a discussion of whether to select the Gap as a stock
>Brian says, 'just remember fashion is very trendy'
>Brian says, 'lets make up our minds NOW'
>Courtney says, 'there is no need to be getting loud with anybody Brian'
>Jean-Marie says, 'calm down'
>Brian says, 'you're the one asking for all the info, lets just get in and get out'
>Courtney says, 'anyway I like Disney, Pfizer and you need to tell us what else you like'
>Jean-Marie says, 'so what is it Brian'
>Courtney says, 'what do you like'
>Brian says, 'YOU need to tell me what YOU like'
(later)
>Brian says, 'where did you pull that out of?'
>Courtney says, 'What are you talking about.'
>Courtney says, 'I mean, I'm giving out my ideas and you don't have to agree with them. We simply have to work together to come to a decision instead of you snapping at people'
>Jean-Marie says, 'OK…'

Second meeting (face-to-face) in a discussion of Harley Davidson as a possible stock selection
>Jean-Marie says, 'I like Harleys.'
>Brian says, 'Me too. I think Harleys will always be around.'
>Courtney says, 'Yeah'
>Brian says, 'They are like classic.'
>Jean-Marie says, 'I heard something about them just last week.'
>Brian says, 'Really?'
>Jean-Marie says, 'Yeah.'
>Brian says, 'All right. Let's keep Harley Davidson. What can I say? I like motorcycles. Keebler elves.'
>Jean-Marie says, 'Hmmm. I don't know about Keebler'
>Courtney says, 'Now I know Reebok, they just released a new line. Reebok DMX.'
>Brian says, 'Yeah'
>Courtney says, 'So, I work in a shoe store and everybody likes DMX.'
>Jean-Marie says, 'Oh, gosh.'
>Courtney says, 'So'
>Brian says, 'That's a star.'

References

Abel, M. J. (1990). Experiences in an exploratory distributed organization. In J. Galegher, R. Kraut, & C. Egido (Eds.), Intellectual teamwork (pp. 489–510). Norwood, NJ: Erlbaum.
Alge, B. J., Weithoff, C., & Klein, H. J. (2003). When does the medium matter? Knowledge-building experiences and opportunities in decision-making teams. Organizational Behavior and Human Decision Processes, 91, 26–37.
Alonzo, M., & Aiken, M. (2004). Flaming in electronic communication. Decision Support Systems, 36, 205–217.
Ashforth, B. E., & Lee, R. T. (1990). Defensive behavior in organizations: A preliminary model. Human Relations, 43, 621–648.
Bales, R. F. (1950). Interaction process analysis: A method for the study of small groups. Cambridge, MA: Addison-Wesley.
Bales, R. F. (1953). The equilibrium problem in small groups. In T. Parsons, R. F. Bales, & E. A. Shils (Eds.), Working papers in the theory of action (pp. 111–161). New York: Free Press.
Baltes, B. B., Dickson, M. W., Sherman, M. P., Bauer, C. C., & LaGanke, J. S. (2002). Computer-mediated communication and group decision-making: A meta-analysis. Organizational Behavior and Human Decision Processes, 87, 156–179.
Barnard, C. I. (1938). The functions of the executive. Cambridge, MA: Harvard University Press.
Berger, C. R., & Calabrese, R. J. (1975). Some explanations in initial interaction and beyond. Human Communication Research, 2, 99–112.
Burt, R. S., & Knez, M. (1996). Trust and third party gossip. In R. M. Kramer & T. R. Tyler (Eds.), Trust in organizations: Frontiers of theory and research (pp. 68–89). Thousand Oaks, CA: Sage.
Chan, D. (1998). Functional relations among constructs in the same content domain at different levels of analysis: A typology of composition models. Journal of Applied Psychology, 83, 234–246.
Chang, A., Bordia, P., & Duck, J. (2003). Punctuated equilibrium and linear progression: Toward a new understanding of group development. Academy of Management Journal, 46, 106–117.
Chidambaram, L., Bostrom, R. P., & Wynne, B. E. (1991). The impact of GDSS on group development. Journal of Management Information Systems, 7, 3–25.
Cohen, J. A. (1960). Coefficient of agreement for nominal scales. Educational and Psychological Measurement, 20, 37–46.
Coleman, J. S. (1990). Foundations of social theory. Cambridge, MA: Harvard University Press.
Connolly, T., Jessup, L. M., & Valacich, J. S. (1990). Effects of anonymity and evaluative tone on idea generation in computer-mediated groups. Management Science, 36, 97–120.
Crisp, C. B., & Jarvenpaa, S. L. (2000, August). Trust over time in global virtual teams. Paper presented at the Academy of Management conference, Toronto, Ont.
Culnan, M. J., & Markus, M. L. (1987). Information technologies. In F. M. Jablin, L. L. Putnam, K. H. Roberts, & L. W. Porter (Eds.), Handbook of organizational communication: An interdisciplinary perspective (pp. 420–443). Newbury Park, CA: Sage.
Dennis, A., Heminger, A., Nunamaker, J., & Vogel, D. (1990). Bringing automated support to large groups: The Burr–Brown experience. Information and Management, 18, 111–121.
Dirks, K. T. (1999). The effects of interpersonal trust on work group performance. Journal of Applied Psychology, 84, 445–455.
Dirks, K. T. (2000). Trust in leadership and team performance: Evidence from NCAA basketball. Journal of Applied Psychology, 85, 1004–1012.
Dirks, K. T., & Ferrin, D. L. (2001). The role of trust in organizational settings. Organization Science, 12, 450–469.
Dubrovsky, V. J., Kiesler, S., & Sethna, B. N. (1991). The equalization phenomenon: Status effects in computer-mediated and face-to-face decision-making groups. Human-Computer Interaction, 6, 119–146.
Eveland, J. D., & Bikson, T. K. (1989). Work group structures and computer support: A field experiment. ACM Transactions on Office Information Systems, 6, 354–379.
Ferrin, D. L., & Dirks, K. T. (2003). The use of rewards to increase and decrease trust: Mediating processes and differential effects. Organization Science, 14, 18–31.
Folger, J. P., Hewes, D. E., & Poole, M. S. (1984). Coding social interaction. In B. Dervin & M. Voight (Eds.), Progress in communication sciences (Vol. 4, pp. 115–161). Norwood, NJ: Ablex.
Frank, R. (1993). The strategic role of emotions: Reconciling under and over-socialized accounts of behavior. Rationality and Society, 5, 160–184.
Fulk, J., & DeSanctis, G. (1995). Electronic communication and changing organizational forms. Organization Science, 6, 1–13.
Fulk, J., Schmidt, J. A., & Schwarz, D. (1992). The dynamics of context-behaviour interactions in computer-mediated communication. In M. Lea (Ed.), Contexts of computer-mediated communication (pp. 7–29). London: Harvester-Wheatsheaf.
Futoran, G. C., Kelly, J. R., & McGrath, J. E. (1989). TEMPO: A time-based system for analysis of group interaction process. Basic and Applied Social Psychology, 10, 211–232.
Galegher, J., & Kraut, R. E. (1992). Computer-mediated communication and collaborative writing: Media influence and adaptation to communication constraints. Proceedings of the conference on Computer Supported Cooperative Work (CSCW) '92 (pp. 155–162). New York: ACM Press.
Galegher, J., & Kraut, R. E. (1994). Computer-mediated communication for intellectual teamwork: An experiment in group writing. Information Systems Research, 5, 110–138.


Gambetta, D. (1988). Can we trust? In D. Gambetta (Ed.), Trust: Making and breaking cooperative relations (pp. 213–238). New York: Basil Blackwell.
Gibson, C. B., & Manuel, J. A. (2003). Building trust: Effective multicultural communication processes in virtual teams. In C. B. Gibson & S. G. Cohen (Eds.), Virtual teams that work: Creating conditions for virtual team effectiveness (pp. 59–86). San Francisco: Jossey-Bass.
Golembiewski, R., & McConkie, M. (1975). The centrality of interpersonal trust in group process. In C. L. Cooper (Ed.), Theories of group process (pp. 131–185). New York: Wiley.
Goodman, P. S., & Wilson, J. M. (2000). Substitutes for socialization and exocentric teams. In M. Neale, B. Mannix (Series Eds.), & T. Griffith (Vol. Ed.), Research in groups and teams (Vol. 3, pp. 53–77). Stamford, CT: JAI Press.
Griffith, T. L., Sawyer, J. E., & Neale, M. A. (2003). Virtualness and knowledge in teams: Managing the love triangle of organizations, individuals, and information technology. MIS Quarterly, 27, 265–287.
Guetzkow, H. (1950). Unitizing and categorizing problems in coding qualitative data. Journal of Clinical Psychology, 6, 47–58.
Gulati, R. (1995). Does familiarity breed trust? The implications of repeated ties for contractual choice in alliances. Academy of Management Journal, 38, 85–112.
Handy, C. (1995). Trust and the virtual organization. Harvard Business Review, 73, 40–48.
Harrison, D. A., Price, K. H., Gavin, J. H., & Florey, A. T. (2002). Time, teams, and task performance: Changing effects of surface- and deep-level diversity on group functioning. Academy of Management Journal, 45, 1029–1045.
Harrison, D. A., Mohammed, S., McGrath, J. E., Florey, A. T., & Vanderstoep, S. W. (2003). Time matters in team performance: Effects of member familiarity, entrainment, and task discontinuity on speed and quality. Personnel Psychology, 56, 633–669.
Herrig, J. P. (1930). The measurement of liking and disliking. Journal of Educational Psychology, 21, 159–196.
Hiltz, S. R., Johnson, K., & Turoff, M. (1986). Experiments in group decision-making: Communication process and outcome in face-to-face versus computerized conferences. Human Communication Research, 13, 225–252.
Hollingshead, A. B. (1996). Information suppression and status persistence in group decision making: The effect of communication media. Human Communication Research, 23, 193–219.
Hollingshead, A. B., McGrath, J. E., & O'Connor, K. M. (1993). Group task performance and communication technology: A longitudinal study of computer-mediated versus face-to-face work groups. Small Group Research, 24, 307–334.
James, R., Demaree, R. G., & Wolf, G. (1984). Estimating within-group interrater reliability with and without response bias. Journal of Applied Psychology, 69, 85–98.
Jarvenpaa, S. L., Knoll, K., & Leidner, D. E. (1998). Is anybody out there? Antecedents of trust in global virtual teams. Journal of Management Information Systems, 14, 29–64.
Johnson-George, C., & Swap, W. (1982). Measurement of specific interpersonal trust: Construction and validation of a scale to assess trust in a specific other. Journal of Personality and Social Psychology, 43, 1306–1317.
Kiesler, S., Siegel, J., & McGuire, T. W. (1984). Social psychological aspects of computer-mediated communication. American Psychologist, 39, 1123–1134.
Kiesler, S., Zubrow, D., Moses, A. M., & Geller, V. (1985). Affect in computer-mediated communication. Human Computer Interaction, 1, 77–104.
Kraut, R. E., Galegher, J., & Egido, C. (1988). Relationships and tasks in scientific research collaboration. Human Computer Interaction, 3, 31–58.


Landis, J. R., & Koch, G. G. (1977). Measurement of observer agreement for categorical data. Biometrics, 33, 159–175.
Langfred, C. W. (2004). Too much of a good thing? Negative effects of high trust and individual autonomy in self-managing teams. Academy of Management Journal, 47, 385–399.
Lawler, E. (1992). The ultimate advantage: Creating the high involvement organization. San Francisco: Jossey-Bass.
Lawrence, J. A., & Mongeau, P. (1996, May). The role of image and anticipation of future interaction in computer-mediated impression development. Paper presented at the meeting of the International Communication Association in Chicago.
Lewicki, R. J., & Bunker, B. B. (1996). Developing and maintaining trust in work relationships. In R. M. Kramer & T. R. Tyler (Eds.), Trust in organizations: Frontiers of theory and research (pp. 114–139). Thousand Oaks, CA: Sage.
Lewicki, R. J., McAllister, D. J., & Bies, R. J. (1998). Trust and distrust: New relationships and realities. Academy of Management Review, 23, 438–458.
Liang, K. Y., & Zeger, S. L. (1986). Longitudinal data analysis using generalized linear models. Biometrika, 73, 13–22.
Mankin, D., Cohen, S. G., & Bikson, T. K. (1996). Teams and technology: Fulfilling the promise of the new organization. Boston: Harvard Business School Press.
March, J. G., & Shapira, Z. (1987). Managerial perspectives on risk and risk-taking. Management Science, 33, 1404–1418.
Martins, L. L., Gilson, L. L., & Maynard, M. T. (2004). Virtual teams: What do we know and where do we go from here? Journal of Management, 30, 805–825.
Mayer, R. C., & Davis, J. H. (1999). The effect of the performance appraisal system on trust for management: A field quasi-experiment. Journal of Applied Psychology, 84, 123–136.
Mayer, R. C., Davis, J. H., & Schoorman, F. D. (1995). An integrative model of organizational trust. Academy of Management Review, 20, 709–734.
McAllister, D. J. (1995). Affect- and cognition-based trust as foundations for interpersonal cooperation in organizations. Academy of Management Journal, 38, 24–59.
McGrath, J. E. (1984). Groups: Interaction and performance. Englewood Cliffs, NJ: Prentice-Hall.
McGrath, J. E. (1993). The JEMCO workshop: Description of a longitudinal study. Small Group Research, 24, 285–306.
McGrath, J. E., Arrow, H., Gruenfeld, D. H., Hollingshead, A. B., & O'Connor, K. M. (1993). Groups, tasks and technology: The effects of experience and change. Small Group Research, 24, 406–420.
Meyerson, D., Weick, K. E., & Kramer, R. M. (1996). Swift trust and temporary groups. In R. M. Kramer & T. R. Tyler (Eds.), Trust in organizations: Frontiers of theory and research (pp. 166–195). Thousand Oaks, CA: Sage.
Moore, D. A., Kurtzberg, T. R., Fox, C. R., & Bazerman, M. H. (1999). Positive illusions and forecasting errors in mutual fund investment decisions. Organizational Behavior and Human Decision Processes, 79, 95–114.
Moore, D. A., Kurtzberg, T. R., Thompson, L. L., & Morris, M. W. (1999). Long and short routes to success in electronically mediated negotiations: Group affiliations and good vibrations. Organizational Behavior and Human Decision Processes, 77, 22–44.
Naquin, C. E., & Paulson, G. D. (2003). Online bargaining and interpersonal trust. Journal of Applied Psychology, 88, 113–120.
O'Hara-Devereaux, M., & Johansen, R. (1994). Globalwork: Bridging distance, culture and time. San Francisco: Jossey-Bass.
Orbell, J., & Dawes, R. (1991). A cognitive miser's theory of cooperators' advantage. American Political Science Review, 85, 515–528.
Ouchi, W. G. (1981). Theory Z: How American business can meet the Japanese challenge. Reading, MA: Addison-Wesley.
Parks, M. R., & Roberts, L. D. (1998). Making MOOsic: The development of personal relationships online and a comparison to their off-line counterparts. Journal of Social and Personal Relationships, 15, 517–537.
Paul, D. L., & McDaniel, R. R. (2004). A field study of the effect of interpersonal trust on virtual collaborative relationship performance. MIS Quarterly, 28, 183–227.
Pelled, L. H., Eisenhardt, K. M., & Xin, K. R. (1999). Exploring the black box: An analysis of work group diversity, conflict and performance. Administrative Science Quarterly, 44, 1–29.
Piccoli, G., & Ives, B. (2003). Trust and the unintended effects of behavior control in virtual teams. MIS Quarterly, 27, 365–376.
Poole, M. S. (1999). Organizational challenges for the new forms. In G. DeSanctis & J. Fulk (Eds.), Shaping organization form: Communication, connection and community (pp. 453–471). Thousand Oaks: Sage.
Poole, M. S., & Van de Ven, A. H. (1989). Using paradox to build management and organization theories. Academy of Management Review, 14, 562–578.
Rice, R. E. (1984). The new media: Communication, research and technology. Beverly Hills: Sage.
Rousseau, D. M. (1995). Psychological contracts in organizations: Understanding written and unwritten agreements. Thousand Oaks: Sage.
Rousseau, D. M., Sitkin, S. B., Burt, R. S., & Camerer, C. (1998). Not so different after all: A cross-discipline view of trust. Academy of Management Review, 23, 393–404.
Salancik, G. R., & Pfeffer, J. (1978). A social information processing approach to job attitudes and task design. Administrative Science Quarterly, 23, 224–253.
Seligman, A. B. (1998). Trust and sociability: On the limits of confidence and role expectations. American Journal of Economics and Sociology, 57, 391–404.
Sheppard, B. H., & Sherman, D. M. (1998). The grammars of trust: A model and general implications. Academy of Management Review, 23, 422–437.
Short, J., Williams, E., & Christie, B. (1976). The social psychology of telecommunications. London: John Wiley.
Siegel, J., Dubrovsky, V., Kiesler, S., & McGuire, T. W. (1986). Group processes in computer-mediated communication. Organizational Behavior and Human Decision Processes, 37, 157–187.
Simons, T. L., & Peterson, R. S. (1998, August). Task conflict and relationship conflict in top management teams: The pivotal role of intragroup trust. Paper presented at the Academy of Management conference, San Diego, CA.
Sniezek, J. A. (1992). Groups under uncertainty: An examination of confidence in group decision making. Organizational Behavior and Human Decision Processes, 52, 124–156.
Solomon, C. M. (2001). Managing virtual teams. Workforce, 80, 60–65.
Sproull, L., & Kiesler, S. (1986). Reducing social context cues: Electronic mail in organizational communication. Management Science, 32, 1492–1512.
Straus, S. G. (1996). Getting a clue: Communication media and information distribution effects on group process and performance. Small Group Research, 27, 115–142.
Straus, S. G. (1997). Technology, group process, and group outcomes: Testing the connections in computer-mediated and face-to-face groups. Human–Computer Interaction, 12, 227–266.
Szulanski, G. (2000). The process of knowledge transfer: A diachronic analysis of stickiness. Organizational Behavior and Human Decision Processes, 82, 9–21.
Tidwell, L. C., & Walther, J. B. (2002). Computer-mediated communication effects on disclosure, impressions and interpersonal evaluations: Getting to know one another a bit at a time. Human Communication Research, 28, 317–348.
Tuckman, B. W. (1965). Developmental sequence in small groups. Psychological Bulletin, 63, 384–399.
Tyler, T. R., & DeGoey, P. (1996). Trust in organizational authorities: The influence of motive attributions on willingness to accept decisions. In R. M. Kramer & T. R. Tyler (Eds.), Trust in organizations: Frontiers of theory and research (pp. 331–356). Thousand Oaks, CA: Sage.
Tyler, T. R., & Kramer, R. M. (1996). Whither trust? In R. M. Kramer & T. R. Tyler (Eds.), Trust in organizations: Frontiers of theory and research (pp. 1–15). Thousand Oaks, CA: Sage.
Walther, J. B. (1992). Interpersonal effects in computer-mediated interaction: A relational perspective. Communication Research, 19, 52–90.
Walther, J. B. (1993). Impression development in computer-mediated interaction. Western Journal of Communication, 57, 381–393.
Walther, J. B. (1995). Relational aspects of computer-mediated communication: Experimental observations over time. Organization Science, 6, 186–203.
Walther, J. B. (1996). Computer-mediated communication: Impersonal, interpersonal, and hyperpersonal interaction. Communication Research, 23, 3–43.
Walther, J. B. (1997). Group and interpersonal effects in international computer-mediated collaboration. Human Communication Research, 23, 342–369.
Walther, J. B., Anderson, J. F., & Park, D. W. (1994). Interpersonal effects in computer-mediated interaction: A meta-analysis of social and anti-social communication. Communication Research, 21, 460–487.
Walther, J. B., & Burgoon, J. K. (1992). Relational communication in computer-mediated interaction. Human Communication Research, 19, 50–88.
Walther, J. B., Slovacek, C. L., & Tidwell, L. C. (2001). Is a picture worth a thousand words? Photographic images in long-term and short-term computer-mediated communication. Communication Research, 28, 105–135.
Weingart, L. R. (1997). How did they do that? The ways and means of studying group process. Research in Organizational Behavior, 19, 189–239.


Weisband, S. P. (1992). Group discussion and first advocacy effects in computer-mediated and face-to-face decision making groups. Organizational Behavior and Human Decision Processes, 53, 352–380.
Weisband, S., & Iacono, S. (1997). Developing trust in virtual teams. In Proceedings of the Hawaii international conference on systems sciences (Vol. 30, pp. 412–420). Wailea, HI.
Weisband, S., Schneider, S. K., & Connolly, T. (1995). Computer-mediated communication and social information: Status salience and status differences. Academy of Management Journal, 38, 1124–1151.
Werts, C. E., Linn, R. L., & Joreskog, K. G. (1974). Intraclass reliability estimates: Testing structural assumptions. Educational and Psychological Measurement, 34, 25–33.
Wheelan, S. A. (1994). Group processes: A developmental perspective. Sydney: Allyn & Bacon.
Wichman, H. (1970). Effects of isolation and communication on cooperation in a two person game. Journal of Personality and Social Psychology, 16, 114–120.
Williams, M. (2001). In whom we trust: Group membership as an affective context for trust development. Academy of Management Review, 26, 377–396.
Wilson, J. M. (2001). The development of trust in distributed groups. Unpublished doctoral dissertation, Carnegie Mellon University, Pittsburgh, PA.
Zand, D. E. (1972). Trust and managerial problem-solving. Administrative Science Quarterly, 17, 229–239.
Zucker, L. G. (1986). Production of trust: Institutional sources of economic structure. In B. M. Staw & L. L. Cummings (Eds.), Research in organizational behavior (pp. 53–111). Greenwich, CT: JAI.
Zucker, L. G., Darby, M. R., Brewer, M. B., & Peng, Y. (1996). Collaboration structure and informant dilemmas in biotechnology. In R. M. Kramer & T. R. Tyler (Eds.), Trust in organizations: Frontiers of theory and research (pp. 90–113). Thousand Oaks, CA: Sage.