Reconceptualizing trust: A non-linear Boolean model

Information & Management 52 (2015) 483–495

Henri Barki, Jacques Robert, Alina Dulipovici *
HEC Montreal, Montreal, Quebec H3T 2A7, Canada

ARTICLE INFO

Article history: Received 15 April 2014; Received in revised form 29 January 2015; Accepted 14 February 2015; Available online 24 February 2015

Keywords: Trust; Characteristics of trustworthiness; Trusting behaviors; Ability; Benevolence; Integrity

ABSTRACT

Although ability, benevolence, and integrity are generally recognized to be three key characteristics of trustworthiness that explain much of the within-truster variation in trustworthiness, some researchers have noted conceptual issues regarding how these characteristics are related to trust and have detected empirical inconsistencies in past research. The present paper suggests that in many contexts, the three characteristics of trustworthiness are non-linearly related to trusting behaviors and tests this idea via a multi-method approach (two laboratory experiments and a qualitative organizational study). The results of the three studies strongly support the validity and usefulness of the non-linear relationship hypothesized between ability, benevolence, and integrity.

© 2015 Elsevier B.V. All rights reserved.

1. Introduction

The construct of trust has been widely researched in different domains, including organizational studies, economics, psychology and sociology (e.g., [3,6,11,20]). According to a generally accepted definition, trust is "...the willingness of a party to be vulnerable to the actions of another party based on the expectation that the other party will perform a particular action important to the truster, irrespective of the ability to monitor or control the other party" ([20], p. 712). In the same well-cited review paper, Mayer et al. [20] also identified ability, benevolence, and integrity to be three key characteristics of trustworthiness that help explain much of the within-truster variation observed in trust. Ability (or competence) represents the skills or expertise that one party has that enables it to have influence in a given domain. Benevolence reflects a truster's perception of how positive an orientation the trustee has towards him or her, i.e., how much the truster perceives the trustee to have the truster's interests at heart. Finally, integrity is the truster's perception "...that the trustee adheres to a set of principles that the truster finds acceptable" (p. 719). Following Mayer et al. [20], many organizational researchers have conceptualized and empirically supported a latent construct of trust reflected by ability, benevolence and integrity (e.g.,

* Corresponding author. Tel.: +1 514 340 7301. E-mail addresses: [email protected] (H. Barki), [email protected] (J. Robert), [email protected] (A. Dulipovici).
http://dx.doi.org/10.1016/j.im.2015.02.001
0378-7206/© 2015 Elsevier B.V. All rights reserved.

[8,13,14,18,22]). However, a meta-analysis by Colquitt et al. [6] raised concerns regarding the mixed relationships observed in past research between trust and the three characteristics, concluding that a key question was whether "...all facets of trustworthiness—ability, benevolence, and integrity—have significant, unique relationships with trust, and how strong are those relationships?" ([6], p. 910). Other researchers have raised similar concerns (e.g., [26]), underscoring the need to carefully examine these relationships (e.g., [21]) and noting that "...perceptions about trustworthiness lead to decisions about willingness to be vulnerable, which in turn translate into a variety of trusting behaviors. Nevertheless, we know of few studies that actually validate this entire causal chain of events" ([21], p. 40).

The present study suggests that for certain boundary conditions, a non-linear Boolean conceptualization of the relationship between the three characteristics of trustworthiness and trust can provide a more parsimonious and powerful model than the traditional linear relationship that past research has assumed to exist between these characteristics and trust. To examine this idea, an experimental study was first conducted to investigate the viability of the proposed conceptualization in an artificial setting via a game theoretical approach. Based on the encouraging results of Study 1, the proposed conceptualization was then examined in Study 2 with qualitative data collected in an IT project setting, providing triangulation evidence and support for the proposed conceptualization in an IS context. The results of Study 2 also suggested an extension of the proposed conceptualization that addressed situations with incomplete information. This idea was tested in Study 3, which


repeated the experiment of Study 1, but this time including conditions with incomplete information. Its results provided further evidence of the utility of the proposed conceptualization.

The paper is structured as follows. In Section 2, the theoretical development and justification of the proposed conceptualization are discussed. In Section 3, the method and results of each of the three studies are described in turn and the findings are discussed. In Section 4, the findings of the three studies and their theoretical and practical implications are discussed from an overall perspective; Section 4 also provides ideas and suggestions for future research while acknowledging some limitations of the study. Finally, Section 5 ends with a short conclusion that summarizes the paper's key contributions.

2. Theory

It is generally agreed that trust is influenced by ability, benevolence and integrity [6,26] and that "...all three factors of ability, benevolence, and integrity can contribute to trust in a group or organization" ([26], p. 345). These authors also provide a buyer-supplier example to illustrate that a buyer's beliefs concerning the supplier's ability to supply a high-quality product suggest only that the supplier could deliver such a product but not that the supplier will do so, and hence, ability alone would not be enough for a buyer to trust a supplier. Similarly, knowing that a supplier has integrity indicates only that it will try to fulfil its agreements as promised. However, if the supplier's capability is not assured, then the fact that it has integrity will not be sufficient for the buyer to trust it. Finally, the perception that the supplier is benevolent indicates that it will try very hard to satisfy the buyer's needs. However, if the buyer is unsure about the supplier's integrity (e.g., due to its inconsistent track record with other buyers), then it will not necessarily trust the supplier.
Hence, according to [26], "As the perception of each of these factors increases, we would expect an increase in willingness to take a risk in the relationship" ([26], p. 346). However, Colquitt et al.'s [6] meta-analysis and Schoorman et al.'s [26] review have also underscored empirical and theoretical concerns regarding a lack of independence between benevolence and integrity. For example, although some studies have observed high correlations between benevolence and integrity, others have failed to observe significant, unique effects for both (e.g., [15,19]), prompting Colquitt et al. [6] to conclude that "...it may also be that the effects of the two character facets—benevolence and integrity—are redundant with each other" (p. 911).

Reviewing trust research, Schoorman et al. [26] expressed a similar conceptual concern and provided a theoretically plausible explanation for the mixed results regarding the benevolence-integrity-trust relationship. Specifically, they noted that whereas judgments of ability and integrity could form relatively quickly in the course of a relationship, benevolence judgments needed more time to develop. They also concluded that "...studies conducted in laboratory settings were more likely to show a high correlation between benevolence and integrity because the relationships had not had time to develop any real data about benevolence. In field samples where the parties had longer relationships, benevolence and integrity were more likely to be separable factors. We continue to find this pattern to be consistent in our research" (p. 346).

Thus, the mixed results of past research and the above theoretical considerations suggest that it would be useful to take into account the contextual characteristics of different trust situations to more clearly identify trust's determinants. For example, Schoorman et al.'s [26] point regarding the long time it takes benevolence judgments to develop suggests that time

could be a potentially important boundary condition. Thus, when studying parties who are interacting for the first time, with no prior history or relationship between them, e.g., studies of initial trust, benevolence could be hypothesized to be unlikely to significantly influence trust or to provide largely redundant information with integrity. A theoretical explanation of the overlap between benevolence and integrity can also be found in the idea that two key motivational determinants of trust are notions of ‘‘can do’’ and ‘‘will do,’’ where ‘‘. . .ability captures the ‘‘can-do’’ component of trustworthiness by describing whether the trustee has the skills and abilities needed to act in an appropriate fashion. In contrast, the character variables [i.e., integrity and benevolence] capture the ‘‘will-do’’ component by describing whether the trustee will choose to use those skills and abilities to act in the best interests of the truster. Such ‘‘can-do’’ and ‘‘will-do’’ explanations of volitional behavior tend to exert effects independent of one another. . .’’ ([6], pp. 910–911). Because benevolence and integrity both reflect the ‘‘will-do’’ aspect, the extent of their overlap will likely depend on the specific contexts examined by different researchers, suggesting that their high correlation and lack of significant unique effects can stem from their overlap in contexts that are largely governed by economic rationality. For example, in e-commerce transaction contexts (e.g., [13,16,27]), when parties interact for the first time and a buyer has accurate information about a seller’s ability and integrity (i.e., honesty), the buyer is unlikely to need information about the seller’s benevolence to trust the seller. Hence, as noted above, in contexts of initial trust where economic rationality operates, trust will likely be influenced essentially by ability and integrity, with benevolence providing mostly redundant information to that communicated by integrity. 
Similarly, when a buyer has accurate information about a seller's ability and benevolence, it can decide to trust the seller without needing to know about the seller's integrity (in contexts of initial trust). These considerations suggest that first-time interactions between parties that operate under economic rationality, e.g., buyers and sellers in e-commerce contexts, provide key boundary conditions under which the ability plus either the integrity or the benevolence of a party may be sufficient for the other party to consider it trustworthy in making a trusting decision. Formally, this non-linear link between ability, benevolence, integrity, and trust is represented by a Boolean relationship, where:

Trust = f[(Ability) AND (Benevolence OR Integrity)]

In Section 3 that follows, we describe three studies (an experiment in an artificial setting, a qualitative study in an IT project context, and a second experiment in an artificial setting) that were conducted to investigate the proposed non-linear Boolean relationship.

3. Method

Three studies were conducted to examine the proposed relationship between the three characteristics of trustworthiness and trust in contexts of initial trust. Study 1 operationalized the study constructs in a game theoretic experiment as an initial test of the viability of the proposed model. Following its encouraging results, qualitative data collected in a different study were recoded and analyzed in Study 2 to provide triangulation evidence for the proposed model in an IS project management context. Finally, based on the findings of the two studies, the non-linear model was extended to contexts where information about one of the three characteristics of trustworthiness would be lacking, and the extended model was tested in Study 3.
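As a minimal illustration (a sketch by the editor, not part of the original studies), the hypothesized Boolean relationship can be evaluated over the eight possible high/low combinations of the three characteristics; trust is predicted only when ability is high and at least one of benevolence or integrity is high:

```python
# Sketch of the proposed Boolean trust model:
# Trust = (Ability) AND (Benevolence OR Integrity)
from itertools import product

def trust(ability, benevolence, integrity):
    """Non-linear Boolean trust model: high/low levels coded as 1/0."""
    return bool(ability and (benevolence or integrity))

for a, b, i in product([0, 1], repeat=3):
    print(f"A={a} B={b} I={i} -> trust={int(trust(a, b, i))}")
```

Of the eight conditions, only 101, 110, and 111 yield trust, which is the pattern the experiments below set out to test.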


3.1. Study 1: Trusting behaviors in an experimental setting

Following McEvily and Tortoriello's [21] suggestion regarding the need to examine the relationship between the characteristics of trustworthiness and trusting behaviors, we devised a simple game to operationalize ability, benevolence, and integrity in contexts where a decision-maker has to make a decision under uncertainty and has access to advice regarding the best course of action to follow from an advisor with whom she has had no previous experience, such as in online recommendation agent contexts (e.g., [27]). More specifically, the game served to operationalize trust contexts in which a decision-maker (e.g., an individual, a group or an organization) makes an observable action or decision under uncertainty with the help of another individual, group, organization or machine. To do so, and consistent with Bhattacharya et al.'s [3] definition of trust as a subjective probability, we defined a trusting behavior as a decision made by an optimizing decision-maker only when he or she attaches a sufficiently high probability to a positive outcome. In the game, the decision-maker (she) cannot observe the state of nature that ultimately will determine the payoffs that will result from her decision. However, the advisor (he) is able to observe or detect some useful information about the state of nature and reports this, i.e., his recommendation, to the decision-maker. The decision-maker can then elect to use this information (or not) to decide which course of action to take. In this situation, the decision-maker can be said to trust the advisor whenever her decision follows the course of action recommended by the advisor. As such, the game enables the experimental investigation of the influence of the three characteristics of trustworthiness on trusting behaviors (i.e., a decision-maker's decisions that follow the advisor's recommendation).
To the extent that the experimental conditions adequately operationalize the ability, benevolence, and integrity of the advisors, and the decision-makers' perceptions of these manipulations influence their (observable) trusting decisions, the determinants of trusting behaviors can be experimentally manipulated and their effect on trusting behaviors can be examined. A more detailed description of the game is provided next.

3.1.1. Method—Study 1

3.1.1.1. Operationalizing a trust game of heads-or-tails. A simple trust game of heads-or-tails was devised to operationalize the characteristics of trustworthiness and trusting behaviors. Each round of the game has two players, who are assigned one of two roles, i.e., decision-maker or advisor. In each round, a random coin toss is first drawn, with its outcome remaining hidden from the players. Then, the advisor receives a probabilistic signal on whether the coin toss result was "heads" or "tails". Upon receipt of this signal, the advisor decides which report to submit to the anonymous decision-maker: "My signal is heads" or "My signal is tails". Next, the decision-maker receives this report and decides whether to bet "heads", "tails" or "no bet". When her decision is made, both players are informed of the coin toss result and their winnings. The players' decisions essentially depend on the game's parameters, i.e., the probability that the signal is correct and their potential winnings after the coin toss. The game's parameters make betting unattractive unless the decision-maker receives some useful information from the advisor and she sufficiently trusts the advisor's report.
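The round structure just described can be sketched as a short simulation (an editorial illustration, not the authors' experimental software; the function name and parameters are hypothetical):

```python
# Sketch of one round of the heads-or-tails trust game described in the text.
import random

def play_round(accuracy, advisor_truthful, dm_follows):
    """Simulate one round: hidden coin toss, noisy signal, report, bet."""
    coin = random.choice(["heads", "tails"])
    # The advisor's probabilistic signal matches the coin with prob. `accuracy`.
    if random.random() < accuracy:
        signal = coin
    else:
        signal = "tails" if coin == "heads" else "heads"
    # The advisor reports his signal truthfully or lies about it.
    report = signal if advisor_truthful else ("tails" if signal == "heads" else "heads")
    # The decision-maker either follows the report (a trusting decision) or abstains.
    bet = report if dm_follows else "no bet"
    return coin, report, bet

random.seed(1)
print(play_round(accuracy=0.9, advisor_truthful=True, dm_follows=True))
```

With a perfectly accurate signal and a truthful advisor, a decision-maker who follows the report always bets on the true coin outcome; lowering the accuracy or honesty breaks that link, which is what the experimental manipulations exploit.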
More specifically, the game’s reward structure parameters were selected so that with a 50–50 probability of heads or tails, a risk-neutral or risk-averse decision-maker should refrain from betting, but when the advisor’s signal is highly correlated with the outcome of the coin toss and he communicates his signal truthfully, then it is advantageous for the

485

decision-maker to bet by following the advisor’s recommendation. In the game, the decision-maker wins three units if she bets and guesses the outcome of the coin toss correctly, wins zero units if she bets but guesses the outcome of the coin toss incorrectly, and wins an automatic two units if she does not bet. Thus, the decisionmaker wins an extra unit with a correct guess but forfeits a gain of two with a wrong guess. Ability was operationalized via the predictive accuracy of the advisor’s signal, i.e., how accurately the advisor can guess the outcome of the coin toss, which in a sense represents the unique expertise or capability he can provide. The higher the advisor’s accuracy is, the higher the decision-maker will perceive his ‘‘ability’’ to provide good recommendations to be. Specifically, ability was operationalized at two levels: low (predicting the coin toss correctly 6 times out of 10, i.e., only slightly above 50%) and high (predicting the coin toss correctly 9 times out of 10). Note that the construct of ability and its operationalization as signal accuracy can both be viewed as the extent to which the advisor is able to help (or can help). When the advisor’s signal is correct only 6 times out of 10, a risk-neutral decision-maker should not bet, even if she believes that the advisor communicated his signal truthfully. In this case, betting has a negative expected value and is a losing proposition {(1  0.6) + (2  0.4) = 0.2}. However, when the signal is correct 9 times out of 10, betting becomes advantageous because its expected value is positive {(1  0.9) + (2  0.1) = 0.7}, unless the decision-maker is highly risk-averse. (Note that the accuracy level, i.e., the ‘‘ability’’ of the advisor’s signal, which is needed so that a decision-maker makes a bet, depends on her risk.) The advisor’s ability is not a sufficient condition for making a betting decision because he could lie in his report, which would then render it untrustworthy. 
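The betting calculus above can be checked in a few lines (an editorial sketch; payoffs are expressed relative to the sure two units from not betting):

```python
# Expected value of betting, relative to the sure payoff of not betting:
# +1 unit for a correct guess (3 - 2), -2 units for a wrong guess (0 - 2).
def expected_value_of_bet(p_correct):
    """EV of following a truthful advisor whose signal is right with p_correct."""
    return 1 * p_correct + (-2) * (1 - p_correct)

low_ability = expected_value_of_bet(0.6)   # signal right 6 times out of 10
high_ability = expected_value_of_bet(0.9)  # signal right 9 times out of 10
print(low_ability, high_ability)
```

The low-ability condition yields a negative expected value (about −0.2 units) and the high-ability condition a positive one (0.7 units), matching the paper's arithmetic.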
In the game, the advisor’s willingness to be truthful depended on two characteristics: benevolence and integrity. Benevolence was operationalized as preference congruence, i.e., the alignment or match between the advisor’s and the decisionmaker’s preferences. This is consistent with [2], where the ‘‘benevolence’’ of some agent j is modeled by incorporating into j’s utility function the ‘‘private utility function’’ of another individual i, which represents i’s preferences on his/her private consumption. Here, we let both agents share the utility function. Thus, when the advisor’s decisions to be truthful or lie in his report determines his gains in the same way that the decisions determine the decision-maker’s gains, the preferences of the two parties can be said to be aligned or congruent. Assuming that the advisor and the decision-maker are rational, and hence that they will both act in their own interests, benevolence and preference congruence are behaviorally equivalent: the advisor’s recommendation will be based on his own interests, but because those interests are aligned or congruent with those of the decision-maker, the advisor’s recommendation will be perceived by the decisionmaker as reflecting her interests, i.e., having a positive orientation towards her. Conversely, when the decision-maker’s and the advisor’s preferences are not congruent, the decision-maker will perceive the advisor to have a negative orientation towards her. 
In sum, preference congruence provides an appropriate proxy for benevolence and was operationalized at two levels: high benevolence was operationalized by making the advisor's payoffs the same as the decision-maker's (gaining three units if the decision-maker won her bet, zero if she lost, and two if she did not bet), and low benevolence was operationalized by making the advisor's payoffs opposite to those of the decision-maker (gaining three units if the decision-maker lost her bet, zero if the decision-maker won her bet, and only one if the decision-maker decided not to bet). Note that benevolence also reflects the advisor's willingness to help the decision-maker win (or willingness to help).
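The two payoff schedules described above can be laid out as a small table (a sketch; the dictionary keys are hypothetical labels, while the unit amounts are those stated in the text):

```python
# Advisor payoffs (in units) as operationalized in the text, keyed by the
# decision-maker's outcome. "aligned" = high benevolence, "opposed" = low.
ADVISOR_PAYOFFS = {
    "aligned": {"dm_wins": 3, "dm_loses": 0, "no_bet": 2},
    "opposed": {"dm_wins": 0, "dm_loses": 3, "no_bet": 1},
}

def advisor_payoff(benevolent, dm_outcome):
    schedule = "aligned" if benevolent else "opposed"
    return ADVISOR_PAYOFFS[schedule][dm_outcome]

print(advisor_payoff(True, "dm_wins"), advisor_payoff(False, "dm_wins"))  # -> 3 0
```

Under the aligned schedule the advisor profits exactly when the decision-maker does; under the opposed schedule he profits from her losses, which is what makes his report potentially untrustworthy.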


Integrity was modeled by levying on the advisor a penalty for lying. The penalty captures the idea that lying is costly (morally or otherwise) and increases the advisor's reluctance to lie in his report (because the penalty reduces to zero any gains the advisor may obtain by lying) and hence can be viewed as a proxy of his integrity. In the game, the advisor's penalty for lying in his report, i.e., recommending heads (tails) when the signal was tails (heads), was set at three units, and there was no penalty when he did not lie. Note that the advisor's decision to report his signal truthfully or not reflects whether he is willing to help the decision-maker. When both the "can help" and "willing to help" conditions are met, the advisor is both able and willing to help the decision-maker. If the advisor's ability and incentives are known to the decision-maker, and if both the "can do" and "will do" conditions are met, then the decision-maker should trust the advisor's report and make the bet that he recommends. Based on the above operationalization, the experiment enabled the testing of the following hypothesis:

Trust hypothesis (TH). The necessary and sufficient conditions under which an equilibrium that reflects trust will exist in decision-maker/advisor contexts require ability and one or both of benevolence and integrity. That is, ability, benevolence, and integrity will influence trusting behaviors in a Boolean relationship where trusting behaviors will be determined by (Ability) AND (Benevolence OR Integrity).

3.1.1.2. Generalizing the heads-or-tails game. Although the above heads-or-tails game is specific, it can be generalized and reinterpreted to account for a larger set of contexts. Let M denote the gain (relative to not betting) of a decision-maker when her bet is successful, and let m denote her loss if the bet fails. A risk-neutral decision-maker will bet if the probability of success exceeds m/(m + M).
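To see how the lying penalty works, consider an editorial sketch of a non-benevolent advisor's expected gain (assumptions, not the authors' analysis: the decision-maker follows the report, signal accuracy is 0.9, and the advisor's payoffs are opposed to hers, i.e., three units if she loses her bet and zero if she wins):

```python
# Expected gain of a non-benevolent (opposed-payoff) advisor who lies or not,
# assuming the decision-maker follows his report.
def advisor_expected_gain(lie, accuracy=0.9, lying_penalty=0.0):
    """If truthful, the DM's bet is right with prob. `accuracy`; lying flips it."""
    p_dm_wins = (1 - accuracy) if lie else accuracy
    gain = 3 * (1 - p_dm_wins)          # this advisor is paid when the DM loses
    return gain - (lying_penalty if lie else 0.0)

print(advisor_expected_gain(lie=True))                    # lying pays: ~2.7 units
print(advisor_expected_gain(lie=True, lying_penalty=3))   # penalty makes lying worse
print(advisor_expected_gain(lie=False))                   # truth-telling: ~0.3 units
```

Without a penalty, this advisor profits from lying; the three-unit penalty wipes out that profit, so even an advisor with opposed preferences prefers to report truthfully, which is exactly the "integrity" effect the manipulation is meant to capture.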
Hence, for a trusting decision to be made, the decision-maker must attach a sufficiently high probability that the advisor is both competent and willing to be truthful in his report. Formally, a rational decision-maker’s calculation of the above game is as follows: let pa denote the probability that the advisor is able, let ph denote the probability that the advisor is honest (i.e., has a high cost of lying), and let pb denote the probability that the advisor is benevolent (i.e., his preferences are congruent with those of the decision-maker). Then, the advisor’s probability (from the decision-maker’s perspective) of having the incentive to truthfully reveal his signal (Benevolence OR Integrity) is:

pT = 1 − (1 − ph) × (1 − pb)

Let PAT denote the probability that the advisor's recommendation is correct when he is both able and truthful, which occurs with probability pa × pT. Let PT denote the probability that the advisor's recommendation is correct whenever the advisor has low ability but is truthful, which occurs with probability (1 − pa) × pT. Finally, let P0 denote the probability that the advisor's recommendation is correct if he is not truthful and his recommendation has no value, which occurs with probability (1 − pT). Given the decision-maker's beliefs about the advisor, she should assign the following probability that the advisor's recommendation is correct:

P0 + (PT − P0) × pT + (PAT − PT) × pa × pT

Hence, a risk-neutral decision-maker will follow the advisor's recommendation only if:

P0 + (PT − P0) × pT + (PAT − PT) × pa × pT > m/(m + M)

The decision to act on the advisor's recommendation depends on context. Here, it depends on the probability that the advisor is

able (pa) and truthful (pT), the benefit from a successful bet (M), the loss from an unsuccessful bet (m), and the degree to which ability matters (PAT − PT) and truthfulness matters (PT − P0). Finally, it depends on the decision-maker's attitude towards risk.

In the heads-or-tails game, the advisor observes a signal whose quality varies with his ability. The game can be reinterpreted in contexts where the probability of success depends on the advisor's effort and ability: then, P0 is the probability of success if the advisor makes no effort (will not do), PT is the probability of success if the advisor puts in the required effort but has low ability (cannot do), and PAT is the probability of success if the advisor puts in the required effort and has high ability (will do and can do), allowing the game to be reinterpreted in a larger context.

3.1.1.3. Experimental design. Forty-four male and 20 female participants were recruited from a list of volunteers of a research organization. Because the organization has a large pool of volunteers from different backgrounds who enlist as potential subjects in the numerous studies conducted by this organization, it was felt that the organization provided a relatively representative pool of subjects. Because the study had no pre-selection hypotheses, there were also no restrictions placed on the volunteers who were able to participate. Although a certain degree of self-selection bias is likely to exist due to the voluntary nature of the subjects' participation in the study, it was also felt that such a bias would be unlikely to affect the study's results. Of the subjects, 45 were students, 12 were employed, and seven were unemployed. Twenty-eight participants had an undergraduate degree, 23 had a community college diploma, five had a certificate, and eight had a master's degree. The participants' average age was 27.4 years.
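The risk-neutral decision rule derived in the generalized game above can be sketched numerically (an editorial sketch; the parameter values are illustrative assumptions, not estimates from the experiment):

```python
# Sketch of the risk-neutral trusting-decision rule derived in the text.
def will_follow_advice(pa, ph, pb, P0, PT, PAT, m, M):
    """Return True if a risk-neutral decision-maker should follow the advice."""
    pT = 1 - (1 - ph) * (1 - pb)            # prob. advisor is truthful (B OR I)
    p_correct = P0 + (PT - P0) * pT + (PAT - PT) * pa * pT
    return p_correct > m / (m + M)

# Illustrative values: an uninformative report is right half the time (P0 = 0.5),
# a truthful low-ability advisor is right 60% of the time, a truthful
# high-ability advisor 90%; betting risks m = 2 units to gain M = 1.
print(will_follow_advice(pa=1.0, ph=1.0, pb=0.0, P0=0.5, PT=0.6, PAT=0.9, m=2, M=1))
print(will_follow_advice(pa=1.0, ph=0.0, pb=0.0, P0=0.5, PT=0.6, PAT=0.9, m=2, M=1))
```

With these values the threshold m/(m + M) is 2/3: a surely able and honest advisor clears it (0.9 > 2/3), while an advisor who is neither honest nor benevolent does not (0.5 < 2/3), mirroring the Boolean hypothesis.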
The concepts of ability, benevolence, and integrity were each two-level factors, yielding eight experimental conditions. An equal number of decisions was made in each condition, and the participants played the game in groups of 16. Four sessions were held over two days with no interaction between the participants of different groups. In each round of a session, the 16 participants were divided into two groups of eight advisors and eight decision-makers for that round. Thus, data were generated by eight independent replications of eight participants each. With 64 participants playing 30 rounds each, a total of 1920 observable decisions was obtained (960 advisor recommendations and 960 betting decisions), yielding 120 recommendations and 120 betting decisions per condition.

3.1.1.4. Experimental procedures. At the beginning of each session, the subjects were given a 20-minute presentation in which the researchers explained how the session was structured and the procedures that would be followed, how the subjects would be paid, the nature of the heads-or-tails game they would play (including a complete explanation of the experimental conditions), the rules that would govern the game, how the gains of each subject would be calculated, and the pay-offs that could be expected in each experimental condition.

Each round consisted of the following steps. First, eight of the 16 players were randomly assigned to be that round's advisors. Each advisor was then randomly assigned to one of the eight experimental conditions that corresponded to the eight versions of the game and told the probability associated with the coin toss result.
For example, an advisor assigned to condition 110 (i.e., Ability = 1, Benevolence = 1, and Integrity = 0, where 1 corresponds to high and 0 to low levels of a factor) was told that his signal had a 0.9 probability of correctly guessing the result of the coin toss, that his payoffs following the coin toss would be the same as the decision-maker’s, and that lying in his report would entail no penalty (Fig. 1).
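The decision counts reported in the design subsection above follow from simple arithmetic, which can be verified directly:

```python
# Arithmetic check of the design's decision counts.
participants = 64
rounds_per_participant = 30
decisions = participants * rounds_per_participant   # one decision per player per round
recommendations = decisions // 2                    # half the players advise each round
bets = decisions // 2                               # the other half make betting decisions
conditions = 2 ** 3                                 # ability x benevolence x integrity
print(decisions, recommendations, bets, bets // conditions)
# -> 1920 total decisions, 960 recommendations, 960 bets, 120 bets per condition
```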


Fig. 1. The experimental procedure.

Upon receipt of his signal, the advisor decided which of two reports to submit to an anonymous decision-maker (one of the 15 other subjects in the room): "My signal is heads" or "My signal is tails". Next, the decision-maker was informed of that round's experimental conditions that applied to her and her advisor and received the advisor's report. Upon receipt of the advisor's report, she had to decide whether to bet "heads", to bet "tails" or to "not bet". Once her decision was made, the result of the coin toss determined each subject's gains (Experimental Monetary Units, or EMUs) for that round, and all subjects were informed of the outcome of their decision, i.e., the coin toss result, whether the advisor's recommendation was correct, the EMUs they gained, and their running EMU total. Note that the subjects could not lose any EMUs in any round: they won three EMUs when they bet and guessed correctly, zero EMUs when they bet and guessed incorrectly, and two EMUs when they did not bet. (The advisor's gains were either the same as or opposed to the gains of his decision-maker; in the latter case, the advisor gained zero, one, or three EMUs, depending on the decision-maker's bet.) Also note that the decision-makers were not told what signal their advisors had actually received; therefore, they could not determine whether their advisor had been truthful. At the end of the experimental sessions, each subject received $5 plus the total of their EMUs at the rate of $0.25/EMU. The average pay-out per subject was approximately $23. The sessions were programmed with z-Tree [12]. Each participant interacted via an anonymous software interface and was separated from the others by a curtain to ensure privacy. At the beginning of each round, the participants were matched two by two, and their respective roles were assigned by z-Tree.
The software randomly determined both the assignments and coin toss outcomes and ensured that the signals received by the advisors were correct with probability 0.6 or 0.9 when matched with the results of the virtual coin tosses it made.

3.1.1.5. Manipulation checks. A manipulation check of the experimental treatments was made via questionnaire data collected during the experiment. Every six rounds, the participants responded to a twelve-item questionnaire that assessed (via 0–10 Likert scales) their perceptions of the advisor's ability (in this round, the advisor was well-informed about the result of the coin toss, was a pretty good advisor, and performed his advisor role well; alpha = 0.91), benevolence (in this round, the advisor acted in my best interests, was benevolent, and wanted my well-being; alpha = 0.95), and integrity (in this round, the advisor was honest,

was sincere, and was truthful; alpha = 0.97) and their trust in the advisor (in this round, the advisor was someone I could trust, gave me advice that I did not need to doubt, and gave me advice that I could trust; alpha = 0.97). The means of all four constructs were consistent with their manipulation in the experimental conditions as low or high: ability (3.67 and 4.52, p = 0.02), benevolence (2.45 and 4.86, p = 0.000), integrity (3.71 and 4.70, p = 0.01), and trust (2.93 and 4.51, p = 0.000). These results indicated that the experimental manipulations of ability, benevolence, and integrity corresponded well to the participants' perceptions of the three characteristics of trustworthiness conceptualized in the literature. Note that the benevolence scores matched the experimental manipulation of benevolence fairly well, providing evidence that operationalizing benevolence as preference congruence was appropriate.1

3.1.2. Results and discussion—Study 1

The results of the ANOVA and post hoc analyses are provided in Table 1. The TH prediction for each condition is shown in the second column of Table 1, with column 3 indicating the percentage of times (out of the 120 trusting decisions in each condition) that the bets of the decision-makers were the same as the advisor's recommendations (i.e., a trusting decision was made). As observed, 66%, 88% and 96% of betting decisions followed the advisors' recommendations to bet, indicating that the decision-makers exhibited trust in experimental conditions 101, 110 and 111, respectively. Much lower percentages were observed in the other conditions, ranging from 5% for condition 000 to 39% for condition 011. In the ANOVA analysis, a pairwise comparison of the eight conditions was then conducted via post hoc tests (Tukey and Scheffe) to examine whether significant differences existed between each condition pair, and the results are shown in column 3 of Table 1.
1 The correlations between the questionnaire measures of the four constructs ranged between 0.74 and 0.86. A regression analysis yielded standardized betas of 0.49 (p = 0.000), 0.38 (p = 0.000), and 0.09 (p = 0.113) for ability, integrity, and benevolence, respectively, and an r-square of 0.81. These results suggest that the decision-makers' perceptions of the three trustworthiness characteristics explained 81% of the variance in their perceptions of how much they trusted their advisor's recommendation in that coin toss round. Notably, the standardized beta of benevolence was not significant, suggesting that its influence on the decision-makers' trust was less important than that of ability and integrity.
H. Barki et al. / Information & Management 52 (2015) 483–495


Table 1
Trusting decisions, predicted by TH vs. observed (Tukey and Scheffe numbers identify the four groups with significantly different results). TH rule: (Ability) AND ((Benevolence) OR (Integrity)).

Ability | Benevolence | Integrity | TH prediction (0/1 = no trust/trust) | % of trusting decisions out of 120 (post hoc test)
0 | 0 | 0 | 0 | 0.05 (Tukey1, Scheffe1)
0 | 0 | 1 | 0 | 0.27 (Tukey2, Scheffe2)
0 | 1 | 0 | 0 | 0.33 (Tukey2, Scheffe2)
0 | 1 | 1 | 0 | 0.39 (Tukey2, Scheffe2)
1 | 0 | 0 | 0 | 0.24 (Tukey2, Scheffe2)
1 | 0 | 1 | 1 | 0.66 (Tukey3, Scheffe3)
1 | 1 | 0 | 1 | 0.88 (Tukey4, Scheffe4)
1 | 1 | 1 | 1 | 0.96 (Tukey4, Scheffe4)
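The TH predictions in Table 1 follow mechanically from the Boolean rule Trust = Ability AND (Benevolence OR Integrity); a minimal sketch reproducing that column:

```python
from itertools import product

# TH's Boolean rule: trust if and only if Ability AND (Benevolence OR Integrity).
def th_predict(ability, benevolence, integrity):
    return int(bool(ability) and (bool(benevolence) or bool(integrity)))

# Enumerate the eight full-information conditions in Table 1's row order.
predictions = {"".join(map(str, c)): th_predict(*c)
               for c in product([0, 1], repeat=3)}
print(predictions)  # conditions 000-100 map to 0; 101, 110, 111 map to 1
```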

As observed, the Tukey and Scheffe tests indicated that conditions 110 (high ability, high benevolence, low integrity) and 111 (high on all three factors) formed a homogeneous group, i.e., exhibited the highest levels of trust in the advisor's recommendation. Condition 101 formed the second group, and the third group included conditions 001, 010, 011 and 100. Condition 000 comprised the fourth group. Thus, the decision-makers' betting decisions exhibited trust in their advisors' recommendations, with trust being highest in conditions 111 (96%) and 110 (88%), followed by condition 101 (66%). These results are consistent with TH, which predicts the presence of trust for these three conditions. However, the 66% of the 101 condition was lower than the 96% and 88% of the other two trusting conditions, suggesting that, at least in the context of this experiment, integrity may have been a more important dimension than benevolence.

To examine whether TH predicted the results better than a Linear Three-Factor (LTF) conceptualization of trust, the predicted classifications obtained from two logistic regressions were compared. In both regressions, the dependent variable was Trust, measured by whether a betting decision followed the advisor's recommendation (1) or not (0). The independent variables of the LTF model were ability, benevolence and integrity, each scored 0/1 depending on the experimental condition that applied to the decision. The independent variable of the TH model logistic regression was TH Prediction, which took its values from the TH prediction column of Table 1 (indicating TH's predicted trust for each experimental condition). Thus, TH Prediction was scored 0 or 1, depending on which experimental condition applied to the observed trusting decision, i.e., the decision-maker's bet. The results are shown in Table 2. As observed, TH's predictions correctly classified a higher percentage of betting decisions than did LTF's predictions (77.7% vs. 75.0%). Although the 2.7% difference between the two models is small, it is qualitatively meaningful.
Upon closer examination, it can be observed that the difference between the two estimations arises from the LTF model's prediction that the decision-maker will trust under condition 011, i.e., when benevolence and integrity are high but ability is low, whereas TH predicts that the decision-maker will not trust under this condition. For condition 011 (in Table 1), the LTF model was correct 39% of the time, whereas TH was correct 61% of the time. Thus, the key qualitative difference between the two models stems from the linear model's failure to capture the fact that ability is a necessary condition for a trusting decision.

Table 2
Comparing the predictions of LTF and TH.

                     | LTF prediction of trust | TH prediction of trust
                     | 0       | 1             | 0       | 1
Actual trust = 0     | 374     | 134           | 447     | 61
Actual trust = 1     | 106     | 346           | 153     | 299
Correctly classified |       75.0%             |       77.7%
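The correctly-classified percentages in Table 2 follow directly from the confusion-matrix counts; a quick arithmetic check:

```python
# Confusion-matrix counts from Table 2 (keys: (actual, predicted)).
ltf = {(0, 0): 374, (0, 1): 134, (1, 0): 106, (1, 1): 346}
th  = {(0, 0): 447, (0, 1):  61, (1, 0): 153, (1, 1): 299}

def accuracy(cm):
    # Percentage of decisions on the diagonal (correctly classified).
    correct = cm[(0, 0)] + cm[(1, 1)]
    return 100 * correct / sum(cm.values())

print(round(accuracy(ltf), 1))  # 75.0
print(round(accuracy(th), 1))   # 77.7
```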

Moreover, it is important to note that TH is also more parsimonious than the linear model: TH's predictions are predetermined (the TH prediction column of Table 1) and apply to any advisor/decision-maker recommendation-and-decision situation, whereas LTF's predictions are based on the betas of ability, benevolence, and integrity, which were calculated in the regression to maximize the model's fit with the sample data. Because beta coefficients are likely to vary across samples, they need to be recalculated for each sample, which renders LTF's predictive ability sample dependent and therefore more complicated to use. Thus, TH not only correctly classified a slightly higher percentage of the 960 observed decisions than LTF; it did so with a simpler and more general model. As such, TH seems to provide a more powerful model than LTF.

It is also important to note that identical values in the TH-prediction and observed-percentage columns of Table 1 would indicate behavior in perfect accordance with TH, i.e., 100% of the decision-makers' decisions exactly matching TH's predictions. As observed from Table 1, this was not the case. For example, even when the advisor was not competent, had interests opposite to those of the decision-maker, and had no penalty for lying (experimental condition 000, clearly the worst case for trusting an advisor), 6/120 decisions (5%) still followed the advisor's recommendation. The opposite also occurred: in the 120 decisions where the advisor was competent, had the same interests as the decision-maker, and received a penalty for lying (experimental condition 111, clearly the best case for trusting one's advisor), some decisions did not follow the advisor's recommendation (5/120 decisions, or 4.2%). Thus, although the participants generally behaved in accordance with TH's predictions, they also made some betting decisions that deviated from it.
Such deviations could be due to various reasons, such as mental or mechanical errors when entering a decision on the screen, a propensity to engage in risk-seeking behavior, an incomplete understanding of the game's structure and incentives, or a desire to explore whether the game described by the experimenters actually worked as explained.

3.1.2.1. Examining the effect of participants' experience with the experimental conditions

Although investigating the applicability of the first two explanations was not possible, the effect of the latter two explanations was estimated by examining the participants' decisions after they had faced the same experimental condition at least once. When faced with the same condition for a second time, the participants were likely to have learned from their previous experience and modified their decisions to be more congruent with the rationality of TH. To investigate whether such learning had occurred, the percentage of trusting decisions made when the decision-makers experienced a given experimental condition for the first time was compared to the percentage made when they had already decided under the same experimental condition at least once before. The results are shown in Table 3.

Table 3
Influence of a decision-maker's experience with experimental conditions.

Ability, Benevolence, Integrity | % of trusting decisions (number) by inexperienced decision-maker | % of trusting decisions (number) by experienced decision-maker | Z-test
000 | 0.04 (55) | 0.06 (65) | 0.63
001 | 0.39 (52) | 0.18 (68) | 2.56**
010 | 0.41 (56) | 0.25 (64) | 1.88*
011 | 0.48 (56) | 0.31 (64) | 1.90*
100 | 0.28 (54) | 0.21 (66) | 0.84
101 | 0.64 (56) | 0.67 (64) | 0.33
110 | 0.90 (52) | 0.85 (68) | 0.84
111 | 0.96 (55) | 0.95 (65) | 0.27
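The Z statistics in Table 3 are consistent with a standard two-proportion test using a pooled variance estimate; a sketch reproducing the value for condition 001 (0.39 of 52 inexperienced decisions vs. 0.18 of 68 experienced decisions):

```python
import math

def two_proportion_z(p1, n1, p2, n2):
    # Pooled two-proportion z-test statistic.
    pooled = (p1 * n1 + p2 * n2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se

z = two_proportion_z(0.39, 52, 0.18, 68)  # condition 001 in Table 3
print(round(z, 2))  # 2.56
```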


Note that the recommendation a decision-maker received from her advisor, the identity of the advisor, and the outcome of the coin toss could all be different even when the decision-maker and the advisor faced the same levels of ability, benevolence, and integrity as previously encountered.

The patterns in Tables 1 and 3 are similar, with the percentages of trusting decisions being significantly different between inexperienced and experienced decision-makers in three of the eight conditions, i.e., 001 (0.39 vs. 0.18), 010 (0.41 vs. 0.25), and 011 (0.48 vs. 0.31). Thus, "inexperienced" decision-makers made significantly more trusting decisions than "experienced" decision-makers in three of the four conditions where ability was 0. It is also interesting to note that when the decision-makers had previously experienced the same parameters, their decisions followed TH more closely. When the decision-makers encountered an experimental condition for the first time (i.e., were "inexperienced"), TH's predictions were correct 74% (332/436) of the time. When the decision-makers had already experienced a condition at least once, TH was correct 81% (424/524) of the time (difference Z-statistic = 2.59, p < 0.05). Because "experienced" decision-makers acted more in congruence with TH than did "inexperienced" ones, some learning seems to have occurred, further supporting TH.

3.2. Study 2: Trust in an IT project setting

The heads-or-tails trust game of Study 1 enabled the examination of trusting decisions and the determinants of trust in an artificial setting. To examine the trust hypothesis of Study 1 in an information systems context, qualitative data collected by one of the authors in a study on knowledge management in a state government agency in the United States (labelled here "IT Projects Authority" or ITPA [9]) were used. Established in 2000 by the state governor, ITPA has over 500 employees and provides IT consulting services (e.g., IT project management, development, and deployment) to other state agencies to ensure that the agencies are using IT effectively.
Data collected for a subset of ITPA projects, labelled the "Integration Program", were considered suitable for examining the trust hypothesis of Study 1 from an information systems perspective because the Integration Program included over 20 projects (some in progress, others completed) with the goal of deploying a unified IT infrastructure across groups of state agencies. Project teams working under the Integration umbrella were encouraged to share project details, information about clients, project management methodologies and tools, and technical details about the IT solutions deployed. As such, project managers' decisions to use the recommendations of other project managers or project team members provided a context in which IT project managers' trust in such advice could be examined, and it provided a decision-making context based on advice from another, which paralleled the context of Study 1.

Although this context differs from the initial trust context of Study 1 in the sense that the ITPA managers and employees who participated in the study might have already interacted more than once with members of different project teams, it was thought that TH would also be likely to apply to the ITPA context. This follows from the idea that the "can do" and "will do" reasoning also applies to the ITPA context: project team members' ability combined with their benevolence or integrity should be sufficient for ITPA staff's decision to trust their past experience and suggestions. As such, benevolence and integrity are likely to provide somewhat redundant information in the ITPA context as well, thus rendering TH applicable.

3.2.1. Method—Study 2

3.2.1.1. Participants, data collection and analysis

Data were collected from 23 participants who were involved in the Integration Program (15 project team members and eight of their reporting managers) via semi-structured individual interviews (structured interview items with unstructured response possibilities). The questions focused on sharing and reusing project information and knowledge within and across Integration projects. The interviews were transcribed and then coded to be examined from the TH perspective.

A coding scheme was developed to examine the presence of TH's Boolean relationship between trust, ability, benevolence, and integrity [9,10]. In each case, ability, benevolence and integrity were coded as low, medium, or high, and trust was coded as yes or no, i.e., present or absent. High and medium levels of each facet were considered equivalent in applying TH to each situation. One of the authors read the interview transcripts and identified situations (provided as vignettes in Appendix A) in which an individual (the knowledge-seeker) was in a position to engage in an action or decision that would reflect his or her trust in another individual (the knowledge-provider) who had shared project information or knowledge for further reuse. The other two authors and a PhD student also examined all vignettes to ensure that they represented situations that involved trust and contained information about one or more of the characteristics of trustworthiness. Disagreements were discussed and resolved before proceeding with the coding process. The final list contained 21 vignettes.

To avoid researcher bias, two coders (one of the authors and the PhD student) independently coded the levels of ability, benevolence, integrity, and trust in each vignette. After coding three vignettes, the two coders met and discussed the results to make sure they had the same interpretation of the coding scheme. When a consensus was reached, the two coders proceeded with the coding of all vignettes.
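Agreement between two independent coders of this kind is conventionally summarized with Cohen's kappa; a minimal sketch with hypothetical ratings (not the study's data):

```python
from collections import Counter

def cohens_kappa(ratings1, ratings2):
    # Observed agreement (po) corrected for chance agreement (pe).
    n = len(ratings1)
    po = sum(a == b for a, b in zip(ratings1, ratings2)) / n
    c1, c2 = Counter(ratings1), Counter(ratings2)
    pe = sum(c1[k] * c2[k] for k in c1) / (n * n)
    return (po - pe) / (1 - pe)

# Hypothetical low/medium/high codes for six vignettes.
coder1 = ["H", "H", "M", "L", "H", "M"]
coder2 = ["H", "M", "M", "L", "H", "M"]
print(round(cohens_kappa(coder1, coder2), 2))  # 0.74
```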
Consistency among coders was assessed via Cohen's kappa coefficient, which varied between 0.69 and 1.0 for the four variables and can be considered satisfactory [17].

3.2.2. Results and discussion—Study 2

According to TH, a decision-maker will use an advisor's information (or act according to it) only in vignettes where Ability AND (Benevolence OR Integrity) are high or medium. That is, the knowledge-seeker was hypothesized to reuse the project knowledge of the knowledge-provider only when perceiving the latter to have at least medium competence (i.e., ability) plus at least a medium level of either benevolence (i.e., motivation to act in the knowledge-seeker's interest) or integrity (i.e., honesty). The qualitative analysis of the 21 vignettes sought to corroborate (or refute) this hypothesis. The results are provided in Table 4.

Table 4
Predicted vs. observed trust in 21 vignettes.

Ability | Benevolence | Integrity | Trust (obs. vs. pred.) | Nb. of vignettes
H | H | H | Yes–Yes(a) | 1
H | H | – | Yes–Yes(a) | 2
H | M | H | Yes–Yes(a) | 2
H | M | M | Yes–Yes(a) | 1
H | M | – | Yes–Yes(a) | 2
H | – | H | Yes–Yes(a) | 1
M | H | M | Yes–Yes(a) | 1
M | M | M | Yes–Yes(a) | 1
M | L | H | No–Yes | 1
M | L | M | No–Yes | 1
M | L | L | No–No(a) | 2
L | H | H | No–No(a) | 1
L | M | M | No–No(a) | 1
L | L | L | No–No(a) | 1
– | L | L | No–No(a) | 3

"H" = high; "M" = medium; "L" = low; "–" = information unavailable.
a Result consistent with TH.



As observed, in 11 of the 21 vignettes, the knowledge-seeker exhibited trust in the knowledge-provider by reusing the information the latter had provided. All 11 vignettes were consistent with TH in that in each case, the knowledge-provider was perceived to have a medium or high level of competence (Ability = M or H) and a medium or high level of either benevolence or integrity (Benevolence = M or H; Integrity = M or H). For example, in vignette #4 (Appendix A), the program manager trusted his colleagues in the Operations division to provide valuable information that could help solve his problems. His colleagues knew what needed to be done (Ability = H) and shared not only what to do but also what not to do (Benevolence = H).

Notably, the results also show that information about both integrity and benevolence was not necessary for making a trusting decision. For example, in vignette #12, the project manager trusted his engineers to share all their project information and knowledge because they followed a professional code of conduct. The knowledge-seeker's decision to trust was based exclusively on his perception of the knowledge-provider's degree of ability and integrity. Information regarding benevolence did not seem to matter in his decision to trust the knowledge-provider.

In 10 of the 21 vignettes, the knowledge-seeker did not exhibit trust in the knowledge-provider and did not reuse the information the latter had provided. Eight of these vignettes are consistent with TH. For example, in vignettes #1, #10 and #18, when the knowledge-provider's ability was perceived to be insufficient (Ability = L), no level of benevolence or integrity could compensate for this lack of ability and convince the knowledge-seeker to trust the knowledge-provider.
For the three vignettes in the last row of Table 4, the researchers had no information about ability (i.e., the knowledge-seeker might have had that information but had not mentioned it in the interview), but both integrity and benevolence levels were low. Despite this incomplete information, TH predicts a lack of trust in such cases, and the results were consistent with this prediction, providing further support for TH. Moreover, this result also leads to an interesting observation: a decision to trust (or not) was made based on information about only two of the three trustworthiness facets, with possibly no information about the third facet. This observation suggests an interesting avenue for future research, i.e., examining trusting decisions under conditions of insufficient information regarding the three characteristics of trustworthiness, and it provided the motivation for Study 3, described below.

In two of the 21 vignettes, the observed results were inconsistent with TH (vignettes #6 and #7). In both cases, TH predicted a trusting decision because ability and benevolence levels were medium and low, respectively, whereas integrity was medium in vignette #6 and high in vignette #7. However, contrary to TH, the knowledge-seeker decided not to trust the knowledge-provider. Unfortunately, a more in-depth analysis of the more layered and complex situations that these two vignettes with medium levels of ability might have involved could not be undertaken because collecting new data was impossible, which is acknowledged as a limitation.

Viewing each vignette as a test of TH, the results indicate that TH was supported in 19 of 21 cases (90%). These results provide triangulation evidence that TH applied well to the IT project context examined in Study 2 and that TH's Boolean relationship between trust, ability, benevolence, and integrity is likely to apply not only to IT project contexts but also to contexts other than initial trust.

3.3. Study 3: Extending Studies 1 and 2—TH with incomplete information

According to TH, benevolence and integrity provide redundant information (under the boundary conditions of initial trust and economic rationality). This implies that a decision-maker may be able to make a trusting decision (or not) following an advisor's recommendation without necessarily having complete information on all three characteristics of trustworthiness. Specifically, having information about only one of the two characteristics should be sufficient for making a trusting decision (or not) in most cases. Notably, the results of Study 2 were consistent with this idea: as observed in Table 4, information about one of the three characteristics was lacking in eight of the 21 vignettes, yet in all eight vignettes, the trusting decisions of the decision-makers were consistent with TH. (In Table 4, these are the vignettes where ability, benevolence, and integrity were scored HH–, HM–, H–H, and –LL.)

To examine the effect of not having information on ability, benevolence, or integrity, the experiment of Study 1 was repeated with a new sample of 64 subjects with demographic characteristics similar to those of the subjects of Study 1. However, in addition to the eight experimental conditions of Study 1, Study 3 included a new set of 12 conditions representing the situations in which the decision-makers had no information about one of the three characteristics of trustworthiness. The experimental treatments, procedures, and data analysis approaches of Study 3 were identical to those of Study 1 and yielded 48 decisions for each of the 12 + 8 = 20 conditions. (In Study 1, the 960 total decisions had yielded 120 decisions per condition because Study 1 investigated only the eight conditions with full information.)
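The 12 added conditions are simply the combinations obtained by withholding exactly one of the three characteristics; they can be enumerated as follows (a sketch, with "X" marking the withheld characteristic):

```python
from itertools import product

# Each condition fixes two characteristics at 0/1 and withholds the third (X).
conditions = set()
for hidden in range(3):  # position of the missing characteristic (A, B, or I)
    for values in product("01", repeat=2):
        cond = list(values)
        cond.insert(hidden, "X")
        conditions.add("".join(cond))

print(sorted(conditions))  # 12 conditions, from '00X' to 'X11'
print(len(conditions))     # 12
```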
The results of ANOVA and post hoc analyses for the eight conditions in which the decision-makers had complete information (i.e., information about all three characteristics of trustworthiness) were largely similar to those of Study 1 (Study 3 vs. Study 1 percentages of trusting decisions per experimental condition: [000] 0.08 vs. 0.05; [001] 0.19 vs. 0.27; [010] 0.31 vs. 0.33; [011] 0.33 vs. 0.39; [100] 0.34 vs. 0.24; [101] 0.63 vs. 0.66; [110] 0.71 vs. 0.88; [111] 0.94 vs. 0.96), providing replication support for TH.2

According to TH, a decision-maker who lacks information about one of the three characteristics of trustworthiness can still trust an advisor's recommendation to bet under two conditions: (1) when the advisor's ability and integrity are high (condition 1X1) or (2) when the advisor's ability is high and the decision-maker's and the advisor's preferences are congruent (condition 11X), as depicted in the last two rows of Table 5. As observed, betting decisions that matched the advisors' recommendations were most frequent in the last two rows of Table 5 (67% and 83% trusting decisions), providing further support for TH with incomplete information. Further, according to TH, decision-makers should not follow their advisor's recommendation in rows 1–6 of Table 5 because the trustworthiness characteristic about which they have no information does not matter for these conditions: regardless of what the missing information may be, the TH rule is to not follow the advisor's recommendation. In the remaining conditions (rows 7–10 in Table 5), TH's prediction is indeterminate because it depends on the level of the trustworthiness characteristic about which there is no information.

The above results also provide further support for the superiority of TH over LTF because it is not possible to use LTF when information about any of the three characteristics of trustworthiness is missing.
As such, LTF cannot suggest any course of action to decision-makers who have missing information about those characteristics. In contrast, TH provides valid recommendations for 8 of the 12 conditions that have missing information.

2 A Fisher’s test comparison of the percentages of the trusting decisions obtained in each study showed that their differences was significant for only condition 110 (0.71 vs. 0.88, p = 0.03), suggesting that the results of the two experiments were largely similar.

Table 5
Trusting decisions with missing information, TH prediction vs. observed.

Ability/Benevolence/Integrity (0/1 = low/high; X = no information) | TH prediction (0/1 = no trust/trust; "?" = indeterminate) | % of trusting decisions out of 48
X00 | 0 | 0.19
0X0 | 0 | 0.15
00X | 0 | 0.13
0X1 | 0 | 0.29
01X | 0 | 0.29
X11 | 0 | 0.58
X01 | ? | 0.21
X10 | ? | 0.52
1X0 | ? | 0.40
10X | ? | 0.29
1X1 | 1 | 0.67
11X | 1 | 0.83
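The Boolean rule extends naturally to missing information: when the known values already fix the outcome, TH makes a determinate recommendation; otherwise the prediction is indeterminate. A sketch of this logic (None marks the missing characteristic; the function returns 0, 1, or None for "indeterminate"):

```python
def th_predict_partial(ability, benevolence, integrity):
    # Trust = Ability AND (Benevolence OR Integrity), with possibly missing inputs.
    if ability == 0:
        return 0       # low ability rules out trust regardless of the rest
    if benevolence == 0 and integrity == 0:
        return 0       # neither "will do" signal is present
    if ability == 1 and (benevolence == 1 or integrity == 1):
        return 1       # "can do" plus at least one "will do" signal
    return None        # the missing value would decide the outcome

print(th_predict_partial(None, 0, 0))  # 0    (condition X00)
print(th_predict_partial(0, None, 1))  # 0    (condition 0X1)
print(th_predict_partial(1, None, 1))  # 1    (condition 1X1)
print(th_predict_partial(1, 0, None))  # None (condition 10X: indeterminate)
```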

It is notable that in the six conditions for which TH recommends no trust, the percentages of trusting decisions were greater than zero, even though they were lower than those of the last two rows. Similar results were observed in the four "indeterminate" conditions, indicating that the decision-makers made betting decisions that followed their advisors' recommendations even when information about one of the three characteristics of trustworthiness was lacking and even when the missing information was critical to the betting decision. These results suggest that, similar to the results of Study 1, the subjects of Study 3 also made decisions that deviated from TH's predictions, possibly due to mental or mechanical errors, personal tendencies regarding decision-making under risk, or misunderstandings of the experimental task and the game's structure and incentives. It might be interesting to examine these possibilities in future research by extending TH to account for the uncertainty created by the lack of information in each experimental condition.

4. Discussion

Although past research has generally agreed that ability, benevolence, and integrity constitute three key characteristics of trustworthiness that influence individuals' trusting behaviors, recent studies have noted that the relationship between trust and these characteristics is not well understood, either conceptually or empirically. To examine this issue, we hypothesized a Boolean relationship between trusting decisions, ability, benevolence, and integrity and investigated it via three studies. The hypothesis was first operationalized and tested in a laboratory experiment via a heads-or-tails game in which decision-makers' trust in the recommendation received from an anonymous advisor was examined in the context of one-shot deals.
A key finding of this experiment was that the hypothesized Boolean relationship appeared to accurately describe the influence of ability, benevolence, and integrity on trusting behaviors in initial trust contexts governed by economic rationality.

Next, these findings were applied in an IT project context via a qualitative case study. A key finding of Study 2 was that the hypothesized Boolean relationship that was supported in the artificial setting of the experiment of Study 1 was also supported in Study 2, suggesting its applicability and usefulness in IS contexts. An additional insight of Study 2 stemmed from the observation that some decision-makers of Study 2 made trusting decisions despite lacking information about one of the three characteristics of trustworthiness. This idea was then examined in Study 3 by replicating the experiment of Study 1 with the addition of 12 new experimental conditions.


These conditions represented all possible situations where information about one of the three characteristics might be lacking. The results of Study 3 supported the extended hypothesis that the Boolean relationship between trust and the three characteristics applies even when information about one of the characteristics is lacking, and they provided replication evidence for Study 1.

In sum, the two laboratory experiments and the qualitative study conducted in different contexts provided strong triangulating evidence for the present paper's theorization of how trusting behaviors and trusting decisions are affected by ability, benevolence, and integrity, and they provided evidence for the superiority of TH over LTF in contexts of initial trust and economic rationality.

4.1. Future research

The present study's findings have important theoretical and practical implications. The hypothesized and empirically supported Boolean relationship between trust and the three characteristics of trustworthiness clarifies an important theoretical link in trust research and can help explain the mixed results of past studies. In particular, it suggests that in initial trust contexts governed by economic rationality, benevolence and integrity are likely to provide redundant information about the "will do" aspect of the "can do and will do" theory. The information systems field contains many contexts in which situations of initial trust governed by economic rationality are likely to be relevant, such as information search (e.g., [5,23,24]), communication in teamwork (e.g., [1,15]), organizational fairness (e.g., [4,7]), and online security (e.g., [25]). As such, it would be interesting to examine the applicability of the main hypothesis of the present study, i.e., the proposed Boolean relationship, in other IS contexts where trust is thought to play an important role.
The present study’s findings also suggest that the hypothesized Boolean relationship may apply to contexts other than initial trust. However, in other contexts, the specific form of the relationship may be different than the one hypothesized here with different AND/OR linkages regulating the relationship between trust and the three characteristics of trustworthiness. It would thus be interesting to investigate in future studies the exact form that this relationship may take in different contexts. Another promising avenue for future research would be to investigate the conditions in which decision-makers have information about only two of the three characteristics of trustworthiness via formal game-theoretic approaches. Theoretical analyses of the equilibrium that may result from such extensions and their empirical testing can yield further insights regarding the conditions under which decision-makers will trust a recommendation. Notably, in both experiments, subjects made decisions that deviated from perfect rationality. For example, in the five conditions in which betting was a losing proposition, the decision-makers should not have followed their advisors’ recommendation but they did so approximately 25% of the time. Conversely, sometimes, the decision-makers did not bet according to their advisors’ recommendation even though it would have been advantageous for them to do so. This suggests that the subjects may have had motivations other than pure monetary interest when deciding how to bet. It is also possible that the subjects’ decisions might have been partially influenced by the nature of the experiment, which consisted of multiple rounds of a simple game involving small amounts, perhaps tempting some participants to try different betting strategies occasionally in a 30-round game. It is also possible that some participants may not have fully understood that betting had a positive expected value under condition 101. 
The experimental manipulation of this factor may not have been clear enough to make the decision-makers see that it



Finally, other factors, such as individual risk-taking propensity or human decision-making biases, might also have influenced the results. Identifying and investigating factors that induce individuals to make decisions against their pure monetary self-interest would also be an interesting area for future research.

It is also important to note that current data analysis approaches, such as path analysis, factor analysis, and structural equation modeling, assume linear relationships between factors or dimensions. However, the use of linear models when the underlying relationships are Boolean (hence non-linear) can result in model misspecification. As such, there is a need for the development of new statistical analysis tools that are appropriate for modeling non-linear relationships among factors, such as the Boolean relationship between trust, ability, benevolence, and integrity.

It should also be noted that the redundant relationship between benevolence and integrity examined and supported here applies to contexts of trusting decisions and behaviors. Although outside the present study's scope, it is possible that this redundancy may be absent for other outcomes or dependent variables of interest. For example, information about all three characteristics of trustworthiness may be needed to assess one's "confidence" in the trusting decision or its "fragility". The examination of such relationships is left to future research.3

From a practical and organizational standpoint, the findings of the present study suggest a relatively straightforward decision rule for initial trust situations governed by economic rationality.
The knowledge that ability plus one of the other two characteristics of trustworthiness is sufficient for making a trusting decision can help decision-makers cut through the complexity inherent in many organizational contexts and individual decision-making situations. Many of the subjects who participated in the two experiments could have earned more money than they actually did had they known about this decision rule and applied it consistently. Another interesting practical implication of the present findings is that the TH decision rule can be applied even in conditions where information regarding one of the trustworthiness characteristics is lacking.
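As a minimal illustration (our own sketch, not code from the paper, and the function name and boolean coding of the three characteristics are ours), the decision rule just described can be expressed as a Boolean function in which ability is necessary while benevolence and integrity act as substitutes:

```python
def will_trust(ability: bool, benevolence: bool, integrity: bool) -> bool:
    """Sketch of the Boolean trust rule discussed above: A AND (B OR I).

    Ability is necessary; either benevolence or integrity (which the study
    finds to be redundant with each other for trusting decisions) completes
    the condition.
    """
    return ability and (benevolence or integrity)


if __name__ == "__main__":
    # Enumerate all eight combinations to show the rule's non-linearity:
    # benevolence and integrity are interchangeable, but neither compensates
    # for a lack of ability.
    for a in (False, True):
        for b in (False, True):
            for i in (False, True):
                print(f"A={a!s:5} B={b!s:5} I={i!s:5} -> trust={will_trust(a, b, i)}")
```

Note how the AND/OR structure, unlike an additive model, makes benevolence and integrity perfect substitutes once ability is present, and makes both irrelevant when ability is absent.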

Appendix A. List of vignettes and their coding

1 & 2

“We expect people to [fill in the templates for project deliverables]. We’ll follow-up on that because everything will tie up [in the dashboard]. [...] X, Y and I, we go through all the dashboards before they go to [senior managers]. [...] Quite often, the financial side is wrong. It’s not an area that’s been our strength [...] So we’ll check [the dashboards], and if there is something wrong, we’ll flag it and send it back to the [project] manager and tell him how to get it fixed right away. Sometimes, we’re going through one, and the write-up says that they were ahead the budget, and you look down here, and it says they are behind the budget. Wait, guys! We don’t care, but one has to match the other. So we will go through and just make sure it’s logical. We can’t verify the details. We know all these projects to some extent but not to the details level. So we will do that beforehand just to make sure we have a good model going in. [...] Since everything drives up in the dashboard, people will not want their dashboard to be wrong.” (source: interview #1, Executive Project Manager, p. 4)

Ability (A)    Benevolence (B)    Integrity (I)    Trust
Low            Medium             Medium           No

For the financial side, the executive project manager does not trust his project managers; it is not their strength (A = low). The executive project manager expects his project managers to use the templates so that the data can be integrated and displayed in the dashboard (B = medium). Project managers do not want their dashboard to be wrong (I = medium).

Ability (A)    Benevolence (B)    Integrity (I)    Trust
Medium         Medium             Medium           Yes

For the other details, the executive project manager trusts the project managers. He does not verify the other details of their dashboards because they are usually not a weakness (A = medium). The executive project manager expects his project managers to use the templates so that the data can be integrated and displayed in the dashboard (B = medium). Project managers do not want their dashboard to be wrong (I = medium).

3

“Interviewer: Has it ever happened to you to feel the need to go to another project manager or project lead to ask for help?
Program Lead: Oh, definitely yeah, definitely.
Interviewer: Have you contacted them by phone, face-to-face or just asked for the documents?
Program Lead: The documents, but usually they will give you some verbal knowledge.” (source: interview #3, Program Lead, p. 4)

Ability (A)    Benevolence (B)    Integrity (I)    Trust
High           Medium             —                Yes

The program lead trusts the other project managers and project leads to provide valuable information. As project managers or project leads, they have the abilities to help the program lead, likely because they went through the same problems (A = high). Usually, project managers and project leads provide not only the documents but also verbal knowledge (B = medium).

4

“It is a collaborative effort because you would need input from those folks in Operations, when we run into flags, so that we know what to do and what not to do again.” (source: interview #4, Program Manager, p. 3)

Ability (A)    Benevolence (B)    Integrity (I)    Trust
High           High               —                Yes

The program manager trusts the folks in Operations to provide valuable information. When the program manager runs into problems, the folks in Operations know what to do (A = high) and what not to do (B = high).

5

“[The program coordinator] is in charge of the SharePoint site. She organizes everything and she updates with meetings minutes from customer meetings and things like that. When there is a problem with the team and I need something in a hurry and I want to know what was done in the latest presentation to senior management because there is something I need to pull from it ... that should be out there on the SharePoint site. [...] If [the program coordinator] is not there, [the program lead] is not there, and I need something, I’ll go to the SharePoint site. I depend on [the program coordinator] to make sure that SharePoint is current with the right information. Instead of going through my hard drive looking for documents, I just go there and click.” (source: interview #4, Program Manager, p. 11)

Ability (A)    Benevolence (B)    Integrity (I)    Trust
High           Medium             —                Yes

5. Conclusion

Trust is an important construct in organizational studies, and it continues to occupy an important place in research. As such, we need to clearly and precisely define this construct and its relationship with its antecedents to rigorously study its role in a variety of contexts. The present paper has taken a step in that direction by building on trust’s past conceptualizations to develop a formal and testable model of trusting decisions and the determinants of those decisions. It is hoped that the theoretical and empirical findings reported here can provide a clearer and more precise understanding of this fundamental construct and its determinants.

Acknowledgements

The authors thank the Canada Research Chairs Program (Grant number: 219144) and the Social Sciences and Humanities Research Council of Canada (SSHRC) for their financial support and the Bell University Labs and the CIRANO for providing access to their experimental laboratory and facilities.

³ The authors thank an anonymous reviewer for this idea.


The program manager trusts the program coordinator to share all the project information via a SharePoint site. The program coordinator manages everything on the SharePoint site (A = high). The program coordinator should keep the SharePoint site current so that the program manager can have access to that information (B = medium).

6

“There are times when I need to do many research. I need to build a new standard, and I try to see how we can do this better. I need to see what other agencies have done or, at least here at ITPA, what other groups may have done because they had a very similar problem. [...] I haven’t contacted the people on the [other] project, normally because many vendors are wrong, and I may not always find what they did completely appropriate. However, just to see the road that they took, it’s worth it for those particular times [when I need to do many research] because it changes my thought processes. [...] The documents [from the other project] tend to be very specific to the [client] or to the fact that they used this particular vendor. Many of these documents are written by the vendors, so you got a whole different flavor. It’s their vision.” (source: interview #5, Project Manager, pp. 2–4)

Ability (A)    Benevolence (B)    Integrity (I)    Trust
Medium         Low                Medium           No

The project manager does not trust his colleagues from another project. Their information is too specific to their client or to the vendor used (B = low). Their information is not completely appropriate because the vendors they use are wrong (I = medium), but it is worth seeing the road they took because they had a similar problem (A = medium).

7

“There are templates that [our] group has been given by the PMO, and they are a good starting point. They aren’t always complete” (p. 2). “I think that the templates that are designed here are good, but there’s a certain limitation to their usage to be quite frank. [...] There may be questions that I have, nuances of the project ... overall, it’s how to tackle particular issues. [The templates] are not mature enough to have that, in my view, not yet. They’re getting there. Just give them time” (p. 6; source: interview #5, Project Manager).

Ability (A)    Benevolence (B)    Integrity (I)    Trust
Medium         Low                High             No

The project manager does not trust the Project Management Office (PMO) to provide project templates for his needs. The templates are provided by the PMO, and they are a good starting point (I = high). However, they are not always complete or mature enough (A = medium). These templates do not have nuances of the project (B = low).

8

“If you can get people on the right day for SharePoint, they’re very responsive... So you may not get a response as quick as you might like. And it’s not that they’re ignoring you; it’s just that they’re all doing something else. There is a special group for SharePoint support, and they’ve been very good. Sometimes I get an immediate correction.” (source: interview #5, Project Manager, p. 7)

Ability (A)    Benevolence (B)    Integrity (I)    Trust
High           Medium             High             Yes

The project manager believes he can trust the support team and the information they provide. They are very good (A = high). They are responsive, maybe not as quick as the project manager might like (B = medium), but that is because they are all working hard (I = high).

9

‘‘This project is not unique. I think we should say that the program is unique compared to other programs at ITPA. The project is unique when we try to use the same templates [for project deliverables] provided by the PMO for other programs. The other projects [from the same program as my project] either used the templates provided by the PMO and modified them to a great extent or they missed some of the documents that are required because I haven’t seen publicly each of these projects having all those documents. [. . .] When I was looking for templates and could not find them, I went to different corners asking people ‘‘Do you have a template for this? Do you know a place where I can find this?’. [. . .] So when I published my templates, I sent a note to all the other people, saying that ‘‘After all the bugging, I think I got [what I needed]. Just take a look at it and tell me what you think’. And some people responded, and then, I made some changes, and [my templates] are all [on SharePoint] right now’’ (source: interview #6, Project Manager, p. 5).


Ability (A)    Benevolence (B)    Integrity (I)    Trust
Medium         High               Medium           Yes

The project manager trusts his colleagues [working on projects from the same program] to help him improve the templates for project deliverables (provided by the Project Management Office (PMO)). Some of the other project managers had to have the same problems (A = medium). They responded to the project manager’s questions and provided useful feedback (B = high). The PM’s colleagues abide by a different set of principles because they do not make their templates public (I = medium).

10

“For closed projects, we have all the docs on the shared drive. If that is not enough [when I need help for an ongoing project], I contact someone else here at ITPA that might have that expertise. [...] In our team, we document a lot, but we still need to meet and talk or ask for help outside our team. It is a small team, so it’s easy to communicate, but it’s also limited in expertise.” (source: interview #9, Business Analyst, p. 3)

Ability (A)    Benevolence (B)    Integrity (I)    Trust
Low            High               High             No

When someone on his team does not have the expertise (A = low), the business analyst does not trust his teammates. The business analyst’s team documents a lot (I = high) and has all the documents on the shared drive (B = high).

11

“If we bring out [a technical solution] and a technical person tells us that this is not exactly the way to do it, that doesn’t necessarily mean it was not the right way to do it [...] It may not be the exact solution, but what it does is that many time, engineers start thinking in a different way. ‘You know, I haven’t thought about that, but that might be the exact process, result or technical solution that we’re looking for.’ So it is collaborative from a technical perspective even though the technical result from another project may not be necessarily what works for my project [...] You know engineers always have their own way to do something creative, and there are multiple ways to achieve. However, I have to look for cost, effectiveness, meeting the requirements. If it’s a metric issue, does it need a metric associate with that requirement so that it passes the audit? There are things that we look at from a different perspective.” (source: interview #10, Project Manager, p. 9)

Ability (A)    Benevolence (B)    Integrity (I)    Trust
High           Medium             Medium           Yes

The project manager trusts the engineers (the technical people) in his division. Engineers always have their own way to solve an issue (A = high). Their solution from another project may not necessarily work for the project manager’s project (B = medium). When engineers come up with a solution, the project manager still needs to check its cost, its effectiveness, how it meets the requirements, etc. (I = medium).

12 & 13

“[My engineers] have update capabilities on [my SharePoint site]. [...] If I have an action item on a document out there, I want them to be able to update the status of that action item. We have professional integrity. If I find out that somebody is violating it, then I will remove [his rights]. [...] However, until somebody shows me that I can’t trust what they do, why would I want to deny them [the right to update the SharePoint site] and add more work to my work? That is my personal perspective. No one from [the other division] or my upper report has directed me otherwise. So, we are all professionals here, we have a professional code of conduct to follow here. Until somebody breaks that, I trust him.” (source: interview #10, Project Manager, p. 10)

Ability (A)    Benevolence (B)    Integrity (I)    Trust
High           —                  High             Yes

The project manager trusts his engineers to correctly update the SharePoint site. The engineers know the status of an action item, and the project manager wants them to be able to update that action item (A = high). The team follows a professional code of conduct (I = high).

Ability (A)    Benevolence (B)    Integrity (I)    Trust
—              Low                Low              No

If an engineer from his team breaks the professional code of conduct (I = low), the project manager does not trust the changes made by that engineer on the SharePoint site. If an engineer breaks the code of professional conduct, then it also means that the person’s preferences are not congruent with those of the rest of the team (B = low).


14

“Development Team Leader: Everybody [on my team] can modify [the documents on our shared drive]. You could lock it down, but right now, you can modify everything. [...] However, no one should be changing any documents without putting a note up in the front regarding the change and changing the version of the document. [...] It is the whole culture from square one: whatever they have [they put it on the shared drive] so that everyone can read and reach and get to it. [...]
Interviewer: Is there a formal process to evaluate what is on the shared drive in terms of the accuracy or trustworthiness?
Development Team Leader: No, it is just the integrity of the team’s members.” (source: interview #11, Development Team Leader and Lead Architect, p. 7)

Ability (A)    Benevolence (B)    Integrity (I)    Trust
High           High               High             Yes

The development team leader trusts his team to share their knowledge on the shared drive. Everybody can modify everything on the shared drive (A = high). The development team ensures that all the documents are accessible for everyone in their team (B = high). The development team ensures that the information on the shared drive is accurate (I = high).

15

“[My team and I] have mandatory meetings, most of the time. [...] I get everybody in there and we talk about what we’re doing. I might have something up here in my brain that you might benefit from. [...] We do many meetings like that, and we do many face-to-face. We have a collective thinking and it’s the way to operate [within our team]. Everybody has a specific expertise, but everybody else is eager to learn what everybody else is doing. Everybody is geared in such a way that you can pick up what I am doing and I can pick up what you’re doing. [...] Well, all my folks back there know how to do it because I made them go through training on how to use Microsoft Project, Excel, SharePoint... They all know that now.” (source: interview #14, Business Owner, p. 5)

Ability (A)    Benevolence (B)    Integrity (I)    Trust
High           Medium             High             Yes

The business owner trusts his team to share its knowledge at meetings. They might know something that could benefit someone else (B = medium). Everybody participating in these meetings has a specific expertise (A = high). Everybody adheres to the same principles because they all went through training (I = high).

16

“[Some of the state agencies] are reluctant to give out the data because it’s their data. Why do you want my data? Because this other agency needs it and they just told me so. Or, here comes a house bill that passes that you have to share data among one another. They’ve never done it before, and there is a reluctance to do it” (p. 8). “[These state agencies] feel that part of it is due to a security gap. However, part of it is that they don’t trust other folks to handle the data because they feel like they’re corrupted, or they feel they’re a threat to their job security” (p. 14; source: interview #14, Business Owner).

Ability (A)    Benevolence (B)    Integrity (I)    Trust
—              Low                Low              No

Some state agencies do not trust other agencies to handle their data. They feel that these other agencies are corrupted (I = low) or that sharing knowledge is a threat to their job security (B = low).

17

“There are several pieces of this project: there was the integration side, but then, there was all this other stuff too that I was learning. So it became apparent that this might be too much for one person, you know, as I started looking at the scope of all the interfaces that we were doing, and so that’s when I brought it to X, and said, ‘X, here’s our kind of position, I think we need some help or I need some help.’ [...] That’s when she said, ‘Okay, well, you know I think we can assign Y, an integration project manager, to work with you.’ [...] So I’m more responsible for the hosting environment and that type of work. He’s more responsible for the integration, although I’m still involved in that because I also am responsible for reporting on the whole thing. [...] I did not know anything about [the integration side of the project], you know, I mean, it was just ‘Oh my God,’ and so that part of it was overwhelming. [...] This particular project just didn’t allow for [a] learning curve, and I still feel like I’m learning because I have been able to stay involved, and Y has been a good mentor in terms of his knowledge and experience.” (source: interview #15, Project Manager, pp. 6–7)

Ability (A)    Benevolence (B)    Integrity (I)    Trust
High           High               —                Yes

The project manager trusts Y to help her manage the integration side of this project. Y is a good mentor (B = high). Y has the knowledge and experience (A = high).

18

“The two PMOs are at times in conflict with each other. And because they’re in conflict with each other, they can’t settle. We’re caught in the middle because we have to take these policies and standards and process from each one and make them work. We have to know which PMO we’re engaging with and how we should engage with them. At the same time, we’re trying to have our own policies and standards down here for [the projects] that we do. They really just don’t care about [our projects]. They’re not following, they’re not tracking. It’s just beneath their view. They don’t even see it” (p. 5). “Things as important as standards, policies, procedures and those type of things, they have to be available across the board; they have to be indexed in some way where you can find them; and then, the person who’s seeing it has to have confidence in it” (p. 15; source: interview #16, Quality Management Director).

Ability (A)    Benevolence (B)    Integrity (I)    Trust
Low            Low                Low              No

The Quality Management Director does not trust the two Program Management Offices (PMOs). Because the two PMOs are not tracking the Quality Management Director’s projects, they are not competent in setting policies for his projects (A = low). The two PMOs do not care about the Quality Management Director’s projects when they set policies and standards (B = low). The two PMOs do not adhere to a set of important principles: having standards, policies and procedures available across the board; having them indexed; and making sure that users have confidence in using them (I = low).

19

“There are many external contractors [working on our projects]. They prefer not to share their knowledge, not to be open, so they don’t lose their ‘power’ [...] The contractors don’t want to provide you with the information that you need so that you don’t need their services anymore. The idea is to have revenue back into their organization, back into their pocket as contractors. At the end of the day, they don’t have any accountability. They can go away.” (source: interview #16, Quality Management Director, p. 12)

Ability (A)    Benevolence (B)    Integrity (I)    Trust
—              Low                Low              No

The Quality Management Director does not trust the external contractors to share information and knowledge of the projects. The contractors want to keep their jobs and do not have any accountability (I = low). They care only about getting revenue for themselves and not for ITPA (B = low).

20

“Dealing with employees sometimes can be different from dealing with contractors. You don’t need to ask them the same way that you ask a contractor. Contractors’ job is not secure. They try as best as they can to keep it. Employees, they are safe, so they try as much as they can to avoid any additional work for themselves. [...] They see that you are new and that you don’t know who is working there on that project or who has the knowledge. So you go and ask some [employees] because you think that they have some knowledge. The first answer you get is usually ‘I don’t know. I’m not sure. I was not involved.’ It’s because they try to avoid more responsibility for that [project].” (source: interview #17, Project and Architecture Lead [external contractor], pp. 4–5)

Ability (A)    Benevolence (B)    Integrity (I)    Trust
Medium         Low                Low              No

The project lead, who is an external contractor, does not trust some of the employees when he asks them for help. He believes that those people might have the knowledge of a particular problem (A = medium), but they say they do not know (I = low). They do not want additional responsibilities (B = low).

21

“I think [talking to other Integration project managers] would have been beneficial, but I haven’t done it. I talk with [other operations managers and project managers from my division] about things that I’m trying to resolve. I really did not talk to the other Integration project managers. I just gave them my status report [for my Integration project]. [...] I don’t really know how the other Integration projects handle things.” (source: interview #22, Project Manager, pp. 5–6)

Note: This project manager belongs to one PMO, and the other Integration project managers belong to the other PMO. “The two PMOs are at times in conflict with each other” (see vignette #18).

Ability (A)    Benevolence (B)    Integrity (I)    Trust
Medium         Low                Low              No

The project manager does not trust the other Integration project managers. The other Integration project managers might have the same problems because their projects are also Integration projects (A = medium). Given the conflict between the two PMOs, the project manager is unlikely to consider the other Integration project managers to be completely honest (I = low) or to have his best interests at heart (B = low).

References

[1] B.A. Aubert, B.L. Kelsey, Further understanding of trust and performance in virtual teams, Small Group Res. 34 (5), 2003, pp. 575–618.
[2] T. Bergstrom, A survey of theories of the family, in: M. Rosenzweig, O. Stark (Eds.), Handbook of Population and Family Economics, Elsevier, 1995, pp. 21–59 (Chapter 2).
[3] R. Bhattacharya, T.M. Devinney, M.M. Pillutla, A formal model of trust based on outcomes, Acad. Manag. Rev. 23 (3), 1998, pp. 459–472.
[4] E.C. Bianchi, J. Brockner, In the eyes of the beholder? The role of dispositional trust in judgments of procedural and interactional fairness, Organ. Behav. Hum. Decis. Process. 118 (1), 2012, pp. 46–59.
[5] V. Choudhury, E. Karahanna, The relative advantage of electronic channels: a multidimensional view, MIS Q. 32 (1), 2008, pp. 179–200.
[6] J.A. Colquitt, B.A. Scott, J.A. LePine, Trust, trustworthiness, and trust propensity: a meta-analytic test of their unique relationships with risk taking and job performance, J. Appl. Psychol. 92 (4), 2007, pp. 909–927.
[7] M.J. Culnan, P.K. Armstrong, Information privacy concerns, procedural fairness, and impersonal trust: an empirical investigation, Organ. Sci. 10 (1), 1999, pp. 104–115.
[8] A. Dimoka, What does the brain tell us about trust and distrust? Evidence from a functional neuroimaging study, MIS Q. 34 (2), 2010, pp. 373–404.
[9] A. Dulipovici, Exploring IT-Based Knowledge Sharing Practices: Representing Knowledge Within and Across Projects, PhD dissertation in Computer Information Systems, Georgia State University, 2009.
[10] K.M. Eisenhardt, M.E. Graebner, Theory building from cases: opportunities and challenges, Acad. Manag. J. 50 (1), 2007, pp. 25–32.
[11] M. Fichman, Straining towards trust: some constraints on studying trust in organizations, J. Organ. Behav. 24 (2), 2003, pp. 133–157.
[12] U. Fischbacher, Z-Tree 2.1 Zurich Toolbox for Readymade Economic Experiments: Reference Manual, Institute for Empirical Research in Economics, University of Zurich, 2002.
[13] M.A. Fuller, M.A. Serva, J.S. Benamati, Seeing is believing: the transitory influence of reputation information on e-commerce trust and decision making, Decis. Sci. 38 (4), 2007, pp. 675–699.
[14] D. Gefen, Reflections on the dimensions of trust and trustworthiness among online consumers, ACM SIGMIS Database 33 (3), 2002, pp. 38–53.
[15] S.L. Jarvenpaa, K. Knoll, D.E. Leidner, Is anybody out there? Antecedents of trust in global virtual teams, J. Manag. Inf. Syst. 14 (1), 1998, pp. 29–64.
[16] S.X. Komiak, I. Benbasat, Understanding customer trust in agent-mediated electronic commerce, web-mediated electronic commerce, and traditional commerce, Inf. Technol. Manag. 5 (1–2), 2004, pp. 181–207.
[17] J.R. Landis, G.G. Koch, The measurement of observer agreement for categorical data, Biometrics 33 (1), 1977, pp. 159–174.
[18] R.B. Lount Jr., N.C. Pettit, The social context of trust: the role of status, Organ. Behav. Hum. Decis. Process. 117 (1), 2012, pp. 15–23.
[19] R.C. Mayer, M.B. Gavin, Trust in management and performance: who minds the shop while the employees watch the boss?, Acad. Manag. J. 48 (5), 2005, pp. 874–888.
[20] R.C. Mayer, J.H. Davis, F.D. Schoorman, An integrative model of organizational trust, Acad. Manag. Rev. 20 (3), 1995, pp. 709–734.


[21] B. McEvily, M. Tortoriello, Measuring trust in organizational research: review and recommendations, J. Trust Res. 1 (1), 2011, pp. 23–63.
[22] H.D. McKnight, L.L. Cummings, N.L. Chervany, Initial trust formation in new organizational relationships, Acad. Manag. Rev. 23 (3), 1998, pp. 473–490.
[23] H.D. McKnight, V. Choudhury, C. Kacmar, Developing and validating trust measures for e-commerce: an integrative typology, Inf. Syst. Res. 13 (3), 2002, pp. 334–359.
[24] P.A. Pavlou, M. Fygenson, Understanding and predicting electronic commerce adoption: an extension of the theory of planned behavior, MIS Q. 30 (1), 2006, pp. 115–143.
[25] S. Ray, T. Ow, S.S. Kim, Security assurance: how online service providers can influence security control perceptions and gain trust, Decis. Sci. 42 (2), 2011, pp. 391–412.
[26] F.D. Schoorman, R.C. Mayer, J.H. Davis, An integrative model of organizational trust: past, present, and future, Acad. Manag. Rev. 32 (2), 2007, pp. 344–354.
[27] W. Wang, I. Benbasat, Trust in and adoption of online recommendation agents, J. Assoc. Inf. Syst. 6 (3), 2005, pp. 72–101.

Henri Barki is Canada Research Chair in Information Technology Implementation and Management at HEC Montréal. A member of the Royal Society of Canada since 2003, he has served on the editorial boards of MIS Quarterly, Data Base for Advances in Information Systems, Canadian Journal of Administrative Sciences, Gestion, and Management International. His main research interests focus on the development, introduction, and use of information technologies in organizations. Journals where his research has been published include Canadian Journal of Administrative Sciences, Data Base for Advances in Information Systems, IEEE Transactions on Professional Communication, Information Systems Journal, Information Systems Research, Information & Management, INFOR, International Journal of Conflict Management, International Journal of e-Collaboration, International Journal of e-Government Research, Journal of the AIS, Journal of Information Technology, Journal of MIS, Management Science, MIS Quarterly, Organization Science, and Small Group Research.

Jacques Robert is a professor and chair of the IT department at HEC Montréal. He holds a Ph.D. in economics from the University of Western Ontario. His research interests include applied game theory, mechanism design, and experimental economics. Journals where his research has been published include Econometrica, Journal of Economic Theory, Quarterly Journal of Economics, Journal of Applied Econometrics, Journal of Economic Behavior & Organization, Information & Management, European Journal of Operational Research, Journal of the Association for Information Systems, Annals of Finance, and Neural Computation. He is a fellow at CIRANO and chairman of Baton Simulations.

Alina Dulipovici is an Assistant Professor of Information Technologies at HEC Montréal, Canada. She received her PhD in Computer Information Systems at Georgia State University. Her research focuses on knowledge sharing, strategic alignment of knowledge management systems, and social representations. Her papers have been published in the Journal of Management Information Systems, the Journal of Strategic Information Systems, Knowledge Management Research and Practice, the Journal of Knowledge Management, the International Journal of Case Studies in Management, and in the proceedings of leading conferences.