Information & Management 44 (2007) 313–320 www.elsevier.com/locate/im
Situational influences on ethical decision-making in an IT context

Russell Haines a,*, Lori N.K. Leonard b

a Information Technology and Decision Sciences, College of Business and Public Administration, Old Dominion University, Norfolk, VA 23529, United States
b Department of Management Information Systems, College of Business Administration, University of Tulsa, Tulsa, OK 74104, United States
Received 10 September 2005; received in revised form 7 September 2006; accepted 7 February 2007 Available online 25 February 2007
Abstract

Processes of ethical decision-making are thought to depend on the issue faced when making the decision. We examined these processes through students' reactions to five scenarios involving IT use. Data were collected using a questionnaire administered after a group discussion. The results showed that ethical decision-making processes did indeed vary by scenario, suggesting that a single-issue approach is inadequate for studying ethical decision-making. Perceived importance of the ethical issue was a factor in the scenarios, but it did not have an all-inclusive influence on participants' decision-making. The results are considered in the context of ethical decision-making theories and Mason's ethical issues of the information age. We offer advice to managers on how to limit unethical behavior.
© 2007 Elsevier B.V. All rights reserved.

Keywords: IT ethics; Ethical decision-making; Ethical behavior; Perceived importance; Four-component model; Moral intensity
* Corresponding author. Tel.: +1 757 683 5841; fax: +1 757 683 5639. E-mail address: [email protected] (R. Haines).
0378-7206/$ – see front matter © 2007 Elsevier B.V. All rights reserved. doi:10.1016/j.im.2007.02.002

1. Introduction

Ethical issues are particularly important in IT today, with widespread illegal use of intellectual property, violation of privacy, and breaches in security. The rapid development and deployment of IT has outpaced the development of ethical guidelines for its use [16]. The 2003 CSI/FBI Computer Crime and Security Survey found that disgruntled employees ranked just below independent hackers and above competitors as likely sources of attack [23]. One survey noted that 80% of all computer- and Internet-related crimes against corporations are committed by individuals from within, causing an average of $110,000 in losses per corporate victim [5], while another found that 78% of organizations have had to discipline employees for downloading pornography, pirating software, or misusing e-mail [18]. A further survey found that nearly 30% of business people could be classified as pirating software through electronic methods [26]. Because of their control of networks, IT systems administrators are a particularly serious insider threat [19].

Studies of ethical decision-making generally proceed in one of two directions: either examining the demographics and personality styles of individuals who judge a given behavior as immoral (e.g., [4,8,12,20]), or examining the process of ethical decision-making to find beliefs and attitudes that lead to unethical behavior [29], relegating individual differences to external
variables [21]. Models in the second area are exemplified by the four-component model of moral behavior [22] and the theory of planned behavior [1], which has been a popular model of ethical decision-making. However, studies have generally focused on decision-making about a single ethical dilemma, despite an understanding that different factors affect decision-making in different situations. Few studies other than those of Robin et al. [24] and Loch and Conger [15] have compared the decision-making processes that individuals use when making ethical decisions about multiple scenarios. We therefore decided to examine how the ethical decision-making processes of individuals differ when they face different situations involving the use of IT.

2. Theoretical background

This study uses a four-component model of ethical decision-making (Fig. 1), which sees ethical decision-making as a sequential process. In recognizing a moral issue, a decision maker engages in an ethical decision-making process rather than deciding on emotional or other grounds. Recognition prompts the person to make a moral judgment. The decision maker then chooses a course of action, establishing moral intent. Finally, the person engages in moral behavior based on that intent. In this process, social norms and individual differences are implicit but external factors: moral judgment has been shown to influence moral intent, as have age, gender, and other demographic differences.

Our intent was to determine how decision makers' perceptions of ethical situations affect the two central components of the model: making a moral judgment and establishing moral intent. A relationship between them has been confirmed in many studies (e.g., [2]). We postulated:

Hypothesis 1. Moral judgment of a behavior will be a positive indicator of moral intent.
Fig. 1. Four-component model of ethical decision-making.

Several theories have extended the four-component model with additional antecedents that explain how moral judgment and moral intent are formed. The theory most relevant to our study of different situations is that of Jones [10], who added a situation-based construct, moral intensity, which ''captures the extent of issue-related moral imperative in a situation''. Moral intensity depends on attributes of the situation: it is expected to increase with the magnitude of an act's consequences, the probability that the act affects others, and the decision maker's proximity to those affected. Jones was unclear, however, about which stages are affected and whether moral intensity plays a mediating or moderating role.

Robin et al. developed a measure of moral intensity that they termed the perceived importance of an ethical issue, which reflects how important a decision maker personally feels an ethical issue to be. Because perceived importance is fundamentally driven by the perceptions of the individual decision maker, they suggested that it would have an even stronger impact on the process of ethical decision-making. Using this measure, Robin et al. examined the ethical decision-making processes of ad managers, focusing on the link between making a moral judgment and establishing moral intent. Their results suggested that perceived importance is an antecedent of moral judgment. We therefore hypothesized:

Hypothesis 2. Perceived importance of an ethical issue will be (a) a positive indicator of moral judgment of the behavior and (b) not an indicator of moral intent.

Hypothesis 3. The link between moral judgment and moral intent will be weaker for scenarios with low perceived importance than for scenarios with high perceived importance.

Robin et al. also proposed that moral judgment's influence on moral intent is weakened in scenarios of low importance because of the greater importance of other factors.
We therefore suggested that one of those factors is moral obligation, an extension of the theory of planned behavior's personal normative beliefs that emphasizes a combination of personal and social pressures [3]. Moral obligation was more important than personal normative beliefs in predicting behavioral intention in one ethical context, and empirical studies in an IT context have found a significant relationship between moral obligation and moral intent. Thus:

Hypothesis 4. Moral obligation will be a positive indicator of moral intent.

If moral obligation is one of the other determinants of behavior that influence moral intent, its relationship with moral intent should be heightened for scenarios with low perceived importance, giving:

Hypothesis 5. The link between moral obligation and moral intent will be stronger for scenarios with low perceived importance than for scenarios with high perceived importance.
Fig. 2. Research model.
Street et al. [27] proposed that in some situations individuals may not think through ethical issues using the process of the four-component model, because they choose not to expend the cognitive energy and instead decide on emotional grounds. Differing amounts of cognitive effort would be especially likely in a multiple-scenario study. We sought to overcome this potential cognitive-expenditure bias by having subjects answer questions after a short on-line discussion of each scenario rather than in a single large questionnaire. Fig. 2 summarizes our research model.

Previous studies used surveys alone to gather perceptions about ethical scenarios, with no interaction between the subjects. Gathering perceptions after subjects had interacted was a significant difference between our study and others.

3. Research method

The subjects for our experiment were recruited from students in a junior-level MIS course at a private midwestern U.S. university. Participation was stated to be voluntary in the informed consent form, but since the study was conducted during normal class time and in the presence of the instructor, it is possible that students felt pressure to participate. They were given a small ''parting gift'': a pen or pencil worth approximately one U.S. dollar. Although their moral reasoning may differ from that of professionals because of their age and moral development, junior-level students were expected to exhibit differences in ethical decision-making on the scenarios we chose, and even given their level of involvement, this was sufficient to test our hypotheses [9].

3.1. Data collection system and materials

The data collection application was entirely web-based. First, subjects filled out a demographic and personality questionnaire. Once all had completed it, they were randomly divided into groups of five or six members, and the separate groups discussed each scenario for 3 min (students in the final sessions
indicated that this was enough time) in an anonymous chat room. After discussing a scenario, subjects completed the questionnaire for that scenario. Sessions lasted approximately 60 min.

Needing scenarios that varied on the perceived importance scale but wanting to build on previous research, we chose the five scenarios used in [14]; their full text is shown in Appendix A. The scenarios differ with respect to the issues involved but are similar in that the behavior is questionable (indeed, the behavior in scenarios one, four, and five is illegal under U.S. law, though none of our students seemed to recognize this).

3.2. Variables

Our study variables were the extended four-component model items: (1) perceived importance of the ethical issue (lower = more important), (2) moral judgment of the behavior (higher = less acceptable), and (3) moral obligation to act (higher = stronger obligation to act). The dependent variable was moral intent, with higher values representing a lower probability that the subject would engage in the behavior.

Although multiple-item instruments would have been desirable, single-item instruments for judgment, obligation, and intent were employed to minimize questionnaire fatigue (23% of the respondents did not complete a single-topic questionnaire in Peace et al.'s study, so a shorter questionnaire was assumed to yield higher quality responses). The same items have been used in previous research (e.g., [13]), and results suggested that the single-item instruments captured a reasonable amount of the variation in the underlying constructs. The complete text of the questionnaires for each scenario is in Appendix B, with a summary of the psychometric properties of the perceived importance scale.

Control variables included background and personality variables that have been shown, or proposed, to affect ethical decision-making: age, gender, locus of control [25], and ego strength [28].

3.3. Sample

Of the 167 participants in our study, 81 were male. Students from a variety of business majors participated, with the largest proportions indicating management (16%) or marketing (16%).

4. Results

Paired t-tests and structural equation modeling (SEM), specifically partial least squares (PLS), were used to test the research model for
each scenario. PLS was chosen because of its low sample-size requirements relative to covariance-based SEM techniques such as LISREL or EQS [6]. A common PLS heuristic recommends a minimum sample size of ten times the largest number of structural paths directed at any one construct in the model, which in our model was seven (ego strength, locus of control, gender, age, perceived importance, moral judgment, and moral obligation). The minimum sample size required was therefore 70, so our PLS analysis had sufficient power.
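The statistical machinery used to evaluate the models in Section 4.1 combines a Bonferroni-adjusted alpha with a pooled-variance t-statistic for comparing two PLS path coefficients, following Chin's multigroup approach [7]. A minimal sketch follows; the path coefficients, bootstrap standard errors, and the assumption of two independent, equal-sized groups are hypothetical illustrations, not the paper's values.

```python
import math

# Bonferroni adjustment: with five scenarios, the familywise alpha of
# 0.05 is divided by the number of comparisons.
alpha = 0.05 / 5  # 0.01, the level used to judge significance

def multigroup_t(p1, se1, n1, p2, se2, n2):
    """Pooled-variance t-statistic for comparing two PLS path
    coefficients, following Chin's multigroup formula; the standard
    errors come from bootstrap resampling."""
    df = n1 + n2 - 2
    s_pooled = math.sqrt((n1 - 1) ** 2 / df * se1 ** 2
                         + (n2 - 1) ** 2 / df * se2 ** 2)
    t = (p1 - p2) / (s_pooled * math.sqrt(1 / n1 + 1 / n2))
    return t, df

# Hypothetical coefficients and bootstrap standard errors for two
# scenarios (illustrative only):
t, df = multigroup_t(0.60, 0.08, 167, 0.35, 0.09, 167)
```

The resulting t-statistic is compared against the critical value at the Bonferroni-adjusted alpha with n1 + n2 − 2 degrees of freedom.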
4.1. Ethical models by scenario

A summary of the results of the PLS analysis for scenarios one through five is shown in Figs. 3–7. To reduce the chance of a Type I error across multiple comparisons, a Bonferroni adjustment was made to the significance level, leaving an alpha level of 0.01 necessary to infer that a difference or coefficient was greater than zero. To keep the diagrams readable, the paths from the control variables to the study variables are not shown. Significance of path coefficients was tested using the bootstrap resampling technique (500 subsamples). The number shown below each construct name is the percentage of variance in the construct explained by variance in its antecedent constructs (R²).

Fig. 3. Summary of PLS results for scenario 1. *Path coefficient is different from zero (p < 0.01).
Fig. 4. Summary of PLS results for scenario 2. *Path coefficient is different from zero (p < 0.01).
Fig. 5. Summary of PLS results for scenario 3. *Path coefficient is different from zero (p < 0.01).
Fig. 6. Summary of PLS results for scenario 4. *Path coefficient is different from zero (p < 0.01).
Fig. 7. Summary of PLS results for scenario 5. *Path coefficient is different from zero (p < 0.01).

Table 1 summarizes the mean scores for each of the items in the model (importance is the average of its four items). We focused on differences with respect to perceived importance, but report and test the means of the other items to aid the reader. Mean perceptions that are not different from one another are underlined (p < 0.01). Higher scores generally indicate a more negative response to the questionable behavior, except for perceived importance, which is reverse-coded, with lower scores indicating higher perceived importance. With respect to perceived importance, the scenarios were ordered five, one, four, two, and three (scenarios one, four, two, and three were significantly different from one another).

Table 1. Comparison of mean perceptions by scenario. (a) Means sharing the same line are not significantly different from one another.

To test hypotheses three and five, the path coefficients linking moral judgment and moral obligation with moral intent were compared using a multigroup analysis technique [7]: the pooled estimator for the variance of the two coefficients is computed, and a t-statistic is calculated from the difference between the two paths. Table 2 summarizes the path coefficients and differences among the model variables for all scenarios. The scenario with the weakest path coefficient is shown on the left and the scenario with the strongest on the right; paths that were not statistically different from one another (p < 0.01) share the same line beneath the scores.

Table 2. Comparison of path coefficients. (a) Coefficients sharing the same line were not significantly different from one another.

Hypothesis 1: Moral judgment was a significant, positive indicator of moral intent for all scenarios (Figs. 3–7). Thus, Hypothesis 1 was supported.

Hypothesis 2: Perceived importance of an ethical issue was a significant, positive indicator of moral judgment for all scenarios (Figs. 3–7). Perceived importance had a weak but significant influence on moral intent for scenarios two, three, and four. Thus, Hypothesis 2 was partially supported.

Hypothesis 3: The link between moral judgment and moral intent was not weaker for scenarios with low perceived importance. Scenario 3, with the lowest perceived importance (Table 1), fell in the middle of the others in its link from moral judgment to moral intent and was not significantly different from scenarios 1 and 5 (Table 2), which, with the highest perceived importance scores, had the strongest links from moral judgment to moral intent but were only significantly higher than scenario 4, whose perceived importance score fell in the middle of the others. Thus, Hypothesis 3 was not supported.

Hypothesis 4: Moral obligation toward a behavior was a significant, positive indicator of moral intent for scenarios 2 and 4 only (Figs. 3–7). Thus, Hypothesis 4 was partially supported.

Hypothesis 5: Scenarios with lower perceived importance did not necessarily have stronger links from moral obligation to moral intent. Scenarios 4 and 2, while falling in the middle of the others with respect to perceived importance (Table 1), had the strongest links from moral obligation to intent (Table 2). Scenario 3, which had the lowest perceived importance score, had the weakest link from obligation to intent. Thus, Hypothesis 5 was not supported.

5. Discussion

We compared the decision-making process of individuals across multiple ethical situations involving the use of IT. The results suggested that the decision-making process varied across situations. In all of the models, moral judgment had a significant, positive relationship with moral intent, and perceived importance had a significant, positive link with moral judgment (perceived importance is reverse-coded), suggesting that these links hold for ethical decision-making in an IT context regardless of how IT is used or misused. Moral obligation, on the other hand, was inconsistently linked, having a significant, positive link to moral intent in only two of five scenarios.

We had hoped that perceived importance, as a measure of the moral intensity of a scenario, would explain much of the difference in ethical decision-making across scenarios. However, the relationship between perceived importance and the links from moral judgment and moral obligation to intent was not the linear relationship that Robin et al.
had suggested. Given these results, it seems reasonable to propose that at least a semblance of a U-shaped relationship existed: scenarios with very high or very low perceived importance had stronger links from moral judgment to moral intent than scenarios with mid-level perceived importance.

Since beliefs other than perceived importance could be responsible for differences in ethical decision-making across scenarios involving IT use, we
considered other factors. Mason's [17] four ethical issues (privacy, accuracy, property, and accessibility) might explain some differences on a categorical basis. Specifically, scenarios 2 and 4 both involved intellectual property, which is easy to reproduce and share, and for these two scenarios moral obligation had a stronger influence on moral intent than in the others. Scenario 5 had a significantly stronger link from moral judgment to moral intent than scenario 4, suggesting that privacy issues were considered differently from intellectual property ones. Unfortunately, the limited number of scenarios of each type in our study makes definitive statements impossible.

Moral responsibility may also account for some of the difference among the models [11]. A key component of moral responsibility is severity of consequences. The subjects in our study had to assume that the actor in the scenario was not punished in any way, so the only consequences were social or personal. Because students may perceive electronic distribution of software as having no social consequence, the behavior in scenarios 2 and 4 would have a weaker link between moral judgment and moral intent, leaving a relatively strong path from moral obligation to moral intent. This does not completely account for the difference, because the behavior in scenario 3 would also have little social consequence; perhaps its unimportance was such a large driver of moral intent that the moral responsibility component was overridden. Scenarios 1, 3, and 5, with either high or low perceived importance, exhibited insignificant links from moral obligation to moral intent.

5.1. Limitations

Our study used third-person scenarios and college students as subjects. According to Greenberg and Eskew, this is appropriate for studying basic psychological processes. However, college students' judgments of the behavior of others should not be generalized to organizational settings, regardless of their being ''the professionals of tomorrow''.
Given their lack of involvement and relative immaturity, it is not surprising that college students have been shown to perceive questionable behavior as more ethical than IS professionals do.

6. Conclusions

Our experimental results showed that the relationship among factors that influence ethical decision-making is complex and that different factors were
important in determining moral intent for different ethical scenarios. We found perceived importance to be an important, but not all-inclusive, influence on ethical decision-making: it had a significant, positive link with moral judgment, but the strength of the link between individuals' moral judgment and their intention to engage in a behavior was not monotonically tied to perceived importance. The U-shape we observed may be an artifact of our scenarios' IT setting. An issue of very low or very high importance seems to lead people to economize on cognitive effort and rely on an emotional response (based on moral judgment at the expense of moral responsibility); scenarios 1, 3, and 5 may thus be considered not ethical dilemmas but ethical certainties.

Our study found differences among the scenarios in the ethical decision-making process. Certainly, attitudes toward software piracy can vary enough across individuals to allow validation of decision-making models. However, attitudes and decision-making processes can also vary within individuals faced with different ethical situations, making studies that compare beliefs and attitudes across a variety of questionable behaviors a better validation of theories of ethical decision-making.

6.1. Implications for managers

Researchers in IT ethics consistently suggest implementing codes of ethics and ethical training programs as a way of limiting unethical behavior. Where such training can be targeted, other studies have suggested focusing on groups with personality styles or demographics that appear less ethical. However, training efforts might better be focused on the decision-making process rather than on ethical outcomes. It is difficult to say whether more mature IT users will feel the same way about the scenarios as the students in our sample, but managers should target training based on the relative importance to users of the behaviors that they wish to limit.
Issues of high importance lead users to weigh their moral judgment of the issue, so managers may not need to spend an inordinate amount of time discussing them. For issues that fall into the ''gray area,'' users are more likely to weigh the moral issues, and managers can emphasize employees' moral obligation when they face the situation. Of particular concern are behaviors of low importance to users. Because applications of IT are often novel, issues such as using company computers for personal work or looking at confidential information may seem so unimportant that they fall outside the ''gray area,'' resulting in decisions made on non-moral grounds.
Appendix A. Ethical scenarios
A.1. Scenario 1

A programmer at a bank realized that he had accidentally overdrawn his checking account. He made a small adjustment in the bank's accounting system so that his account would not have an additional service charge assessed. As soon as he made a deposit that made his balance positive again, he corrected the bank's accounting system.

A.2. Scenario 2

With approval from his boss, a person ordered an accounting program from a mail-order software company. When the employee received his order, he found that the store had accidentally sent him a very expensive word processing program as well as the accounting package that he had ordered. He looked at the invoice, and it indicated only that the accounting package had been sent. The employee decided to keep the word processing package.

A.3. Scenario 3

A computer programmer enjoyed building small computer applications to give his friends. He would frequently go to his office on Saturday when no one was working and use his employer's computer to develop computer applications. He did not hide the fact that he was going into the building; he had to sign a register at a security desk each time he entered.

A.4. Scenario 4

A computing service provider offered the use of a program at a premium charge to subscribing businesses. The program was to be used only through the service company's computer. An employee at one of the subscribing businesses obtained a copy of the program accidentally, when the service company inadvertently revealed it to him in discussions through the system (terminal to terminal) concerning a possible program bug. All copies of the program outside of the computer system were marked as trade secret, proprietary to the service, but the copy the customer obtained from the computer was not. The employee used the copy of the program after he obtained it, without paying the usage fee to the service.

A.5. Scenario 5

A marketing company's employee was doing piecework production data runs on company computers after hours, under contract for a state government. Her moonlighting activity was performed with the knowledge and approval of her manager. The data were questionnaire answers of 14,000 public school children. The questionnaire contained highly specific questions on the domestic life of the children and their parents. The government's purpose was to develop statistics for behavioral profiles, for use in public assistance programs. The data included the respondents' names, addresses, and so forth. The employee's contract contained no divulgement restrictions, except a provision that statistical compilations and analyses were the property of the government. The manager discovered the exact nature of the information in the tapes and its value in business services his company supplied. He requested that the data be copied for subsequent use in the business. The employee decided the request did not violate the terms of the contract, and she complied.
Appendix B. Questionnaires Typically, researchers reporting PLS results will show tables verifying the psychometric properties of their instruments. These would include comparing average variance extracted (AVE) for each construct with its correlation with other latent variables, and reporting loadings and cross loadings. We chose not to include tables summarizing this analysis partly to reduce the amount of space this would take in a study involving five models, but primarily because only one of our constructs (perceived importance) consisted of multiple items. Its psychometric properties exceeded recommended levels: composite reliability for perceived importance ranged from 0.971 to 0.989 and its AVE ranged from 0.893 to 0.958.
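Both psychometric statistics reported above can be computed directly from standardized item loadings. A minimal sketch, assuming standardized loadings with item error variance of 1 − λ²; the loading values below are hypothetical, chosen only to illustrate the formulas, not the paper's actual loadings.

```python
def composite_reliability(loadings):
    """Composite reliability (rho_c) from standardized loadings:
    (sum of loadings)^2 / ((sum of loadings)^2 + sum of error variances)."""
    s = sum(loadings)
    error = sum(1 - l ** 2 for l in loadings)  # per-item error variances
    return s ** 2 / (s ** 2 + error)

def ave(loadings):
    """Average variance extracted: mean squared standardized loading."""
    return sum(l ** 2 for l in loadings) / len(loadings)

# Hypothetical standardized loadings for a four-item scale such as
# perceived importance (illustrative only):
lam = [0.97, 0.96, 0.98, 0.95]
cr = composite_reliability(lam)
v = ave(lam)
```

With loadings in this range, both statistics fall near the intervals reported for the perceived importance scale, illustrating why high, uniform loadings produce composite reliability and AVE well above the conventional 0.7 and 0.5 thresholds.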
B.1. Questionnaire

Judgment: The [person in the scenario's] [behavior] was: (acceptable–unacceptable) (derived from Leonard et al.)

Intent: If you were the [person], what is the probability you would have [engaged in the behavior]? (highly probable–highly improbable) (derived from Leonard et al.)

Importance: The [person's] [behavior] was a(n): (extremely important issue–unimportant issue; highly significant issue–insignificant issue; issue is of considerable concern–issue is of no concern; fundamental issue–trivial issue) (derived from Robin et al.)

Obligation: How morally obligated would you feel to take corrective action in this case? (no obligation–strong obligation) (derived from Leonard et al.)
References [1] I. Azjen, The theory of planned behavior, Organizational Behavior and Human Decision Processes 50 (2), 1991, pp. 179–211. [2] D. Banerjee, T.P. Cronan, T.W. Jones, Modeling IT ethics: a study of situational ethics, MIS Quarterly 22 (1), 1998, pp. 31–60. [3] L. Beck, I. Azjen, Predicting dishonest actions using the theory of planned behavior, Journal of Research in Personality 25, 1991, pp. 285–301. [4] J.J. Cappel, J.C. Windsor, A comparative investigation of ethical decision-making: information systems professionals versus students, The DATABASE for advances in information systems 29 (2), 1998, pp. 20–34. [5] J. Carr, Strategies and issues: thwarting insider attacks, Network Magazine, 2002 September 5. [6] W.W. Chin, The partial least squares approach to structural equation modeling, in: G.A. Marcoulides (Ed.), Modern Methods for Business Research, Lawrence Erlbaum Associates, Mahwah, New Jersey, 1998, pp. 295–336. [7] W.W. Chin, Frequently asked questions—partial least squares & PLS-Graph, Home Page, 2000. Available on-line at: http:// disc-nt.cba.uh.edu/chin/plsfaq.htm. [8] U.E. Gattiker, H. Kelley, Morality and computers: attitudes and differences in moral judgments, Information Systems Research 10 (3), 1999, pp. 233–254. [9] J. Greenberg, D.E. Eskew, The role of role playing in organizational research, Journal of Management 19 (2), 1993, pp. 221–241. [10] T.M. Jones, Ethical decision-making by individuals in organizations: an issue-contingent model, Academy of Management Review 16 (2), 1991, pp. 366–395. [11] T.M. Jones, L.V. Ryan, The link between ethical judgment and action in organizations: a moral approbation approach, Organization Science 8 (6), 1997, pp. 663–680. [12] J. Kreie, T.P. Cronan, How men and women view ethics, Communications of the ACM 41 (9), 1998, pp. 70–76. [13] L.N.K. Leonard, T.P. 
Cronan, Illegal, inappropriate, and unethical behavior in an information technology context: a study to explain influences, Journal of the Association for Information Systems 1 (12), 2001, pp. 1–31. [14] L.N.K. Leonard, T.P. Cronan, J. Kreie, What are influences of ethical behavior intentions—planned behavior, reasoned action, perceived importance, or individual characteristics? Information and Management 42 (1), 2004, pp. 143–158. [15] K.D. Loch, S. Conger, Evaluating ethical decision-making and computer use, Communications of the ACM 39 (7), 1996, pp. 74–83. [16] K.P. Marshall, Has technology introduced new ethical problems? Journal of Business Ethics 19 (1), 1999, pp. 81–90. [17] R.O. Mason, Four ethical issues of the information age, MIS Quarterly 10 (1), 1986, pp. 4–12. [18] E. Messmer, Companies target the enemy within, Network World 19 (34), 2002, p. 10.
[19] E. Messmer, Security experts: insider threat looms largest, Network World 20 (49), 2003, pp. 12–72. [20] D.K. Peterson, Computer ethics: the influence of guidelines and universal moral beliefs, Information Technology and People 15 (4), 2002, pp. 346–361. [21] D.M. Randall, Taking stock: can the theory of reasoned action explain unethical conduct, Journal of Business Ethics 8, 1989, pp. 873–882. [22] J.R. Rest, Moral Development: Advances in Theory and Research, Praeger, New York, 1986. [23] R. Richardson, 2003CSI/FBI Computer Crime and Security Survey, 2003. Available on-line at: http://www.gocsi.com. [24] D.P. Robin, R.E. Reidenbach, P.J. Forrest, The perceived importance of an ethical issue as an influence on the ethical decisionmaking of ad managers, Journal of Business Research 35, 1996, pp. 17–28. [25] J.B. Rotter, Generalized expectancies for internal versus external control of reinforcement, Psychological Monographs 80 (1), 1966, pp. 1–28. [26] SIIA and KPMG, Doesn’t everybody do it? Internet piracy attitudes and behaviors, 2001. Available on-line at: http:// www.siia.net/divisions/content/pubs/kmpg.pdf. [27] M.D. Street, S.C. Douglas, S.W. Geiger, M.J. Martinko, The impact of cognitive expenditure on the ethical decision-making process: the cognitive elaboration model, Organizational Behavior and Human Decision Processes 86 (2), 2001, pp. 256–277. [28] L.K. Trevino, Ethical decision-making in organizations: a person-situation interactionist model, Academy of Management Review 11 (3), 1986, pp. 601–617. [29] D.C. Wyld, C.A. Jones, The importance of context: the ethical work climate construct and models of ethical decision-making— an agenda for research, Journal of Business Ethics 16 (4), 1997, pp. 465–472. Russell Haines is an Assistant Professor of Information Technology at Old Dominion University. He received his B.S. and Master of Accountancy at Brigham Young University and his Ph.D. at the University of Houston. 
His central research interest is the impact of information technology on group interaction. He has published studies on the negotiation process in software development teams, group development in virtual teams, ethical decision-making, computer-mediated communication, and supply chain decision-making. Lori N.K. Leonard is an Associate Professor of Management Information Systems at the University of Tulsa. Dr. Leonard received her Ph.D. from the University of Arkansas and is a member of the Association for Information Systems and the Decision Sciences Institute. Her research interests include electronic commerce, ethics in computing, C2C commerce, and online trust. Her publications have appeared in Journal of the Association for Information Systems, Journal of Computer Information Systems, Journal of End User Computing, Information & Management, Journal of Organizational Computing and Electronic Commerce, as well as in other journals, and Proceedings of various Conferences.