
Insight into hackers’ reaction toward information security breach

Siew H. Chan (a, *), Suparak Janjarasjit (b)

(a) University of North Georgia, Mike Cottrell College of Business, Department of Accounting & Law, 82 College Circle, Dahlonega, GA 30597, USA
(b) Mahasarakham University, Mahasarakham Business School, Khamriang, Kantarawichai, Mahasarakham 44150, Thailand
* Corresponding author (S.H. Chan).

International Journal of Information Management 49 (2019) 388–396
https://doi.org/10.1016/j.ijinfomgt.2019.07.010
Received 12 March 2019; received in revised form 18 July 2019; accepted 18 July 2019
0268-4012/ © 2019 Elsevier Ltd. All rights reserved.

Keywords: Information security breach; Moral affect; Moral intensity; Consequences; Responsibility judgment

Abstract

This study provides insight into hackers’ reaction toward an information security breach perpetrated either with an ill or good intention. To our knowledge, limited research is available for promoting understanding of whether intent induces different perceived moral affect (i.e., a perpetrator should have feelings of regret, sorrow, guilt, and shame) which explains the effect of perceived intensity of emotional distress on responsibility judgment. Further, research is sparse on enhancing understanding of whether the nature of a perpetrator’s intent affects the moderating role of consideration of the consequences in the relationship between perceived moral affect and responsibility judgment. Increased understanding of the relationships among perceived moral affect, perceived intensity of emotional distress, consideration of the consequences, and responsibility judgment of an information security breach from the hackers’ perspective may shed light on their continued engagement in the act despite society’s disapproval. Analyses of the responses of 166 hackers recruited at two major hacker conferences reveal that perceived moral affect mediates the effect of perceived intensity of emotional distress on responsibility judgment only in an ill intention breach, and consideration of the consequences strengthens the relationship between perceived moral affect and responsibility judgment only in a good intention breach.

1. Introduction
Kapustkiy and Kasimierz hacked the websites of Indian embassies in seven countries (i.e., Switzerland, Italy, Romania, Mali, South Africa, Libya, and Malawi) in November 2016 (Bonderud, 2016) and left a note alerting the embassies to the vulnerabilities. However, the notice was ignored until the breached data were announced publicly, which drew the attention of the Indian embassy. Subsequently, developers were instructed to fix the problem immediately with the assistance of Kapustkiy and Kasimierz. After the breach, the Indian embassy thanked Kapustkiy and Kasimierz for breaking into the websites to draw its attention to the vulnerabilities. Kapustkiy indicated later that he believed his act was morally right because the vulnerabilities were fixed with little data leakage (Patterson, 2017). Morality research (e.g., Bandura, 2001; Fida, Tramontano, Paciello, Ghezzi, & Barbaranelli, 2016; Haidt, 2003) asserts that individuals engage in an act only when they perceive the act to be morally right. However, the moral perspective of perceived right versus wrong may differ from the legal perspective. For example, individuals may use pirated software because they believe this does not cause any harm to the companies and may even feel that the act is morally right (Bhal & Leekha, 2008) although such an act is illegal (Menell, 2018). Thus,



hackers may focus on moral values instead of the legality of an act when they engage in hacking. The information systems environment facilitates increased abstract representation of actions which trigger utilitarian moral judgment (Barque-Duran, Pothos, Hampton, & Yearsley, 2017) where the perceived benefits and costs determine the morality of an act (Bartels, 2008; Mill, 1861/1998). Although harming someone is a moral dilemma, this act may be acceptable if it increases the wellbeing of a large group of people (Conway & Gawronski, 2013). This suggests that Kapustkiy’s act may be morally acceptable if the perceived benefit (i.e., improved systems security) is greater than the perceived cost (i.e., the victims’ emotional distress). A perpetrator’s intent determines the extent of perceived benefits versus costs; specifically, a good intention act may be perceived as more beneficial and elicit positive feelings compared to an ill intention act (Martin & Cushman, 2016). Hackers may possess moral beliefs that are incongruent with their out-groups (Chiesa, Ducci, & Ciappi, 2009; Pieters & Consoli, 2009). According to The Hacker Manifesto: The Conscience of a Hacker (The Mentor, 1986), hackers constantly look for challenges and are passionate about breaking limits. They search for inadequacies and vulnerabilities in information systems so that they can direct the attention of



research (Jordan, Diermeier, & Galinsky, 2012; McMahon & Harvey, 2007; Robertson, Lamin, & Livanis, 2010) suggests that all the components do not need to be present for assessment of perceived moral intensity of an act. We focus on magnitude of consequences, probability of consequences, and temporal immediacy because these components are appropriate for the information security breach examined in this study. Magnitude of consequences refers to harm caused to the victims of a moral act in question (Jones, 1991). An act that causes the death of a human has a greater magnitude of consequences than an act that causes a person to suffer a minor injury (Jones, 1991). Probability of consequences is defined as the probability of the occurrence of an act which causes the predicted harm (Jones, 1991). For example, selling a gun to a known armed robber has a higher probability of harm than selling a gun to a law-abiding citizen (Jones, 1991). Temporal immediacy is the length of time between the present and onset of the consequences of a moral act (Jones, 1991). For example, releasing a drug that causes an acute side effect has a greater temporal immediacy than releasing a drug that causes a side effect after 20 years (Jones, 1991). Moral intensity is perceived to be high when one believes that a victim suffers severe harmful consequences (magnitude of consequences), experiences the harmful consequences (probability of consequences), and suffers from the immediate consequences (temporal immediacy) (Jones, 1991). One may consider an act as involving a moral issue and engage in a moral behavior when perceived moral intensity is high (Frey, 2000; Jones, 1991). Engagement in a moral behavior may not occur when perceived moral intensity is below one’s threshold (Barnett & Valentine, 2004; Jones, 1991). Suppose an employee installed a company’s licensed software on her home computer against her colleagues’ suggestion on purchasing a separate license for personal use. This act might elicit high moral intensity because the harmful consequences (negative emotions such as disgust and anger) have occurred and are immediate, leading to perception of the act as morally unacceptable (McMahon & Harvey, 2006). Now, suppose one’s colleagues have mixed opinions on use of the company’s licensed software for personal purposes. This situation might not elicit high moral intensity because the colleagues’ mixed opinions dilute the effects of perceived harmful consequences, leading to perception of the act as less morally unacceptable (McMahon & Harvey, 2006). Elicitation of a moral behavior may necessitate increased awareness of the harmful consequences which accentuates the victims’ well-being and suggests a need for restoring justice for the victims (Greene, Burnette, & Davis, 2008).

systems developers to potential threats and vulnerabilities. They believe that such an act suggests good moral values and sense of responsibility if they fully and privately disclose the detected vulnerabilities on a timely basis to developers and organizations so that the problem is taken care of before anyone could actually engage in malicious exploitation of the vulnerabilities (Chiesa et al., 2009). However, hackers might disclose the system vulnerabilities to the public if the problem remains unsolved. Hackers might engage in information security breach without any intention to cause harm but to improve security, enhance technology, or protect privacy (Chiesa et al., 2009). However, hacking with a good intention can still elicit unpleasant feelings. Although Kapustkiy and Kasimierz breached the Indian embassy’s websites without any intention to cause damage but to force the embassy to take information security seriously, the breach caused anxiety to the embassy because it involved sensitive data that could be used for cyber espionage campaigns (Kumar, 2016). Hackers may not be aware that an information security breach can cause emotional distress to a victim; especially when they believe that they have a good intention. Hence, the consequences of a breach may increase hackers’ awareness of the victims’ emotional distress. Hackers might also break into systems to release anger and frustration with an intention to cause damage to others (Chiesa et al., 2009). For example, a hacker known as “Peace” stole and attempted to sell at least 200 million Yahoo’s user data on the black market, an act that devastated users and caused them to change their Yahoo account information (BBC News, 2016). Selling personal data on the black market has a negative impact on the victims and is indicative of a perpetrator’s ill intention. The purpose of this study is to facilitate understanding of hackers’ reaction toward an information security breach. Specifically, we investigate whether perceived moral affect (i.e., a perpetrator should have feelings of regret, sorrow, guilt, and shame) explains the effect of perceived intensity of emotional distress on responsibility judgment (mediating hypothesis), and whether consideration of the consequences strengthens the effect of perceived moral affect on responsibility judgment (moderating hypothesis). We recruited hackers from two major hacker conferences. They completed a questionnaire containing items which assessed their perceptions of the intensity of the victims’ emotional distress, moral affect, consideration of the consequences, and responsibility judgment. Analyses of the usable responses of 166 hackers support the mediating hypothesis when a perpetrator harbors ill intention. This effect is not observed when the perpetrator has a good intention because the nature of the breach may attenuate the hackers’ perceived effect of the harmful consequences to the extent that they no longer blame the perpetrator for causing the consequences. Additionally, the results support the moderating hypothesis when a perpetrator has a good intention; this phenomenon is not observed in the ill intention breach. Thus, hackers might disapprove of an ill intention breach without considering the consequences of the act. The remainder of this paper is organized as follows. The next section reviews the relevant literature and develops the hypotheses. The research method and results are then presented. 
Finally, the findings, theoretical contributions and implications, implications for practice, and limitations and future research direction are discussed.

2.2. Moral affect Moral affect (i.e., feelings of regret, sorrow, guilt, and shame) elicits intuitive and automatic emotions in response to an act that violates one’s moral beliefs, even though the act may harm others rather than oneself (Haidt, 2007; Janoff-Bulman & Carnes, 2013; Tangney, 1991). Moral affect entails implicit reactions to an act because of one’s concern about the interests or welfare of the victims or society at large (Gray & Schein, 2012; Haidt, 2003). Individuals’ desire for belongingness to their social group (Baumeister & Leary, 1995) may cause them to monitor and constrain their behavior when they encounter a moral violation, and exhibit a prosocial behavioral response to a given situation (i.e., a desire to restore justice or reciprocate an act of kindness) (Ford, Agosta, Huang, & Shannon, 2018; Haidt, 2003). Emotions such as regret, sorrow, guilt, and shame may be elicited to assist individuals to cope with a specific situation so that they can fit in with their social group (Haidt, 2003; Leyens et al., 2000). Individuals may attempt to help, comfort, or alleviate the suffering of the victims (Stellar, Cohen, Overis, & Keltner, 2015) to convey their concerns about the well-being of others and mitigate concerns about the uncertainty of their helping behavior (Grant & Gino, 2010). Moral affect is critical when an act involves a conflict between one’s

2. Theory and hypotheses
2.1. Moral intensity
Moral intensity theory asserts that an act has a certain degree of moral intensity and the degree of intensity needs to reach a certain threshold before one can recognize the act as a moral issue and perceive a need for a moral response (Jones, 1991). The degree of moral intensity varies in accordance with the following six characteristics: magnitude of consequences, social consensus, probability of consequences, temporal immediacy, proximity, and concentration of effect (Jones, 1991). Prior


3. Hypotheses

good intention and the harmful consequences of the act (Greene, 2008). For example, in a trolley problem, one has to decide between (a) pulling a switch to divert a runaway boxcar headed toward five agents while one agent is on a side track, and (b) abstaining from pulling a switch to leave the runaway boxcar headed toward one agent while five agents were on a side track. The findings suggest that abstaining from pulling a switch is perceived to be morally wrong to a greater extent than pulling a switch (Hauser, Tonnaer, & Cima, 2009; Navarrete, McDonald, Mott, & Asher, 2012). Individuals’ need for engagement in a prosocial behavior may intensify to the extent that they cannot ignore or take no action to help regardless of the number of victims involved (Greene, Sommerville, Nystrom, Darley, & Cohen, 2001; Hauser, Cushman, Young, Jin, & Mikhail, 2007; Navarrete et al., 2012). Although prior research indicates that moral affect may be invariant across groups (Hauser, 2006), it might differ across groups for an act involving ingroup norms (Rutland, Killen, & Abrams, 2010) because of society’s influence on their moral beliefs (Haidt & Graham, 2007). Thus, they might agree with the perpetrator rather than the victims. This phenomenon is observed when individuals perceive their group to be superior to other groups; hence, their moral affect may favor their ingroup instead of out-group (Leyens et al., 2000).

3.1. The mediating effect of moral affect Suppose a person hacks into a system and obtains the confidential information of customers. A perpetrator may engage in this act to steal customer information or to test the security of a system. The victims are likely to suffer emotional distress as a result of their stolen personal information regardless of the perpetrator’s intention because they would not have been harmed if the act had not occurred. Hence, perceived intensity is expected to be high, leading to judgment of the act as a moral issue (Jones, 1991). Once an issue is perceived as a moral issue, hackers may utilize moral reasoning, engage in moral behavior (Brandon, Kerler, Killough, & Mueller, 2007; Jones, 1991), and hold a perpetrator responsible for the act because they believe that the victims actually and immediately suffer the harmful consequences (i.e., stolen personal information). Increased judgment of responsibility is likely to occur because emotional distress is an undesired emotion. Recognition of the fact that the victims suffer emotional distress as a result of an ill or good intention act enables hackers to imagine how a perpetrator should feel in such a situation (Batson, Early, & Salvarani, 1997). Since the victims’ emotional distress experienced as a result of an ill or good intention act indicates an unpleasant situation, hackers may perceive that a perpetrator should have feelings of regret, sorrow, guilt, and shame for causing emotional distress to the victims (de Graaff, Schut, Verweij, Vermetten, & Giebels, 2016). This perception might increase concern for the victims’ well-being (Gray & Schein, 2012; Haidt, 2003), and elicit hackers’ prosocial behavioral response such as a desire to restore justice for the victims (Ford et al., 2018). Hence, they may hold a perpetrator responsible for an information security breach regardless of whether the act involves an ill or good intention. The above discussion suggests the following mediating hypothesis (Fig. 1):

2.3. Intent versus consequences of an act The consequences of an act, commonly used to judge a perpetrator’s wrongdoing in the legal system, is a critical factor for assessing responsibility (Miller, Hannikainen, & Cushman, 2014). While thoughts about an illegal act without actual commitment of the act may go unpunished, killing someone without an ill intention may be punishable (Miller et al., 2014). Further, punishment for an attempt to kill someone may not be as severe as actually killing a person. Individuals may consider a perpetrator’s intent and the consequences of an act when they address the moral issues of the act (Cushman, 2008; Killen & Smetana, 2015). They may attempt to understand a perpetrator’s mental states (desires, motives, beliefs, or intent and how these relate to what had occurred) prior to evaluation of the consequences of the act (Cushman & Young, 2011). Intent and consequences of an act influence assessment of the moral implications of the act (Miller et al., 2014). Two theories can be used to explain the relationship between intent and the consequences of an act. Blame blocking suggests that harmful consequences cause one to focus more on the consequences of an act rather than a perpetrator’s intent (Cushman, 2008). For example, shooting but missing a victim may be viewed to be less wrongful than shooting a victim who happened to be struck and harmed by a lightning instead of the shooting. Although the victim’s injury was not a direct consequence of the perpetrator’s act, individuals may attribute the consequences to the perpetrator and conclude that this would not have occurred if the perpetrator had not shot the victim. According to two-process theory, individuals generally engage in a single-process when they judge an act to be immoral in the case where the harm caused to the victims is intentional; subsequently, they mete out harsh punishment to a perpetrator (Cushman, 2008). When a perpetrator does not have an intention to cause harmful consequences to others, individuals may utilize a two-process model to determine the wrongfulness of the act; that is, they need to establish that the perpetrator causes the harmful consequences before they assess responsibility (Cushman, 2008). Thus, an act may be judged as wrongful if individuals can establish a relationship between the act and the harmful consequences, regardless of a perpetrator’s intention. However, a perpetrator’s state of mind can influence judgment when individuals establish a relationship between the cause and harmful consequences to assess responsibility. Specifically, perceived positive intent may attenuate the severity of harmful consequences, leading to reduced judgment of responsibility (Cushman, 2008).

H1. Perceived moral affect mediates the effect of perceived intensity of emotional distress on responsibility judgment.

3.2. The moderating role of consideration of the consequences
As discussed above, perceived moral affect ensues when hackers believe that the victims suffer emotional distress as a result of an information security breach perpetrated either with an ill or good intention. This results in increased hackers’ concern about the welfare of the victims and propensity to engage in a prosocial behavior (i.e., hold a perpetrator responsible for the act). The saliency of perceived moral affect increases with augmented consideration of the consequences of a breach either with an ill or good intention (de Hooge, Nelissen, Breugelmans, & Zeelenberg, 2011). Hackers may become motivated to restore justice for the victims when they focus on the victims’ emotional distress; specifically, hackers may believe that the victims would not have been harmed if the information security breach had not occurred. This phenomenon may be observed even in the case where the perpetrator does not have any intention to cause harm to others. Hence, consideration of the consequences is predicted to direct attention to the harmful consequences caused to the victims regardless of the perpetrator’s intent and may result in the conclusion that the perpetrator should have feelings of regret, sorrow, guilt, and shame for engaging in the harmful act. Subsequently, increased responsibility is assessed to restore equity for the victims. The following moderating hypothesis

Fig. 1. Hypothesis 1.


examines the above issues (Fig. 2): H2. Consideration of the consequences moderates the impact of perceived moral affect on responsibility judgment. 4. Research method 4.1. Research instrument We develop two hypothetical scenarios describing an information security breach either with an ill or good intention in a business context based on court documents available on the U.S. Department of Justice website, and reports published by the Bureau of Justice Statistics, Computer Security Institute, and the media. One scenario discussed an information security breach with an ill intention while the other scenario described a breach with a good intention. The materials reveal that taking revenge on corporations, stealing confidential information for financial purposes, and destroying corporate systems are common examples of breaches with an ill intention in corporate system intrusions. Revenge is used to represent a perpetrator’s ill intention in this study because it occurs frequently in intrusion reports. Corporate systems may also be intruded with the intention to help improve systems security. Helping behavior (i.e., testing security measures to detect and mitigate weaknesses) is utilized to represent a perpetrator’s good intention in this study. The information provided in the scenarios is adapted from actual computer incidents in the court documents.

Fig. 2. Hypothesis 2.

4.2. Pretest
First, we conducted a verbal protocol with the pretest participants. They were asked to think aloud as they read the research instrument. We revised the instrument based on the feedback received from the verbal protocol procedure. We then pretested the research instrument with 28 senior accounting students. The instrument was further revised based on the feedback received from the pretest participants.

4.3. Task
Participants read two hypothetical scenarios, answered questions on the scenarios, and provided demographic information. All participants received a T-shirt for their voluntary participation in this study. The order of the scenarios was not randomized. Our participants may be less vulnerable to order effect because they relied on their personal moral beliefs rather than external factors such as information from other cases when they evaluated cases involving moral issues (Wright, 2010). Further, the pretest results indicated that the order of the scenarios did not affect the participants’ responses.

4.4. Participants
We contacted the organizers of two reputable international hacker conferences attended by hackers around the world. We received permission to set up a booth at the conferences and the hackers stopped by our booth to complete our research instrument. The Mahalanobis distance test reveals that 15 responses have probability values below 0.001, suggesting that these responses are multivariate outliers that are not representative of the sample examined in this study (Kline, 2015). Thus, the responses of these 15 participants are excluded from analysis, resulting in 166 usable responses.1 The hackers’ age ranged between 18 and 63 years and the mean was 34. The majority of the hackers were male (94%). About 76% reported that they worked in the information systems industry (e.g., systems engineers, systems security professionals, programmers, and consultants)2, 14% indicated that they worked in industries other than information systems, and the remaining 12% did not provide this information. Except for six participants, the hackers had engaged in hacking for at least one year; the maximum was 45 years and the mean was 10.49 years. The participants’ demographic information (i.e., age, work experience, and hacking experience) did not affect the results.
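The outlier screen described above can be illustrated with a short script. This is a minimal sketch rather than the authors’ code: it assumes the seven Table 1 items sit in a pandas DataFrame, computes squared Mahalanobis distances, and flags responses whose chi-square p-values fall below 0.001 (Kline, 2015). All column and variable names are hypothetical.

import numpy as np
import pandas as pd
from scipy.stats import chi2

def flag_multivariate_outliers(df: pd.DataFrame, alpha: float = 0.001) -> pd.Series:
    # Squared Mahalanobis distance of each response from the item means,
    # compared against a chi-square distribution with df = number of items.
    x = df.to_numpy(dtype=float)
    diff = x - x.mean(axis=0)
    inv_cov = np.linalg.pinv(np.cov(x, rowvar=False))  # pseudo-inverse for numerical stability
    d2 = np.einsum("ij,jk,ik->i", diff, inv_cov, diff)
    p_values = chi2.sf(d2, df=x.shape[1])
    return pd.Series(p_values < alpha, index=df.index, name="outlier")

# Example with simulated 7-point responses to the seven items in Table 1.
rng = np.random.default_rng(0)
items = ["magnitude", "probability", "immediacy", "regret", "sorrow", "guilt", "shame"]
responses = pd.DataFrame(rng.integers(1, 8, size=(181, len(items))), columns=items)
print(flag_multivariate_outliers(responses).sum(), "responses flagged at p < .001")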

4.5. Measurement of variables
4.5.1. Perceived intensity of emotional distress (independent variable)
Our three-item perceived intensity of emotional distress scale is adapted from moral intensity theory (Jones, 1991; Singhapakdi, Vitell, & Kraft, 1996). Participants responded (on a 7-point scale with 1 = strongly disagree and 7 = strongly agree) to questions assessing their perceptions of the magnitude (the victims would suffer serious emotional distress), probability (the victims would definitely suffer emotional distress), and temporal immediacy (the victims would immediately suffer emotional distress) of the consequences described in the ill or good intention information security breach.

4.5.2. Perceived moral affect (mediator in Hypothesis 1)
The literature on moral affect (e.g., de Hooge et al., 2011; Ghorbani, Liao, Caykoylu, & Chand, 2013; Tangney, Miller, Flicker, & Barlow, 1996) suggests that emotions such as regret, sorrow, guilt, and shame are activated when individuals evaluate an act that causes harmful consequences to the victims. We use a four-item moral affect scale (on a 7-point scale with 1 = strongly disagree and 7 = strongly agree) to assess the participants’ perceptions of whether the perpetrator should have feelings of regret, sorrow, guilt, and shame for the information security breach.

4.5.3. Consideration of the consequences (moderator in Hypothesis 2)
We measure consideration of the consequences (on a 7-point scale; 1 = strongly disagree and 7 = strongly agree) via the hackers’ assessment of the perpetrator’s responsibility for the consequences.
4.5.4. Responsibility judgment (dependent variable)
Our dependent variable measures the hackers’ judgment of the perpetrator’s responsibility for the information security breach (on a 7-point scale; 1 = strongly disagree and 7 = strongly agree). The items in each construct are presented in Table 1.
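How the item responses are combined into the four variables is not spelled out in the paper; the sketch below simply averages the multi-item scales and carries the single-item measures over unchanged. It is an assumption for illustration only, and the column names (magnitude through responsibility) are hypothetical.

import pandas as pd

INTENSITY_ITEMS = ["magnitude", "probability", "immediacy"]  # Table 1, perceived intensity items
AFFECT_ITEMS = ["regret", "sorrow", "guilt", "shame"]         # Table 1, perceived moral affect items

def score_constructs(df: pd.DataFrame) -> pd.DataFrame:
    # Average the multi-item scales; 'consequences' and 'responsibility' are
    # single-item measures and are assumed to exist as columns already.
    out = df.copy()
    out["intensity"] = df[INTENSITY_ITEMS].mean(axis=1)
    out["moral_affect"] = df[AFFECT_ITEMS].mean(axis=1)
    return out

example = pd.DataFrame({"magnitude": [6], "probability": [5], "immediacy": [6],
                        "regret": [7], "sorrow": [6], "guilt": [6], "shame": [5],
                        "consequences": [6], "responsibility": [7]})
print(score_constructs(example)[["intensity", "moral_affect"]])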

5. Results 5.1. Psychometric properties of constructs The psychometric properties of the two latent constructs, perceived intensity of emotional distress and perceived moral affect, are tested separately for the ill and good intention information security breaches.

1 The results are similar when the responses of these 15 participants are included in the analysis.

2 A participant at the Decision Sciences Institute Annual Meeting 2018 commented that this self-reported information could not be verified.


Table 1
Construct Items.

Perceived intensity of emotional distress
  Magnitude of consequences: The victims would suffer serious emotional distress as a result of the hacker’s action.
  Probability of consequences: The victims would definitely suffer emotional distress as a result of the hacker’s action.
  Temporal immediacy: The victims would immediately suffer emotional distress as a result of the hacker’s action.
Perceived moral affect
  Regret: The hacker should regret the act.
  Sorrow: The hacker should feel sorry for the act.
  Guilt: The hacker should feel guilty about the act.
  Shame: The hacker should feel ashamed of the act.

Table 2
Factor Loadings.

Panel A: Ill Intention Breach
Item                          Perceived Intensity of Emotional Distress   Perceived Moral Affect
Magnitude of consequences     0.727                                       0.247
Probability of consequences   0.834                                       0.362
Temporal immediacy            0.678                                       0.192
Regret                        0.147                                       0.742
Sorrow                        0.296                                       0.852
Guilt                         0.328                                       0.864
Shame                         0.302                                       0.899

Panel B: Good Intention Breach
Item                          Perceived Intensity of Emotional Distress   Perceived Moral Affect
Magnitude of consequences     0.707                                       0.013
Probability of consequences   0.794                                       0.008
Temporal immediacy            0.838                                       −0.050
Regret                        −0.030                                      0.931
Sorrow                        0.040                                       0.787
Guilt                         0.073                                       0.820
Shame                         −0.004                                      0.939

5.1.1. Confirmatory factor analyses
The factor loadings of the perceived intensity of emotional distress construct are acceptable for the ill (between 0.678 and 0.834) and good (between 0.707 and 0.838) intention breaches (Table 2). The factor loadings (between 0.742 and 0.899) of the perceived moral affect construct are acceptable for the ill intention breach. The factor loadings (between 0.787 and 0.939) of the perceived moral affect construct are also acceptable for the good intention breach (Table 2).
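For readers who want to reproduce a comparable two-factor measurement model, the sketch below specifies the Table 2 structure in semopy, an open-source SEM package for Python. The original estimates appear to come from a different SEM program (the mediation analysis cites Mplus procedures), so this is an illustration rather than the authors’ analysis; column names follow the hypothetical scoring sketch above.

import pandas as pd
from semopy import Model, calc_stats

CFA_SPEC = """
intensity =~ magnitude + probability + immediacy
moral_affect =~ regret + sorrow + guilt + shame
"""

def run_cfa(responses: pd.DataFrame):
    # Fit the two-factor CFA separately for the ill and good intention scenarios.
    model = Model(CFA_SPEC)
    model.fit(responses)             # maximum-likelihood estimation by default
    estimates = model.inspect()      # parameter table, including the factor loadings
    fit_indices = calc_stats(model)  # chi-square, CFI, RMSEA, AIC, etc.
    return estimates, fit_indices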

5.1.2. Reliability tests
The Cronbach’s alpha and composite reliability of the perceived intensity of emotional distress construct are 0.793 and 0.792,

respectively, for the ill intention breach; and 0.808 and 0.816, respectively, for the good intention breach (Table 3). These results indicate acceptable reliability for the items in the perceived intensity of emotional distress construct. The Cronbach’s alpha and composite reliability of the moral affect construct are 0.898 and 0.906, respectively, for the ill intention breach; and 0.927 and 0.932, respectively, for the good intention breach (Table 3). These results suggest acceptable reliability for the items in the moral affect construct.
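The two reliability figures can be computed directly: Cronbach’s alpha from the raw item responses and composite reliability from the standardized loadings. The sketch below is illustrative; plugging in the perceived moral affect loadings from Table 2, Panel A yields a composite reliability of about 0.906, consistent with Table 3.

import numpy as np
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    # alpha = k/(k-1) * (1 - sum of item variances / variance of the scale total)
    k = items.shape[1]
    item_var = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_var / total_var)

def composite_reliability(loadings: np.ndarray) -> float:
    # CR = (sum of loadings)^2 / ((sum of loadings)^2 + sum of error variances)
    num = loadings.sum() ** 2
    return float(num / (num + (1 - loadings ** 2).sum()))

affect_loadings_ill = np.array([0.742, 0.852, 0.864, 0.899])  # Table 2, Panel A
print(round(composite_reliability(affect_loadings_ill), 3))    # about 0.906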

5.1.3. Validity tests
The average variance extracted (AVE) values for the perceived intensity of emotional distress construct are 0.553 and 0.592 for the ill and good intention breaches, respectively, suggesting convergent validity (Table 3). The square root values of the AVE are larger than the correlation between the perceived intensity of emotional distress and perceived moral affect constructs for both the ill and good intention breaches, suggesting discriminant validity (Table 3). The AVE values of the perceived moral affect construct are 0.709 and 0.775 for the ill and good intention breaches, respectively, indicating convergent validity. Further, the square root values of the AVE are greater than the correlation between the perceived intensity of emotional distress and perceived moral affect constructs, suggesting discriminant validity.
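The convergent and discriminant validity checks follow the same logic (the latter comparison is commonly known as the Fornell–Larcker criterion). The sketch below computes AVE as the mean squared standardized loading and compares its square root with the inter-construct correlation from Table 3, Panel A; small differences from the published AVE values reflect rounding of the reported loadings.

import numpy as np

def average_variance_extracted(loadings: np.ndarray) -> float:
    return float((loadings ** 2).mean())

intensity_loadings_ill = np.array([0.727, 0.834, 0.678])       # Table 2, Panel A
affect_loadings_ill = np.array([0.742, 0.852, 0.864, 0.899])

ave_intensity = average_variance_extracted(intensity_loadings_ill)
ave_affect = average_variance_extracted(affect_loadings_ill)
correlation = 0.491                                             # Table 3, Panel A

# Discriminant validity: the square root of each AVE should exceed the correlation.
print(round(ave_intensity, 3), round(ave_affect, 3),
      min(np.sqrt(ave_intensity), np.sqrt(ave_affect)) > correlation)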

Table 3
Reliability, Average Variance Extracted (AVE), and Inter-Construct Correlations.

Panel A: Ill Intention Breach
Construct                                   Cronbach’s Alpha   Composite Reliability   AVE     Intensity   Moral Affect
Perceived Intensity of Emotional Distress   0.793              0.792                   0.553   0.744*
Perceived Moral Affect                      0.898              0.906                   0.709   0.491       0.842*

Panel B: Good Intention Breach
Construct                                   Cronbach’s Alpha   Composite Reliability   AVE     Intensity   Moral Affect
Perceived Intensity of Emotional Distress   0.808              0.816                   0.592   0.769*
Perceived Moral Affect                      0.927              0.932                   0.775   0.533       0.880*

* Square root of average variance extracted.


Table 4
Results of Hypotheses.

Panel A: Test of Mediation (a)
                        IV → MV             MV → DV             IV + MV → DV        Mediation
                        β        p-value    β        p-value    β        p-value
Ill intention breach    0.375    0.000      0.499    0.000      0.187    0.002      Supported
Good intention breach   0.618    0.000      −0.082   0.297      −0.051   0.286      Not Supported

Panel B: Test of Moderation
                        AIC (without interaction)   AIC (with interaction)   IV + MV → DV (β)   p-value   Moderation
Ill intention breach    4457.310                    4456.963                 −0.110             0.457     Not Supported
Good intention breach   4534.789                    4527.577                 0.118              0.008     Supported

IV = Perceived intensity of emotional distress. MV = Perceived moral affect. DV = Responsibility judgment. IV + MV = Moderating effect of perceived moral affect and consideration of the consequences.
(a) Similar results are obtained when the Baron and Kenny (1986) approach is employed to test the mediating hypothesis.

5.2. Analysis of results
5.2.1. Test of mediating effect (Hypothesis 1)
We test the mediating hypothesis using the causal indirect effect approach (Muthen, 2011; Vanderweele & Vansteelandt, 2009). The results reveal that perceived moral affect mediates the effect of perceived intensity of emotional distress on responsibility judgment (β = 0.187, p = 0.002, Table 4, Panel A), providing support for hypothesis 1 for the ill intention breach. The direct effect of perceived intensity of emotional distress on responsibility judgment decreases and is insignificant (β = −0.232, p = 0.073) when perceived moral affect is included in the hypothesis test for the ill intention breach. However, perceived moral affect does not mediate the relationship between perceived intensity of emotional distress and responsibility judgment (β = −0.046, p = 0.267, Table 4, Panel A) for the good intention breach.
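The paper estimates a causally defined indirect effect (Muthen, 2011); as a rough open-source stand-in, the same X → M → Y structure can be tested with a bootstrap mediation analysis, as sketched below. The column names (intensity, moral_affect, responsibility) are the hypothetical composites from the earlier scoring sketch, and the bootstrap test is not the authors’ estimator.

import pandas as pd
import pingouin as pg

def test_mediation(df: pd.DataFrame) -> pd.DataFrame:
    # Rows of the returned table report the X->M and M->Y paths plus the total,
    # direct, and indirect effects with bootstrap confidence intervals;
    # Hypothesis 1 corresponds to a significant indirect effect.
    return pg.mediation_analysis(data=df, x="intensity", m="moral_affect",
                                 y="responsibility", n_boot=5000, seed=42)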

5.2.2. Test of moderating effect (Hypothesis 2)
Hypothesis 2 states that consideration of the consequences moderates the effect of perceived moral affect on responsibility judgment. We use the Akaike Information Criterion (AIC) (Brown, 2006; Byrne, 2011), a comparative fit measure, to examine two models. The first model comprises only the main effect of perceived moral affect on responsibility judgment, and the second model consists of both the main effect of perceived moral affect and the interaction between perceived moral affect and consideration of the consequences on responsibility judgment. AIC suggests that a model with a smaller value has a better fit. The AIC value of the model without the interaction term is 4457.310 for the ill intention breach. When the interaction term is included, the AIC value decreases to 4456.963, indicating that the model with the interaction term has a better fit. However, the results (β = −0.106, p = 0.468, Table 4, Panel B) do not support hypothesis 2 for the ill intention breach.

The AIC value of the model without the interaction term is 4534.789 for the good intention breach. This value decreases to 4527.577 after the interaction term is introduced, suggesting improved model fit. The results indicate that consideration of the consequences strengthens the effect of perceived moral affect on responsibility judgment (β = 0.118, p = 0.008, Table 4, Panel B), supporting hypothesis 2 for the good intention breach. Prior to inclusion of consideration of the consequences in the moderating hypothesis test, the effect of perceived moral affect on responsibility judgment is stronger for the ill intention breach (β = 0.499, p = 0.000) than the good intention breach (β = −0.082, p = 0.297).
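The AIC comparison can be mimicked with two regression models, one with and one without the interaction term. The reported AIC values come from the authors’ SEM estimation, so the ordinary-least-squares sketch below is only a simplified stand-in with hypothetical variable names.

import pandas as pd
import statsmodels.formula.api as smf

def compare_moderation_models(df: pd.DataFrame) -> dict:
    base = smf.ols("responsibility ~ moral_affect + consequences", data=df).fit()
    moderated = smf.ols("responsibility ~ moral_affect * consequences", data=df).fit()
    return {
        "aic_without_interaction": base.aic,
        "aic_with_interaction": moderated.aic,  # a smaller AIC indicates better fit
        "interaction_beta": moderated.params["moral_affect:consequences"],
        "interaction_p": moderated.pvalues["moral_affect:consequences"],
    }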

6. Discussion

We utilize two hypothetical scenarios suggesting either an ill or good intention information security breach to examine hackers’ reaction toward the breach. The findings suggest that hackers recognize the emotional distress of the victims when a perpetrator harbors an ill intention. Increased awareness of the victims’ emotional distress elicits moral affect (i.e., the perpetrator should have feelings of regret, sorrow, guilt, and shame) which explains the impact of perceived intensity of emotional distress on responsibility judgment in an ill intention breach. This result is consistent with prior research findings (e.g., Buon, Jacob, Loissel, & Dupoux, 2013; Martin & Cushman, 2016; Phillips & Shaw, 2014) on judgment of an act as morally unacceptable and punishable when an ill intention causes harmful consequences to the victims. However, perceived moral affect does not explain the impact of perceived intensity of emotional distress on responsibility judgment in a good intention breach.

The findings also reveal that consideration of the consequences strengthens the impact of perceived moral affect on responsibility judgment in a good intention breach. This result is not observed in the case of an ill intention act. These findings are consistent with prior research (e.g., Cushman, 2008; Young & Saxe, 2011) on the impact of intent on responsibility judgment. Since the harmful consequences of a good intention act can be perceived as unintentional, the act may be perceived as not in violation of morality (Young & Saxe, 2011). Subsequently, hackers may forgive and not hold a perpetrator responsible for a good intention breach.


Increasing the hackers’ awareness of a perpetrator’s good intention breach which causes the harmful consequences may attenuate the impact of the harmful consequences on responsibility judgment. In contrast, hackers may believe that a perpetrator engages in an ill intention act with the intention to

cause harmful consequences to others (Cushman, 2008; Young & Saxe, 2011). Therefore, hackers may not consider the consequences in their responsibility judgment of an ill intention breach. 6.1. Theoretical contributions and implications Prior research (e.g., Barnett & Valentine, 2004; Braham & van Hees, 2012; Church, Gaa, Nainar, & Shehata, 2005) has investigated the impact of perceived intensity of emotional distress on responsibility judgment. Extending this research, we examine perceived moral affect as an important mediator which explains the relationship between perceived intensity of emotional distress and responsibility judgment. Hackers may recognize the victims’ emotional distress and become motivated to behave in a prosocial and equitable manner when a perpetrator harbors an ill intention (Haidt, 2003; Killen & Smetana, 2015). The nature of an ill intention breach increases the saliency of perceived moral affect which explains the impact of perceived intensity of emotional distress on responsibility judgment. The mediating effect is not observed in a good intention breach. Although hackers are aware of the victims’ emotional distress in a good intention breach, the negative effect of perceived moral affect is attenuated in the relationship between perceived intensity of emotional distress and responsibility judgment. In the absence of an intention to cause harm to a victim, hackers may engage in a two-process model where a perpetrator’s good intention may mitigate the impact of perceived moral affect in the effect of perceived intensity of emotional distress on responsibility judgment. The finding is consistent with blame blocking theory because a good intention breach maymitigate the hackers’ perception of the impact of the harmful consequences to the extent that they no longer blame the perpetrator for causing the consequences. Previous research has looked at the impact of perceived moral affect on responsibility judgment (e.g., Epley, Savitsky, & Gilovich, 2002; Horberg, Oveis, & Keltner, 2011; Moore, Stevens, & Conway, 2010; Young & Saxe, 2011). Prior research (e.g., Borg, Hynes, Van Horn, Grafton, & Sinnott-Armstrong, 2006; Cushman, Young, & Hauser, 2006) has also manipulated the nature of consequences to examine the impact of consequences on responsibility judgment. Building on previous research, we investigate consideration of the consequences as a moderator in the relationship between perceived moral affect and responsibility judgment. Consideration of the consequences directs the hackers’ attention to the impact of perceived moral affect on responsibility judgment of a good intention breach. Specifically, the effect of perceived moral affect on responsibility judgment increases when consideration of the consequences is included as a moderator in the good intention breach. Hence, hackers realize that a good intention breach still elicits emotional distress to the victims despite a perpetrator’s lack of intention to cause harmful consequences to the victims (Cushman, 2008). However, they hold a perpetrator responsible only when they realize the effect of the harmful consequences on the victims. Thus, this study highlights the important moderating role of consideration of the consequences in a good intention breach. This moderating effect is not observed in an ill intention breach because hackers may engage in a single process in an ill intention breach; that is, they perceive the act as immoral and assess responsibility without considering the consequences. 
The results indicate that hackers respond differently to an ill or good intention information security breach. That is, hackers are less receptive toward an ill intention breach and more tolerant of a good intention breach. Thus, discernment of whether a breach has an ill or good intention can assist designers to develop effective strategies for dealing with information security breach. Hackers may consider an ill intention breach as an out-group act which is undesirable and view a good intention act as a morally acceptable in-group act (Chiesa et al., 2009). Hacking is unlikely to be viewed as morally acceptable when an ill intention breach leads to harmful consequences to the victims. In the case of a good intention breach, hackers may believe that they engage in the activity to help improve systems security. Hence, they perceive hacking as morally acceptable and continue to engage in the activity. Strategies can be developed to increase hackers’ awareness of the victims’ emotional distress as a result of a breach to discourage unethical hacking. The negative effect of perceived moral affect may be attenuated when hackers assess a breach without an intention to cause emotional distress to the victims. Since hackers are less likely to hold a perpetrator responsible for a good intention breach, increasing their focus on a perpetrator’s good intention may mitigate perceptions of the harmful consequences of the breach. Hackers may also focus on the potential benefits of a good intention breach (i.e., improving systems security) which outweigh the harmful consequences of the breach. Subsequently, they are less likely to hold a perpetrator responsible for the breach despite their awareness of the victims’ emotional distress.

6.2. Implications for practice
This study has important implications for users, information systems professionals, and regulatory authorities. Modernity creates liquid online relationships where one’s life and identity are choices that depend on how one seeks pleasure, creates identities, interacts with others, and solves problems (Coulthard & Keller, 2012). This liquid modernity allows a voluminous amount of information exchange via social media (e.g., Facebook, Twitter, and Instagram), e-commerce, and online banking which raises concerns about information security (Benson, Saridakis, & Tennakoon, 2015; Molok, Ahmad, & Chang, 2018) and privacy (Garg & Camp, 2015) because users may have little understanding of how information systems operate (Ben-Asher & Gonzalez, 2015). Users may be clueless about what could happen to their personal information, pay little attention to information security, and trust the competency of security service providers (Alsmadi & Prybutok, 2018; McGill & Thompson, 2017) and organizations that obtain personal data (Castro & Bettencourt, 2017). Users’ ignorance of the implications of an information security breach provides an opportunity for hackers to hack into systems, resulting in the increased number of reported breaches (PricewaterhouseCoopers, 2017). Thus, users are victimized and suffer emotional distress as a result of the breach (Elhai & Hall, 2016). While emotional distress may motivate users to use information systems with precaution (Burns, Posey, Roberts, & Lowry, 2017; Li et al., 2019; Mamonov & Benbunan-Fich, 2018), they may feel overwhelmed with precautionary efforts and decrease their intention to engage in precautionary behavior (van Schaik et al., 2017); subsequently, they pay little attention to information security issues. Users may also employ strategies to cope with their belief that countermeasures are ineffective and security threats are inevitable (Workman, Bommer, & Staub, 2009). Hence, implementation of preventive measures aimed at increasing user awareness of the importance of information security may not mitigate problems associated with information security breach.
While information security policies, awareness, and training might help to attenuate potential threats to information systems (Kankanhalli, Teo, Tan, & Wei, 2003; Whitman, 2004), these efforts take time to evolve into effective strategies (Roumani, Nwankpa, & Roumani, 2016). Further, after an attack, organizations often focus on restoring service rather than learning from the incident to improve security (Ahmad, Maynard, & Shanks, 2015). Therefore, guidelines should be instituted to encourage proactive (e.g., improve security) instead of passive (e.g., restore service) behavior to protect the interests of systems users (Doherty, Anastasakis, & Fulford, 2011). Understanding a hacker’s intention [i.e., increase security, improve technology, protect privacy, obtain monetary gains, or release anger and frustration (Chiesa et al., 2009)] for breaking into an information system in an unauthorized manner can assist designers to develop effective strategies for dealing with the vulnerability issue. Hackers engaging in hacking with a good intention are likely to point out the system vulnerabilities and may be forthcoming with ideas on how to improve security. Information systems professionals can take advantage of this opportunity to enhance their skills, knowledge, and ability to improve systems security and take action to cope with difficult challenges in an ill intention breach.
Hacking is a violation of criminal law regardless of a hacker’s intention because it involves unauthorized access to a system (National Institute of Justice, 2000). Increased understanding of hackers’ reaction toward an information security breach can assist the regulatory authorities to enact rules to regulate the cyberspace to monitor, restrain, and punish unethical hacking. Regulatory authorities can implement strict rules, prosecute violators, and mete out severe punishment to deter unethical hacking. Social media can be an effective platform for dissemination of information on hacking cases in which violators are prosecuted and harsh punishment is meted out for unethical hacking.
6.3. Limitations and future research direction
This study has some limitations. First, we use one question to measure consideration of the consequences and responsibility judgment separately. Future research can develop comprehensive measures of these constructs to examine whether similar results are obtained. Further, future work can examine additional factors influencing responsibility judgment of an information security breach via an experiment to provide additional insight into our survey findings. Additionally, this study focuses on the consequences related to the victims’ emotional distress. Future research can investigate whether hackers would react differently toward an ill or good intention information security breach that results in disastrous financial consequences that affect the capital markets. Research can also provide insight into how different stakeholders such as financial statement users, novice investors, sophisticated investors, and other parties would respond to an ill or good intention information security breach. Finally, future work can shed light on whether hackers favor their in-group (e.g., hacking with a good intention) in their responsibility judgment and examine whether consideration of the consequences can attenuate the effect of in-group bias on responsibility judgment.
7. Conclusion
The results of this study highlight the important mediating role of perceived moral affect in the relationship between perceived moral intensity of emotional distress and responsibility judgment in an ill intention breach. The mediating effect is not observed in a good intention breach. The findings also indicate that consideration of the consequences strengthens the relationship between perceived moral affect and responsibility judgment in a good intention breach. This moderating effect is absent in an ill intention breach. Thus, the nature of the intent of an information security breach influences the results of the mediating and moderating hypotheses. The findings have important implications for users, information systems professionals, and regulatory authorities.
References


Bartels, D. M. (2008). Principled moral sentiment and the flexibility of moral judgment and decision making. Cognition, 108, 381–417. Barque-Duran, A., Pothos, E. M., Hampton, J. A., & Yearsley, J. M. (2017). Contemporary morality: Moral judgments in digital contexts. Computers in Human Behavior, 75, 184–193. Batson, C. D., Early, S., & Salvarani, G. (1997). Perspective taking: Imagining how another feels versus imaging how you would feel. Personality and Social Psychology Bulletin, 23, 751–758. Baumeister, R. F., & Leary, M. R. (1995). The need to belong: Desire for interpersonal attachments as a fundamental human motivation. Psychological Bulletin, 117, 497–529. BBC News (2016). Yahoo ‘state’ hackers stole data from 500 million users. https://www.bbc. com/news/world-us-canada-37447016. Ben-Asher, N., & Gonzalez, C. (2015). Effects of cyber security knowledge on attack detection. Computers in Human Behavior, 48, 51–61. Benson, V., Saridakis, G., & Tennakoon, H. (2015). Information disclosure of social media users: Does control over personal information, user awareness and security notices matter? Information Technology and People, 28, 426–441. Bhal, K. T., & Leekha, N. D. (2008). Exploring cognitive moral logics using grounded theory: The case of software piracy. Journal of Business Ethics, 81, 635–646. Bonderud, D. (2016). Indian embassy hack: Security breached in seven countries, citizen data compromised. https://securityintelligence.com/news/indian-embassy-hack-securitybreached-in-seven-countries-citizen-data-compromised/. Borg, J. S., Hynes, C., Van Horn, J., Grafton, S., & Sinnott-Armstrong, W. (2006). Consequences, action, and intention as factors in moral judgments: An fMRI investigation. Journal of Cognitive Neuroscience, 18, 803–817. Braham, M., & van Hees, M. (2012). An anatomy of moral responsibility. Mind, 121, 601–634. Brandon, D. M., Kerler, W. A., III, Killough, L. N., & Mueller, J. M. (2007). The joint influence of client attributes and cognitive moral development on students’ ethical judgments. Journal of Accounting Education, 25, 59–73. Brown, T. A. (2006). Confirmatory factor analysis for applied research. New York, NY: The Guildford Press. Buon, M., Jacob, P., Loissel, E., & Dupoux, E. (2013). A non-mentalistic cause-based heuristic in human social evaluations. Cognition, 126, 149–155. Burns, A. J., Posey, C., Roberts, T. L., & Lowry, P. B. (2017). Examining the relationship of organizational insiders’ psychological capital with information security threat and coping appraisals. Computers in Human Behavior, 68, 190–209. Byrne, B. M. (2011). Structural equation modeling with mplus: Basic concepts, applications, and programming. New York, NY: Routledge. Castro, P., & Bettencourt, L. (2017). Exploring the predictors and the role of trust and concern in the context of data disclosure to governmental institutions. Behaviour and Information Technology, 36, 321–331. Chiesa, R., Ducci, S., & Ciappi, S. (2009). Profiling hackers. New York, NY: CRC Press. Church, B., Gaa, J. C., Nainar, S. M. K., & Shehata, M. M. (2005). Experimental evidence relating to the person-situation interactionist model of ethical decision making. Business Ethics Quarterly, 15, 363–383. Conway, P., & Gawronski, B. (2013). Deontological and utilitarian inclinations in moral decision making: A process dissociation approach. Journal of Personality and Social Psychology, 104, 216–235. Coulthard, D., & Keller, S. (2012). Technolophilia, neo-Luddism, eDependency and the judgment of Thamus. 
Journal of Information, Communication and Ethics in Society, 10, 262–272. Cushman, F. (2008). Crime and punishment: Distinguishing the roles of causal and intentional analyses in moral judgment. Cognition, 108, 353–380. Cushman, F., & Young, L. (2011). Patterns of moral judgment derive from nonmoral psychological representations. Cognitive Science, 35, 1052–1075. Cushman, F. A., Young, L., & Hauser, M. D. (2006). The role of conscious reasoning and intuitions in moral judgment: Testing three principles of harm. Psychological Science, 17, 1082–1089. de Graaff, M. C., Schut, M., Verweij, D. E. M., Vermetten, E., & Giebels, E. (2016). Emotional reactions and moral judgment: The effects of morally challenging interactions in military operations. Ethics and Behavior, 26, 14–31. de Hooge, I. E., Nelissen, R. M. A., Breugelmans, S. M., & Zeelenberg, M. (2011). What is moral about guilt? Acting “prosocially” at the disadvantage of others. Journal of Personality and Social Psychology, 100(3), 462–473. Doherty, N. F., Anastasakis, L., & Fulford, H. (2011). Reinforcing the security of corporate information resources: A critical review of the role of the acceptable use policy. International Journal of Information Management, 31, 201–209. Elhai, J. D., & Hall, B. J. (2016). Anxiety about internet hacking: Results from a community sample. Computers in Human Behavior, 54, 180–185. Epley, N., Savitsky, K., & Gilovich, T. (2002). Empathy neglect: Reconciling the spotlight effect and the correspondence bias. Journal of Personality and Social Psychology, 83, 300–312. Fida, R., Tramontano, C., Paciello, M., Ghezzi, V., & Barbaranelli, C. (2016). Understanding the interplay among regulatory self-efficacy, moral disengagement, and academic cheating behavior during vocational education: A three-wave study. Journal of Business Ethics, 1–16. https://doi.org/10.1007/s10551-016-3373-6. Ford, M. T., Agosta, J. P., Huang, J., & Shannon, C. (2018). Moral emotions toward others at work and implications for employee behavior: A qualitative analysis using critical incidents. Journal of Business Psychology, 33, 155–180. Frey, B. F. (2000). The impact of moral intensity on decision making in a business context. Journal of Business Ethics, 26, 181–195. Garg, V., & Camp, L. J. (2015). Cars, condoms, and Facebook. In Y. Desmedt (Ed.). Information security (pp. 280–289). Springer International Publishing. Ghorbani, M., Liao, Y., Caykoylu, S., & Chand, M. (2013). Guilt, shame, and reparative

Ahmad, A., Maynard, S. B., & Shanks, G. (2015). A case analysis of information systems and security incident responses. International Journal of Information Management, 35, 717–723. Alsmadi, D., & Prybutok, V. (2018). Sharing and storage behavior via cloud computing: Security and privacy in research and practice. Computers in Human Behavior, 85, 218–226. Bandura, A. (2001). Social cognitive theory: An agentic perspective. Annual Review of Psychology, 52, 1–26. Barnett, T., & Valentine, S. (2004). Issue contingencies and marketers’ recognition of ethical issues, ethical judgments and behavioral intentions. Journal of Business Research, 57, 338–346. Baron, R. M., & Kenny, D. A. (1986). The moderator-mediator variable distinction in social psychological research: Conceptual, strategic and statistical considerations. Journal of Personality and Social Psychology, 51, 1173–1182.


behavior: The effect of psychological proximity. Journal of Business Ethics, 114, 311–323.
Grant, A. M., & Gino, F. (2010). A little thanks goes a long way: Explaining why gratitude expressions motivate prosocial behavior. Journal of Personality and Social Psychology, 98, 946–955.
Gray, K., & Schein, C. (2012). Two minds vs. two philosophies: Mind perception defines morality and dissolves the debate between deontology and utilitarianism. Review of Philosophy and Psychology, 3, 405–423.
Greene, J. D. (2008). The secret joke of Kant’s soul. In W. Sinnott-Armstrong (Ed.), Moral psychology and biology. New York: Oxford University Press.
Greene, J. D., Burnette, J. L., & Davis, J. L. (2008). Third-party forgiveness: (Not) forgiving your close other’s betrayer. Personality and Social Psychology Bulletin, 34, 407–418.
Greene, J. D., Sommerville, R. B., Nystrom, L. E., Darley, J. M., & Cohen, J. D. (2001). An fMRI investigation of emotional engagement in moral judgment. Science, 293, 2105–2108.
Haidt, J. (2003). The moral emotions. In R. J. Davidson, K. R. Scherer, & H. H. Goldsmith (Eds.), Handbook of affective sciences (pp. 852–870). New York, NY: Oxford University Press.
Haidt, J. (2007). The new synthesis in moral psychology. Science, 316, 998–1002.
Haidt, J., & Graham, J. (2007). When morality opposes justice: Conservatives have moral intuitions that liberals may not recognize. Social Justice Research, 20, 98–116.
Hauser, M. (2006). Moral minds: How nature designed our universal sense of right and wrong. New York, NY: Ecco-Harper Collins.
Hauser, M., Cushman, F., Young, L., Jin, R. K.-X., & Mikhail, J. (2007). A dissociation between moral judgments and justifications. Mind and Language, 22, 1–21.
Hauser, M. D., Tonnaer, F., & Cima, M. (2009). When moral intuitions are immune to the law: A case study of euthanasia and the act-omission distinction in the Netherlands. Journal of Cognition and Culture, 9, 149–169.
Horberg, E. J., Oveis, C., & Keltner, D. (2011). Emotions as moral amplifiers: An appraisal tendency approach to the influences of distinct emotions upon moral judgment. Emotion Review, 3, 237–244.
Janoff-Bulman, R., & Carnes, N. C. (2013). Surveying the moral landscape: Moral motives and group-based moralities. Personality and Social Psychology Review, 17, 219–236.
Jones, T. M. (1991). Ethical decision making by individuals in organizations: An issue-contingent model. The Academy of Management Review, 16, 366–395.
Jordan, J., Diermeier, D. A., & Galinsky, A. D. (2012). The strategic Samaritan: How effectiveness and proximity affect corporate responses to external crises. Business Ethics Quarterly, 22, 621–648.
Kankanhalli, A., Teo, H.-H., Tan, B. C. Y., & Wei, K.-K. (2003). An integrative study of information systems security effectiveness. International Journal of Information Management, 23, 139–154.
Killen, M., & Smetana, J. G. (2015). Origins and development of morality. In M. Lamb (Vol. Ed.), Handbook of child psychology (7th ed., Vol. III, pp. 701–749). New York: Wiley-Blackwell Publishing Ltd.
Kline, R. B. (2015). Principles and practice of structural equation modeling. London: Guilford Publications.
Kumar, M. (2016). Websites of Indian embassy in 7 countries hacked: Database leaked online. https://thehackernews.com/2016/11/indian-embassy-hacked.html.
Leyens, J.-P., Paladino, P. M., Rodriguez-Torres, R., Vaes, J., Demoulin, S., Rodriguez-Perez, A., et al. (2000). The emotional side of prejudice: The attribution of secondary emotions to ingroups and outgroups. Personality and Social Psychology Review, 4, 186–197.
Li, L., He, W., Xu, L., Ash, I., Anwar, M., & Yuan, X. (2019). Investigating the impact of cybersecurity policy awareness on employees’ cybersecurity behavior. International Journal of Information Management, 45, 13–24.
Mamonov, S., & Benbunan-Fich, R. (2018). The impact of information security threat awareness on privacy-protective behaviors. Computers in Human Behavior, 83, 32–44.
Martin, J. W., & Cushman, F. (2016). Why we forgive what can’t be controlled. Cognition, 147, 133–143.
McGill, T., & Thompson, N. (2017). Old risk, new challenges: Exploring differences in security between home computer and mobile device use. Behaviour and Information Technology, 36, 1111–1124.
McMahon, J. M., & Harvey, R. J. (2006). An analysis of the factor structure of Jones’ moral intensity construct. Journal of Business Ethics, 64, 381–404.
McMahon, J. M., & Harvey, R. J. (2007). The effect of moral intensity on ethical judgment. Journal of Business Ethics, 72, 335–357.
Menell, P. S. (2018). Rise of the API copyright dead: An updated epitaph for copyright protection of network and functional features of computer software. Harvard Journal of Law and Technology, 31, 305–490.
Mill, J. S. (1998). Utilitarianism (R. Crisp, Ed.). New York, NY: Oxford University Press. (Original work published 1861).
Miller, R. M., Hannikainen, I. A., & Cushman, F. A. (2014). Bad actions or bad outcomes? Differentiating affective contributions to the moral condemnation of harm. Emotion, 14, 573–587.
Molok, N. N. A., Ahmad, A., & Chang, S. (2018). A case analysis of securing organisations against information leakage through online social networking. International Journal of Information Management, 43, 351–356.
Moore, A. B., Stevens, J., & Conway, A. R. A. (2010). Individual differences in sensitivity to reward and punishment predict moral judgment. Personality and Individual Differences, 50, 621–625.
Muthen, B. (2011). Applications of causally defined direct and indirect effect in mediation analysis using SEM in Mplus (Technical report). Los Angeles, CA: Muthen and Muthen.
National Institute of Justice (2000). State and local law enforcement needs to combat electronic crime. Washington, DC: U.S. Department of Justice.
Navarrete, C. D., McDonald, M. M., Mott, M. L., & Asher, B. (2012). Virtual morality: Emotion and action in a simulated three-dimensional “trolley problem”. Emotion, 12, 364–370.
Patterson, D. (2017). Interview with a hacker: Kapustkiy from New World Hackers. TechRepublic. https://www.techrepublic.com/article/interview-with-a-hacker-kapustkiy-from-new-world-hackers/.
Phillips, J., & Shaw, A. (2014). Manipulating morality: Third-party intentions alter moral judgments by changing causal reasoning. Cognitive Science, 203, 1–48.
Pieters, W., & Consoli, L. (2009). Vulnerabilities and responsibilities: Dealing with monsters in computer security. Journal of Information, Communication and Ethics in Society, 7, 243–257.
PricewaterhouseCoopers (2017). Global state of information security survey: 2017 results by industry. https://www.pwc.com/gx/en/issues/information-security-survey/geopolitical-cyber-threats.html.
Robertson, C. J., Lamin, A., & Livanis, G. (2010). Stakeholder perceptions of offshoring and outsourcing: The role of embedded issues. Journal of Business Ethics, 95, 167–189.
Roumani, Y., Nwankpa, J. K., & Roumani, Y. F. (2016). Examining the relationship between firm’s financial records and security vulnerabilities. International Journal of Information Management, 36, 987–994.
Rutland, A., Killen, M., & Abrams, D. (2010). A new social-cognitive developmental perspective on prejudice. Perspectives on Psychological Science, 5, 279–291.
Singhapakdi, A., Vitell, S. J., & Kraft, K. L. (1996). Moral intensity and ethical decision-making of marketing professionals. Journal of Business Research, 36, 245–255.
Stellar, J. E., Cohen, A., Oveis, C., & Keltner, D. (2015). Affective and physiological responses to the suffering of others: Compassion and vagal activity. Journal of Personality and Social Psychology, 108, 572–585.
Tangney, J. P. (1991). Moral affect: The good, the bad, and the ugly. Journal of Personality and Social Psychology, 61, 598–607.
Tangney, J. P., Miller, R. S., Flicker, L., & Barlow, D. H. (1996). Are shame, guilt, and embarrassment distinct emotions? Journal of Personality and Social Psychology, 70, 1256–1269.
The Mentor (1986). The hacker manifesto: The conscience of a hacker. http://phrack.org/issues/7/3.html.
VanderWeele, T. J., & Vansteelandt, S. (2009). Conceptual issues concerning mediation, interventions and composition. Statistics and Its Interface, 2, 457–468.
van Schaik, P., Jeske, D., Onibokun, J., Coventry, L., Jansen, J., & Kusev, P. (2017). Risk perceptions of cyber-security and precautionary behaviour. Computers in Human Behavior, 75, 547–559.
Whitman, M. E. (2004). In defense of the realm: Understanding the threats to information security. International Journal of Information Management, 24, 43–57.
Workman, M., Bommer, W. H., & Staub, D. (2009). The amplification effects of procedural justice on a threat control model of information systems security behaviours. Behaviour and Information Technology, 28, 563–575.
Wright, J. C. (2010). On intuitional stability: The clear, the strong, and the paradigmatic. Cognition, 115, 491–503.
Young, L., & Saxe, R. (2011). When ignorance is no excuse: Different roles for intent across moral domains. Cognition, 120, 202–214.
