ARTICLE IN PRESS
Int. J. Human-Computer Studies 64 (2006) 1141–1153 www.elsevier.com/locate/ijhcs
Predicting user satisfaction, strain and system usage of employee self-services

Udo Konradt a, Timo Christophersen a, Ute Schaeffer-Kuelz b

a Department of Psychology, University of Kiel, Olshausenstr. 40, 24098 Kiel, Germany
b Faculty of Informatics, University of Applied Sciences Heidelberg, Ludwig-Guttmann-Street 6, 69004 Heidelberg, Germany

Received 14 July 2005; received in revised form 12 July 2006; accepted 16 July 2006
Communicated by S. Wiedenbeck
Available online 28 August 2006
Abstract

In this study, we explore attitudinal and behavioural patterns in the use of employee self-service (ESS) systems by using an expanded technology acceptance model (TAM). We examine the relationship between organizational support and information policy on the one hand and ease of use and usefulness on the other, and then the relationship of ease of use and usefulness with satisfaction, strain and system usage. To explore question order effects, user satisfaction was assessed either before or after the other survey items. Data were collected from 517 employees using an ESS system. Results from partial least squares structural equation modelling suggest that (a) organizational support and information policy were positively related to ease of use, (b) usefulness was positively related to satisfaction and system usage, (c) ease of use and usefulness were negatively related to user strain and (d) ease of use fully mediated the relation between organizational support and strain as well as between information policy and strain. Evidence for a question order effect was found, with higher satisfaction judgements when satisfaction was assessed after the other survey items. Results are discussed in terms of the theoretical and methodological aspects of the TAM and their implications for ESS system implementation.
© 2006 Elsevier Ltd. All rights reserved.

Keywords: Employee self-service; Technology acceptance model; System usage; Question order effect; Partial least squares analysis
Corresponding author. Tel.: +49 431 8803676; fax: +49 431 8801559.
E-mail addresses: [email protected] (U. Konradt), [email protected] (T. Christophersen), [email protected] (U. Schaeffer-Kuelz).
doi:10.1016/j.ijhcs.2006.07.001

1. Introduction

Interactive corporate e-services are increasingly being made available to employees. Employee self-service (ESS) is a group of systems used in Human Resource Management. ESS is defined as a corporate web portal that enables managers and employees to view, create and maintain relevant personnel information, e.g., benefits, payroll, vacation time and flex spending. ESS is a heterogeneous group of information systems (IS), services and applications that can be used by employees. It can be classified according to (a) the type of personnel process that is supported (administrative versus strategic) and (b) the basic channel functions that can be supported. These are:
inform (e.g., factory agreements, vacation rules), interact (e.g., access to personnel files), transact (e.g., applications for leave, travel expense claims), and deliver (e.g., payslips, training videos). The rationale of ESS is that it relieves the personnel department of the burden of recurrent tasks and empowers employees to take a more active role in personnel processes (Lengnick-Hall and Moritz, 2003; Marler and Dulebohn, 2005). Employees who are enabled to actively participate in their Human Resources processes are expected to develop positive attitudes towards the work processes and higher self-efficacy (Barki and Hartwick, 1994; Hartwick and Barki, 1994). User empowerment has been found to be an effective predictor of information technology use (Doll et al., 2003). The current body of research suggests that user acceptance is a critical determinant of the effectiveness and efficiency of an ESS. Because use of ESS is typically not mandatory, an unaccepted system will seldom be used by personnel. In this study, we examine the
relationship between characteristics of the ESS as perceived by the users and the consequences for user satisfaction, system usage and perceived strain. Furthermore, we explore the relationship between antecedent variables in the implementation process (organizational support and information policy) and usefulness and ease of use. Finally, we investigate whether user satisfaction is affected by the order in which questions are presented.

2. Modelling framework and hypotheses

Alongside several other models of technology acceptance (Venkatesh et al., 2003), the technology acceptance model (TAM; Fig. 1; Davis, 1989, 1993) is an influential model for the explanation and prediction of system user behaviour. The TAM, which is theoretically based on the theory of reasoned action (TRA) by Fishbein and Ajzen (1975), predicts technology usage and identifies determinants relevant to technology acceptance. Its application is not limited to any specific domain of human-computer interaction. According to the TAM, one of the main determinants of system usage is perceived ease of use, i.e., only if users believe a system to be easy to use will they use it. The second major influence is the usefulness of the system as perceived by the user, which refers to the user's evaluation of task enhancement when using the system. In addition, the model suggests that system usage is indirectly influenced by both perceived ease of use and perceived usefulness. All external variables that may influence the acceptance of a new technology are assumed to be mediated by these two major determinants. Only the subjective social norm is assumed to affect usage directly. Additionally, the model proposes that subjective social norm influences perceived usefulness.
The validity of the TAM has been widely demonstrated in a variety of domains, including microcomputer and personal computer usage (Igbaria et al., 1997; Brown et al., 2002), desktop videoconferencing in virtual teams (Townsend et al., 2001) and on-line consumer behaviour (Chen et al., 2002; Koufaris, 2002), and within a variety of information technologies (Dillon and Morris, 1996; Legris et al., 2003). Strong evidence for the core assumptions of the TAM has been repeatedly reported. Literature investigating the success or otherwise of IS implementation suggests that user satisfaction should be considered one major influence on technology acceptance (DeLone and McLean, 1992, 2003; Wixom and Todd, 2005). Studies show that ease of use and usefulness predict user satisfaction and system usage (Mahmood et al., 2001). In particular, empirical evidence demonstrates that the TAM serves as a valid model for predicting the usage of workplace intranet and Internet systems (Szajna, 1996; Al-Gahtani and King, 1999; Anandarajan et al., 2000; Horton et al., 2001). For example, Lederer et al. (2000) found that perceived usefulness as well as perceived ease of use predicted web usage for work-related tasks. Horton
et al. (2001) revealed that ease of use and usefulness were related to self-reported intranet use and user satisfaction. In a study on employee Internet usage, Anandarajan et al. (2000) found that usefulness was positively associated with Internet use. In accordance with the extensive body of research on the TAM, we hypothesize:

H1. The usefulness of ESS will be positively related to user satisfaction and system usage.

The model has been repeatedly extended in order to be applicable to specific domains of technology usage (e.g., Chau, 1996; Venkatesh and Davis, 2000; Chau and Hu, 2002; Schillewaert et al., 2005). The following section introduces the theoretical basis for a proposed modified version of the TAM, which is used for the prediction of ESS usage within this study.

2.1. Occupational strain

Reviews of the literature have confirmed the importance of computer usage with respect to people's health and wellbeing (Coovert and Thompson, 2002; Hamborg and Greif, 2003). Although the TAM has seldom been applied in occupational health studies, the conceptual similarities between ease of use and stressors in work stress models (e.g., Sonnentag and Frese, 2003, for a review) are evident. First, human-computer interaction research has revealed that mental models help users to learn and control software (Carroll and Olson, 1988). Operating computer systems that are not easy to use should require higher cognitive effort because expectations based on the user's mental model of the system are not met. This can result in negative consequences such as user errors, user frustration and aversive stress reactions (e.g., Otter and Johnson, 2000; Coyle and Gould, 2002). Second, computer systems have become ubiquitous in organizational settings, where employees are normally unable to refuse to use systems entirely but can only reduce their frequency of use.
As a consequence, users who are exposed to systems with poor ergonomic design or are faced with system malfunctions will feel a loss of control if they are unable to avoid them (Allwood and Thomee, 1998; Venkatesh, 2000). IS success literature suggests an influence of emotional aspects on

Fig. 1. Technology acceptance model (Davis, 1989). (Constructs shown: external variables, perceived usefulness, perceived ease of use, subjective social norm, system usage.)
technology acceptance (McCalla et al., 1993). Finally, unpleasant emotional reactions, such as frustration, loss of confidence and anger, may emerge that have detrimental effects on performance (Brave and Nass, 2003; Partala and Surakka, 2004). Taken together, these findings suggest the following hypothesis:

H2. Ease of use and usefulness of ESS will be negatively related to perceived strain.

2.2. Organizational and user support in implementation processes

Organizations can provide multiple ways to help their employees adapt better to IS. As discussed by Fishbein and Ajzen (1975), organizational support is an important factor that influences an individual's perceptions of and attitudes towards IS. In the context of our study, organizational support is defined as the extent to which top and middle management allocate adequate resources to help employees achieve organizational goals, e.g., by providing training and technical support facilities (Mirvis et al., 1991; Grover, 1993). Lack of support from management has been considered a barrier to effective IS usage (Guimaraes and Igbaria, 1997; Phelps and Mok, 1999; Sharma and Yetton, 2003). Accordingly, positive relationships have been found between organizational support and computer usage (Fuerst and Cheney, 1982; Igbaria et al., 1997; Sharma and Yetton, 2003). First, individuals who are encouraged by top management and receive adequate training as well as working resources are more motivated to explore the system without worrying about the negative consequences of system failure; thus, it can be argued that the perceived risk of errors will be reduced. Second, in comparison with employees who receive less support, user satisfaction will be enhanced through greater competency in system usage and a greater belief in the helpfulness of the system functions.
Individual competence and the allocation of adequate resources would reduce the time required to accomplish a computer-supported task and also enhance the benefits gained from its usage (Mahmood et al., 2000). Finally, organizational behaviour research has consistently shown that perceived organizational support
(Eisenberger et al., 1986) is linked to positive employee behaviour and feelings, including additional effort, feelings of obligation and greater loyalty for a given level of social and material rewards (Eisenberger et al., 1990). A construct related to organizational support is information policy. We define information policy as an organization's strategy for communicating its principles and priorities of information usage, and for establishing which information management principles are relevant to the organization's goals relating to cost-effectiveness, knowledge management and organizational culture. Compared with user involvement, which is defined as the subjective psychological state reflecting the importance and personal relevance that users attach to their organization (Barki and Hartwick, 1994), information policy will contribute to employees' expectations regarding system usage. Influenced by studies on user participation, involvement and behavioural engagement (Ives and Olson, 1984; Barki and Hartwick, 1994), we assume information policy to be an integral factor in system usage. Therefore, in extending the TAM framework by including antecedent variables, it was hypothesized:

H3. Organizational support and (a prospective) information policy will be positively related to system usage and user satisfaction, and negatively related to perceived strain.

H4. Ease of use mediates the relationship between organizational support and information policy on the one hand and perceived strain on the other.

The research model examined in this study is illustrated in Fig. 2. In order to test Hypothesis 4, which proposes a mediation effect of ease of use, competing models have to be tested which differ from the basic research model by having additional paths leading from organizational support and information policy to strain, assuming partial mediation by ease of use. In Fig. 2, these paths are indicated by dashed lines.
The full model comprises a complete set of associations between antecedent factors (organizational support and information policy), beliefs and attitudes about the ESS (ease of use, usefulness) and effects (system usage, user satisfaction and strain).
Fig. 2. Research model. Note: The original constructs of the TAM are written in italic type. Additional paths were added for further model explication. The dashed lines indicate the paths added to examine a mediation effect of ease of use on strain, as proposed in Hypothesis 4.
2.3. Context effects

In management information systems (MIS) research, self-reported measures are usually used to explore expectations regarding the system and their influence on user satisfaction and system usage. Previous research has questioned the applicability of subjective measures (Straub et al., 1995; Horton et al., 2001). For example, Horton et al. (2001) revealed that the association of ease of use and usefulness with self-reported intranet usage was low, and that the relationship decreased when objective measures of usage were used. Comparisons showed a weak relationship between subjective and objective measures of system usage (Straub et al., 1995). Although differences were found between system usage measured by subjective judgements and by objective, computer-recorded login times (Straub et al., 1995), methodological artefacts have not often been considered in the previous TAM literature. However, survey studies have found clear empirical evidence for question order effects in self-administered surveys (Sudman et al., 1996; Schwarz et al., 1998), especially when attitudinal judgements (e.g., satisfaction) are made (Schuman and Presser, 1981; Schuman, 1992; Narayan and Krosnick, 1996). Context-related questions and temporarily accessible information may influence the outcome in that the judgement is more positive (negative) when positive (negative) information comes to mind (Sudman et al., 1996). Besides this assimilation effect, prior information can also be excluded from answer formation rather than included, which would lead to contrast effects. Schwarz and Bless (1992) assume that assimilation effects are more likely to appear in survey research than contrast effects. Thus, preceding questions may increase the cognitive accessibility of relevant aspects and may influence subsequent judgements (Schwarz et al., 1998; Tourangeau et al., 2003). For example, in a series of experimental studies, Oishi et al.
(2003) demonstrated that individuals who were primed with specific categories of life satisfaction based their judgements of life satisfaction more heavily on these aspects. All in all, these studies highlight the contextually sensitive nature of satisfaction judgements. Based on this literature, we hypothesize:

H5. Respondents will show different user satisfaction scores when user satisfaction is assessed at the beginning of the survey compared to a later position.

3. Method

3.1. Sample

Participants were 517 employees of a chemical company that used an ESS system implemented in the corporate intranet. Detailed information on how the sample was drawn from the population and whether it can be considered representative is given in the Procedures and Participant respondent analysis sections, respectively.
The sample consisted of 31.9% females and 68.1% males, with an average age of 39.4 years (SD = 8.6). In total, 34 participants (6.6%) had a basic secondary school education, 122 (23.6%) had successfully completed 10th grade, 129 (25.0%) had graduated from high school, and 214 (42.9%) had a university degree. The largest group of respondents (45.6%) characterized themselves as advanced IS users, 32.1% as experts, 19.1% as normal experienced IS users and 6 participants (1.2%) as beginners. Respondents' actual Internet usage varied widely, with an average of 10.5 h/week (SD = 13.5). Most of the participating employees (73.7%) were executive workers, 4.6% worked in supportive positions, 9.5% were skilled workers, and the remaining 9.9% held a leading or managing position.

3.2. Formative and reflective measures

When assessing a latent construct, a measurement model has to be developed to make the construct observable through indicators. Although this distinction is widely disregarded (Jarvis et al., 2003), latent constructs differ in their relationship to their indicators (Diamantopoulos and Winklhofer, 2001). The common approach to assessing variables follows the standard of classical test theory (Lord and Novick, 1968). Classical test theory is based on the assumption that all indicators are caused by the latent construct and, hence, the indicators reflect the value of the construct. Consequently, this kind of measurement model is termed reflective. As all indicators share the same core, high correlations between them are expected. In order to achieve high internal consistency of the scale, indicators that correlate less with other indicators of the same construct can be eliminated without changing the content (Churchill, 1979). A formative model, on the other hand, differs from a reflective model in the causal relation between indicators and construct (Bollen and Lennox, 1991).
In a formative model, the indicators are assumed to determine the latent construct. Since each observable indicator may represent a different aspect, no assumption about the correlations between formative indicators can be made. High correlations are possible, but not necessarily expected. Thus, it is not appropriate to eliminate indicators in formative measurement models, because neglecting a single indicator with relevance to the construct would change the meaning of the construct (Diamantopoulos and Siguaw, 2002). Consequently, any measure of scale reliability that is based on high correlations between the indicators (e.g., Cronbach's alpha) is not appropriate for formative scales. Studies have indicated that an incorrect specification of the measurement model can result in severe biases in the structural parameters of the assumed theoretical model, and lead to wrong interpretations of effect sizes and/or the significance of relationships among constructs (e.g., Law et al., 1998; Jarvis et al., 2003).
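For reflective scales, where high inter-item correlations are expected, Cronbach's alpha can be computed directly from the item scores. The following sketch (not the authors' code; the data are hypothetical) illustrates the standard variance-based formula, together with the standardized alpha implied by the inter-item correlation of a two-item scale:

```python
def cronbach_alpha(items):
    """Cronbach's alpha from a list of item-score columns.

    Appropriate only for reflective scales, where all indicators are
    assumed to share a common cause; for formative scales this
    statistic is not meaningful (see text).
    """
    k = len(items)

    def var(xs):  # unbiased sample variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    # Total score per respondent = sum across items
    totals = [sum(scores) for scores in zip(*items)]
    return k / (k - 1) * (1 - sum(var(col) for col in items) / var(totals))

def two_item_alpha(r):
    """Standardized alpha for a two-item scale, equivalently the
    Spearman-Brown step-up of the inter-item correlation r."""
    return 2 * r / (1 + r)

# Hypothetical responses from four participants on two items:
print(round(cronbach_alpha([[1, 2, 3, 4], [2, 4, 6, 8]]), 2))  # → 0.89
```

Note that for two-item scales the standardized alpha computed from the inter-item correlation need not coincide with the raw-score alpha.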
3.3. Measures

All items were measured on a 7-point Likert-type scale from 1 ('strongly disagree') to 7 ('strongly agree'), except for the measurement of user satisfaction and system usage. Formative indicator items were generated by the authors.

Organizational support: Due to the nature of the latent construct, a formative measurement model was selected for organizational support. The construct was measured with two items, 'I am satisfied with the available hotline for problems in using the ESS' (os1) and 'I am satisfied with the on-line help functions of the ESS' (os2).

Information policy: Information policy was assessed as a latent construct with two items, 'I was informed about the implementation of the ESS early enough' (ip1) and 'I received sufficient information about the implementation of the ESS' (ip2).

Usefulness: Usefulness was based on a reflective scale used in different studies (e.g., Venkatesh, 2000; Venkatesh and Davis, 2000). Reflective items included 'The ESS makes it easy to perform administrative tasks' (u1), 'The ESS allows me to perform administrative tasks faster' (u2), 'The ESS increases the effectiveness of performing administrative tasks' (u3), and 'The ESS is useful for the performance of administrative tasks' (u4). In the current study, the scale showed a high reliability of 0.93 (Cronbach's alpha).

Ease of use: The four reflective items for ease of use were adopted from Davis' (1989) original scale. The items were 'The usage of the ESS can be learned easily' (eos1), 'The ESS is easy to use' (eos2), 'It is easy to become familiar with the ESS' (eos3), and 'The interaction with the ESS is clear and easy to understand' (eos4). The scale reliability in the current study was 0.96 (Cronbach's alpha).

User satisfaction: User satisfaction was measured by two reflective items, which differed in their format.
The first item, 'Overall, how satisfied are you with the ESS?' (us1), was answered on a faces scale (Dunham and Herman, 1975) using faces that express extreme satisfaction (coded as '7') to extreme dissatisfaction (coded as '1'). The second indicator was taken from a list of items associated with the question 'Which consequences are associated with the ESS from your point of view?'. The item 'Increase in my satisfaction' (us2) was answered on a 7-point Likert-type scale ranging from 1 ('fully disagree') to 7 ('fully agree'). The other items in the list served only to further evaluate the ESS and were therefore not used in the analysis. In the current study, a Spearman correlation of 0.32 was observed between the two indicators of user satisfaction (alpha = 0.41). Although this indicates insufficient internal consistency of the user satisfaction scale, the scale was retained for data analysis because the partial least squares computation showed a composite reliability of 0.80 (see Table 2).

Strain: Strain was conceived as a reflective construct with two items, 'While using the ESS, I feel strained'
(st1) and 'Doing administrative tasks with the ESS placed high demands on my concentration' (st2) (alpha = 0.80 in the current study).

Use: System usage was measured on a subjective level by the single item 'Overall, I use the ESS …' (su) with a 7-point Likert-type scale ranging from 1 ('never') to 7 ('very often'). Single-item measures are often regarded as inappropriate because of reliability restrictions (Churchill, 1979). However, Rossiter (2002) argued that system usage is a concrete and completely clear-cut construct; additional items would change the conceptual meaning and result in a loss of content validity. Likewise, empirical evidence has demonstrated sufficient reliability and validity of single-item measures with both attitudinal and behavioural scales (e.g., Wanous et al., 1997; Li et al., 2000; Wanous and Hudy, 2001).

3.4. Procedures

An on-line questionnaire was distributed to 1541 employees of a company who voluntarily used ESS via the company's intranet. The data quality of electronic surveys has been shown to be no worse than that of traditional data collection methods (e.g., Boyer et al., 2002). The ESS offered a broad range of services, including the verification and modification of personnel information (e.g., home addresses and bank accounts), the submission of different kinds of applications (e.g., for holidays or flexible working hours), and travel expense accounting. The questionnaire was sent to the participants together with a cover letter. To explore the question order effect, half of the participants received a version of the questionnaire in which the satisfaction item us1 was positioned at the beginning, while for the other half this item was located at the end of the questionnaire.

4. Results

The results section is structured as follows.
First, response-related information is presented, followed by descriptive results, which include the correlations between the major constructs of the research model. Subsequently, four subsections focus on the results for the proposed hypotheses, including an introduction on how the analysis of the hypotheses was carried out.

4.1. Participant respondent analysis

A total of 465 employees responded to the first mailing; one week later, a reminder was sent to all non-respondents via email, to which another 52 persons responded. Early and late respondents did not differ in their assessments of ease of use, usefulness, user satisfaction or strain (all t(515) < 1.45, n.s.). Likewise, the proportions of early and late respondents did not differ significantly in education, occupational position, age or sex.
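The early-versus-late comparison reported above is a standard pooled two-sample t test with df = n1 + n2 − 2 (here 465 + 52 − 2 = 515). A minimal sketch with hypothetical scores (not the study's data):

```python
def pooled_t(x, y):
    """Two-sample t statistic with pooled variance, and its degrees of freedom."""
    n1, n2 = len(x), len(y)
    m1, m2 = sum(x) / n1, sum(y) / n2
    v1 = sum((a - m1) ** 2 for a in x) / (n1 - 1)
    v2 = sum((b - m2) ** 2 for b in y) / (n2 - 1)
    sp2 = ((n1 - 1) * v1 + (n2 - 1) * v2) / (n1 + n2 - 2)  # pooled variance
    t = (m1 - m2) / (sp2 * (1 / n1 + 1 / n2)) ** 0.5
    return t, n1 + n2 - 2

# Hypothetical early/late ratings with identical means give t = 0.0, df = 6:
t, df = pooled_t([4, 5, 6, 5], [5, 4, 6, 5])
```

A t value below the critical value for the given df (as here, all t(515) < 1.45) indicates no significant mean difference between the groups.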
Hence, the lack of differences between early and late respondents indicates no non-response bias (Dooley and Linder, 2003). The overall response rate was 33.9% (n = 522). Five cases with extensive missing data (more than 70%) were removed from further analysis, reducing the sample from 522 to 517 cases. The main results, however, were essentially the same when the analysis was repeated with listwise deletion. The analysis revealed no significant differences in demographic criteria relative to the employee population, thus indicating no self-selection bias.

4.2. Descriptive statistics

The descriptive statistics and correlations between the latent variables and demographics are provided in Table 1. As expected, ease of use and usefulness both held significant bivariate correlations with system usage (r = 0.15, p < 0.01, and r = 0.25, p < 0.001, respectively) and user satisfaction (r = 0.33, p < 0.001, and r = 0.42, p < 0.001, respectively). Furthermore, strain was significantly negatively related to ease of use (r = −0.55, p < 0.001) and usefulness (r = −0.14, p < 0.01). Finally, the demographic variables were related in the expected directions. IS experience was significantly positively related to Internet usage (r = 0.23, p < 0.001), level of education (r = 0.23, p < 0.001) and position held within the company (r = 0.26, p < 0.001), and was significantly negatively related to both age (r = −0.19, p < 0.001) and gender (r = −0.22, p < 0.001), with males stating a higher level of IS experience than females. The position held within the company was positively related to level of education (r = 0.29, p < 0.001) and age (r = 0.17, p < 0.001), and negatively related to gender (r = −0.29, p < 0.001), with males holding higher positions within the company than females. Thus, these correlations show plausible patterns among the demographic variables.

Table 1
Descriptive statistics of latent variable scores and sample characteristics (intercorrelations are reported in the text)

Variable                        No. of items   Mean    SD
1.  Organizational support      2              4.91    1.32
2.  Information policy          2              5.17    1.61
3.  Usefulness                  4              5.63    1.28
4.  Ease of use                 4              5.86    1.08
5.  User satisfaction           2              5.49    0.98
6.  Strain                      2              2.05    1.37
7.  System usage                1              5.17    1.30
8.  Internet usage (h/week)     1              10.46   13.51
9.  IS experience a             1              —       —
10. Level of education b        1              —       —
11. Occupational position c     1              —       —
12. Age                         1              39.41   8.56
13. Gender d                    1              0.32    —

a Beginner (1), normal experienced user (2), advanced user (3), expert (4). b Primary school education (1), secondary school (2), higher secondary school grade level (3), university (4). c Supportive worker/assistant (1), skilled worker (2), executive (3), leader/management position (4). d Male (0), female (1).

4.3. Evaluation of Hypotheses 1–3

Since bivariate zero-order correlations do not take the multivariate relations among variables into account, we performed PLS structural equation modelling to test Hypotheses 1–4, using PLS Graph 3.0 (Chin, 2001). PLS allows the inclusion of both reflective and formative measures in a single analysis, and the simultaneous estimation of the measurement model and the structural model (Tenenhaus et al., 2005). Tests of significance were conducted using the bootstrap resampling procedure (Efron and Tibshirani, 1986). Missing data (on average 7.2%) were handled by a regression-based multiple imputation technique (Schafer, 1997); reviews have shown that this technique is a reliable method for estimating missing data (Schafer and Graham, 2002; Graham et al., 2003). Prior to analysing the structural parameters of the PLS model, the measurement model was tested.
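The bootstrap procedure resamples respondents with replacement and re-estimates the model on each resample, using the spread of the resampled estimates for significance testing. A minimal sketch of the idea for a single path, using a simple OLS slope in place of the full PLS model (hypothetical data; not the authors' code):

```python
import random

def ols_slope(x, y):
    """Least-squares slope of y on x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((a - mx) ** 2 for a in x)
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    return sxy / sxx

def bootstrap_ci(x, y, reps=200, seed=1):
    """Percentile 95% bootstrap confidence interval for the slope."""
    rng = random.Random(seed)
    n = len(x)
    slopes = []
    for _ in range(reps):
        idx = [rng.randrange(n) for _ in range(n)]  # resample cases with replacement
        xs = [x[i] for i in idx]
        if max(xs) == min(xs):
            continue  # degenerate resample: slope undefined
        slopes.append(ols_slope(xs, [y[i] for i in idx]))
    slopes.sort()
    return slopes[int(0.025 * len(slopes))], slopes[int(0.975 * len(slopes))]

# With a noiseless linear relation, every resampled slope is identical,
# so the interval collapses onto the true coefficient:
x = list(range(20))
lo, hi = bootstrap_ci(x, [2 * v for v in x])
```

A path is judged significant when the bootstrap interval excludes zero; dedicated PLS software re-estimates the entire measurement and structural model on each resample rather than a single regression.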
Table 2
Specifications of the outer model for the estimated PLS model

Construct               Indicator   Weight              Loading   AVE    Composite reliability
Organizational support  os1         0.44 (p < 0.05)     —         —      —
                        os2         1.00 (p < 0.001)
Information policy      ip1         0.62 (p < 0.05)     —         —      —
                        ip2         0.42 (n.s.)
Usefulness              u1          —                   0.82      0.82   0.95
                        u2                              0.95
                        u3                              0.92
                        u4                              0.92
Ease of use             eou1        —                   0.94      0.89   0.97
                        eou2                            0.95
                        eou3                            0.96
                        eou4                            0.92
User satisfaction       us1         —                   0.87      0.66   0.80
                        us2                             0.76
Strain                  s1          —                   0.94      0.83   0.91
                        s2                              0.89
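The AVE and composite reliability values in Table 2 follow directly from the standardized indicator loadings. A minimal sketch of the standard formulas for PLS measurement models, which assume uncorrelated measurement errors:

```python
def ave(loadings):
    """Average variance extracted: mean squared standardized loading."""
    return sum(l * l for l in loadings) / len(loadings)

def composite_reliability(loadings):
    """Composite reliability from standardized loadings,
    assuming uncorrelated measurement errors."""
    s = sum(loadings)
    error = sum(1 - l * l for l in loadings)
    return s * s / (s * s + error)

# The ease-of-use loadings from Table 2 reproduce the reported values:
eou = [0.94, 0.95, 0.96, 0.92]
print(round(ave(eou), 2), round(composite_reliability(eou), 2))  # → 0.89 0.97
```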
The analysis reveals adequate measurement properties for the estimated model. An overview of the parameters of the outer model is given in Table 2. The loadings of the reflective indicators were higher than the recommended threshold of 0.71 (cf. Barclay et al., 1995), with most loadings above 0.9. As weights are not relevant for assessing measurement quality in the case of reflective constructs, these values are omitted from Table 2. Reliability analysis revealed high internal consistency of the items, with composite reliabilities above 0.91, except for user satisfaction (0.80). Composite reliability is a standard measure for evaluating PLS measurement models (Werts et al., 1974). In comparison to Cronbach's alpha, which is rather a lower-bound estimate, composite reliability is a closer approximation of the reliability of a measure under the assumption of accurate parameter estimates (Chin, 1998). For each variable, the square root of the average variance extracted (AVE) is above the correlations between the construct and the other constructs of the model. Hence, more variance is shared between the component of each latent variable and its block of indicators than with any other component representing a different block of indicators. This indicates adequate discriminant validity of the measures (Chin, 1998, p. 321). The formative indicators showed weights above 0.1. Although the indicator ip2 for the assessment of information policy had a non-significant t-value (t = 1.84, n.s.), it was not eliminated because, in prediction models, the value of information is higher than noise if the t-value lies above 1 (cf. Hansen, 1987). In Fig. 3, the parameter estimates for the assumed relationships are illustrated. Ease of use was positively
related to both usefulness (β = 0.61, p < 0.001) and user satisfaction (β = 0.16, p < 0.05). The estimated path coefficient between ease of use and system usage was non-significant (β = 0.06, n.s.). Usefulness revealed a positive relation to both user satisfaction and system usage (β = 0.37, p < 0.001, and β = 0.32, p < 0.001, respectively). Hence, Hypothesis 1 was confirmed. Furthermore, both ease of use and usefulness were negatively related to strain (β = −0.17, p < 0.01, and β = −0.44, p < 0.001, respectively). Thus, Hypothesis 2 was supported. User satisfaction also predicted system usage (β = 0.15, p < 0.05). The structural path between strain and user satisfaction turned out to be non-significant (β = 0.12, n.s.). The prediction of usefulness (R² = 0.42) can be interpreted as high (cf. Chin, 1998). Likewise, the independent variables organizational support and information policy predicted a considerable amount of variance in ease of use (R² = 0.23). This result was also found for the dependent variables system usage (R² = 0.15), user satisfaction (R² = 0.20) and strain (R² = 0.32). A supplementary model estimation based on normalized data (cf. Tenenhaus et al., 2005, p. 161) widely confirmed the results obtained using the default settings of PLS-Graph 3.0 (Chin, 2001). Thus, Hypotheses 1–3 were confirmed.

4.4. Evaluation of Hypothesis 4

In the research model, ease of use fully mediates the relation between organizational support and strain. In order to test Hypothesis 4, we compared the full-mediation model (Model 1) with a partial-mediation model (Model 2) and a no-mediation model (Model 3). In Model 2, an
[Fig. 3 appeared here: a path diagram of the estimated PLS model, linking Organizational support and Information policy to Ease of use (R² = 0.23), and Ease of use to Usefulness (R² = 0.42), with paths to User satisfaction (R² = 0.20), System usage (R² = 0.15) and Strain (R² = 0.32); the individual path coefficients are reported in the text.]
Fig. 3. Results of the estimated PLS-Model. Note: *p < 0.05; **p < 0.01; ***p < 0.001.
additional path between organizational support and strain was drawn, assuming partial mediation, which is indicated in Fig. 2 by a dashed line. In contrast, Model 3 includes an additional path between organizational support and strain, but the originally assumed path between ease of use and strain is omitted. Hence, in this model no mediation effect is assumed for the relationship between organizational support and strain. The same procedure was followed to investigate a potential mediation effect of ease of use between information policy and strain. We will not describe the procedure for testing this latter effect in detail, as it does not differ from the one already illustrated. PLS does not provide a global goodness-of-fit measure indicating how well the whole model fits the data. Thus, the change in the determination coefficient R² was used to decide whether the modified models result in better predictions of strain (cf. Herrmann et al., 2004). The parameter f² was computed as the ratio of the R² difference between the modified and the original model to 1 − R² of the original model (Gefen et al., 2000). A small, a medium and a large effect are represented by the values 0.02, 0.15 and 0.35, respectively (Chin, 1998). The PLS model with partial mediation by ease of use between organizational support and strain (Model 2) results in R² = 0.325 for strain, instead of R² = 0.321 in the case of the original model. Accordingly, the inclusion of a path from organizational support to strain resulted in no increase in explained variance in the dependent variable (f² = 0.01). Model 3, which assumed no mediating effect of ease of use, revealed a smaller determination coefficient of 0.221 for strain, with a corresponding f² = 0.13, while the path leading from organizational support to strain showed a significant path coefficient (β = 0.13, p < 0.01).
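The f² computation described above reduces to a one-line formula. A minimal sketch, using the R² values reported for the strain models (note that with this scaling the Model 3 effect comes out near 0.15 rather than the reported 0.13, suggesting a slightly different denominator was used there):

```python
# f^2 effect size for comparing nested PLS models (Gefen et al., 2000):
# R^2 difference between modified and original model, scaled by
# 1 - R^2 of the original model.

def f_squared(r2_modified, r2_original):
    return (r2_modified - r2_original) / (1 - r2_original)

# Model 2 (partial mediation) vs. Model 1 (full mediation):
print(round(f_squared(0.325, 0.321), 2))       # → 0.01, a negligible gain

# Model 3 (no mediation) loses explained variance relative to Model 1:
print(round(abs(f_squared(0.221, 0.321)), 2))  # small-to-medium effect
```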
In conclusion, Model 1, which includes a full mediation of the relationship between organizational support and strain, had the best fit for the underlying data. The results of testing the mediation effect of ease of use on the relationship between information policy and strain show a similar structure. An additional path between information policy and strain does not lead to any changes in the amount of explained variance for strain. Also, as in Model 3, omitting
the path between ease of use and strain results in a smaller R² = 0.220 for strain. This result indicates a full mediation, as proposed in the basic research model. Thus, Hypothesis 4 was confirmed.

4.5. Evaluation of Hypothesis 5

Hypothesis 5, which predicted that user satisfaction judgements presented early in the questionnaire would differ from those presented later, was supported. The group with the late item position (M = 5.82, sd = 0.68) was more satisfied with the ESS than the group with the early item position (M = 5.59, sd = 0.85), t(515) = 3.36, p < 0.001. According to Cohen (1988), the effect size for the question order effect (d = 0.30) fell into the small range. Because data were pooled across the two treatments of prior satisfaction measurement (n = 269) and later satisfaction measurement (n = 248), we ran two additional PLS models to test whether the observed order effect affects the path coefficients. Both models differed from the original research model in that the second indicator for user satisfaction (us2) was not included in the measurement model, as its position within the questionnaire was held constant in both subsamples. According to Chin (2000), the appropriate way to compare the results of different samples is to calculate t-tests based on the standard errors for the structural paths. The standard error estimates are provided by running the bootstrap resampling procedure. The formula for the t-test is

  t = (path_sample1 − path_sample2) / { sqrt[ ((m − 1)² / (m + n − 2)) · s.e.²_sample1 + ((n − 1)² / (m + n − 2)) · s.e.²_sample2 ] · sqrt(1/m + 1/n) },

where m and n denote the sizes of subsamples 1 and 2. The degrees of freedom are calculated by the following equation (Chin, 2000):

  df = round[ (s.e.²_sample1 + s.e.²_sample2)² / ( s.e.⁴_sample1 / (m + 1) + s.e.⁴_sample2 / (n + 1) ) ] − 2.
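Chin's multi-group comparison is easy to reproduce from the tabled values. A sketch, applied to the first path in Table 3 (usefulness → user satisfaction; the Satterthwaite-style df follows Chin's formula as reconstructed above):

```python
import math

# Chin's (2000) parametric test for comparing a structural path across
# two subsamples, using the bootstrap standard errors of each path.

def path_t(path1, se1, m, path2, se2, n):
    pooled = math.sqrt(((m - 1) ** 2 / (m + n - 2)) * se1 ** 2
                       + ((n - 1) ** 2 / (m + n - 2)) * se2 ** 2)
    return (path1 - path2) / (pooled * math.sqrt(1 / m + 1 / n))

def satterthwaite_df(se1, m, se2, n):
    num = (se1 ** 2 + se2 ** 2) ** 2
    den = se1 ** 4 / (m + 1) + se2 ** 4 / (n + 1)
    return round(num / den) - 2

# Usefulness -> user satisfaction, subsample 1 (n=269) vs. subsample 2 (n=248):
t = path_t(0.339, 0.059, 269, 0.217, 0.123, 248)
print(round(t, 3))  # → 0.918, matching the t value reported in Table 3
```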
Table 3
Path coefficients in PLS models with early position (subsample 1) and late position (subsample 2) of user satisfaction

Path                                Subsample 1 (n = 269)    Subsample 2 (n = 248)    t
                                    b        S.E.            b        S.E.
Usefulness → user satisfaction      0.339    0.059           0.217    0.123           0.918
User satisfaction → usage           0.243    0.072           0.389    0.054           1.604
Strain → user satisfaction          0.023    0.038           0.189    0.110           1.475
Ease of use → user satisfaction     0.245    0.086           0.250    0.131           0.032
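The S.E. columns in Table 3 come from bootstrap resampling. A generic sketch of the procedure, with a simple OLS slope standing in for a PLS path estimate (the data here are illustrative, not from the study):

```python
import random
import statistics

# Bootstrap standard error of a coefficient: resample cases with
# replacement, re-estimate the coefficient, and take the standard
# deviation of the estimates across resamples.

def slope(xs, ys):
    mx, my = statistics.mean(xs), statistics.mean(ys)
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den

def bootstrap_se(xs, ys, n_boot=500, seed=1):
    rng = random.Random(seed)
    idx = range(len(xs))
    estimates = []
    for _ in range(n_boot):
        sample = [rng.choice(idx) for _ in idx]  # resample cases with replacement
        estimates.append(slope([xs[i] for i in sample],
                               [ys[i] for i in sample]))
    return statistics.stdev(estimates)

# toy data with a positive relationship
rng = random.Random(2)
xs = [i / 10 for i in range(50)]
ys = [0.4 * x + rng.uniform(-0.5, 0.5) for x in xs]
print(round(bootstrap_se(xs, ys), 3))
```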
Table 3 shows that the differences in the structural paths of the research model which involve user satisfaction are non-significant. Thus, the separate analyses of the subsamples indicate that the response order effect does not affect the structural model paths.

5. Discussion

The purpose of this study was to explore attitudinal and behavioural patterns of ESS system use. Based on an extended TAM, we explored the relationship between organizational support and information policy on the one hand and ease of use and usefulness on the other, and the relationship of ease of use and usefulness with user satisfaction, strain and system use. Overall, the results confirm the research model and the hypotheses. Our major finding is that the TAM adequately predicts user satisfaction and ESS usage. Horton et al. (2001) found that the TAM appeared to be specifically suitable for modelling intranet usage in a more prescribed and structured environment with regular and consistent usage. Since, in our study, the ESS was regularly used by employees, the results provide support for this notion. Second, and consistent with our hypothesis, the organizational-level variables of organizational user support and information policy were related to ease of use, which, in turn, was associated with perceived usefulness and system use. These results are consistent with research findings suggesting that organizational support and information policy impact the ability to achieve both individual and organizational goals (Fuerst and Cheney, 1982; Igbaria et al., 1997; Sharma and Yetton, 2003). As discussed by Sharma and Yetton (2003), the MIS literature in general suggests strong effects of organizational and management support on information technology implementations without much empirical data being available in support of such a conjecture. Thus, our result adds empirical support for this relationship.
Studies on occupational health behaviour have also revealed the negative effect of poorly designed work equipment and software on strain (Coovert and Thompson, 2002). Our results strongly support the hypothesis of a negative association of ease of use and usefulness with perceived strain. Moreover, consistent with our hypothesis, ease of use fully mediated the relationship between organizational support and strain. Researchers have argued that web tools are very easy to use (Childers
et al., 2001; Magal and Mirchandani, 2001), which would imply a lower strength of ease of use in predicting system use. On the contrary, our results show that ease of use was a stronger predictor of strain than usefulness. Thus, strain could be viewed as a more sensitive indicator of a person-task mismatch. While the TAM is found to be a useful model for predicting system acceptance and system use, extending the TAM by strain offers an integration of information system adoption into broader concepts of human resource management and change management processes (see Legris et al., 2003). Further research should focus on identifying factors leading to reduced strain, such as enhanced user skills, greater involvement, improved coping competencies and reduced computer anxiety, as well as organizational variables, i.e., climate for innovation and organizational receptivity (Klein and Sorra, 1996; Klein et al., 2001). In addition, specific sources of stress should be isolated; for example, increased stress due to additional tasks, increased insecurity and loss of control (Semmer, 1984). Finally, over and above the long-term judgements of strain covered in this study, short-term stress reactions to computer malfunctions and lack of ease of use should be analysed. Mahmood et al. (2000) argued that user satisfaction with information technology has widely been accepted as an indicator of MIS success. We found evidence for a question order effect, with higher satisfaction ratings when user judgements were assessed after the other items. However, the response order effect does not affect the structural model paths. Schwarz et al. (1998) argue that context effects reflect a general context dependency of human judgements and that there is no simple way to avoid them.
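The size of this question order effect can be recomputed from the group statistics reported in Section 4.5 (late position: M = 5.82, sd = 0.68, n = 248; early position: M = 5.59, sd = 0.85, n = 269). A sketch of the independent-samples t and Cohen's d:

```python
import math

# Cohen's d and independent-samples t from summary statistics,
# using the pooled standard deviation.

def pooled_sd(sd1, n1, sd2, n2):
    return math.sqrt(((n1 - 1) * sd1 ** 2 + (n2 - 1) * sd2 ** 2) / (n1 + n2 - 2))

sp = pooled_sd(0.85, 269, 0.68, 248)
d = (5.82 - 5.59) / sp
t = (5.82 - 5.59) / (sp * math.sqrt(1 / 269 + 1 / 248))

print(round(d, 2))  # → 0.3, the small effect reported
print(round(t, 2))  # ≈ 3.38, close to the reported t(515) = 3.36
```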
The susceptibility of survey responses to variations in question order suggests that investigators conducting surveys with attitude questionnaires should pay attention to issues of item context when designing their instruments (Sudman et al., 1996). For reasons of organizational policy, objective computer-based usage measures are often not available to researchers. Thus, the conditions that give rise to a methodological bias of subjective measures need to be further examined (e.g., Schwarz et al., 1998). To our knowledge, this is the first time that question order effects have been addressed in the context of the TAM. Additional work is required not only to replicate the present findings, but to
identify the processes leading to judgemental biases. More knowledge about these processes would facilitate the development of refined methods to measure attitudes in MIS field settings. As a simple strategy, varying the order of items will help researchers to reveal method effects.

5.1. Limitations and recommendations for future research

A limitation of this research is that our data are cross-sectional rather than longitudinal in nature. Thus, we cannot decide whether acceptance and usage are affected by expectations or vice versa. However, longitudinal data have shown that TAM variables also satisfactorily predict system usage (Venkatesh and Davis, 2000). Moreover, it may be seen as a limitation that this research relied on questionnaire data, which is limited in various ways. As proposed by ecological research approaches and ethnographic research (e.g., Bate, 1997), sampling behavioural data in organizational settings by using the observations or utterances of the users is more closely related to users' motivational processes. Without doubt, ecological field studies may help to uncover the dynamics of system appropriation more thoroughly and may improve our understanding of IS system adoption. However, data collection is much more time-consuming and it is much more difficult to find co-operative partners. Thus, rather than deciding which research methods are most appropriate for MIS research, we agree with other scholars who have advocated the benefits of combining both methods (e.g., Kaplan and Duchon, 1988; Mingers, 2001). Finally, self-reported measures of usage are prone to idiosyncratic biases and were generally found to differ from the true score of system usage (Straub et al., 1995). Thus, computer-recorded login times would be preferable.
Given the empirical findings that ease of use and usefulness were in fact related to measures of self-reported use, but were unrelated to future short-term and long-term usage (Horton et al., 2001), we might have over-estimated the actual usage in our study. Because it is possible that common method variance accounts for this result, the association should be replicated in follow-up studies in which different data sources are used, including log-file data or asking colleagues and superiors.
5.2. Managerial implications

By empowering employees through ESS, organizations relieve their HR personnel of routine tasks and allow them to concentrate on more strategic activities. Companies expect that new technologies will not only increase the productivity of their business processes, but also improve employee participation and satisfaction (e.g., Larsen, 2003; Marler and Dulebohn, 2005). Thus, one of the biggest challenges of any self-service initiative is to gain user acceptance and system adoption. The results of this study suggest that acceptance and adoption are mainly determined by the usefulness of the services and the ease with which users can work with the system. Both are negatively related to perceived user strain. When implementing an ESS system, it is therefore essential to focus on both usefulness and ease of use. First of all, an ESS system should offer added benefits to employees. To create a truly useful ESS, it is important to provide processes that help employees in their daily work. In order to encourage regular usage of the system, it is vital to identify the applications which are of the greatest benefit to the employees; the implementation process should start with these applications. To increase perceived usefulness, it may be helpful to allow staff to deal with private tasks (shopping, financial transactions, business news, and so on). The results of this study further suggest that ease of use can be positively influenced by organizational support and information policy. It is therefore advisable to inform future users about the planned content, and the goals the system is intended to achieve, at an early stage of the project. The involvement of the employees in the planning process should be an essential component of the project, for example by conducting interviews, performing workflow and working-environment analyses, and collecting ideas based on paper prototypes. During the roll-out phase, employees should be informed about the project through an internal marketing campaign. Brochures for internal distribution, presentations, demos and reports from pilot users are all good means of publicizing the project. Organizational support during the initial phase (pilot project) could be provided by a member of the senior management team. Klein et al. (2001) found that management support was positively related to financial resource availability, which affects training and support facilities, and a positive implementation climate. At the system level, ease of use can also be increased through intuitive navigation, including search capabilities, orientation aids, on-line help, feedback, glossaries and history tracking. As employees have different roles within their company, ease of use should also be increased by personalization, i.e., employees should have access to different information, applications and processes according to their role.

6. Conclusion

The results of this study have both theoretical and methodological implications for MIS research. We provided strong support for the view that the TAM predicts self-assessed system usage and user satisfaction with corporate Web portals. The present findings also add to the existing body of research examining the effects of organizational support and information policy on ease of use and perceived usage strain. These findings could potentially have practical implications for firms planning to implement corporate portals. Finally, the present study makes a methodological contribution by illustrating that measures
of user satisfaction are impacted by the item order of the measuring instrument.

Acknowledgements

Parts of this study were presented at the 11th International Conference on Human–Computer Interaction (HCI International), 22–27 July 2005, Las Vegas, USA. We wish to thank Gunilla Sundstrom, Roy J. Jenkins, and the anonymous reviewers for their very helpful advice on earlier versions of this paper.

References

Al-Gahtani, S.S., King, M., 1999. Attitudes, satisfaction and usage: factors contributing to each in the acceptance of information technology. Behaviour and Information Technology 18, 277–297. Allwood, C.M., Thomee, S., 1998. Usability and database search at the Swedish Employment Service. Behaviour and Information Technology 17, 231–241. Anandarajan, M., Simmers, C.A., Igbaria, M., 2000. An exploratory investigation of the antecedents and impact of internet usage: an individual perspective. Behaviour and Information Technology 19, 69–85. Barclay, D., Thompson, R., Higgins, C., 1995. The partial least squares (PLS) approach to causal modeling, personal computer adoption and use as an illustration. Technology Studies 2, 285–309. Barki, H., Hartwick, J., 1994. Measuring user participation, user involvement, and user attitude. MIS Quarterly 18, 59–82. Bate, S.P., 1997. Whatever happened to organizational anthropology? A review of the field of organizational ethnography and anthropological studies. Human Relations 50, 1147–1175. Bollen, K., Lennox, R., 1991. Conventional wisdom on measurement: a structural equation perspective. Psychological Bulletin 110, 305–314. Boyer, K.K., Olson, J.R., Calantone, R.J., Jackson, E.C., 2002. Print versus electronic surveys: a comparison of two data collection methodologies. Journal of Operations Management 20, 357–373. Brave, S., Nass, C., 2003. Emotion in human–computer interaction. In: Jacko, J.A., Sears, A. (Eds.), The Human–Computer Interaction Handbook. Erlbaum, Mahwah, pp. 81–96.
Brown, S.A., Massey, A.P., Montoya-Weiss, M., Burkman, J., 2002. Do I really have to? Understanding technology acceptance when use is mandatory. European Journal of Information Systems 11, 283–295. Carroll, J.M., Olson, J.R., 1988. Mental models in human–computer interaction. In: Helander, M. (Ed.), Handbook of Human–Computer Interaction. North-Holland, Amsterdam, pp. 45–85. Chau, P.Y.K., 1996. An empirical assessment of a modified technology acceptance model. Journal of Management Information Systems 13, 185–204. Chau, P.Y.K., Hu, P.J., 2002. Examining a model of information technology acceptance by individual professionals: an exploratory study. Journal of Management Information Systems 18, 191–229. Chen, L., Gillenson, M., Sherrell, D., 2002. Enticing online consumers: an extended technology acceptance perspective. Information and Management 39, 705–719. Childers, T., Carr, C., Peck, J., Carson, S., 2001. Hedonic and utilitarian motivations for online retail shopping behavior. Journal of Retailing 77, 511–535. Chin, W.W., 1998. The partial least squares approach for structural equation modeling. In: Marcoulides, G.A. (Ed.), Modern Methods for Business Research. Lawrence Erlbaum, Hillsdale, pp. 295–336. Chin, W.W., 2000. Frequently Asked Questions—Partial Least Squares & PLS-Graph. [On-line]. Available: <http://disc-nt.cba.uh.edu/chin/plsfaq.htm>. Chin, W.W., 2001. PLS-Graph User's Guide, Version 3.0.
Churchill Jr., G.A., 1979. A paradigm for developing better measures of marketing constructs. Journal of Marketing Research 16, 64–73. Cohen, J., 1988. Statistical Power Analysis for the Behavioral Sciences. Lawrence Erlbaum, Hillsdale, NJ. Coovert, M.D., Thompson, L.F., 2002. Technology and workplace health. In: Quick, J.C., Tetrick, L.E. (Eds.), Handbook of Occupational Health Psychology. APA, Washington, DC, pp. 221–241. Coyle, J.R., Gould, S.J., 2002. How consumers generate clickstreams through web sites: an empirical investigation of hypertext, schema and mapping theoretical explanations. Journal of Interactive Advertising 2 (2). Online: <http://www.jiad.org/>. Davis, F.D., 1989. Perceived usefulness, perceived ease of use, and user acceptance of information technology. MIS Quarterly 13, 318–340. Davis, F.D., 1993. User acceptance of information technology: system characteristics, user perceptions and behavioral impacts. International Journal of Man–Machine Studies 38, 475–487. DeLone, W.D., McLean, E.R., 1992. Information systems success: the quest for the dependent variable. Information Systems Research 3, 60–95. DeLone, W.D., McLean, E.R., 2003. The DeLone and McLean model of information systems success: a ten-year update. Journal of Management Information Systems 19, 9–30. Diamantopoulos, A., Siguaw, J.A., 2002. Formative versus reflective indicators in measure development: does the choice of indicators matter? Working Paper. Online: <http://www.hotelschool.cornell.edu/chr/research/working/>. Diamantopoulos, A., Winklhofer, H.M., 2001. Index construction with formative indicators: an alternative to scale development. Journal of Marketing Research 38, 269–277. Dillon, A., Morris, M.G., 1996. User acceptance of information technology: theories and models. Annual Review of Information Science and Technology 31, 3–32. Doll, W.J., Deng, X., Metts, G.A., 2003. User empowerment in computer-mediated work.
In: Proceedings of ISOneWorld Conference, Las Vegas, April 23–25. Dooley, L.M., Linder, J.R., 2003. The handling of nonresponse error. Human Resource Development Quarterly 14, 99–110. Dunham, R.B., Herman, J.B., 1975. Development of female faces scale for measuring job satisfaction. Journal of Applied Psychology 60, 629–632. Efron, B., Tibshirani, R.J., 1986. Bootstrap methods for standard errors, confidence intervals, and other measures of statistical accuracy. Statistical Science 1, 54–77. Eisenberger, R., Huntington, R., Hutchison, S., Sowa, D., 1986. Perceived organizational support. Journal of Applied Psychology 71, 500–507. Eisenberger, R., Fasolo, P., Davis-LaMastro, V., 1990. Perceived organizational support and employee diligence, commitment, and innovation. Journal of Applied Psychology 75, 51–59. Fishbein, M., Ajzen, I., 1975. Belief, Attitude, Intentions, and Behavior: An Introduction to Theory and Research. Addison-Wesley, Boston. Fuerst, W., Cheney, P., 1982. Factors affecting the perceived utilization of information systems. Decision Sciences 17, 329–356. Gefen, D., Straub, D.W., Boudreau, M.-C., 2000. Structural equation models and regression: guidelines for research practice. Communications of the Association for Information Systems 4, 1–79. Graham, J.W., Cumsille, P.E., Elek-Fisk, E., 2003. Methods for handling missing data. In: Schinka, J.A., Velicer, W.F. (Eds.), Research Methods in Psychology, Volume 2 of Handbook of Psychology. Wiley, New York, pp. 87–114. Grover, V., 1993. An empirically derived model for the adoption of customer-based interorganizational systems. Decision Science 24, 603–640. Guimaraes, T., Igbaria, M., 1997. Client/server system success: exploring the human side. Decision Sciences 28, 851–876. Hamborg, K.-C., Greif, S., 2003. New technologies and stress. In: Schabracq, M. J., Winnubst, J.A., Cooper, C.L. (Eds.), Handbook of
Work and Health Psychology, second ed. Wiley, Chichester, pp. 209–235. Hansen, G., 1987. Multikollinearität und Prognosefehler (Multicollinearity and prediction error). Jahrbücher für Nationalökonomie und Statistik 203, 357–370. Hartwick, J., Barki, H., 1994. Explaining the role of user participation in information system use. Management Science 40, 440–465. Herrmann, A., Huber, F., Kressmann, F., 2004. Partial least squares: Ein Leitfaden zur Spezifikation, Schätzung und Beurteilung varianzbasierter Strukturgleichungsmodelle (Partial least squares: a guide for specification, estimation and judgement of variance-based SEM). Working Paper, University of Mainz. Horton, R.P., Buck, R., Waterson, P.E., Clegg, C.W., 2001. Explaining intranet use with the technology acceptance model. Journal of Information Technology 16, 237–249. Igbaria, M., Zinatelli, N., Cragg, P., Cavaye, A.L.M., 1997. Personal computing acceptance factors in small firms: a structural equation model. MIS Quarterly 21, 279–305. Ives, B., Olson, M.H., 1984. User participation and MIS success: a review of research. Management Science 30, 586–603. Jarvis, C.B., MacKenzie, S.B., Podsakoff, P.M., 2003. A critical review of construct indicators and measurement model misspecifications in marketing and consumer research. Journal of Consumer Research 30, 199–218. Kaplan, B., Duchon, D., 1988. Combining qualitative and quantitative methods in information systems research: a case study. MIS Quarterly 12, 571–586. Klein, K.J., Sorra, J.S., 1996. The challenge of innovation implementation. Academy of Management Review 21, 1055–1080. Klein, K.J., Conn, A.B., Sorra, J.S., 2001. Implementing computerized technology: an organizational analysis. Journal of Applied Psychology 86, 811–834. Koufaris, M., 2002. Applying the technology acceptance model and flow theory to online consumer behavior. Information Systems Research 13, 205–223. Larsen, K.R.T., 2003.
A taxonomy of antecedents of information systems success: variable analysis studies. Journal of Management Information Systems 20, 169–246. Law, K.S., Wong, C.S., Mobley, W.H., 1998. Toward a taxonomy of multidimensional constructs. Academy of Management Review 23, 741–755. Lederer, A.L., Maupin, D.J., Sena, M.P., Zhuang, Y., 2000. The technology acceptance model and the World Wide Web. Decision Support Systems 29, 269–282. Legris, P., Ingham, J., Collerette, P., 2003. Why do people use information technology? A critical review of the technology acceptance model. Information and Management 40, 191–204. Lengnick-Hall, M.L., Moritz, S., 2003. The impact of e-HR on the human resource management function. Journal of Labor Research 24, 365–379. Li, S., Carlson, E., Holm, K., 2000. Validation of a single-item measure of usual physical activity. Perceptual and Motor Skills 91, 593–602. Lord, F.M., Novick, M.R., 1968. Statistical Theories of Mental Test Scores. Addison-Wesley, Reading, MA. Magal, S.R., Mirchandani, D.A., 2001. Validation of the technology acceptance model for Internet tools. Proceedings of the Americas Conference on Information Systems, 1–16. Mahmood, M.A., Burn, J.M., Gemoets, L.A., Jacquez, C., 2000. Variables affecting information technology end-user satisfaction: a meta-analysis of the empirical literature. International Journal of Human Computer Studies 52, 751–771. Mahmood, M.A., Hall, L., Swanberg, D.L., 2001. Factors affecting information technology usage: a meta-analysis of the empirical literature. Journal of Organizational Computing and Electronic Commerce 11, 107–130. Marler, J.H., Dulebohn, J.H., 2005. A model of employee self-service technology acceptance. Research in Personnel and Human Resources Management 24, 137–180.
McCalla, R., Ezingeard, J.-N., Money, K., 1993. A behavioural approach to CRM systems evaluation. Electronic Journal of Information Systems Evaluation 6, 145–154. Mingers, J., 2001. Combining research methods: towards a pluralistic methodology. Information Systems Research 12, 240–259. Mirvis, P.H., Sales, A.L., Hackett, E.J., 1991. The implementation and adoption of new technology in organizations: the impact on work, people, and culture. Human Resource Management 30, 113–139. Narayan, S., Krosnick, J.A., 1996. Education moderates some response effects in attitude measurement. Public Opinion Quarterly 60, 58–88. Oishi, S., Schimmack, U., Colcombe, S.J., 2003. The contextual and systematic nature of life satisfaction judgements. Journal of Experimental Social Psychology 39, 232–247. Otter, M., Johnson, H., 2000. Lost in hyperspace: metrics and mental models. Interacting with Computers 13, 1–40. Partala, T., Surakka, V., 2004. The effects of affective interventions in human–computer interaction. Interacting with Computers 16, 295–309. Phelps, R., Mok, M., 1999. Managing the risks of intranet implementation: an empirical study of user satisfaction. Journal of Information Technology 14, 39–52. Rossiter, J.R., 2002. The C-OAR-SE procedure for scale development in marketing. International Journal of Research in Marketing 19, 305–335. Schafer, J.L., 1997. Analysis of Incomplete Multivariate Data. Chapman & Hall, New York. Schafer, J.L., Graham, J.W., 2002. Missing data: our view of the state of the art. Psychological Methods 7, 147–177. Schillewaert, N., Ahearne, M.J., Frambach, R.T., Moenaert, R.K., 2005. The adoption of information technology in the sales force. Industrial Marketing Management 34, 323–336. Schuman, H., 1992. Context effects: state of the past/state of the art. In: Schwarz, N., Sudman, S. (Eds.), Context Effects in Social and Psychological Research. Springer, New York, pp. 4–20. Schuman, H., Presser, S., 1981. Questions and Answers in Attitude Surveys. 
Academic Press, San Diego. Schwarz, N., Bless, H., 1992. Constructing reality and its alternatives: an inclusion/exclusion model of assimilation and contrast effects in social judgment. In: Martin, L., Tesser, A. (Eds.), The Construction of Social Judgments. LEA, Hillsdale, NJ, pp. 217–245. Schwarz, N., Groves, R., Schuman, H., 1998. Survey methods. In: Gilbert, D., Fiske, S., Lindzey, G. (Eds.), Handbook of Social Psychology, fourth ed., vol. 1. McGraw-Hill, New York, pp. 143–179. Semmer, N., 1984. Stressbezogene Taetigkeitsanalyse: Psychologische Untersuchungen von Stress am Arbeitsplatz (Stress-related action analysis: Psychological analysis of job stress). Beltz, Weinheim. Sharma, R., Yetton, P., 2003. The contingent effects of management support and task interdependence on successful information systems implementation. MIS Quarterly 27, 533–556. Sonnentag, S., Frese, M., 2003. Stress in organizations. In: Borman, W.C., Ilgen, D.R., Klimoski, R.J. (Eds.), Handbook of Psychology, vol. 12: Industrial and Organizational Psychology. Wiley, New York, pp. 453–491. Straub, D., Limayen, M., Karahanna-Evaristo, E., 1995. Measuring system usage: implications for IS theory testing. Management Science 41, 1328–1342. Sudman, S., Bradburn, N., Schwarz, N., 1996. Thinking about Answers: The Application of Cognitive Processes to Survey Methodology. Jossey-Bass, San Francisco. Szajna, B., 1996. Empirical evaluation of the revised technology acceptance model. Management Science 42, 85–92. Tenenhaus, M., Vinzi, V.E., Chatelin, Y.-M., Lauro, C., 2005. PLS path modeling. Computational Statistics and Data Analysis 48, 159–205. Tourangeau, R., Singer, E., Presser, S., 2003. Context effects in attitude surveys: effects on remote items and impact on predictive validity. Sociological Methods Research 31, 486–513. Townsend, A.M., Demarie, S.M., Hendrickson, A.R., 2001. Desktop video conferencing in virtual workgroups: anticipation, system
evaluation and performance. Information Systems Journal 11, 213–227. Venkatesh, V., 2000. Determinants of perceived ease of use: integrating control, intrinsic motivation, and emotion into the technology acceptance model. Information Systems Research 11 (4), 342–365. Venkatesh, V., Davis, F.D., 2000. A theoretical extension of the technology acceptance model: four longitudinal field studies. Management Science 46, 186–204. Venkatesh, V., Morris, M.G., Davis, G.B., Davis, F.D., 2003. User acceptance of information technology: toward a unified view. MIS Quarterly 27, 425–478.
Wanous, J., Reichers, A.E., Hudy, M., 1997. Overall job satisfaction: how good are single-item measures? Journal of Applied Psychology 82, 247–252. Wanous, J.P., Hudy, M.J., 2001. Single item reliability: a replication and extension. Organizational Research Methods 4, 361–375. Werts, C.E., Linn, R.L., Jöreskog, K.G., 1974. Intraclass reliability estimates: testing structural assumptions. Educational and Psychological Measurement 34, 25–33. Wixom, B.H., Todd, P.A., 2005. Theoretical integration of user satisfaction and technology acceptance. Information Systems Research 16, 85–102.