Operational Use evaluation of IT investments: An investigation into potential benefits


European Journal of Operational Research 173 (2006) 1000–1011 www.elsevier.com/locate/ejor

Hussein Al-Yaseen a,1, Tillal Eldabi b,*, David Y. Lees b, Ray J. Paul b,c

a Department of Information Technology, Al-Ahliyya National University, 247, Amman 19328, Amman, Jordan
b School of Information Systems, Computing, and Mathematics, Brunel University, Uxbridge, Middlesex, UB8 3PH, United Kingdom
c Department of Information Systems, London School of Economics, Houghton Street, London WC2A 2AE, United Kingdom

* Corresponding author. E-mail address: [email protected] (T. Eldabi).
1 Previously at: Brunel University.

Available online 16 August 2005

Abstract

The process of evaluation of IT projects often seems to cease just as quantifiable results start to become available—in Operational Use (OU). This paper investigates OU IT evaluation and contrasts it with the evaluation undertaken during the specification, construction, and testing of IT projects, which we choose to call Prior Operational Use (POU) evaluation to distinguish it from OU. Analysis of 123 usable responses from FTSE 500 companies shows that many companies appear not to undertake OU evaluation. However, where OU evaluation was conducted, it appears to be of clear value to the organisations. Benefits claimed include the ability to assess deviations from their original plans, and to provide a basis for validating the original methods used (in their POU evaluations).
© 2005 Elsevier B.V. All rights reserved.

Keywords: Prior Operational Use evaluation; Operational Use evaluation; IT investment appraisal

1. Introduction

Expenditure on information technology (IT) in the United Kingdom—and in other countries for that matter—is continuously increasing as companies rely more and more on IT. Consequently, the issue of IT evaluation is increasingly a concern for all decision makers. Currently, a large percentage of

organisational new capital investment is spent on IT, directly or indirectly. Managers would like to be sure that investment in IT is economically justifiable (Farbey et al., 1993). Justifying expenditure on IT is a long-standing problem, and managers over the past few decades have expressed concerns about the value they are getting from IT investments; moreover, they have been searching for ways to evaluate and justify the use of IT. 'Many conduct cost/benefit evaluation on projects, but most of them have an element of fiction. The saddest part is that it is not just the benefits that are fictional,


but the costs are as well' (Farbey et al., 1993). Such a continuous increase in investment, coupled with a continuous need for justification, presents a challenge to the information systems community. Many authors agree that evaluation of investment is a key issue for such IT projects and their management: Kumar (1990), Dabrowska and Cornford (2001), and Irani et al. (2002). Investment justification and evaluation of effectiveness is traditionally—within fields other than IT—a complex process. However, analysts usually manage to arrive at an answer which they can feel confident is a valid representation of the real value. In IT, by contrast, confidence in measures has never reached a level similar to that for traditional products. Many organisations report that they are uncertain about how to measure the impact and the outcomes of their IT investments. This is mainly attributable to the fact that IT returns on investment are mostly intangible, which makes them difficult to measure using traditional accounting practice. IT evaluation has been widely explored in order to resolve these issues and in search of reliable measurement drivers. Most of the theoretical literature on IT evaluation (such as Bradford and Florin, 2003; Gunasekaran et al., 2001; Lin and Pervan, 2003; Liu et al., 2003; Remenyi et al., 2000; Irani and Love, 2002) tends to depart from traditional accounting-based evaluation methods by appreciating the intangible aspects of IT benefits as well as the tangible ones. Authors are more inclined to view evaluation as part of the planning activity only or, in some cases, as part of the development process. There are also a number of empirical studies—such as those reviewed by Ballantine et al. (1996)—which examined ex-ante evaluation, yet only a few (for example Kumar, 1990; and to some extent Beynon-Davies et al., 2004) have explored ex-post evaluation. Generally speaking, most empirical and theoretical articles (with very few exceptions) tend to classify IT evaluation as a planning activity or take a temporal view along the development life cycle, only to stop short of the operational phase. Although a number of the above authors have touched upon this phase, evaluation activities there are still not represented as integral parts of the evaluation process. The extent to which organisations


adopt rigorous evaluation at the operational phase is unknown. In this paper, we aim to empirically explore the evaluation process by extending the temporal view—with more concentration on the operational phase—in order to understand issues related to IT evaluation after project completion. We start in the following section by defining IT evaluation for the purpose of this research. We then use this as a theoretical basis for the collection of data from major companies in the UK regarding their approaches and processes for IT project evaluation, as well as their rationale for and application of any OU evaluation that they conducted. The section after that redefines the research problem and the key research questions in relation to the two forms of evaluation. The next sections discuss the research methodology, data collection, results, and synthesis, respectively. In the final section, we present lessons learned from this research.

2. The purposes and forms of evaluation

Evaluation has been defined as the process of assessing the worth of something (Beynon-Davies et al., 2000). Another definition is that it is the process of establishing—by quantitative or qualitative means—the worth of IT to the organisation (Willcocks, 1992). We take the stance that evaluation is a process that takes place at different points in time, or continuously, explicitly searching for (quantitatively or qualitatively) the impact of IT projects (Eldabi et al., 2003). The value of this latter definition is that it explicitly recognises the different stages in the full life cycle of an information system in which evaluation is performed, and provides the opportunity to discriminate between two decidedly different views of the evaluation process, each serving different aims. The first view of evaluation is as a means to gain direction in the IS project. Here, 'predictive' evaluation is performed to forecast the impact of the project. Using financial and other quantitative estimates, the evaluation process provides support and justification for the investment through the forecasting of projected baseline indicators such as Payback, Net Present Value (NPV) or Internal


Fig. 1. IS/IT evaluation types in the systems' life cycle (Prior Operational Use evaluation spans the development stages; Operational Use evaluation begins once the system is in operational use).

Rate of Return (IRR) (Farbey et al., 1993; Liu et al., 2003; Yeo and Qiu, 2003). It is known variously as 'ex-ante' evaluation (Remenyi et al., 2000), 'formative' evaluation (Brown and Kiernan, 2001), or, as we shall refer to it, 'Prior Operational Use' (POU) evaluation. This form of evaluation guides the project, and may lead to changes in the way the system is structured and carried out. It does not, however, give any feedback beyond the design, implementation, and delivery of the project outcomes. In contrast, evaluation can also be considered in terms of the effectiveness of the IT system in situ—what a system actually accomplishes in relation to its stated goals (Al-Yaseen et al., 2004; Eldabi et al., 2003). This form of evaluation draws on real rather than projected data, and can be used to justify adoption (Love and Irani, 2001; Irani, 2002); estimate the direct cost of the system and its tangible benefits (Liu et al., 2003); ensure that the system meets requirements (Irani, 2002); measure the system's effectiveness and efficiency (Poon and Wagner, 2001); measure the quality of programs and estimate indirect and other costs (Love and Irani, 2001); or measure the quality of programmes (Eldabi et al., 2003). This type of evaluation should be performed during the operational phase of the project. We shall refer to this type as 'Operational Use' (OU) evaluation. Fig. 1 shows these forms of evaluation with respect to the life cycle from a system's inception to the end of its useful life.
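The financial indicators named above can be made concrete with a small worked example. The sketch below is ours, not the authors'; the cash flows, discount rate, and function names are invented purely to illustrate how Payback, NPV, and IRR are typically computed for a candidate IT investment.

# Hedged illustration with hypothetical cash flows (year 0 outlay, then yearly net benefits).
def npv(rate, cash_flows):
    """Net Present Value: cash flows discounted back to year 0."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

def payback_period(cash_flows):
    """First year in which the cumulative (undiscounted) cash flow turns non-negative."""
    cumulative = 0.0
    for t, cf in enumerate(cash_flows):
        cumulative += cf
        if cumulative >= 0:
            return t
    return None  # never pays back within the planning horizon

def irr(cash_flows, low=-0.99, high=10.0, tol=1e-6):
    """Internal Rate of Return found by bisection (assumes a single sign change in NPV)."""
    while high - low > tol:
        mid = (low + high) / 2
        if npv(mid, cash_flows) > 0:
            low = mid
        else:
            high = mid
    return (low + high) / 2

flows = [-500_000, 150_000, 180_000, 200_000, 220_000]  # hypothetical project, in pounds
print(f"NPV at 10%: {npv(0.10, flows):,.0f}")
print(f"Payback period: {payback_period(flows)} years")
print(f"IRR: {irr(flows):.1%}")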

3. The problem and the research opportunity

Most of the literature (such as Beynon-Davies et al., 2000; Farbey et al., 1999; Jones and Hughes,

2000; Walsham, 1999; Remenyi et al., 2000) attempts to improve the process of evaluation by means of either (a) consolidating and enumerating more factors to consider in the evaluation, or (b) adding more theoretical rigour to the techniques used (Irani, 2002; Irani and Love, 2002). As mentioned above, most studies concentrated on what we have termed the POU phase, with heavy emphasis on the early stages of development. In contrast, OU evaluation has only rarely been studied. The most recent and comprehensive empirical study in this category was conducted 15 years ago in Canada by Kumar (1990). The main problem is that there is no body of knowledge in the area to help improve the techniques used in evaluation at this stage, which encourages decision makers to refrain from employing it altogether. For this reason we have decided to research practitioners' perceptions of the evaluation process and the practices associated with evaluation adopted within large organisations. We attempt to obtain insights into OU evaluation in order to identify the real extent to which OU evaluation is practised and what lessons could be learned to improve knowledge about it. To do that—we believe—the following questions need to be answered by practitioners who are most involved with the evaluation processes. Such answers are obtained by posing the following questions as a platform for our research activity:

• What is the balance between POU and OU IT evaluations?
• What are the main reasons for adopting each of the evaluation types?
• What criteria are currently being used for evaluating IT investment in each type of evaluation?
• When is Operational Use evaluation performed?
• What are the main reasons for adopting a comparison between the outcomes of the evaluation types?

4. Research approach

The research theme is based on a comparison between POU and OU as a means of identifying current practices of OU evaluation and understanding its application within an organisational context. Following the argument of Tashakkori and Teddlie (2003), we opted for quantitative research through questionnaires as an appropriate instrument for starting the research. No doubt other research approaches would be beneficial, and we anticipate other researchers might follow up on this. The following section describes the processes of questionnaire design, deployment, and analysis used, and summarises the participant characteristics. Fig. 2 presents the sequential structure of the research phases and the activities within each.

Fig. 2. Research phases.

4.1. Research phases

Phase one reviews both types of evaluation (POU and OU). The main issues identified in the literature were then used to develop a questionnaire that focuses on how organisations carry out evaluation of their IT systems. The questionnaire is split into six sections centred on gathering information on:

1. organisational background;
2. information technology infrastructure;
3. business issues of IT investment;
4. Prior Operational Use evaluation in different stages of the system life cycle (feasibility, design, implementation, and testing and completion);
5. Operational Use evaluation; as well as
6. other information related to both types of evaluation.

Before the formal survey was sent to the companies, two pilot iterations were conducted. The first iteration involved four doctoral students. Based on their feedback, certain items in the questionnaire were modified, along with minor layout changes, in order to improve clarity and readability. The second iteration involved four professionals—two academics, one IT manager in a business organisation, and one business analyst in another organisation. Only cosmetic changes were made at this iteration, giving us the confidence to issue the questionnaire.

In phase two, the questionnaire developed in phase one was sent to the top 500 organisations in the UK (the FTSE 500). The questionnaires were mailed to IT managers or top executives. As shown in Table 1, returns covered a variety of organisations from financial services, information technology, manufacturing, transport, central government, consultancy, retail/wholesaling, and publishing.

Table 1
Organisations in the sample

Organisation              Percentage
Financial services        19
Manufacturing             15
Information technology    14
Retail/wholesaling        9
Computer manufacturing    7
Central government        6
Consultancy               6
Transport                 5
Publishing                3
Others                    16

Of the 500 questionnaires posted, 152 responses were received; 18 were returned unanswered and 11 were returned but incomplete. The latter two categories of responses were ignored, making the final number of usable responses 123 and giving a response rate of 24.6%. This rate was considered to be above expectation, given that the generally accepted average response to non-incentive-based questionnaires is around 20%.

In phase three, we analysed the data from the questionnaire responses using a combination of parametric statistical methods: descriptive analysis and factor analysis (Pett et al., 2003). Organisations were asked to select from a list the closest choice of reason for adopting each of Prior Operational Use and Operational Use evaluation. A summary of the key responses to the questionnaire—the reasons for adopting Prior Operational Use evaluation (codename: POUeR) and Operational Use evaluation (codename: OUeR)—is tabulated in Appendix A, along with the OU evaluation criteria (codename: OUeC). Each of these variables was measured on a five-point Likert scale (1 = not important and 5 = very important). For technically interested readers, we report that a factor analysis technique was employed in order to identify possible categories. Factor analysis was performed in three steps (following Berthold and Hand, 2003):

(1) A matrix of correlation coefficients for all possible pairings of the variables was generated.
(2) Factors were then extracted from the correlation matrix using principal factors analysis.
(3) The factors were rotated using Varimax rotation with Kaiser normalisation, which maximises the relationships between the variables and some of the factors while minimising their association with the others, and maintains independence among the mathematical factors.

The eigenvalues determined which factors remained in the analysis. Following Kaiser's criterion, factors with an eigenvalue of less than 1 were excluded. A scree plot provides a graphic image of the eigenvalue for each component extracted (see Figs. 3 and 4).
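To make the three steps concrete, the sketch below (ours, not the authors' code) runs the same procedure on synthetic five-point Likert responses; the sample of 123 respondents and five items, the varimax routine, and all variable names are illustrative assumptions rather than the study's data or software.

# Minimal sketch of the three-step factor analysis on invented Likert data.
import numpy as np

def varimax(loadings, gamma=1.0, max_iter=100, tol=1e-6):
    # Classic varimax rotation of a p x k loading matrix (Kaiser row normalisation omitted for brevity).
    p, k = loadings.shape
    rotation = np.eye(k)
    total = 0.0
    for _ in range(max_iter):
        lam = loadings @ rotation
        u, s, vt = np.linalg.svd(
            loadings.T @ (lam ** 3 - (gamma / p) * lam @ np.diag(np.sum(lam ** 2, axis=0)))
        )
        rotation = u @ vt
        if np.sum(s) - total < tol:
            break
        total = np.sum(s)
    return loadings @ rotation

rng = np.random.default_rng(0)
responses = rng.integers(1, 6, size=(123, 5)).astype(float)  # 123 respondents, 5 items, scale 1-5

# Step 1: correlation matrix for all possible pairings of the variables.
corr = np.corrcoef(responses, rowvar=False)

# Step 2: extract principal factors from the correlation matrix; apply Kaiser's
# criterion (retain factors with eigenvalue > 1, keeping at least one).
eigvals, eigvecs = np.linalg.eigh(corr)
order = np.argsort(eigvals)[::-1]
eigvals, eigvecs = eigvals[order], eigvecs[:, order]
n_factors = max(int(np.sum(eigvals > 1.0)), 1)
loadings = eigvecs[:, :n_factors] * np.sqrt(eigvals[:n_factors])

# Step 3: varimax rotation, then report loadings above the 0.5 cut-off used later in the paper.
rotated = varimax(loadings)
print("Eigenvalues (for a scree plot):", np.round(eigvals, 3))
print("Rotated loadings (|loading| > 0.5):")
print(np.round(np.where(np.abs(rotated) > 0.5, rotated, 0.0), 3))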

Fig. 3. Eigenvalues of the reasons for adopting Prior Operational Use evaluation (scree plot).


Fig. 4. Eigenvalues of the reasons for adopting Operational Use evaluation (scree plot; three factors explaining 92.31% of the variance).

5. Respondents' characteristics

The average monthly IT budget for the organisations in the sample was £2,513,000, with a median of £1,645,000. 25% of the participating organisations had a monthly IT budget exceeding £2,660,000, and 10% had a monthly IT budget of £5,903,000. On average, the participating organisations had been using IT for approximately 16–20 years, and most had a history of more than 20 years of IT use. 85% of the participating organisations had a central, integrated IT infrastructure department, while in the remaining 15% each department had its own IT infrastructure. 8.1% of the participating organisations had adopted IT as a response to problem(s), 26% had adopted IT searching for ways of improving effectiveness and standing in the marketplace, and 65.9% had adopted IT systems for both reasons.

6. Data analysis and preliminary findings

This section presents aggregated results from direct answers to the research questions mentioned above. The basic issues considered here are: reasons for adopting either type of evaluation, criteria for evaluation, reasons for comparisons between the two types, and reasons for any gaps found.

6.1. Reasons for adopting Prior Operational Use evaluation

The results are presented in Table 2. Using a factor analysis cut-off level of 0.5, four factors were considered the main reasons for adopting Prior Operational Use evaluation (explaining 91.47% of the variance—see Fig. 3), which we describe as 'system completion and justification', 'system costs', 'system benefits', and 'other reasons'.

Table 2
Reasons for adopting Prior Operational Use evaluation—Factor analysis (only loadings greater than 0.50 are shown)

Factor: System completion and justification
  POUeR1 0.967, POUeR2 0.982, POUeR3 0.991, POUeR4 0.986, POUeR5 0.950, POUeR6 0.942, POUeR7 0.972, POUeR8 0.970, POUeR9 0.955, POUeR10 0.966
Factor: System costs
  POUeR11 0.898, POUeR12 0.919, POUeR13 0.884, POUeR14 0.842, POUeR15 0.899, POUeR16 0.880, POUeR17 0.902, POUeR18 0.932, POUeR19 0.926, POUeR20 0.936
Factor: System benefits
  POUeR21 0.775, POUeR22 0.861, POUeR23 0.828
Factor: Other reasons
  POUeR24 0.792

The first factor, 'system completion and justification', is highly correlated with ten variables; the second factor, 'system costs', is highly correlated with ten variables; and the third factor, 'system benefits', is highly correlated with three variables; whilst the fourth factor, 'other reasons', is highly correlated with one variable—barriers for adopting the system—which was also found to be the least evaluated reason in practice, as shown in Table 2. A glossary of the variables is given in Appendix A.

6.2. Reasons for adopting Operational Use evaluation

The most important reasons for adopting Operational Use evaluation were identified using a five-point Likert scale ranging from 1 (not important) to 5 (very important). The results are presented in Table 3. Employing a factor analysis cut-off level of 0.5, three factors were considered the main reasons for adopting Operational Use evaluation (explaining 92.31% of the variance—see Fig. 4), which we call 'system costs', 'system benefits', and 'other reasons'.

Table 3
Reasons for adopting Operational Use evaluation—Factor analysis (only loadings greater than 0.50 are shown)

Factor: Other reasons
  OUeR1 0.951, OUeR2 0.951, OUeR3 0.941, OUeR4 0.912
Factor: System benefits
  OUeR5 0.977, OUeR6 0.978, OUeR7 0.982
Factor: System costs
  OUeR8 0.946, OUeR9 0.919, OUeR10 0.922

6.3. Operational Use evaluation criteria

The results are presented in Table 4. A factor analysis cut-off level of 0.5 was again employed; the Operational Use evaluation criteria resulted in four factors explaining 87.03% of the variance (see Fig. 5), which we termed 'system completion', 'system information', 'system impact', and 'other criteria'. The first factor, 'system completion', is highly correlated with seven criteria; the second factor, 'system information', with five criteria; and the third factor, 'system impact', with four criteria; whilst 'other criteria' is correlated with one criterion—net operating costs—which was also found to be the least evaluated criterion in practice. Table 4 shows the construct loadings for the Operational Use evaluation criteria.

Table 4
Operational Use evaluation criteria—Factor analysis (only loadings greater than 0.50 are shown)

Factor: System completion
  OUeC1 0.973, OUeC2 0.869, OUeC3 0.894, OUeC4 0.865, OUeC5 0.776, OUeC6 0.973, OUeC7 0.973
Factor: System information
  OUeC8 0.784, OUeC9 0.974, OUeC10 0.979, OUeC11 0.874, OUeC12 0.842
Factor: System impact
  OUeC13 0.959, OUeC14 0.874, OUeC15 0.928, OUeC16 0.849
Factor: Other criteria
  OUeC17 0.933

6.4. Reasons for adopting a comparison between Prior Operational Use and Operational Use evaluation

Most of the organisations (77.7%) that carried out a formal OU evaluation conducted it as a comparison with the outcomes of POU evaluation, and found that there was an important 'gap', or inconsistency, between the evaluations. This gap comprised three major dimensions—gaps in estimating the systems' economic lifespan, cost, and benefits.


Fig. 5. Eigenvalues of the Operational Use evaluation criteria (scree plot; four factors explaining 87.03% of the variance).

Table 5
Reasons for the gap between POU and OU evaluation

Reason                                               Mean    Standard deviation
Lack of an appropriate evaluation method             4.67    0.48
Lack of agreement on evaluation criteria             4.58    0.50
Groups who are involved in the evaluation process    4.49    0.55
Intangible benefits of the system                    4.42    0.62
Availability of qualified evaluator                  4.36    0.65
Changes to user requirements                         4.11    0.86
Changes to system requirements                       4.09    0.73
Maintenance costs of the system                      3.91    0.73
Operational costs of the system                      3.76    0.68
Indirect costs of the system                         3.53    0.69
Changes to the market requirements                   3.44    0.69

The main reasons for adopting a comparison were again identified using a five-point Likert scale ranging from 1 (not important) to 5 (very important). The two most important reasons were to check that the planned benefits were achieved and to compare planned and actual costs. The two least important reasons for the comparison were to record lessons for the future and to improve the evaluation process for future systems.

6.5. Reasons for the gap between POU and OU evaluation

The main reasons for the gap between the two types of evaluation were likewise measured on a five-point Likert scale ranging from 1 (not important) to 5 (very important), and are shown in Table 5.
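As an aside for the technically inclined, the means and standard deviations in Table 5 are ordinary descriptive statistics over the five-point Likert responses. The calculation can be sketched as follows, using invented response vectors rather than the survey data.

# Invented Likert responses (1-5) for two of the reasons; not the survey data.
import statistics

responses_by_reason = {
    "Lack of an appropriate evaluation method": [5, 5, 4, 5, 4, 5],
    "Changes to user requirements": [4, 5, 3, 4, 5, 4],
}

for reason, scores in responses_by_reason.items():
    mean = statistics.mean(scores)
    sd = statistics.stdev(scores)  # sample standard deviation
    print(f"{reason}: mean = {mean:.2f}, sd = {sd:.2f}")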

7. Synthesis

All of the responding organisations have carried out, and continue to carry out, formal POU evaluation, but only about a third (36.5%) currently perform a formal OU evaluation of IT in use. This means that about two-thirds (63.5%) of the organisations do not gather any evidence to establish how successful their IT projects were, and therefore cannot use information from OU evaluation to improve their evaluation techniques.


The most popular reasons for adopting POU evaluation were related to formal aspects of signing off the project (based around traditional measures such as meeting requirements, and achieving agreed metrics for effectiveness, usage, efficiency, security, performance, etc.) and to system costs. The two remaining factors—systems' benefits and adoption barriers—were found to be less important. On the other hand, amongst the 45 organisations that carried out OU evaluation, the most frequent reason for adopting it was to do with the systems' benefits (both tangible and intangible). Most of the sampled organisations attach greater importance to the measurement of benefits than to the measurement of costs. The most frequently cited criterion for OU evaluation was system information (accuracy of information, timeliness and currency of information, adequacy of information, and quality of programs). The most important claimed use and benefit of adopting OU evaluation was system cost (operational cost, training cost, maintenance cost, upgrade cost, reduction in other staff costs, reduction in salaries, and other expenses saved).

Results suggest that most decision makers do not place much importance on OU evaluation of their IT systems. Most managers tend to think of it only as a formality rather than a proper evaluation process. It can be postulated that such a perception plays an important role in hindering the adoption of OU evaluation. Results also provide evidence that OU evaluation is useful if it is perceived as more than just a formality. For example, amongst the 45 organisations that adopted OU evaluation, those that perform it seriously tend to gain considerable benefits, including the validation of their original POU evaluation estimates. More importantly, OU evaluation helps those organisations to better appreciate and capture the intangible benefits associated with IT. Evidently, if IT evaluation is starting to capture the benefit side more than the cost side, then OU evaluation—given the above results—should play an important role in gauging such benefits.

To summarise the findings, it is clear that practitioners are not appreciating the full benefits of OU evaluation and need to be made aware of them. Such lack of appreciation is evidently behind the apparent scarcity of implementations of OU evaluation, which negatively feeds back into perceptions, and so forth.

8. Conclusions

The main aim of this research was to capture a picture of Operational Use (OU) evaluation, in contrast with Prior Operational Use (POU) evaluation, as practised within UK organisations, in order to understand the obstacles hindering the full implementation of OU evaluation and its potential benefits. In a survey of the FTSE 500 companies we found that around two-thirds of the 123 respondent organisations gave less importance to the OU evaluation of IT than to POU evaluation. Of those organisations that did use OU evaluation, some thought of it as a completion formality for signing off the project. Further findings from the research survey suggest that, within a structured approach, OU evaluation could be beneficial to organisations when acquiring new systems. This matches the expectation that whatever is learned from current evaluation ought to be useful for evaluating new systems. We have considered the survey result that companies appear to perform OU evaluation as a formality rather than to reflect on (and improve) the appreciation of benefits. We postulate that the reason for this is that whilst the potential benefits of engaging with a process of OU evaluation exist, the organisational structure within which it must operate does not generally cater for it. A clear contrast between POU and OU is evident when considering modern project management approaches such as PRINCE2, which usually incorporate frequent cycles of POU evaluation (OGC, 2002) as a fundamental component of the method. The fixed time horizon inherent in project-based work can be the precursor to a considerable organisational omission in full project evaluation. This omission occurs because no interest group is charged with assessing the value of the IT project over its entire life cycle (from inception to decommissioning), which would therefore include OU. In other words, project completion is taken to mean exactly that—so evaluation ceases when the system becomes operational, because the self-contained and


budgeted project has then ended. After completion there is nothing else to do. A further finding that can be attributed to this study is that when organisations carry out both types of evaluation (OU and POU), the deviation from original estimates becomes a focal point for further analysis. Our study shows that the reasons for adopting the OU–POU comparison were to enable the auditing of the planned benefits and to learn lessons appropriate for future projects (see Table 5). Our results regarding obstacles to OU evaluation are supported by the study by Owens and Beynon-Davies (1999) on mission-critical systems. Currently, only organisations that perform serious OU evaluation understand its benefits. There are not many of these, so very little analysis exists on planned versus actual costs (or benefits). Without OU evaluation, the cost of future projects seems likely to be less accurately estimated. Our research results are entirely consistent with this observation. At the moment, the cost of lost opportunities can be conjectured to be on the increase; without OU evaluation, how can we know whether this is true or not, or much else about what is going on? Our study confirms that dissemination of the importance of OU evaluation amongst both the academic and practitioners' communities could play an important role in greater IT effectiveness and fewer disappointments. We hope the reader agrees, in which case this paper has made such a contribution.

Appendix A. Variables (reasons) codenames used for analysis

Reasons      Description of reasons

Reasons for adopting Prior Operational Use evaluation
POUeR1       System meets requirements
POUeR2       System effectiveness
POUeR3       System usage
POUeR4       System efficiency
POUeR5       Justify adoption
POUeR6       System security
POUeR7       System performance
POUeR8       Quality and completeness of system documentation
POUeR9       Hardware performance
POUeR10      Quality of programs
POUeR11      Operational costs
POUeR12      Training costs
POUeR13      Maintenance costs
POUeR14      Upgrade costs
POUeR15      Reduction in clerical salaries
POUeR16      Reduction in other staff costs
POUeR17      Other expenses saved
POUeR18      Direct costs
POUeR19      Indirect costs
POUeR20      Other costs
POUeR21      Tangible benefits
POUeR22      Intangible benefits
POUeR23      Other benefits
POUeR24      Barriers of adopting the system

Reasons for adopting Operational Use evaluation
OUeR1        Estimating of system life
OUeR2        Justify system adoption
OUeR3        Risks
OUeR4        Barriers
OUeR5        Tangible benefits
OUeR6        Intangible benefits
OUeR7        Other benefits
OUeR8        Direct costs
OUeR9        Indirect costs
OUeR10       Other costs

Operational Use evaluation criteria
OUeC1        Internal controls
OUeC2        Project schedule compliance
OUeC3        System security and disaster protection
OUeC4        Hardware performance
OUeC5        System performance versus specifications
OUeC6        System usage
OUeC7        Quality and completeness of system documentation
OUeC8        Accuracy of information
OUeC9        Timeliness and currency of information
OUeC10       Adequacy of information
OUeC11       Appropriateness of information
OUeC12       Quality of programs
OUeC13       User satisfaction and attitude towards systems
OUeC14       User friendliness of system–user interface
OUeC15       System's impacts on users and their jobs
OUeC16       System's fit with, and impact upon, the organization
OUeC17       Net operating costs (savings of system)

References

Al-Yaseen, H., Eldabi, T., Paul, R.J., 2004. A quantitative assessment of operational use evaluation of information technology: Benefits and barriers. In: Proceedings of the Tenth Americas Conference on Information Systems, New York, August 2004, pp. 688–692.
Ballantine, J.A., Galliers, R.D., Stray, S.J., 1996. Information systems/technology evaluation practices: Evidence from UK organizations. Journal of Information Technology 11, 129–141.
Berthold, M., Hand, D.J., 2003. Intelligent Data Analysis, second ed. Springer-Verlag, Berlin.
Beynon-Davies, P., Owens, I., Lloyd-Williams, M., 2000. IS failure, evaluation and organisational learning. UKAIS, Cardiff, pp. 444–452.
Beynon-Davies, P., Owens, I., Williams, M.D., 2004. Information systems evaluation and the information systems development process. Enterprise Information Management 17, 276–282.
Bradford, M., Florin, J., 2003. Examining the role of innovation diffusion factors on the implementation success of enterprise resource planning systems. International Journal of Accounting Information Systems 4, 205–225.
Brown, J., Kiernan, N., 2001. Assessing the subsequent effect of a formative evaluation on a program. Journal of Evaluation and Program Planning 24, 129–143.
Dabrowska, E.K., Cornford, T., 2001. Evaluation and telehealth—An interpretative study. In: Proceedings of the Thirty-Fourth Annual Hawaii International Conference on System Sciences (HICSS-34), January 2001, Maui, Hawaii. Computer Society Press of the IEEE, Piscataway, NJ.
Eldabi, T., Paul, R.J., Sbeih, H., 2003. Operational use evaluation/post implementation evaluation of IT. In: UKAIS, 2003, Warwick.
Farbey, B., Land, F., Targett, D., 1993. How to Assess Your IT Investment: A Study of Methods and Practice. Butterworth-Heinemann Ltd., London.
Farbey, B., Land, F., Targett, D., 1999. Moving IS evaluation forward: Learning themes and research issues. Journal of Strategic Information Systems 8, 189–207.
Gunasekaran, A., Love, P.E.D., Rahimi, F., Miele, R., 2001. A model for investment justification in information technology projects. International Journal of Information Management 21, 349–364.
Irani, Z., 2002. Information systems evaluation: Navigating through the problem domain. International Journal of Information and Management 40, 11–24.
Irani, Z., Love, P.E.D., 2002. Developing a frame of reference for ex-ante IT/IS investment evaluation. European Journal of Information Systems 11, 74–82.
Irani, Z., Sharif, A., Love, P.E.D., Kahraman, C., 2002. Applying concepts of fuzzy cognitive mapping to model: The IT/IS investment evaluation process. International Journal of Production Economics 75, 199–211.
Jones, S., Hughes, J., 2000. Understanding IS evaluation as a complex social process. In: Chung, H.M. (Ed.), Proceedings of the 2000 Americas Conference on Information Systems (AMCIS), 10–13 August, Long Beach, CA. Association for Information Systems, Atlanta, pp. 1123–1127.
Kumar, K., 1990. Post implementation evaluation of computer information systems: Current practices. Communications of the Association for Computing Machinery 33 (2), 203–212.
Lin, C., Pervan, G., 2003. The practice of IS/IT benefits management in large Australian organisations. International Journal of Information and Management 41, 13–24.
Liu, Y., Yu, F., Su, S.Y.W., Lam, H., 2003. A cost-benefit evaluation server for decision support in e-business. Journal of Decision Support Systems 36, 81–97.
Love, P.E.D., Irani, Z., 2001. Evaluation of IT costs in construction. Journal of Automation in Construction 10, 649–658.
OGC, 2002. Managing Successful Projects with PRINCE2. Office of Government Commerce, London.
Owens, I., Beynon-Davies, P., 1999. The post implementation evaluation of mission-critical information systems and organisational learning. In: Proceedings of the Seventh European Conference on Information Systems, Copenhagen, Copenhagen Business School, pp. 806–813.
Pett, M.A., Lackey, N.R., Sullivan, J.J., 2003. Making Sense of Factor Analysis: The Use of Factor Analysis for Instrument Development in Health Care Research. Sage Publications, London.
Poon, P., Wagner, C., 2001. Critical success factors revisited: Success and failure cases of information systems for senior executives. Journal of Decision Support Systems 30, 393–418.
Remenyi, D., Money, A., Sherwood-Smith, M., Irani, Z., 2000. The Effective Measurement and Management of IT Costs and Benefits. Butterworth-Heinemann Ltd., London.
Tashakkori, A., Teddlie, C., 2003. The past and the future of mixed methods research: From methodological triangulation to mixed methods designs. In: Tashakkori, A., Teddlie, C. (Eds.), Handbook of Mixed Methods in Social and Behavioral Research. Sage, Thousand Oaks, CA.
Walsham, G., 1999. Interpretive evaluation design for information systems. In: Willcocks, L., Lester, S. (Eds.), Beyond the IT Productivity Paradox. Wiley, Chichester, pp. 363–380.
Willcocks, L., 1992. Evaluating information technology investments, research findings and reappraisal. Journal of Information Systems 2, 243–268.
Yeo, K.T., Qiu, F., 2003. The value of management flexibility—A real option approach to investment evaluation. International Journal of Project Management 21, 243–250.