International Journal of Information Management, Vol. 16, No. 3, pp. 205-217, 1996
Copyright © 1996 Elsevier Science Ltd. Printed in Great Britain. All rights reserved. 0268-4012/96 $15.00 + 0.00
S0268-4012(96)00005-9
The Management of Change for Information Systems Evaluation Practice: Experience from a Case Study

V Serafeimidis and S Smithson
Most of today's cost-driven, project evaluation methodologies and accountancy frameworks fail to take into account the intangible benefits and associated risks, and cannot reflect the infrastructural nature of modern information systems (IS). This paper argues that an interpretivist framework is initially needed to understand and study the IS evaluation process. The emphasis here is on describing and analysing processes of change regarding information technology (IT) appraisal practices in context, illustrating why and how their content and the strategies for introducing them can be constrained and/or enabled by features of the organizational context. In these terms, this paper analyses the case of a UK insurance organization where the need for a rigorous IT appraisal methodology initiated a 12-month project to design and develop such a method and a series of supporting tools. The paper examines the use and evolution of the methodology over the past two years. Copyright © 1996 Elsevier Science Ltd
Vasilis Serafeimidis is a graduate of the Athens University of Economics and Business. He also holds an MSc degree in the Analysis, Design and Management of Information Systems. He has worked as a consultant for a number of years and is currently a PhD candidate and teaching assistant at the London School of Economics (University of London). Steve Smithson is a senior lecturer in information systems at the London School of Economics. He holds a BSc and an MSc, both from the London School of Economics, and received his PhD in Information Systems in 1989. Previously, he worked for 12 years in industry, mostly in transport and distribution.

1PRICE WATERHOUSE (1992) Information Technology Review 1992/93 Price Waterhouse, London; WILLCOCKS, L AND LESTER, S (1993) 'Evaluation and control of IS investments: recent UK survey evidence' Research and Discussion Papers Oxford Institute of Information Management, RDP93/3
2ANGELL, I O AND SMITHSON, S (1991) Information Systems Management: Opportunities and Risks Macmillan, London; FARBEY, B, LAND, F AND TARGETT, D (continued on page 206)
Introduction

With increasing levels of information technology (IT) investment1 and the growing centrality of information systems (IS) within organizations, evaluation is becoming widely recognized as a very important activity. It is important at the levels of operations, monitoring and control, the allocation of scarce organizational resources, and business planning and strategy.2 However, despite its importance, it is often ignored or carried out inefficiently or ineffectively because of its complex and elusive nature.3 Evaluation has a number of overlapping interpretations; it is considered problematic both conceptually and operationally,4 and although a number of evaluation methodologies are available,5 none of them is completely adequate in all situations. A traditional (formal-rational or functionalist) approach sees evaluation as an external judgement on an information system, which is treated as if it existed largely in isolation from its human and organizational effects, and places excessive emphasis on the technological aspects at the expense of the organizational and social aspects.6 In so doing it neglects the organizational context and process of IS development and its content, elements which are critical to the successful application of IT in support of business strategy and objectives. In general, more attention has been focused over the years on prescribing how to carry out evaluations (with project-driven and cost-focused approaches) rather than analysing their role and effects.
Management of change for IS evaluation practice: V Serafeimidis and S Smithson
Figure 1 The elements of evaluation

continued from page 205: (1993) How to Assess your IT Investment. A Study of Methods and Practice Butterworth-Heinemann, Oxford; HAWGOOD, J AND LAND, F (1988) 'A multivalent approach to information systems assessment' in BJORN-ANDERSEN, N AND DAVIS, G B (EDS) Information Systems Assessment: Issues and Challenges North Holland, Amsterdam, pp 103-124; WILLCOCKS, L AND LESTER, S (1993) 'Evaluating the feasibility of information technology investments' Research and Discussion Papers Oxford Institute of Information Management, RDP93/1
3DICKSON, G W, WELLS, C E AND WILKES, R B (1988) 'Toward a derived set of measures for assessing IS organisations' in BJORN-ANDERSEN, N AND DAVIS, G B (EDS) Information Systems Assessment: Issues and Challenges North Holland, Amsterdam, pp 129-147
4WILLCOCKS AND LESTER, op cit, Ref 2; SYMONS, V J (1990) 'Evaluation of information systems: IS development in the Processing Company' Journal of Information Technology 5 194-204
5FARBEY ET AL, op cit, Ref 2; WILLCOCKS, L (1992) 'Evaluating information technology investments: research findings and reappraisal' Journal of Information Systems 2 243-268

Interpretivism, on the other hand, offers less of a model of IS evaluation and more a framework for analysis. It is a social analytical approach founded on assumptions and practices typically associated with structuration theory.7 Information systems can be understood through their interaction with the organizational context in which they are embedded and their role in the process of organizational change accompanying the introduction of IS.8 This paper begins with a description of an interpretivist framework for the analysis of IS evaluation activities, based on the notions of content, context and process. The rationale for the framework and its essential characteristics are discussed at some length before we apply the framework to a particular case. This case, rather than being an example of an evaluation study, concerns the development of an evaluation methodology and the introduction of a new evaluation practice in a large organization. We show how the framework can be used to explain the events that took place in the case study organization during the development stages and to elicit principles regarding the management of change introduced by an IT investment appraisal approach.
A framework of analysis
Given its elusiveness and complexity, evaluation requires a firm framework if it is to be managed successfully: one which can act as a foundation for discussion of the various aspects of IS evaluation in its organizational and business context. This acts as a frame of reference for the broad issues, as well as for the individual goals, activities and tools of the evaluation process. The interpretive framework proposed here expands the traditional narrow approach of identifying and quantifying the tangible costs and benefits of an IT investment and introduces a multiple perspective approach which takes into account organizational values, social structures, potential outcomes and the associated risks. This broader conceptualization (Figure 1) sets out the linkages between the content, process and context of evaluation and their interactions.
6HIRSCHHEIM, R AND SMITHSON, S (1988) 'A critical analysis of information systems evaluation' in BJORN-ANDERSEN, N AND DAVIS, G B (EDS) Information Systems Assessment: Issues and Challenges North Holland, Amsterdam, pp 17-37
7GIDDENS, A (1979) Central Problems in Social Theory: Action, Structure and Contradiction in Social Analysis University of California Press, Berkeley, CA
8ORLIKOWSKI, W J AND BAROUDI, J J (1991) 'Studying information technology in organizations: research approaches and assumptions' Information Systems Research 2 (1) 1-28; WARD, J M (1987) 'Integrating information systems into business strategies' Long Range Planning 20 (3) 19-29
9PETTIGREW, A M (1985) The Awakening Giant: Continuity and Change in ICI Blackwell, Oxford; PETTIGREW, A M AND WHIPP, R (1991) Managing Change for Competitive Success Blackwell Publishers, Oxford
10SERAFEIMIDIS, V AND SMITHSON, S (1994) 'Evaluation of IS/IT investments: understanding and support' in BROWN, A AND REMENYI, D (EDS) Proceedings of the First European Conference on Information Technology Investment Evaluation Operational Research Society, Birmingham
11FARBEY ET AL, op cit, Ref 2
12SAUER, C (1994) 'A model of the information systems evaluation process: a synthesis of politico-rational and interpretivist views' in Proceedings of the Second European Conference on Information Systems Netherlands, 30-31 May
13SYMONS, V J (1991) 'A review of information systems evaluation: content, context and process' European Journal of Information Systems 1 (3) 205-212
14WALSHAM, G (1993) Interpreting Information Systems in Organizations John Wiley, Chichester, Series in Information Systems
15WILLCOCKS, L AND MARGETTS, H (1994) 'Risk and information systems: developing the analysis' in WILLCOCKS, L (ED) Information Management. The Evaluation of Information Systems Investments Chapman & Hall, London, pp 207-227
16SCOTT MORTON, M S (ED) (1991) The Corporation of the 1990s. Information Technology and Organizational Transformation Oxford University Press, New York
17MILES, R AND SNOW, C (1986) 'Organizations: new concepts for new forms' California Management Review 28 (3) 62-73; POWELL, W (1990) 'Neither market nor hierarchy: network forms of organization' Research in Organizational Behavior 12 295-336; ROCKART, J AND SHORT, J (1991) 'The networked organization and the management of interdependence' in The Corporation of the 1990s Oxford University Press, New York
18WISEMAN, D (1994) 'Information economics: a practical approach to valuing information systems' in WILLCOCKS, L (ED) Information Management. The Evaluation of Information Systems Investments Chapman and Hall, London, pp 171-187
19FARBEY ET AL, op cit, Ref 2

This framework of analysis is based on the contextualist principles introduced by Pettigrew9 and an earlier version has been presented by Serafeimidis and Smithson.10 The current version draws on the work of Farbey et al,11 Sauer,12 Symons,13 Walsham,14 and Willcocks and Margetts,15 together with the views of 40 UK academics and practitioners who were interviewed as part of the study.

Context, process and content of evaluation
The context of evaluation may include external factors, typically beyond the control of the organization, that the organization and its members need to respond to and accommodate; for example, the national economic situation, national and local government policy, the level of government support, markets and market demands, competition, supplier availability and expertise, and other environmental pressures. The growth of interorganizational systems has meant that the goals and objectives of external trading partners must be considered, as well as the ambitions of individuals or groups within the organization. Here one may see IT as supporting active and mutually helpful relationships between organizations (ie supplier and customer). IT can also give organizations the capability to conduct joint problem-solving exercises.

In terms of the internal context of evaluation, important factors include the organization's strategy, structure, corporate culture, reward system, management (financial and information), human resources and industrial relations arrangements; IS infrastructure and management; changing business needs; changing stakeholder needs/objectives; and employee relations. Scott Morton16 summarizes these into five major sets of forces influencing the organizational context: management processes, structure, individuals and roles, technology and strategy. Here, the following types of questions are generated: Who is involved? Why is the evaluation being carried out? What level of change is being aimed at? Are there conflicting aims or interests? In organizations with more of a network structure,17 the emphasis shifts towards internal linkages, or the interdependence between various people and groups within the organization. We may also adduce here the idea of the business enterprise as an orchestra rather than a hierarchy.18 In such cases, IT is essential to facilitate communication between individuals and groups.
These ideas, with their implications for the evaluation of the role of IT, can be incorporated in the conceptual framework. The informal procedures and information flows around an IS are often integral to its purpose and functioning. They are reliant on the complex network of social relationships within an organization and its trading partners, and on current patterns and attitudes. Active consideration should be given to the ways in which IS influence the diversity of official and unofficial information flows. The context additionally includes the many perspectives which are brought to evaluation by the different parties, influenced by their level of authority and control within the organization. The introduction of a new system, and the way in which it is evaluated, could be understood better if it is seen as social action, rather than as a straightforward investment evaluation.19 Therefore, analysis of the formal and informal relationships supporting IS brings social and political interaction to centre stage, and with it the importance of stakeholders' perspectives, where stakeholders may be groups or larger collectives (eg an organization). Conflicts of interest often emerge within as well as between stakeholder
groups, which can affect evaluation, much of which is subjective, based on stakeholder value judgements. Symons20 argues that effective evaluation means understanding and taking seriously the perspectives of individual stakeholders and interest groups. It also means examining the mechanisms of representation of different interests, the institutional means by which divergent evaluations can be discussed, and the ability of different groups to have access to informed opinion and relevant data regarding the options available. By facilitating communication and consultation in this way, evaluation can encourage the involvement and commitment of stakeholder groups. Evaluation can thus play a central role in the process of organizational change accompanying the introduction of IS.

The content includes proposed changes and their substance (eg type of technology, size, complexity), their impact (radical or incremental), the definitional and technical uncertainty, and a precise definition of what problem the evaluation is supposed to solve. The content of an evaluation refers to the value of an IT investment as a contribution to business strategy and organizational effectiveness (eg financial and other costs and benefits), the criteria to be considered, the associated risks and a consensus on what should be measured. Here it is particularly important to look beyond the narrow quantification of costs and benefits to an analysis of the opportunities presented by IT, together with the potential constraints on its application and an assessment of the processes of change, organizational support and conflict management. The criteria used to make a decision on IS investments are at the heart of that decision, and they have significance for a number of reasons.21 First, how the criteria are interpreted significantly impacts the effectiveness with which IS investment decisions are made.
Secondly, they reflect the effectiveness with which IS resources are being used, the degree to which senior management are involved, and the level of integration between corporate/business-unit strategy and systems strategy. Thirdly, the criteria are significant for the organization's finance and management accounting function, in terms of its role in optimizing return on investment, and its involvement in the cost benefit analysis (CBA) that may precede an IS capital investment decision. Another important element is 'what' or 'who' is the source of values, and how these values guide the approach taken to the evaluation process. Unless the evaluation process takes on board the fundamental values of the organization, it is likely to be dismissed as 'counter-cultural'. This is not of course to argue that company values must never be challenged. Rather, it is to say that there has to be a managed process of change and a new consensus formed.

The process of evaluation covers how the evaluation is done and how the issues are perceived. In this layer, the way in which evaluation is carried out (the techniques and methods used), its social role, the way it plays itself out over time, and the results of the evaluation are placed in the foreground. It includes assessments by managers, IS professionals and users at all stages of IS development and operation. It is very important that a means of communication with every level of the organization is established to achieve organizational and individual learning. The significance of the process layer is that it draws attention to evaluation as a (group) learning process,22 mediating between content and context.

20Op cit, Ref 13
21BACON, J (1994) 'Why companies invest in information technology' in WILLCOCKS, L (ED) Information Management. The Evaluation of Information Systems Investments Chapman and Hall, London, pp 31-47
22FARBEY ET AL, op cit, Ref 2; WARD, op cit, Ref 8; EARL, M (1989) Management Strategies for Information Technology Prentice Hall, London; ETZERODT, P AND MADSEN, K H (1988) 'Information systems assessment as a learning process' in BJORN-ANDERSEN, N AND DAVIS, G B (EDS) Information Systems Assessment: Issues and Challenges North Holland, Amsterdam, pp 333-345; GALLIERS, R D (1991) 'Strategic information systems planning: myths, reality and guidelines for successful implementation' European Journal of Information Systems 1 (1) 55-64

The evaluation process should be regarded as a
means to encourage the involvement and commitment of stakeholders, because of the central role of evaluation in organizational change. Suitably supported and structured, we envisage a process similar to de Geus's23 Structural Modelling Activity, where groups are brought together specifically to construct working models of aspects of the organization. Lasting benefit can be gained from those involved making their mental models explicit as a prelude to reaching an agreement on what is important for the organization, based on a common understanding of how the organization functions. In the same way, we believe that a systematic procedure, based on explicit frameworks, could also be a powerful means for reaching a common understanding of a system's benefits, based on a consensus of the organization's objectives. In short, it becomes possible to challenge individuals' existing models of organizational ambition with a view to replacing them with a better, commonly agreed one.

Usually, an IT evaluation produces outputs. Outcomes are the impacts of these outputs on the recipient stakeholders. Outcomes may be planned (ie strategic) or unanticipated (ie learning), desirable or otherwise. Outputs typically assessed in IS evaluation studies are technical performance, operational efficiency, business process and financial impacts, business benefits, user (stakeholder) acceptance, and user satisfaction. Measuring these outcomes is a necessary task but can be complicated and difficult.

All the evaluation elements are strongly linked together. The content provides the central kernel of what is to be evaluated, while the process describes how this should be done, and the context examines the organizational background, in terms of who is involved and why. As Pettigrew24 argues, 'Formulating the content of a strategic change crucially entails managing its context and process'.
Farbey et al25 note that cases where systems are justified by a 'back-door' route represent the context entirely overwhelming the content. Symons26 argues that the lesson from the content/context link is that the introduction of IS means designing the work itself, not just the tool with which to do it. The context/process link suggests treating evaluation as continuing throughout the various stages of system development. An exclusive focus on the content of evaluation (what we want to do) fails to take into account the context of evaluation (the forces for continuity in the situation) and thereby precipitates its own failure. A historical understanding of all the evaluation elements (context, content and process) is necessary because IT-related changes and their evaluation evolve over time and, at any particular point, present a series of constraints and opportunities shaped by the previous history. In the case study discussed below, we extend the notion of process to include the development process for an evaluation methodology, rather than limiting the discussion to its process aspects in use. However, the meanings of content and context remain unchanged.
23DE GEUS, A P (1992) 'Modelling to predict or learn' European Journal of Operational Research 59 1-5
24PETTIGREW, op cit, Ref 9
25FARBEY ET AL, op cit, Ref 2
26Op cit, Ref 13
27ABI (1990) Insurance Statistics 1985-89 Association of British Insurers, London
Case study of a UK insurance organization

The development context
Insurance is a major contributor to the UK's economy;27 the world-wide premium income figure in 1989 for total life and total general insurance (UK operators) was £44 295 million. Research by Codington and Wilson28 found that the insurance industry relies heavily on IT to handle the routine business of administering insurance policies and would not be able to conduct business without it. The strategic use of IT has already helped to introduce direct marketing and new channels of distribution which would not have been possible otherwise.29

The organization in question is a large, well established British provider of individual life and general insurance, as well as pensions. IT expenditure had been a significant item in the budget for many years, amounting to approximately £71 million in 1992 and £57 million in 1993 and, with this experience and the whole sector's climate, it is perhaps not surprising that the company had developed a fairly mature and sophisticated view of the relationship between IT and its underlying business. The managers realized that the evaluation of benefits deriving from the use of IT should be seen as a question of business reality rather than technical wizardry, and that there was a need to establish a causal link between items of IT expenditure and business performance in bottom-line terms.

In 1990, after the finance director had become interested in information economics,30 one of the company's divisions initiated the development of a standard corporate methodology for appraising IT investments. It was clear by that time that trying to assess the return from their IT investment was too complex for traditional finance-based CBAs. One of the major problems with CBA was that few business or systems people had a formal grounding in project accounting. Thus, it was vital to raise the awareness of all the stakeholders associated with IS projects in order to produce higher quality CBAs, more reliably and much more quickly. It seemed clear that, while IT resources ought to be treated no differently from other capital resources, all too often in practice they were different.
Much depended on what kind of IT was being considered, what kind of benefits were expected and, no less significantly, what kind of IT was even available. A corporate IT investment appraisal methodology, or approach, was seen as the best way to select the right portfolio of IT projects and to ensure, through effective risk analysis and benefits management, the delivery of high quality systems and the achievement of both hard and soft benefits. The primary objective was to find ways to maximize the value for money that the company obtained from its investment in IT and to introduce them consistently across the corporation. This objective remained clear and constant even though the sub-goals changed significantly during development. The evaluation (or appraisal) process was also seen as an opportunity for learning so that the understanding of problems became deeper as time went on. The main objective was translated into a number of sub-goals:
28CODINGTON, S AND WILSON, T D (1994) 'Information systems strategies in the UK insurance industry' International Journal of Information Management 14 (3) 188-203
29Ibid; op cit, Ref 14
30PARKER, M M, BENSON, R J AND TRAINOR, H E (1988) Information Economics: Linking Business Performance to Information Technology Prentice-Hall, New Jersey
• In terms of project selection: to help the business allocate scarce resources and put together a balanced IT project portfolio by measuring how initiatives would contribute towards the business goals and thus to select the best IT projects accordingly. • In terms of project management: to make sure that the development team and the rest of the business have a common understanding of a project's goals, to reduce the risks throughout the development life-cycle, and to help track the costs and benefits. • In terms of benefits delivery: to measure the success of the project
after its delivery including whether the business is actually achieving all the promised benefits. • The design criteria for the tools were: simplicity (both to administer and understand), ease of comprehension, completeness, and credibility of the results.
The development process
31Op cit, Ref 28
32FARBEY ET AL, op cit, Ref 2
Development, which started in 1991, was mainly bottom-up in organizational terms, from a departmental and divisional level to a corporate one. The request for a rigorous evaluation mechanism implied the need for a clear description of the corporate business objectives, as an initial way of measuring the contribution of the IS to the success of the organization (value for money), and the formulation of an adequate IS plan and portfolio selection. The business objectives would always be the sources for the requirements that the IT investment tries to meet and, at the same time, the appraisal or evaluation process should be responsible for measuring the success of achieving them. Therefore, working top-down conceptually, the project team carried out an analysis of all the business goals, measures and key performance indicators underlying the organizational objectives and identified a subset which were key for IT project appraisal. In addition, two extra objectives were added: management information and staff attitude/morale. The final task in this phase was to make sure that the full set of objectives was accepted by the key stakeholders. In order to determine their relative importance to the division as a whole (ie to obtain a weighting), the team used a Delphi approach. After two iterations they achieved a good consensus, with sales effectiveness, customer service, unit cost, and customer base having the highest priorities. These were not very different from Codington and Wilson's31 findings regarding the critical success factors of the whole UK insurance industry.

The way in which a method goes 'with the grain' of the organizational culture is one of the most important criteria for its selection and development. The adaptation of the tools and techniques to the appraisal environment, and the political and cultural aspects of each case, are among the most important issues. Moreover, organizations cannot 'change their spots' easily.
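The Delphi-style weighting exercise described above lends itself to a simple numeric sketch. The code below is hypothetical: the objective names echo the case study's top four priorities, but the scores, the panel size and the convergence rule are invented assumptions, not data from the organization. Each stakeholder scores every objective, scores are pulled towards the group mean over a couple of iterations, and the consensus means are normalised into weights.

```python
# Hypothetical sketch of a Delphi-style weighting round; all figures invented.

def delphi_weights(scores, rounds=2, pull=0.5):
    """Iteratively pull each panellist's score towards the group mean,
    then normalise the consensus means into weights summing to ~1."""
    for _ in range(rounds):
        for objective, votes in scores.items():
            mean = sum(votes) / len(votes)
            # each panellist revises part-way towards the group view
            scores[objective] = [v + pull * (mean - v) for v in votes]
    consensus = {obj: sum(v) / len(v) for obj, v in scores.items()}
    total = sum(consensus.values())
    return {obj: round(c / total, 3) for obj, c in consensus.items()}

# Four panellists score each objective out of 10 (illustrative figures)
ratings = {
    'sales effectiveness': [9, 8, 9, 7],
    'customer service':    [8, 9, 7, 8],
    'unit cost':           [7, 6, 8, 7],
    'customer base':       [6, 7, 7, 6],
}
weights = delphi_weights(ratings)
print(weights)  # highest weight on sales effectiveness
```

Because each revision moves a score towards its own group mean, the mean is preserved while the spread shrinks: a crude stand-in for the consensus the project team reported after two iterations.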
In our case the organization traditionally used very formal financial methods for cost justification and could not be expected suddenly to switch to a 'soft' way of thinking. The champions of the method nevertheless deemed it necessary to take on board risks and intangible and strategic outcomes. Although we are ourselves convinced of the importance of such factors, a considerable programme of persuasion and confidence-building may be required as part of the evaluation process. According to Farbey et al32 this should involve communication action, directed towards achieving the acceptability of the evaluation, as well as discursive action to build an understanding of the terms in which matters are discussed (ie the terms in the questionnaires). Even though this development was initiated by an IT department, they tried to make sure that the other divisional business processes and the rest of the corporation were aware of the development, through seminars and the distribution of reports and information leaflets. Business managers who had been avoiding active involvement in IT decisions for decades, because so many IT projects defied formal CBA, seemed keen on being involved in the new attempt.
The content of the IT investment appraisal methodology

The method consisted of three main streams, appraising: the 'hard' financial costs and benefits, the risks, and the strategic and intangible benefits. The main deliverables were: a financial model of the costs and benefits, a risk management plan, and a benefits profile. The project deliverables were validated using two sets of successful and unsuccessful project proposals before the method was fully launched.

1. The financial appraisal was quite straightforward, and most of the effort in this area had been to raise the competence of IT practitioners. A financial appraisal guide and computer spreadsheet templates were developed to help in producing financial cases and carrying out sensitivity analysis. They automatically calculated measures such as the project payback period, the internal rate of return (IRR) and the net present value (NPV), and encouraged sensitivity analysis as well.
2. Risk appraisal included the identification of the main risks in a project and, where appropriate, an estimation of the probability of their occurring (in low or high terms) and/or an estimation of their potential impact if they did occur. Risks were classified into delivery risks (or system risks), ie what could prevent the delivery of a high quality system, on time and to budget, and benefit delivery risks (or commercial risks), such as business volatility and social and political aspects, where the business identified what could prevent the realization of the benefits, given a totally successful system implementation. A number of screen-based questionnaires (or 'tick-lists') were developed to guide managers in thinking about risks and their management (including the analysis of their probability and impact).
3. The third part dealt with intangible benefits, evaluating them quantitatively and setting up metrics and milestones which formed a crucial part of benefits management. Nine key benefit areas were agreed with the decision makers (bearing in mind the organizational goals) as the most important for project appraisal, and a simple questionnaire (16 questions) was constructed to measure the total contribution to these objectives and their general impact. The results of the intangible benefits evaluation exercise were shown graphically as a benefits profile, which ranked the project(s) in terms of the number of benefit categories and weighted the ranking according to the range of their impact. This provided a simple but complete picture of the overall impact of a project, and made sure that both the business users and the IT developers shared the same understanding and expectations of what the project would deliver. This benefits profile was to be used to assess the initial business case, and then revised continuously as the project proceeded, so that the intangibles became clearer and more measurable. In addition, a benefit delivery plan was produced which stated what had to be done to monitor these benefits and any activities that would be necessary to realize the benefits.

The developers of the appraisal method believed strongly that simple tools such as tick-lists were valuable in this area. They also took great care to avoid encouraging a mechanical, unthinking attitude to the application of the tools. The developers found that this was best achieved through individual training, using a project manager's current appraisal tasks as the basic educational platform. Most of the users found the techniques and tools easy to apply and the
very act of applying them was extremely useful in clarifying a project's objectives. There was a significant improvement, especially in the financial awareness of the IS development teams. Good quality cost-benefit and sensitivity analyses became the rule, with a clear focus on delivering the maximum total return to the business from day one of a project's life. A key design objective was to build in flexibility, a goal which proved particularly pertinent given the rapid changes that took place in the various organizational divisions.

A critical review of the content of the appraisal method and tools

As designed, the method had various advantages. At the beginning, the main intention was to develop an appraisal system to aid project selection and prioritization by business users although, for the developers of the investment appraisal system, high quality benefit management was seen to be at least of equal importance. Both objectives relied upon being able to estimate accurately a project's total contribution to the company's business goals, and a key design criterion was to make sure that the results were credible to all levels of business and IS management. The general opinion was that the method and the tools had been successful and had improved the IS maturity of the company. In addition, they provided support, at a very low cost in terms of time and effort, to both the business users and the system developers in achieving a mutual understanding of the project's purpose and the related risks before significant resources were spent on analysis and design. The appraisal methodology 'enforced' reviews of the business case at the end of each project life-cycle stage. These reviews were taken seriously, based on the estimate that at least two out of every 10 of the company's projects would be cancelled somewhere along the way. This was often because the business requirements had either changed or gone away altogether.
This could be seen as a safe way for the organization to avoid project failures. According to the systems manager, the company achieved significant cost reduction/avoidance; specifically, it saved at least £1.8 million over a six-month period (during 1992/93) through the comprehensive appraisal of projects' real business benefits. The systems manager also argued that risks could be clearly identified and managed by explicitly setting out accountabilities, metrics and milestones, and that benefits delivery had improved in general. However, there were a number of problems. Firstly, the application of the method to a project proposal did not take into account the size of the project. For instance, a 40-man project would usually 'win' over a 10-man project because it achieved more and, in theory, delivered more benefit. To be accurate, this benefit should have been divided by the resources used in order to establish a common measure for comparison. An even more accurate way would have been to compare the benefits derived from a big project with the aggregate benefits of a portfolio of small projects taken together (which might use the same amount of resources). One of the biggest weaknesses of the tools was that many terms on the questionnaires, check-lists and spreadsheet models were open to very subjective interpretation. Because they were not defined properly, people tended to interpret them in different ways. One attempt to solve this problem was the provision of training sessions where users were taught the exact meaning of the terms. Regarding the risk analysis, the commercial risks were seen by some
users as being at too high a level while, on the other hand, the low-level risk analysis (project/system risks) was carried out too early. Although the latter provided useful information for project managers, 'It is useful when you know the project plan, the scope, and the delivery dates; then you can go through this risk analysis. But the method requires it before you even set up the project and know if the project should go ahead or not', one user argued. The rationale of the risk analysis was to identify potential problems early and suggest ways of managing them, which would be incorporated in the project management plan.
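The size-bias weakness discussed in the critical review lends itself to a simple arithmetic illustration. The sketch below uses invented benefit scores (the paper gives no actual figures) to show how dividing benefit by the resources used, as the authors suggest, can reverse the ranking of the 40-man and 10-man projects:

```python
# Illustration of the size-bias problem: ranking by raw benefit favours large
# projects, while benefit per member of staff gives a common measure.
# All figures are hypothetical.
projects = [
    {"name": "Large project", "staff": 40, "benefit_score": 120},
    {"name": "Small project", "staff": 10, "benefit_score": 60},
]

# Raw ranking: the 40-man project 'wins' simply because it delivers more.
by_raw = sorted(projects, key=lambda p: p["benefit_score"], reverse=True)

# Normalized ranking: benefit per member of staff reverses the ordering.
for p in projects:
    p["benefit_per_head"] = p["benefit_score"] / p["staff"]
by_normalized = sorted(projects, key=lambda p: p["benefit_per_head"], reverse=True)

print(by_raw[0]["name"])         # Large project
print(by_normalized[0]["name"])  # Small project
```

The small project delivers half the total benefit with a quarter of the resources, so it wins once a common measure is applied.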
The evolution of the appraisal methodology: content and process in context

Despite its advantages, the methodology achieved only a relatively limited level of success, due to a variety of factors, as the content and process interacted with a changing organizational context. This section describes the key events and explains how certain factors constrained the adoption of the methodology as a corporate standard. The influence of the outer organizational context on the IT appraisal practices originated from government policies and particular legislation. Additionally, the appraisal practices of the 'Times Top 200' companies, which had initially been investigated, played an important role. Internally, the personal interest and concern of the systems strategy manager, the manager of the business systems coordination group and the employees of the IT division who championed this attempt provided the only strong motivation for the initial project. The other stakeholders lacked awareness of the significance of the evaluation process and felt 'forced' to use the method as a standard communication tool between themselves and the systems people. In addition, the lack of senior support at a corporate level, the management policies, the management and organizational changes, and the importance of cost were significant factors in the limited success of the methodology. A key event (or non-event) was the failure to obtain the full agreement of the finance director to use the methodology. He agreed with the concepts but would not accept the method and the tools unless 'instructed' by the corporate level to do so, which meant that they could not be introduced as a corporate standard. This had a 'knock-on' effect, as the marketing division expressed their willingness to use the methodology only provided that it was introduced as an accepted methodology by the finance division. In many instances, organizational history is an important element of the contextual framework.
The previous finance-oriented practices, and the distance between them and the new, softer approaches, were taken into account in the development of the appraisal method. At the same time, the IT people had their own approach and criteria for evaluating systems. The underlying notion was to build upon the traditional methods, involve the most dominant stakeholders and also make them aware of the different aspects investigated (ie to make IT managers think about the business benefits). In its mainstream business, the company had experience of moving from mechanistic to more organic models of management in order to implement strategic changes. This tradition was clear in the evolution of the appraisal method and the major changes that took place during the key six months of 1994. One of the objectives, as mentioned above, was to introduce consistency across the organization regarding IT appraisal processes, something which was not fully achieved. This was partly due to the lack of senior management support, even though many business staff had become familiar with the principles of the method because they considered them to be the only way of seeing their ideas implemented by the systems group. Another constraint was the belief in some parts of the organization that the method was IT-focused, although arguably this was not the case. Thus, users and potential users did not see the method as the decision-making tool it was designed to be. The ultimate user of the method was a centrally positioned impact assessment group which had special responsibilities for controlling computer-based changes and their impact on the division. This key group controlled the resource management function with regard to system development staff, who were regarded as a scarce resource. The appraisal method was used for a period of 12-18 months (1993-94) to check whether projects were worth developing per se, rather than to prioritize proposals. This resulted in projects being approved that exceeded the resources available to develop them. Therefore, an additional prioritization process was used, based on criteria such as legislative changes, staying in business, the business area which initiated the project and the impact on the critical success factors (eg cost reduction, revenue increase versus strategy). This proved to be a source of dissatisfaction for business sponsors who, having learned the methodology, had to resort to additional (more political) means to 'support' their proposals. During the first nine months of 1994, the situation deteriorated because of the shortage of developers compared with the number of proposed projects. Very soon it was realized that the project appraisal and evaluation mechanism should change focus from assessing individual project proposals to identifying the best portfolio of projects, given the available level of resources.
This was in spite of the original claims that the method would optimize the project portfolio. The second main driving force was a directive that the business and systems groups should collaborate more closely and that the decision-making process for project selection and prioritization should be broadened to involve both groups. Because of this, during the six months leading up to mid-1994, many organizational changes took place within the division concerning how to choose the 'best' systems to be developed. Furthermore, there was a shift in the criteria used, with the systems division adopting a 'softer' approach which paid more attention to human aspects and to agreement and cooperation with business managers, rather than religiously following the results (and scores) of the traditional mechanistic tools and techniques. This was implemented through a new organizational group, made up of business and systems managers, whose objective was to move the focus and responsibility for project approval away from the systems division to the overall business. In addition to the new group, another innovation was the role of account managers (usually middle managers), who acted as bridges (or interfaces) between each business area and the impact assessment group. Their role was to work with the business area in order to refine the objectives of the project and to liaise generally with the impact assessment group. As a result of these changes, the use and importance of the assessment method declined considerably. The financial appraisal component is still used to calculate certain financial measures (ie payback period) in order to rank similar types of project. However, the intangible benefits
profile is only used for particular projects which aim at service improvements because, under current pressures, it is very unusual for a project to be accepted on the basis of intangible benefits. The risk analysis component has largely fallen into disuse.
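The surviving financial appraisal component computes conventional measures of the kind described earlier (payback period, IRR and NPV). As a rough sketch only — the cash flows, discount rate and function names below are invented for illustration and do not come from the company's actual spreadsheet templates — these measures might be computed as:

```python
# Hypothetical sketch of the standard financial appraisal measures.
# cash_flows[0] is the (negative) initial cost at time 0; later entries
# are the net cash flows for each subsequent year. All figures are invented.

def npv(rate, cash_flows):
    """Net present value of cash_flows at the given annual discount rate."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

def payback_period(cash_flows):
    """First period in which the cumulative cash flow turns non-negative."""
    cumulative = 0.0
    for t, cf in enumerate(cash_flows):
        cumulative += cf
        if cumulative >= 0:
            return t
    return None  # project never pays back

def irr(cash_flows, low=-0.99, high=10.0, tol=1e-6):
    """Internal rate of return by bisection (assumes a single sign change)."""
    while high - low > tol:
        mid = (low + high) / 2
        if npv(mid, cash_flows) > 0:
            low = mid  # NPV still positive: the rate that zeroes it is higher
        else:
            high = mid
    return (low + high) / 2

flows = [-100_000, 40_000, 40_000, 40_000, 40_000]  # initial cost, 4 years of benefit
print(round(npv(0.10, flows)))   # 26795: NPV at a 10% discount rate
print(payback_period(flows))     # 3: pays back during year 3
print(round(irr(flows), 3))      # ~0.219
```

Sensitivity analysis of the kind the templates encouraged amounts to re-running `npv` across a range of rates or cash-flow estimates and observing how the result moves.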
Conclusion

We have argued that the evaluation of information systems and information technology is a difficult and complex activity, for which the use of an interpretivist framework, comprising context, content and process, offers additional insight for researchers. The use of the framework was demonstrated in a case study of the introduction of a new evaluation method in a large organization. Despite the maturity of the organizational context and the effort put into the development of the methodology, as well as the attention paid to the process of introducing the methodology into the organization, the project achieved only a relatively limited level of success. Changes in the organizational context, together with failings in the development process and the content of the methodology, ultimately left much of the methodology side-tracked. However, it could be argued that the organization learned much from the exercise. From a research perspective, the case demonstrates the dynamic nature of organizations and the useful insight provided by analysing such a case using an interpretive framework. It also provides guidelines for the successful management of the changes introduced by an IT investment appraisal practice. Perhaps the most important practical lessons derived from the current case study can be summarized as follows.

State the assumptions

Identify from the beginning the role and the expected impacts of the change introduced. Propose remedial actions for negative cases and attempt to resolve conflicts. It is also important that changes in the evaluation practices are tuned with other organizational changes.

Focus upon stakeholders and management involvement

Ensure management commitment and support (ie always have a 'champion' and a 'business sponsor'). Furthermore, ensure commitment from people at all levels within the enterprise to achieve the benefits of the change.
This requires:
• Partnership between those designing the change and those operating it. This could be achieved by encouraging stakeholders' participation during design and development; for example, by carrying out awareness exercises.
• Ownership of all aspects of the change at appropriate levels throughout the organization, as well as ownership of the expected benefits.
• Strong and direct communication links between business and systems people.

Investigate the political and social nature

Investigate thoroughly the political and social dimensions of the change. Special care should be taken over continuous management education and over responding to political and social impacts.
The role of 'infrastructure'

Investigate the existence of the necessary organizational and IT infrastructure to support the change introduced, as well as potential impacts on organizational structure and task descriptions.

Build on previous practices

Build upon previous organizational practices in the areas of project management, IT development methodologies, financial appraisal and capital investment appraisal, and study the way the evaluation method goes 'with the grain' of the organizational culture.

Methodological content and process

A rigorous evaluation mechanism should focus not on 'the good projects to be developed' (projects good per se) but on the portfolio of the best projects given the available resources. In other words, a portfolio approach to evaluating and prioritizing IT investments should be considered. In this case the key aspects of decision making are: What is it most important to do? (benefits to be gained); What is capable of being done? (resources available); and What is likely to succeed? (risks to be dealt with). At a practical and operational level:
• The business objectives, the business performance indicators and the investment requirements should be derived before initiating the appraisal exercise.
• Include all the possible and desirable value criteria (ie financial, intangibles, risks) and support an IT portfolio investment approach.
• Ensure that solid risk analysis and management techniques, as well as a benefits realization plan for tangible and intangible benefits, are adopted.
• Provide a variety of simple, user-friendly and well-supported tools which will encourage communication between stakeholders and promote organizational learning.
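The portfolio approach recommended above can be sketched as a small resource-constrained selection problem. Everything in this example — the project names, staffing figures, benefit scores and success probabilities — is invented for illustration; the point is simply that 'the best portfolio given the available resources' is an optimization over subsets of proposals, not a per-project yes/no decision:

```python
from itertools import combinations

# Hypothetical portfolio selection sketch: choose the subset of proposals
# that maximizes expected benefit (benefit weighted by chance of success,
# the 'likely to succeed' question) without exceeding the development
# resource available (the 'capable of being done' question).
projects = {
    "Legislative change":  {"staff": 15, "benefit": 90,  "success_prob": 0.95},
    "Revenue increase":    {"staff": 40, "benefit": 120, "success_prob": 0.70},
    "Cost reduction":      {"staff": 10, "benefit": 60,  "success_prob": 0.90},
    "Service improvement": {"staff": 20, "benefit": 70,  "success_prob": 0.80},
}
AVAILABLE_STAFF = 60

def portfolio_value(names):
    """Expected benefit of a portfolio ('important to do' x 'likely to succeed')."""
    return sum(projects[n]["benefit"] * projects[n]["success_prob"] for n in names)

def portfolio_cost(names):
    return sum(projects[n]["staff"] for n in names)

# Exhaustive search is fine for a handful of proposals; a real tool would
# use a knapsack-style optimization for larger portfolios.
best = max(
    (combo for r in range(len(projects) + 1)
           for combo in combinations(projects, r)
     if portfolio_cost(combo) <= AVAILABLE_STAFF),
    key=portfolio_value,
)
print(sorted(best), round(portfolio_value(best), 1))
```

Note that the highest-scoring single project ("Revenue increase") does not appear in the best portfolio: three smaller projects together deliver more expected benefit from the same resource pool, which is exactly the big-project-versus-portfolio comparison the critical review called for.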