Success measures for information systems strategic planning

Edmond P. Fitzgerald
Department of Information Systems, Faculty of Business, University of Southern Queensland, Toowoomba 4350, Australia

Received January 1993; revised paper accepted by Dr Marianne Broadbent, September 1993
The large investment in information technology (IT) and the increasing impact of information systems (IS) on the competitive capability of organizations have focused attention on the need for effective IS strategic planning. As a result, current methods of measuring IS strategic planning success (or effectiveness) have come under scrutiny. A review of the IS planning literature shows that despite the availability of many different approaches for measuring IS strategic planning success, all have major limitations. The determinants of IS strategic planning success are still to be identified and valid measurement instruments are yet to be defined. The objective of this article, therefore, is to focus attention on the need for research into the measurement of IS strategic planning success. In addressing this aim, after clarifying the terminology to be used and outlining the significance of this problem, this paper reviews the IS planning effectiveness literature. Current approaches for measuring IS strategic planning success are examined and, based on this analysis, two frameworks relating to its measurement are presented. The first framework provides an outline of approaches currently available and the second proposes directions for future research. Recommendations regarding the empirical work that will be necessary to test the adequacy of, and to extend, the research framework are then provided.

Keywords: information systems strategic planning, success of information systems strategic planning, information systems planning success measures, information systems planning effectiveness measures
Information systems (IS) have assumed an increasingly strategic role in organizations (Porter and Millar, 1985; Earl, 1989; King et al., 1989a; Keen, 1991) and there has been greater emphasis in recent years on organizational efficiency and effectiveness and a constant clamour for accountability. In this context, measuring the effectiveness (or success) of the planning of these systems would be expected to be of major concern to researchers and practitioners alike. A review of the relevant literature shows that, from the research perspective at least, this is not so. Very little significant research into ways of measuring the effectiveness of information systems strategic planning (ISSP) is reported. There has, however, been considerable research interest in aspects of ISSP effectiveness other than its measurement. For instance, a large part of IS planning research has focused on identifying, but not measuring, factors which contribute to successful and unsuccessful ISSP. In addition, there is a considerable body of
literature, both normative and descriptive, on the steps involved in planning information systems effectively.

Why then, despite the importance of ISSP to IS managers (Brancheau and Wetherbe, 1987; Davenport and Buday, 1988; Earl, 1989; Watson, 1989), has there been so little research interest in evaluating its effectiveness? The dearth of research may appear to indicate that both managers and researchers are prepared to accept at face value that ISSP is worthwhile, without attempting to prove its efficacy. A more plausible explanation is that the complexity and ill-defined nature of ISSP processes, and the difficulty of operationalizing the associated constructs, have meant that many researchers have avoided the area while the remainder are still grappling with the problems of identifying the determinants of ISSP success and of devising appropriate measurement instruments.

The objective of this paper is to focus attention on the need for research into the measurement of ISSP success. In addressing this aim, this paper will present two frameworks for evaluating ISSP success based on a review of the IS planning and related literatures. The first framework provides an outline of approaches currently available and the second proposes directions for future research. After clarifying the terminology to be used and considering the importance of this problem, this paper will concentrate on a review of the IS planning effectiveness literature. Current approaches for measuring ISSP success will be explored and, based on this analysis, the above-mentioned frameworks for evaluating ISSP success will be presented. Recommendations regarding the empirical work that will be necessary to test the adequacy of, and to extend, the research framework will then be provided.
Definition of terms

Despite wide research interest in IS planning over many years, as exemplified below, there is no consensus on the use or meaning of its major terms. Over a decade ago, the situation was so ill-defined that it was described as a 'semantics jungle' (Holloway and King, 1979, p. 74) and it is probably worse now due to the addition of even more terms. Because of this confusion and imprecision surrounding IS planning terminology, the terms being used in this paper will be clarified prior to exploring methods of measuring IS planning effectiveness.

Perusal of the IS strategic planning literature reveals that a variety of terms are used, generally interchangeably, to describe identical or similar planning activities. These terms include:

- information systems planning (ISP) (Galliers, 1987);
- information systems strategic planning (ISSP) (King, 1988);
- strategic planning for information systems (SPIS) (King, 1988);
- strategic information systems planning (SISP) (Earl, 1990);
- corporate information technology planning (CITP) (Department of Finance, 1991).

Adding to the confusion resulting from this plethora of terms is the lack of agreement on their meaning. An analysis of definitions of IS strategic planning shows the terms are usually, but not always, synonyms referring to the overall planning of information systems, as exemplified by Lederer and Sethi (1988, p. 445), who define information systems strategic planning as:

. . . the process of deciding the objectives for organizational computing and identifying potential computer applications which the organization should implement.
Sometimes, however, researchers reserve the term, strategic information systems planning, for the planning of strategic information systems (SIS) per se. For instance, in keeping with Wiseman’s definition of a SIS as ‘. . . a system which is used to support or shape the competitive strategy of an organization’ (Wiseman, 1985, p. 7), Rackoff et al. (1985) define strategic information systems planning as: . . . the planning of information systems used to support or shape an organization’s competitive strategy, its plan for gaining and/or maintaining advantage.
The above definition excludes the planning of systems which are classified as non-strategic. The intention in this paper is to encompass all information systems (operational, management control, strategic) which are planned by organizations. So, for the purposes of this paper, the term information systems strategic planning (ISSP) has been adopted and it will be used with the following meaning:

- in general terms: the planning undertaken by an organization when it seeks to determine its information systems requirements globally and systematically, so that it can prepare to meet its short-term and long-term needs (Wiseman, 1988, p. 76);
- more specifically: the process of identifying a portfolio of computer-based applications that will assist an organization in executing its business plans and consequently realizing its business goals (alignment) and/or the process of searching for applications with a high impact and the ability to create an advantage over competitors (impact) (Lederer and Sethi, 1988, p. 446).
Why it is important to measure ISSP success

Before addressing the problem of measurement of ISSP success, the reasons for so doing will be examined. Justification for evaluating the effectiveness of ISSP is noticeably absent in the IS planning literature; however, it is addressed in the corporate planning literature. For instance, Venkatraman and Ramanujam (1987, p. 688) defend their call for specific research attention to the concept of 'planning system success' on three grounds, all of which, it is argued, are equally applicable to IS planning success:

- theoretically, it is important to define 'planning system success' to assist in the development of a theory of formal planning;
- empirically, the concept provides a more relevant and valid conceptualization of the benefits of planning than surrogate indicators such as return on investment (ROI) and return on earnings (ROE);
- pragmatically, to assist organizations to adapt their planning process to changing corporate and environmental conditions.
The desire to measure ISSP effectiveness is a predictable outcome of the investment of significant resources in information systems/information technology (IS/IT) and of the increasing dependence of many organizations on IS/IT. In the early days of computing, large sums were invested in IS/IT, often with little requirement for accountability. However, when expected benefits frequently were not forthcoming, it was predictable that the day would come when management would want to be assured that this money was being well spent. In many organizations that day has now come (Hackett, 1990).

In proposing an instrument and a model for evaluating the success of information
centres (IC), Magal identifies three reasons for undertaking such an evaluation: to justify an IC's existence; to improve IC performance; and to motivate IC staff (1991, p. 91). Similar reasons are advanced for measuring ISSP success: to justify the undertaking of ISSP; to improve the process of ISSP; and to motivate ISSP stakeholders.

One way an organization may try to justify the undertaking of ISSP is to attempt to identify and measure benefits, both tangible and intangible. The potential benefits of ISSP have been widely recognized both in the IS planning literature (Pyburn, 1983; Porter and Millar, 1985; Galliers, 1987; Johnson and Carrico, 1988; Earl, 1989; Keen, 1991) and in practice. One indication that management believes benefits are at least potentially available is provided by the fact that ISSP is widely practised. For example, in Australia, since 1987, all Commonwealth Government departments and agencies have been required to produce an IS strategic plan on a three-yearly basis, and in 1991 this requirement was expanded to include all Commonwealth agencies which are subject to the Audit Act (Department of Finance, 1991). In addition, Galliers' survey of Australian IS managers (Galliers, 1987) found that ISSP was practised by 80 per cent of the organizations which he surveyed.

However, recent research into ISSP practice in Australia (Galliers, 1987), England (Earl, 1990) and the USA (Lederer and Sethi, 1988) has uncovered high levels of management dissatisfaction with ISSP outcomes. For instance, in his 1986 survey of information systems planning in Australia, Galliers found that 47 per cent of user managers rated IS planning as being '. . . less than successful' (Galliers, 1987, p. 30). The picture in the USA, according to Lederer and Sethi (1988, p. 453), is even bleaker: 53 per cent of managers were dissatisfied with strategic information systems plans. In a 1989 survey in the UK, Earl (1990, p. 271) found that 32 per cent of managers believed that ISSP '. . . was not worth doing'. Perhaps this explains why many surveys in recent years in the USA (Brancheau and Wetherbe, 1987), the UK (Earl, 1989), Europe (Davenport and Buday, 1988) and Australia (Watson, 1989) have ranked 'improving IS strategic planning' as the most critical information systems management issue facing IS managers. One of the major impediments to resolving this issue is the lack of a suitable measurement instrument, so an integral part of the process of improving IS strategic planning will be the development of methods to measure its effectiveness.

In summary, it is argued that the call for specific research attention to the concept of 'ISSP success' can be defended theoretically, empirically and pragmatically, and that the desire to measure ISSP effectiveness is a predictable outcome of the investment of significant resources in IS/IT and of the increasing dependence of many organizations on IS/IT. Reasons for undertaking such an evaluation include: to justify the undertaking of ISSP; to improve the process of ISSP; and to motivate ISSP stakeholders. Even though the potential benefits of ISSP are well recognized, surveys show high levels of management dissatisfaction with its outcomes. It is suggested that a necessary condition for improving IS strategic planning will be the development of methods to measure its effectiveness.
Previous research

Overview of ISSP research directions

A review of the IS planning literature shows that research has concentrated on:

- providing normative and descriptive models for undertaking ISSP;
- identifying factors which contribute to successful and unsuccessful ISSP; and
- evaluating the effectiveness of computer-based information systems and their implementation.
Before examining ISSP success measurement research per se, a brief review of the current status of research in these three related IS areas and in the strategic (corporate) planning area will be undertaken. This is relevant as both the direction and findings of this research have significance for the investigation of ISSP success.

Related ISSP research
The IS planning literature is not lacking in either normative (Bowman et al., 1983; Porter and Millar, 1985; Henderson and Sifonis, 1988) or descriptive models (Pyburn, 1983; Copeland and McKenney, 1988; Singleton et al., 1988; Earl, 1990; Waema, 1990) of ISSP. Following Schendel and Hofer (1979, p. 388), here the term 'normative' refers to 'how things should be done' in contrast with 'descriptive', which refers to 'how things are done'. Interestingly, normative work has generally preceded descriptive empirical work (Huff and Reger, 1987, p. 213) and, in the view of at least one researcher (Galliers, 1987), ISSP practice will improve when it heeds normative directives. Both normative and descriptive models may be of use in the measurement of ISSP effectiveness by providing benchmarks against which an organization's ISSP can be rated: normative models might provide 'ideal' standards, and descriptive models 'best practice' or industry standards.

Much of the recent research into IS planning has been directed, at least in part, at identifying the success/failure factors relating to the conduct of ISSP (Lederer and Mendelow, 1986; Galliers, 1987; Copeland and McKenney, 1988; Johnson and Carrico, 1988; Lederer and Sethi, 1988, 1991; King et al., 1989b; Tavakolian, 1989; Earl, 1990; Reich and Benbasat, 1990; Waema and Walsham, 1990; Broadbent and Weill, 1991). These factors, of course, facilitate the development of normative models of ISSP. In addition, however, as will be seen below, they are likely to have an increasingly important role in operationalizing the constructs in models designed to evaluate ISSP success.

The third area of IS research which is pertinent to ISSP success has as its focus the evaluation of the effectiveness of information systems and their implementation (Ein-Dor and Segev, 1978; King and Rodriguez, 1978; Lucas, 1978; Bruwer, 1984; Rivard and Huff, 1984; Doll, 1985; Miller and Doyle, 1987; Trice and Treacy, 1988; Raymond, 1990; Reich and Benbasat, 1990; Magal, 1991). Whilst this research is not directed at IS planning, its methodologies, constructs, variables and measurement instruments are obviously of interest to researchers attempting to measure ISSP effectiveness. Of its two most popular dependent variables, system usage will probably have only limited applicability in the measurement of ISSP success (too indirect), but user satisfaction is likely to have a significant role.

The final area of relevant research to be mentioned briefly is the literature dealing with the evaluation of the effectiveness of strategic (corporate) planning. As IS planning has much in common with strategic planning at the organizational and business unit levels, and as strategic planning has been an important focus of the management literature since the early 1960s, one might expect to find sound foundations upon which to base research into the measurement of ISSP success. Unfortunately this is not the case. Strategic planning research is equally handicapped by the lack of an appropriate operationalizing scheme for measuring the success of planning systems, and only in recent years has attention been directed to attempting to identify, operationalize and measure the key constructs (Venkatraman and Ramanujam, 1987, p. 687). However, despite its limitations,
there is some evidence in the IS planning literature that this research is benefiting ISSP research. As will be discussed in greater detail in the next section, the few areas where some progress has been made in measuring ISSP effectiveness have basically been extensions of earlier research in the strategic (corporate) planning field. For example, King's (1988) normative model for evaluating ISSP effectiveness is basically his earlier (1983, 1984) model for measuring organizational planning effectiveness, with IS terms substituted for management terms. Likewise, one of the few serious attempts to operationalize and measure key constructs relating to ISSP success is the work undertaken by the Raghunathans (1989) in extending to ISSP the seminal strategic planning success research of Venkatraman and Ramanujam (1987).

ISSP success measurement research
An examination of the IS planning literature has uncovered only a handful of research efforts in which an evaluation of ISSP effectiveness was the main objective (King, 1988; Raghunathan and King, 1988; Raghunathan and Raghunathan, 1989, 1990; Lederer and Sethi, 1991). Although it is not claimed that this list is comprehensive in terms of including all ISSP success measurement research, it can be considered representative of the state of the art in this area. It is only very recently, 1988 onwards, that research has focused on this topic. Prior to this, assessing ISSP success had been a secondary consideration of a number of studies (Pyburn, 1983; Galliers, 1987). An outline of the main features/principal findings of each of these research efforts follows.

In the early studies, only very basic indirect operational measures of ISSP effectiveness were used. For instance, to measure the degree of success of strategic IS planning, Pyburn (1983) asked four or five senior operating executives in eight organizations the following two questions and rated their responses on a three-point interval scale ranging from failure to success:

- the degree to which IS seemed to be addressing what they perceived to be the critical needs of the business (effectiveness); and
- the degree to which the IS organization seemed to be well managed (efficiency).
In his survey of IS managers in both the UK and Australia, Galliers (1987) had two questions dealing with the measurement of success in IS planning (ISP):

- How successful has ISP been in your organization (to be rated from 1, 'totally unsuccessful', to 4, 'highly successful')?
- Why has ISP been successful/unsuccessful in your organization? And how is ISP success measured (no scales but provision for brief comments)?

For both questions, the IS managers were asked to answer:

- from the IS/DP department viewpoint;
- from the senior management viewpoint; and
- from the middle management/user viewpoint.
So, in this case, the conclusions were based on perceptions (of the IS manager) and on perceptions of perceptions (the IS manager's perceptions of the perceptions of both senior and user management of ISSP success). Galliers' finding (1987, p. 253) that different stakeholders are likely to apply different success measures supports similar earlier findings in the corporate planning literature and highlights the need for multiple stakeholder analysis of ISSP success.

The limitations of these early studies notwithstanding, they did serve to direct
attention to the problem of measuring the effectiveness of ISSP and to highlight the need for further research.

The first significant advance on these early attempts came in 1988 with King's normative model (see Figure 1), in which he defined eight (A to H) evaluation points on a schematic model of ISSP. As mentioned earlier, this model was an adaptation of King's (1983, 1984) model for evaluating strategic (corporate) planning. According to King (1988, p. 105), assessments made at each of the eight evaluation points, when viewed as an overall 'effectiveness profile', constitute a comprehensive assessment of ISSP. The strengths of this model derive from the underlying methodological bases of its evaluation procedure (King, 1988, p. 106):

- multi-dimensional assessment: eight dependent variables in an attempt to cope with the complexity of ISSP;
- internal and external benchmarks: ISSP is evaluated in terms of its specific goals (internal) and performance standards of competitors (external);
- multiple stakeholder analysis: as ISSP serves a variety of interest groups.
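As an illustration only, the following Python sketch shows one way an 'effectiveness profile' of this kind might be assembled: each stakeholder group rates each evaluation point, and the result is reported per point rather than collapsed into a single score. The stakeholder groups, ratings and point labels are invented for the example; they are not drawn from King's model or data.

```python
from statistics import mean

# Hypothetical 1-5 ratings of eight evaluation points (labelled A-H) by three
# stakeholder groups; groups and values are illustrative only.
EVALUATION_POINTS = list("ABCDEFGH")

ratings = {
    "IS management":     [4, 3, 4, 5, 3, 4, 4, 3],
    "senior management": [3, 2, 4, 4, 2, 3, 3, 3],
    "user management":   [3, 3, 3, 4, 2, 4, 3, 2],
}

# The 'effectiveness profile' keeps one value per evaluation point (here the mean
# across stakeholder groups) instead of reducing ISSP success to a single number.
profile = {
    point: mean(group[i] for group in ratings.values())
    for i, point in enumerate(EVALUATION_POINTS)
}

for point, score in profile.items():
    print(f"Evaluation point {point}: {score:.2f}")
```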
However, the model is by no means the final solution to the problem of measuring ISSP effectiveness. It has at least four major limitations. First, no attempt has been made to validate the eight determinants of ISSP success upon which the measurement approach is based: are these necessary and sufficient criteria of ISSP effectiveness? Second, no single overall measure of the utility of an ISSP system is provided; the overall evaluation is based on subjective judgements of mainly 'soft' data. Third, following from the previous limitation, most of the measurement instruments for the key constructs are described in rather vague terms. For example, one of the metrics for the key concept, Relative Efficiency of the IS Planning System, is described as follows: 'It is relatively easy to estimate "industry standards" as well as standards for "well-managed" firms to which a firm might wish to be compared' (King, 1988, p. 106). Clearly, appropriate constructs must be conceptualized and relevant metrics devised. Finally, the nature of the multivariate relationship between the eight criteria of ISSP success needs to be determined and allowed for in the model.
[Figure 1. IS planning evaluation framework (King, 1988, p. 105). The schematic links informational inputs to IS planning, resource inputs to IS planning, the IS planning system, IS planning outputs and business performance, with external standards and a set of labelled evaluation points.]
Table 1. Recent ISSP effectiveness research

Raghunathan and King (1988). Methodology: survey of 140 firms.
- Dependent variable: impact of ISSP. Measurement instrument: UIS.
- Independent variables: extent of IS strategic planning, IS systems planning and plan implementation. Measurement instrument: Likert scales (5-point, from 'strongly agree' to 'strongly disagree').

Raghunathan and Raghunathan (1989). Methodology: survey of 189 firms.
- Dependent variable: ISSP success (overall acceptance of IS planning; resources provided for IS planning; link to firm concerns; user and top management involvement in IS planning; environmental considerations; internal considerations; planning systems capability). Measurement instrument: Likert scales (5-point, using multiple items).
- Independent variable: extent of change in usage of IS steering committee over past 5 years. Measurement instrument: Likert scale (5-point, from 'significant decrease' to 'significant increase').

Raghunathan and Raghunathan (1990). Methodology: survey of 192 firms.
- Dependent variable: ISSP success (improvements in planning system capability, covering 12 key capabilities of planning systems; fulfilment of planning objectives, covering 6 key objectives of planning systems). Measurement instrument: Likert scales (5-point, from 'much improvement' to 'much deterioration').
- Independent variables: top management support for ISSP; attention to internal design. Measurement instrument: Likert scales (5-point, multiple items).

Lederer and Sethi (1991). Methodology: survey of 80 firms.
- Dependent variable: ISSP success (satisfaction of the planner). Measurement instrument: Likert scale (5-point, from 'extremely satisfied' to 'extremely dissatisfied').
- Independent variable: problems of ISSP. Measurement instrument: Likert scales rating each of 49 ISSP problems (5-point, from 'not a problem' to 'extreme problem').
Table 1 contains the results of an examination of the remaining four studies in which measurement of ISSP success was the main objective. One point which is
highlighted by the presentation of the analysis of these studies in tabular form is that recent research is paying greater attention to methodological issues. In contrast with the earlier research reviewed above, all four studies give explicit attention to construct measurement issues. This is demonstrated by the attempts to operationalize the selected constructs and to measure their changes. However, as the researchers themselves admit (Raghunathan and King, 1988, p. 92; Lederer and Sethi, 1991, p. 117), their mainly unidimensional dependent variables are a very narrow operationalization of ISSP effectiveness. In addition, as is clearly indicated in Table 1 by the widespread use of Likert scales, there is a large degree of subjectivity involved in the measurement approaches adopted thus far. The contribution of this research, therefore, comes principally not from its findings, but from the directions it suggests for future research. This issue will be taken up later.

In summary, this review highlights the need for future research to be directed at conceptualizing and developing valid measurements of the key dimensions of the ISSP effectiveness construct. Venkatraman's (1989, p. 942) summation of Venkatraman and Grant's (1986) description of the status of business strategy research in the mid-1980s provides an accurate summary of the current status of research into ISSP effectiveness evaluation: 'most existing measures . . . are either nominal (and/or single-item) scales that have questionable measurement properties or multi-item scales whose measurement properties (such as reliability, and unidimensionality, convergent and discriminant validity as well as nomological validity) have not been systematically assessed'. As a guide for researchers planning to redress these deficiencies, Sethi and King's recent paper (1991) provides an excellent description of the normative process of construct measurement, identifies the difficult problems involved, and suggests some ways of dealing with these problems.
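By way of illustration only, and not drawn from any of the cited studies, the following Python sketch shows one such measurement-property check: Cronbach's alpha, a common reliability estimate for a multi-item Likert scale. The scale name, item count and response data are hypothetical.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Estimate the internal-consistency reliability of a multi-item scale.

    items: a respondents x items matrix of Likert scores (e.g. 1-5).
    """
    k = items.shape[1]                          # number of items in the scale
    item_vars = items.var(axis=0, ddof=1)       # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)   # variance of the summed scale score
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical 5-point Likert responses from six respondents to a four-item
# 'planning system capability' scale (invented data, for illustration only).
responses = np.array([
    [4, 5, 4, 4],
    [3, 3, 2, 3],
    [5, 5, 4, 5],
    [2, 2, 3, 2],
    [4, 4, 4, 5],
    [3, 2, 3, 3],
])

print(f"Cronbach's alpha = {cronbach_alpha(responses):.2f}")
```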
ISSP evaluation frameworks
In this section, two frameworks dealing with the measurement of ISSP effectiveness (or success) are described. The first categorizes the approaches currently available in the planning literature. The second prescribes a research approach by which, it is suggested, many of the limitations of the first framework may eventually be overcome.

Current approaches framework
As discussed above, a number of different approaches for evaluating the success of planning systems have been suggested in both the management and IS literatures. To assist researchers and organizations wishing to measure the effectiveness of ISSP systems, Table 2 brings these approaches together in a single framework. Venkatraman and Ramanujam's (1987) description of Cameron and Whetten's (1983) four approaches for organizational effectiveness measurement provided the foundation for the framework. These have been adapted to the IS environment and extended and modified by incorporating relevant research from the areas reviewed in the preceding section of this paper. It was necessary to add a fifth approach, impact judgement, to account for those methods which attempt to judge the effect ISSP has on performance and other organizational variables.

The current 'pre-scientific' state of research in this area is well exemplified in Table 2; all five approaches rely on subjective judgement. It appears we can measure ISSP success only indirectly.

Table 2. Current approaches to ISSP success measurement

1. Goal-centred judgement
Purpose: To determine the degree of fulfilment of the objectives of ISSP.
Previous research: Pyburn (1983); Galliers (1987); Venkatraman and Ramanujam (1987); King (1988).
Typical questions:
- To what extent have the objectives recorded in the IS strategic plan, and subsequently modified, been achieved?
- Examples, assuming typical ISSP impact and alignment goals: to what extent does the ISSP system facilitate the exploitation of IT for competitive advantage? To what extent is investment in IS/IT aligned with business goals?
Limitations:
- Dyson and Foster (1980, p. 164) list its 'severe drawbacks' as: tending to assume a static environment; setting of readily attainable goals by planners to ensure success; and the unduly subjective nature of evaluation.

2. Comparative judgement
Purpose: To compare the effectiveness of an organization's ISSP with that of other similar organizations.
Previous research: King (1988); Earl (1990).
Typical questions:
- How does this ISSP system (or component thereof) compare with similar systems in comparable organizations?
Limitations:
- Difficulty of obtaining data of such a commercially sensitive nature.

3. Normative judgement
Purpose: To compare the effectiveness of an organization's ISSP with 'ideal' benchmarks.
Previous research: King (1988) (an example of these 'ideal' benchmarks are the external standards depicted in Figure 1); Earl (1990); Lederer and Sethi (1991).
Typical questions:
- How many of the problems/success factors typical of ISSP has this system encountered/achieved/avoided?
- How does ISSP performance compare with that of an 'ideal' system? (This test can be applied to many aspects of ISSP: goals, process, implementation.) For example, an organization's ISSP process could be normatively evaluated by comparison with a generic model of ISSP such as that proposed by Bowman, Davis and Wetherbe (1983).
Limitations:
- Comparisons are useful only if the 'ideal' or external benchmarks are valued by (important) members of the ISSP stakeholder set.
- The referent for comparison must be changed to reflect changing conditions (Cameron and Whetten, 1983).

4. Improvement judgement
Purpose: To assess the degree of change (improvement/deterioration) in ISSP.
Previous research: Dyson and Foster (1980); Venkatraman and Ramanujam (1987); King (1988); Raghunathan and Raghunathan (1989, 1990).
Typical questions:
- How much has ISSP improved due to, for instance, the implementation of an IS Steering Committee?
Limitations:
- Trends and degree of improvement/deterioration are often based on subjective opinion.
- May be maintaining a consistently low/high standard, without variation.

5. Impact judgement
Purpose: To ascertain the impact of ISSP on specific organizational variables.
Previous research: Ein-Dor and Segev (1978); King (1988); Raghunathan and King (1988); Weill (1989); Raymond (1990).
Typical questions:
- What impact has ISSP had on the organization?
- What is the measurable effect of IT investment (assuming prior planning) on firm performance?
- What is the effect of the organizational context on ISSP success?
Limitations:
- Ignores the 'process' benefits of ISSP.
- If measured by 'bottom line' financial measures (King, 1988): unable to isolate effects of ISSP subsystems (loses diagnostic capability); effects of many intervening and moderating variables to account for.
In early ISSP effectiveness research (Pyburn, 1983; Galliers, 1987), generally only one of these approaches was used to evaluate the success of ISSP. However, as noted in Table 2, all five of the approaches have their limitations. More recently, in an attempt to overcome these limitations, some researchers have suggested using a combination of these approaches to evaluate ISSP success. This is exemplified in King's (1988) multi-dimensional assessment (see Figure 1). However, as highlighted in the critique of King's model, even when all five approaches are utilized a major impediment to valid assessment of ISSP effectiveness still remains: the determinants of ISSP success have not yet been validated nor rigorously operationalized. The following framework proposes an approach for addressing this limitation.

Research framework
Even though recent research into ISSP success measurement is paying greater attention to these methodological issues (Raghunathan and Raghunathan, 1989, 1990; Lederer and Sethi, 1991), the limitations of current approaches, highlighted by the framework in Table 2 and the review of previous research above, indicate that validated measurement instruments which will meet the needs of industry are yet to be devised. Before any progress can be made towards the development of standard instruments for measuring the effectiveness of ISSP systems, there are important and difficult conceptual problems to be resolved. Primary among these is identifying and validating both the determinants of ISSP success and the dimensions of ISSP systems that influence their success. The purpose of the following framework is to direct future research towards this objective. The framework is based on the work of Venkatraman and Grant (1986), Ramanujam, Venkatraman and Camillus (1986) and Venkatraman and Ramanujam (1987).

The ISSP success measurement research framework illustrated in Figure 2 indicates that future research in this area should include five critical steps. In the discussion which follows, key issues relating to the first four steps are presented; the fifth is adequately addressed in most research methods texts.

Identify/validate ISSP determinants of success. These are the criteria which determine whether planning has been successful. There is support from a number of sources that planning system success should be based on multiple facets of the system (Dyson and Foster, 1980; Ackoff, 1981; Ramanujam et al., 1986; Ansoff, 1990). Accordingly, King (1988) has proposed 10 different determinants of ISSP success but provides no indication that he has tested their content validity. In a recent paper, Sethi and King (1991) recommend the development of a preliminary model of a construct '. . . even if there is little previous research' (p. 455). The replication work of Raghunathan and Raghunathan (1990) is a first step in that direction.
Identify/validate the dimensions of ISSP systems that influence their success. These are the operational indicators for each of the constructs which have been identified as the determinants of success in step 1. A suggested source of these dimensions is the literature referred to above which identified the success factors and/or problems of ISSP. As yet there is no consensus on what these dimensions are, so research is needed to select and validate them.
[Figure 2. An ISSP success measurement research framework: the five steps described in the text, including identify/validate the dimensions of ISSP systems that influence their success.]
Devise measurement instruments for each of the constructs identified in steps 1 and 2 above. The difficulty in empirically determining ISSP success has led researchers to adopt surrogate constructs, such as user satisfaction, that are more readily measurable. However, the need to develop more sophisticated metrics for the ISSP process has been emphasized in the preceding sections of this paper. ISSP success should be measured not only by a number of different planning attributes but should also encompass the views of the entire ISSP stakeholder set (Galliers, 1987; Venkatraman and Ramanujam, 1987; King, 1988; Earl, 1990). The process determinants will need to be measured using some kind of ordinal (not ratio or even interval) scale. Most of the recent research (see Table 1) has used a set of such scales, one for each attribute, ranked from one to five. One limitation of this approach is the difficulty of obtaining a valid overall measure of system effectiveness, in that a simple weighted average or other aggregation of the scores for the multiple attributes would lead to a result of questionable validity (Cochrane and Zeleny, 1973); a small numerical sketch of this aggregation problem is given after these steps.
Link each of these constructs with its operational indicator. Venkatraman and Grant (1986) assert that such a step would go a long way towards reducing the problem described earlier in this paper, of using different terms to illustrate similar concepts.
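As flagged in step 3 above, the following Python sketch makes the aggregation caveat concrete. The dimension names, weights and ratings are invented for the example; it simply shows how two very different effectiveness profiles can collapse to the same overall score when multi-attribute Likert ratings are reduced to a single weighted average.

```python
import numpy as np

# Hypothetical ISSP success dimensions and weights (illustrative only, not from the cited studies).
dimensions = ["alignment", "implementation", "stakeholder satisfaction", "planning capability"]
weights = np.array([0.3, 0.3, 0.2, 0.2])

# Mean 1-5 Likert ratings on each dimension for two hypothetical organizations.
org_a = np.array([5, 1, 4, 2])   # strong alignment, weak implementation
org_b = np.array([3, 3, 3, 3])   # uniformly middling

for name, scores in (("A", org_a), ("B", org_b)):
    overall = float(weights @ scores)   # naive weighted-average aggregation
    print(f"Organization {name}: overall = {overall:.1f}, "
          f"profile = {dict(zip(dimensions, scores.tolist()))}")

# Both organizations collapse to an overall score of 3.0 even though their
# effectiveness profiles differ sharply: the aggregate hides exactly the
# diagnostic detail that a per-dimension profile retains.
```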
Conclusion

Even though the potential benefits of ISSP are well recognized, surveys in major western nations have found high levels of management dissatisfaction with its outcomes. As significant resources are invested in IT and many organizations are becoming increasingly dependent on it, it is predictable that current methods of measuring ISSP effectiveness have come under scrutiny. Despite the availability of many different approaches (Table 2), research has found that all have major limitations.

The examination of previous research highlighted the need for future efforts to be directed at conceptualizing and developing valid measurements of the key dimensions of the ISSP success construct. For, as is shown in Table 1, just as Venkatraman and Grant (1986) found when assessing the status of business strategy research in the mid-1980s, 'most existing measures (of ISSP success) are either nominal and/or single-item scales that have questionable measurement properties or multi-item scales whose measurement properties, such as reliability, and unidimensionality, convergent and discriminant validity as well as nomological validity, have not been systematically assessed' (Venkatraman, 1989, p. 942). A research framework (Figure 2) containing five critical steps has been proposed to enable future research to progress systematically towards a resolution of the complex problem of measuring ISSP effectiveness.
References

Ackoff, R (1981) 'On the use of models in corporate planning' Strategic Manage. J. 2, pp 353-359
Ansoff, H I (1990) The New Corporate Strategy Penguin, New York
Bowman, B, Davis, G and Wetherbe, J (1983) 'Three stage model of MIS planning' Inf. & Manage. 6, 1, pp 11-25
Brancheau, J C and Wetherbe, J C (1987) 'Key issues in information systems management' MIS Quarterly 11, 1, pp 23-45
Broadbent, M and Weill, P (1991) 'Developing business and information strategy alignment: a study in the banking industry' Proc. of 12th Int. Conf. on Information Systems New York (16-18 December)
Bruwer, P (1984) 'A descriptive model of success for computer-based information systems' Inf. & Manage. (April), pp 63-67
Cameron, K and Whetten, D (1983) 'Some conclusions about organizational effectiveness' In Cameron, K and Whetten, D (eds) Organizational Effectiveness: A Comparison of Multiple Methods Academic Press, New York, pp 261-277
Cochrane, J and Zeleny, M (1973) Multiple Criteria Decision Making University of South Carolina Press, Columbia, SC
Copeland, D and McKenney, J (1988) 'Airline reservations systems: lessons from history' MIS Quarterly 12, 3, pp 352-370
Davenport, T and Buday, R (1988) 'Critical issues in information management in 1988' Index Group
Department of Finance (1991) Developing a Business Driven IT Strategy: Corporate IT Planning Guidelines Department of Finance, Canberra
Doll, W (1985) 'Avenues for top management involvement in successful MIS development' MIS Quarterly 9, 1, pp 17-35
Dyson, R and Foster, M (1980) 'Effectiveness in strategic planning' Eur. J. Operational Res. 5, 3, pp 163-170
Earl, M J (1989) Management Strategies for Information Technology Prentice-Hall, London
Earl, M J (1990) 'Approaches to strategic information systems planning experience in twenty-one United Kingdom companies' Proc. of 11th Int. Conf. on Information Systems (ICIS) Copenhagen (16-19 December) pp 271-277
Ein-Dor, P and Segev, E (1978) 'Organizational context and the success of management information systems' Manage. Sci. 24, 10, pp 1064-1077
Galliers, R (1987) 'Information systems planning in Britain and Australia in the mid-1980s: key success factors' Unpublished PhD Thesis, London School of Economics and Political Science
Hackett, G (1990) 'Investment in technology: the service sector sinkhole' Sloan Manage. Rev. (Winter), p 97
Henderson, J and Sifonis, J (1988) 'The value of strategic planning: understanding consistency, validity, and IS markets' MIS Quarterly 12, 3, pp 187-202
Holloway, C and King, W (1979) 'Evaluating alternative approaches to strategic planning' Long Range Planning 12, 4, pp 74-78
Huff, A and Reger, R (1987) 'A review of strategic process research' J. Manage. 13, 2, pp 211-236
Johnson, H R and Carrico, S (1988) 'Developing capabilities to use information strategically' MIS Quarterly 12, 1, pp 37-48
Keen, P G (1991) Shaping the Future: Business Design Through Information Technology Harvard Business School, Boston, MA
King, W (1983) 'Evaluating strategic planning systems' Strategic Manage. J. 4, 3, pp 263-278
King, W (1984) 'Evaluating the effectiveness of your planning' Managerial Planning 33, 2, pp 4-9
King, W (1988) 'How effective is your information systems planning?' Long Range Planning 21, 5, pp 103-112
King, W and Rodriguez, J (1978) 'Evaluating management information systems' MIS Quarterly 2, 3, pp 43-52
King, W, Grover, V and Hufnagel, E (1989a) 'Using information technology for sustainable competitive advantage: some empirical evidence' Inf. & Manage. 17, 2, pp 87-93
King, W, Grover, V and Hufnagel, E (1989b) 'Seeking competitive advantage using information-intensive strategies: facilitators and inhibitors' (Chapter 3) In Laudon, K C and Turner, J A (eds) Information Technology and Management Strategy Prentice-Hall, Englewood Cliffs, NJ
Lederer, A and Mendelow, A (1986) 'Issues in information systems planning' Inf. & Manage. 10, 5, pp 245-254
Lederer, A and Sethi, V (1988) 'The implementation of strategic information systems planning methodologies' MIS Quarterly 12, 3, pp 445-462
Lederer, A and Sethi, V (1991) 'Critical dimensions of strategic information systems planning' Decision Sciences 22, 1, pp 104-119
Lucas, H (1978) 'Empirical evidence for a descriptive model of implementation' MIS Quarterly 2, 2, pp 27-42
Magal, S (1991) 'A model for evaluating information center success' J. Manage. Inf. Syst. 8, 1, pp 91-106
Miller, J and Doyle, B (1987) 'Measuring the effectiveness of computer-based information systems in the financial services sector' MIS Quarterly (March), pp 107-124
Porter, M and Millar, V (1985) 'How information gives you competitive advantage' Harvard Bus. Rev. (July-August), pp 149-160
Pyburn, J (1983) 'Linking the MIS plan with corporate strategy: an exploratory study' MIS Quarterly 7, 2, pp 1-14
Rackoff, N, Wiseman, C and Ullrich, W A (1985) 'Information systems for competitive advantage: implementation of a planning process' MIS Quarterly 9, 4, p 285
Raghunathan, T S and King, W (1988) 'The impact of information systems planning on the organization' Omega 16, 2, pp 85-94
Raghunathan, B and Raghunathan, T S (1989) 'MIS steering committees: their effect on information systems planning' J. Inf. Syst. 3, 2, pp 104-116
Raghunathan, B and Raghunathan, T S (1990) 'Planning system success: replication to the information systems context' Working Paper, University of Toledo, OH
Ramanujam, V, Venkatraman, N and Camillus, J C (1986) 'Multi-objective assessment of effectiveness of strategic planning: a discriminant analysis approach' Acad. Manage. Rev. 29, 2, pp 347-372
Raymond, L (1990) 'Organizational context and information systems success: a contingency approach' J. Manage. Inf. Syst. 6, 4, pp 5-20
Rivard, S and Huff, S (1984) 'User developed applications: evaluation of success from the DP department perspective' MIS Quarterly 8, 1, pp 39-50
Reich, B and Benbasat, I (1990) 'An empirical investigation of factors influencing the success of customer-oriented strategic systems' Inf. Syst. Res. 1, 3, pp 325-347
Schendel, D and Hofer, C (eds) (1979) Strategic Management: A New View of Business Policy and Planning Little Brown & Co, Boston, MA
Sethi, V and King, W (1991) 'Construct measurement in information systems research: an illustration in strategic systems' Decision Sciences 22, 3, pp 455-472
Singleton, J, McLean, E and Altman, E (1988) 'Measuring information systems performance: experience with the management by results systems at Security Pacific Bank' MIS Quarterly 12, 2, pp 325-338
Tavakolian, H (1989) 'Linking the information technology structure with organizational competitive advantage: a survey' MIS Quarterly 13, 3, pp 309-318
Trice, A and Treacy, M (1988) 'Utilization as a dependent variable in MIS research' Data Base 19, 3/4, pp 33-41
Venkatraman, N (1989) 'Strategic orientation of business enterprises: the construct, dimensionality, and measurement' Manage. Sci. 35, 8, pp 942-962
Venkatraman, N and Grant, J (1986) 'Construct measurement in organizational strategy research: a critique and proposal' Acad. Manage. Rev. 11, 1, pp 71-87
Venkatraman, N and Ramanujam, V (1987) 'Planning systems success: a conceptualization and an operational model' Manage. Sci. 33, 6, pp 687-705
Waema, T M (1990) 'Information systems strategy formation in financial services sector organizations' Unpublished PhD Thesis, Cambridge University
Waema, T M and Walsham, G (1990) 'Information systems strategy formulation' Inf. & Manage. (January), pp 29-39
Watson, R T (1989) 'Key issues in information systems management: an Australian perspective - 1988' The Australian Computer Journal 21, 2, pp 118-129
Weill, P (1989) 'The relationship between investment in IT and firm performance in the manufacturing sector' Working Paper No. Z8, GSM, University of Melbourne
Wiseman, C (1985) Strategy and Computers: Information Systems as Competitive Weapons Dow Jones-Irwin, Homewood, IL
Wiseman, C (1988) Strategic Information Systems Irwin, Homewood, IL