Copyright © IFAC Integrated Systems Engineering, Baden-Baden, Germany, 1994
INFORMATION SYSTEMS EVALUATION AND EFFECTIVENESS
GEORGE E. HARAMIS, University of Macedonia, Egnatia 156, P.O. Box 1591, GR-54006 Thessaloniki, Greece
Abstract. Information Systems (I.S's) evaluation is traditionally based on development and operating cost. This is very natural, since the new system aims at a higher effectiveness of the corporation in the specific domain. There are, however, cases in which the new system offers, in addition to the economic advantages for which it has been planned, further benefits which are not apparent at first sight. With regard to the above concept, this paper examines: the general aspects of I.S's evaluation; the criteria for I.S's evaluation; the qualitative and quantitative measurements for I.S's evaluation; and I.S's reliability, operational availability and effectiveness. Keywords. Systems Evaluation, Effectiveness, Development, Selection.
1. GENERAL ASPECTS OF INFORMATION SYSTEMS EVALUATION

The decision for Information Systems development, as well as for Information Systems selection, is an important and difficult job which strongly affects the corporation's future. One very significant factor for optimizing the effectiveness of an Information System (I.S.) is its careful and effective evaluation.

The term "evaluation" includes the sense of control, which signifies the estimation of a "quantity" called effectiveness. It is obvious, however, that we cannot control something without being able to measure it first. The effectiveness of a system can be measured by comparing its output with the cost of system implementation and operation. Measuring a system's effectiveness is based on the estimation of the benefits which will result from utilizing its output. These benefits usually are: increase in productivity, better job planning, better quality control, better inventory follow-up, better credit control, better customer service, keeping the cost of work within limits, and restricting the data-processing cost. In general, all these cases of benefits appear either as an increase in profits or as a decrease in cost; hence the analyst who evaluates the system must refer to the profit or cost by presenting the related benefit in terms of corresponding quantitative measures.

In addition, the evaluator must work objectively during the evaluating process; this is not a very easy task and, for this reason, several objective programs and/or procedures have been developed. The effort to be objective during a system evaluation usually limits the sense of evaluation; the most common cases of such restrictions refer to:
- the requirements which should exist in relation to the system's capacities, determining the criteria for assessing whether the objectives for which the system has been developed have been met,
- selecting one significant system parameter as a basis for the evaluation and, at the same time, setting aside other parameters, and
- avoiding the study of the system's elements which do not lend themselves to research, study or measurement.

It is also known that although there are no difficulties in specifying the sense of information and its uses, it is not easy to identify the "informative function" within the system, while it is even less easy to evaluate it. On the other hand, the functions of analysis and design manifest themselves differently from one system to another, and depend a lot on the Corporation's as well as on the EDP Centre's organization. All of the above justify the thought that system evaluation should preferably be done by the specialist (analyst), who must be fair and must have the required experience, since he is the only one capable of estimating the variability, uncertainty and time parameters, etc., better than any "automatic" and "objective" I.S. for evaluating I.S's.

In practice, during a system's evaluation, answers are given to questions similar to the following:
- What is the system like?
- What can be accomplished by the system?
- What is its cost?
- To what extent and at which points can it be improved?
- How is its support (maintenance) done?
- How easy is its modification?
- How reliable is it?

More specifically, regarding the system's design evaluation, a method of questioning the requirements at each step of the design process will help to ensure the correctness of the system's development. A start can be made by using questions to make sure that the basic aspects of What? Where? When? Who? How? Who else? and Why? have been properly considered for each part of the design developed.

This process can then be expanded into more detail, by evaluating each part of the design against the following:
• function / purpose,
• performance,
• reliability,
• maintainability,
• flexibility,
• ease of implementation,
• ease of production,
• ease of validation,
• personnel requirements (training etc.),
• location(s) / machinery,
• material, and
• services.

Obviously, if one or more of these tests reveals a deficiency, then the item being evaluated must be redesigned. Furthermore, the evaluation process is more effective when its criteria are divided into criteria set by the User Department and criteria set by the I.S's Department.

Finally, it is better to check the accuracy and completeness of the system development continuously as it progresses (rolling evaluation) rather than leave it to the end of the development, during the step of the formal systems walkthroughs or inspections.

2. CRITERIA FOR I.S's EVALUATION

It is believed that it is clearer for the Criteria for I.S's Development Evaluation and the Criteria for I.S's Selection Evaluation to be examined separately in this paper.
2.1. Criteria for I.S's Development Evaluation
The dynamic nature of today's I.S's makes it necessary to regard them not as productive "equipment" but as dynamic features of control and administration of corporate activity, within the frame of their economic evaluation. In order to accomplish this, systems evaluation must be done: a. based on a listing of the corporation's future, and probably variable, economic procedures, and b. in relation to the precision, completeness and time plans of these procedures.
These measurements are nevertheless complicated and, in any case, require the existence of an adequate model of the corporation. On the other hand, the criteria and the procedure for I.S's development (or selection) evaluation are influenced by the resources available in the I.S's Department.
Users' Criteria
Stated as the users' major criteria for systems evaluation are:
- the precision of the system's objectives,
- the full achievement of these objectives,
- the system's ability to function properly, i.e. its reliability,
- its availability at any given instant,
- its ability to adapt itself (flexibility) to any conditions whatsoever, and
- how easily it can be updated (maintainability).
I.S's Department's Criteria
In addition to the users' criteria for systems evaluation, stated as the I.S's Department's criteria are:
- the time required for the system's development,
- the computer's operating time during its execution,
- the possibility of improvement and modification,
- the completeness of its documentation,
- how easily it is operated by the computer operators, the "input-output" controllers and the users, and
- its contribution to the education of the I.S's Department's personnel.
Evaluation Process
In all cases of systems evaluation, whether or not the criteria are divided into users' criteria and I.S's Department's criteria, the evaluating process must always be done in two phases:
- In the first phase, which will take place during system analysis and design, an evaluating matrix must be formulated which will include all criteria, as well as the minimum acceptable requirements (percentages) set for each criterion.
- In the second phase, which must take place immediately after system test and parallel run, the matrix of the first phase must be completed with the quantitative measurements which have been derived during the evaluation of the new system; a comparison will then follow with each criterion's minimum requirements, and the system will either be accepted or the project team will be asked to improve it at the points where it is inadequate.
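As an illustration, the second-phase comparison could be sketched as follows; the criteria names, minimum requirements and measurements used here are all hypothetical, not taken from the paper:

```python
# Hypothetical evaluating matrix: each criterion carries the minimum
# acceptable requirement (%) fixed in the first phase, during analysis
# and design.
minimum_requirements = {
    "precision of objectives": 90,
    "reliability": 95,
    "availability": 98,
    "documentation completeness": 85,
}

# Second phase: quantitative measurements derived while evaluating the
# new system, after system test and parallel run (values illustrative).
measurements = {
    "precision of objectives": 93,
    "reliability": 96,
    "availability": 97,
    "documentation completeness": 88,
}

# Compare each measurement with the criterion's minimum requirement;
# the system is accepted only if no criterion falls short.
shortfalls = {c: (measurements[c], req)
              for c, req in minimum_requirements.items()
              if measurements[c] < req}

if shortfalls:
    print("Improve the system at the following points:")
    for criterion, (got, req) in shortfalls.items():
        print(f"  {criterion}: measured {got}%, required {req}%")
else:
    print("System accepted.")
```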
2.2. Criteria for I.S's Selection Evaluation

In current times, it is often preferable to purchase and install a ready system rather than one which must be developed by the systems analysts and programmers of the corporation. The most important usual reasons for this are to save time and money and, in some instances, the inability of the I.S's Department to develop such a large system. In these cases, the purpose of the evaluation of the systems proposed by different software houses is to select the most appropriate system for the corporation. The evaluation should be based on the following criteria:

Financial
- The cost of the package.
- Whether or not any "no-charge" items are offered (i.e. free training, free testing time, etc.).

Software
- Programs language(s).
- Programs structure.
- Data Base Management System (DBMS).
- Number of batch processing programs.
- Number of real time processing programs.
- Possibilities for enhancements and/or modifications to the system.
- Systems documentation; existence/completeness/quality with regard to the following:
  • the general description of the system,
  • the user's manual,
  • the programs specifications,
  • the files description,
  • the data preparation manual, and
  • the operations manual.

Hardware
- Required size of CPU and peripherals.
- Whether or not back-up is required for CPU and peripherals.
- Disks capacity and number of drives.
- Tapes number, density and number of units.
- VDU's type, number and location.

Installation
- Size and structure of the group for the system's installation.
- Scheduled time for each of the installation phases (including system test and parallel run or cutover).
- Influence on the Corporation's organization.

Training
- Training courses provision for users, systems analysts, programmers, system programmers, computer operators and data entry operators.
- Existence of training manuals.
- Required training time for each I.S's Department's specialty.
Miscellaneous
- Existence of favorable contract terms.
- Reasons, duration and relative recovery features in the cases of system's downtime.
- Terminal response time (worst / average / best).
- Size of the project team for the system's maintenance.
3. QUALITATIVE AND QUANTITATIVE MEASUREMENTS FOR I.S's EVALUATION
The quality of an I.S. depends on the quality of its analysis, its design and its programs. The most important parameters which can be used by an empirical technique for the quantitative and qualitative measurements during system evaluation are:
The System's Documents
The number and the contents of the documents (forms) used by the present system, whether or not it operates in a computerized manner, indicate the cases which the analyst has to analyze in order to determine the input of the new system and to proceed to the design, not only of the corresponding new documents, but of the complete system (flow charts, file design, etc.).
The System's Complexity
The degree of the system's complexity can be determined during the stage of its preliminary study, according to the scale of complexity of its programs (very simple, simple, complicated, very complicated).
The System's Functions
The number of present functions, i.e. the number of functions performed by the system before it has been computerized (or before its improvement, if it has already been computerized), is one of the significant factors determining the amount of work required for system development. Other factors are the number of phases of the new system, the number of new programs, as well as the number of programs which will be subject to change, in case the present (computerized) system has to be modified.
The System's Documentation
Furthermore, the technique of qualitative and quantitative measurements must investigate how closely the project team has followed the standards which correspond to each step of the system's development phases. These steps are:
- Study of the present system (meetings, collection and study of the used documents, conferences, study of files, etc.).
- Determining the requirements of the new system.
- Cost study of the development and operation of the new system.
- System's analysis and design (design of new documents, general and detailed flow charts, files, etc., and determining the required times for system implementation and execution).
- Specifications for new programs.
- Specifications for modifications to existing programs.
- Keeping the programmers informed of the requirements of the new programs or of the requirements resulting from the modifications to the present system.
- Design of block diagrams of the programs.
- Programs coding, compilation and debugging.
- Programs test.
- Documentation files for the system's analysis and design, the programs, and the system's operation.
The Value of Project Team Members
The determination of the value of the project team members, or the team's capital in Human Assets (H), is expressed approximately by the following equation:

H = Σ_{i=1}^{n} Σ_{j=v}^{µ} (B_{ij} - C_{ij}) · P_{i,j+1} / (1 + r)^{j+1-v}

where
B_ij = benefits added to the team (corporation) from the work of member i in the year j,
C_ij = cost of the occupation of member i in the year j,
n = number of team members,
v = current age,
µ = number of years up until the departure of the members, due to age limit,
r = coefficient of the present value, and
P_i,j+1 = probability that member i remains in the team up to the end of year j+1.

Certainly the benefits added to the team as a result of one member's work are impossible to calculate precisely. An approximate estimate of the value of a member who will be replaced by another can be based on the calculation of the cost of selecting an analyst or programmer, training him, and updating him on team problems. This cost, consequently, will influence B_ij and C_ij in the above equation.
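As a worked sketch of this approximation (the equation as reconstructed above; every figure below is hypothetical):

```python
# Hypothetical data for a two-member team: for each member i, the net
# benefit (B_ij - C_ij) and the retention probability P_{i,j+1} for each
# year j, counted here relative to the current age, so v = 0.
v, mu = 0, 2            # evaluate years j = 0, 1, 2
r = 0.10                # coefficient of the present value (discount rate)
members = {
    1: {"net_benefit": [20_000, 22_000, 24_000], "p_stay": [0.95, 0.90, 0.85]},
    2: {"net_benefit": [15_000, 16_000, 17_000], "p_stay": [0.98, 0.96, 0.94]},
}

# H = sum over members i and years j of
#     (B_ij - C_ij) * P_{i,j+1} / (1 + r)^(j+1-v)
H = sum(
    m["net_benefit"][j] * m["p_stay"][j] / (1 + r) ** (j + 1 - v)
    for m in members.values()
    for j in range(v, mu + 1)
)
print(f"Human assets H = {H:,.0f}")   # about 87,000 for these figures
```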
The Program's Effectiveness
Especially for the development phase of the system programs, the technique can use parameters to measure the quality, i.e. the
effectiveness of each program; these parameters are based on:
- program structure,
- program size,
- adequacy of test data, and
- locating any points in the program which leave room for improvement.
The size of the program, especially with regard to computers having a limited memory capacity, is one of the most important factors concerning its productivity. In the technique for systems evaluation, it is possible to use the coefficient given by the ratio of the estimated to the actual size of the program in bytes:
ESTIMATED SIZE (Bytes) / ACTUAL SIZE (Bytes)

The Time Estimation
It is also possible to use in the evaluation technique the corresponding coefficient given by the scheduled and the actual processing time of the program:
SCHEDULED PROCESSING TIME / ACTUAL PROCESSING TIME
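Both coefficients are simple ratios; a minimal sketch, with hypothetical size and time figures:

```python
# Illustrative figures for the two coefficients used by the evaluation
# technique; estimates come from the design phase, actuals from operation.
estimated_size_bytes = 48_000
actual_size_bytes    = 60_000
scheduled_time_sec   = 540.0
actual_time_sec      = 600.0

size_coefficient = estimated_size_bytes / actual_size_bytes   # 0.80
time_coefficient = scheduled_time_sec / actual_time_sec       # 0.90

# A coefficient below 1 means the program grew beyond its estimate
# (or ran longer than scheduled); values near 1 indicate good estimates.
print(f"size coefficient: {size_coefficient:.2f}")
print(f"time coefficient: {time_coefficient:.2f}")
```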
4. I.S's RELIABILITY, OPERATIONAL AVAILABILITY AND EFFECTIVENESS
The existence of I.S's gives the corporation the opportunity to utilize these systems in achieving its goals and objectives. Operational Availability, or readiness, of the existing system is a function of its Reliability and its Maintainability, because the system either operates normally (due to its reliability) or is subject to failures (errors). In the latter case, the system must be brought into good condition (modified or improved) so it can resume operation.
System's Reliability is defined as the probability that the system will operate normally according to the prescribed conditions and for one (at least) given time period. Reliability is a probability related to the event that no error (failure) appears during the operation of the system.

It is evident that the best I.S. becomes useless when errors occurring in the system are not removed in time, or when there is no possibility of foreseeing and dealing with a fault during the phases of the system's development or, especially, while the system is operating. System faults can be due to combinations of causes and/or to single causes. The most usual causes are faulty study (analysis-design), faulty implementation (programs), faulty operations and, finally, faulty maintenance (modifications, improvements).

In general, reliability, or its complement, i.e. unreliability or probability of failure (error), creates during the development stage of the system the typical and essential requirements for its rational analysis-design and programming, as well as for the maintenance of the system during its operation. It is obvious that knowledge of the elements of the system which is to be computerized is necessary to clarify the extent and the requirements of the job, as well as to identify possible areas for improvement. This is important now that the costs for systems development and operation are especially high.

Quality control is the most significant element of reliability; it ensures that the results obtained by system design are converted into an "operating" I.S., according to the specifications given in the analysis.

It can be said that Maintainability is the probability that a failure will be repaired within a specified time after it occurs.

System's Operational Availability (OA) depends on:
- the system's reliability, since high reliability signifies a high probability of normal system operation (without interruptions due to errors), and
- the existence of a high degree of maintainability, as a result of locating and repairing errors in a minimum of down-time.

The Availability (A) of a system may be given by the relation:

A = m / (m + µ)

where
m = Mean Time Between Failures (MTBF), and
µ = Mean Down Time (MDT).

Alternatively, Availability may be defined as the probability that the system is operating at a given instant.
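A short sketch of the relation, with hypothetical MTBF and MDT figures:

```python
# Availability from the relation A = m / (m + mu), where
# m = Mean Time Between Failures and mu = Mean Down Time (hours).
m_mtbf = 180.0   # hours of normal operation between failures (hypothetical)
mu_mdt = 20.0    # hours to restore the system after a failure (hypothetical)

A = m_mtbf / (m_mtbf + mu_mdt)
print(f"Availability A = {A:.2f}")   # 0.90: operating 90% of the time
```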
It is understood that a high degree of operational availability requires rapid notification of the occurrence of an error
and quick intervention by the analysts and programmers and, in some cases, by the users.

System's Effectiveness (SE) is considered to be the system's "ability" to solve or to process the relevant problems or data according to the expectations (system's requirements) of its development, and it is expressed by the relation:

SE = OA * RM * DE

where
OA = Operational Availability,
RM = Reliability of Mission, defined as the probability that an I.S. will operate without errors until the end of its mission (scope), under the presupposition of its successful operation at the beginning of the mission, and
DE = Design Efficiency, the probability that an I.S., during its mission, will operate in accordance with its design requirements and will give the required output.
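A small worked sketch of the relation, with hypothetical probabilities:

```python
# System's Effectiveness from SE = OA * RM * DE (all values hypothetical).
OA = 0.90   # Operational Availability
RM = 0.95   # Reliability of Mission: error-free operation to mission end
DE = 0.98   # Design Efficiency: output conforms to design requirements

SE = OA * RM * DE
print(f"System's Effectiveness SE = {SE:.3f}")   # about 0.838
```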
It is evident that when we speak only about System's Reliability, we suppose that the Operational Availability and the Design Efficiency are both 100%.