Software quality in the fourth-generation technique environment

by Trevor D Crossman

Abstract: Quality is a problem to software developers, as it is difficult to define and measure. The increasing use of fourth-generation techniques is helping to resolve the maintenance problem, which is a symptom of poor quality software. However, 4GTs really shift the emphasis of quality control, rather than remove the problem. There is still the need to specify quality factors and measure them. In the 4GT environment users have a greater opportunity to become involved in system development, and their role must then be defined.

Keywords: data processing, software techniques, fourth-generation languages, quality.

Professor Crossman is head of the Division of Information Systems at the University of the Witwatersrand.

0011-684X/85/100004-03$03.00 © 1985 Butterworth & Co (Publishers) Ltd.

Both managers and technicians involved in the software development industry are frequently faced with dilemmas. One common dilemma is related to the building of quality software. A sure way of widening the credibility gap between the user community and the computer industry is to implement poor quality software. It will not be long before a user loses confidence in, and commitment to, a system which is difficult to use, is consistently unreliable and takes significant effort to repair. Unfortunately, however, the traditional software development environment does not ensure automatically that quality programs will be developed. In fact, it sometimes positively hinders this possibility.

Because this dilemma causes concern, both researchers and practitioners have given the problem serious attention. Significant work has been done to try to define software quality and to suggest methods of measuring this quality both during development and after implementation of the system1-6. So, theoretically, there is the possibility that quality software will be built, in spite of the traditional building methods. However, in practice the problem remains. It is common for projects to begin without the required levels of software quality being defined. It is equally common for projects to be completed without any formal attempt being made to measure the quality of the software implemented. In this regard it is almost the norm for software developers to aim at nothing, and to hit it. (In fact, if any attempt is made to measure anything, it is likely to be the 'productivity' of the programmers, as if it is possible to measure this without the quality of their work being controlled7.)

Unfortunately, proof that building quality software constitutes a problem is referred to so often that the impact of the proof is in danger of being lost. It is widely accepted that effective software quality management is ultimately reflected in the implemented system's maintenance costs. High maintenance costs are a good indicator of low quality software. Often we are reminded that maintenance costs are a major portion of the total costs of a system1,6. It must be concluded, therefore, that the ability to build quality software does not appear to be widely distributed. There is overwhelming evidence that the dilemma remains.

Will the problem go away?

Claims are made that advances in technology will change all this. The increased use of fourth-generation techniques (4GT) can make a positive contribution towards resolving the maintenance problem8,9: algorithm construction, when done by humans, tends to be error-prone, but the use of nonprocedural languages relieves system developers of this responsibility. Screen formatters and very high-level languages accelerate the software development activity and help make the whole task of systems implementation less labour-intensive. Prototyping, with its claim that it is easier to identify weaknesses in existing systems than to describe what is needed in an imaginary system9, simplifies systems requirements definition. So, 4GT enable systems to be built quickly, to contain fewer errors and to be easier to change. This must contribute significantly to resolving the maintenance problem.

Now, if there is hope that the maintenance problem will be lessened, is it not tempting to assume that 4GT will help automatically to resolve the problem of developing quality software? This wish seems to be reinforced by the fact that most measures of software quality are based on code, and systems developers are less and less involved in writing lines of executable code. Added to this, in the 4GT environment developers have little or no control over software quality factors such as program structuredness, module cohesion and algorithm simplicity - all of which are regarded as central to the quality of software. It appears as if the use of 4GT removes one of the software quality dilemmas, and so the problem appears to be resolved.

Data Processing vol 27 no 10, December 1985

The new environment

It is important that conclusions in this regard are not hastily made. A careful look at software quality in the 4GT environment is recommended.

Shift of emphasis

If it is accepted that software quality requires a multidimensional definition6 and that one dimension is concerned with the acceptability of the software to the user community, it immediately becomes apparent that in the 4GT environment the concern for software quality shifts from primarily maintainability issues to what can be described as user-oriented issues. Halloram et al are among those who suggest that software quality is defined in terms of both 'internal' and 'external' factors. Included in the 'external' factors are dimensions such as relevance, timeliness and cost3. No matter what development method is used, these issues will persist. Table 1 is an adaptation of Halloram's quality control matrix which shows two things:

• how few software quality factors can be disregarded in spite of the introduction of 4GT,
• how many of the remaining factors which determine the overall quality of software affect the user directly.

So the hope that the new technology will allow a deemphasis on the dilemma of software quality is shattered. There is no deemphasis, just a shift of emphasis.

Better control

The 4GT environment brings undoubted advantages to software quality management. One of the recognized methods of quality control is an 'after-the-event' measure10. While this may be appropriate in some

manufacturing environments, it creates difficulties for traditional software development because any correction cycles are time-consuming and costly. The 4GT environment allows this after-the-event control to be more practical. Because of the shorter development times, development iterations can be taken to improve the software quality. These iterations can be repeated until the increased quality of a further iteration is no longer justified in terms of the cost involved. This gives project managers an opportunity to exert meaningful control over system quality. In fact, it enables them to manage it.
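The stop rule described above - iterate until the quality gained by a further development pass no longer justifies its cost - can be sketched as follows. This is a hypothetical illustration only; the quality scores, costs and the `assess_gain` model are assumptions for the sketch, not figures from the article.

```python
# Sketch of the 'iterate until no longer justified' control rule.
# All numbers below are illustrative assumptions, not measurements.

def develop_iteratively(initial_quality, assess_gain, iteration_cost, value_per_point):
    """Repeat development iterations while the value of the quality
    the next iteration would add still exceeds the cost of performing it."""
    quality = initial_quality
    iterations = 0
    while True:
        expected_gain = assess_gain(quality)          # quality points the next pass adds
        if expected_gain * value_per_point <= iteration_cost:
            break                                     # a further iteration is no longer justified
        quality += expected_gain
        iterations += 1
    return quality, iterations

# Example: assume each pass recovers 30% of the remaining gap to a score of 100.
final_quality, passes = develop_iteratively(
    initial_quality=60.0,
    assess_gain=lambda q: 0.3 * (100.0 - q),
    iteration_cost=500.0,      # assumed cost of one more development iteration
    value_per_point=100.0,     # assumed business value of one quality point
)
# Iteration stops once 0.3 * (100 - quality) * 100 drops to 500 or below.
```

The point of the sketch is the stopping condition, not the numbers: the shorter development cycles of 4GT make this kind of explicit cost-benefit loop practical, where a traditional correction cycle would be too slow and costly to repeat.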

Continued problems

Even in the new application development environment, the attention of both practitioners and academics is still required to grapple with the remaining problems of software quality. It is suggested that the problems fall into the three broad areas of:

• specification,
• measurement,
• user involvement.

Because the problem of software quality does not disappear with the dawn of the 4GT age, the difficulty of specifying the appropriate quality factors for each application environment remains. The question of what level of quality is required for each system must still be determined to prevent either too little (or too much) time, money and effort being spent on building quality into the software.

While it is comparatively easy to measure some factors of software quality (like timeliness and cost), others (like system relevance) present more serious problems. While methods of measuring system relevance centred on user satisfaction reports are suggested11, perhaps Bernstein was right when he wrote that it is:

unlikely that all the characteristics and properties of software that constitute its quality will yield to quantitative measurement12.
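For the factors that do yield to measurement, the user-satisfaction reports mentioned above amount to aggregating survey ratings per quality factor. A minimal sketch of that idea follows; the factor names, rating scale and responses are assumptions for illustration, not data from the article.

```python
# Hypothetical sketch of a user-satisfaction report: average the survey
# rating given to each quality factor. Factor names and ratings are
# assumed for illustration.

from statistics import mean

def satisfaction_scores(responses):
    """responses: list of dicts mapping quality factor -> rating (1-5).
    Returns the mean rating per factor - a crude proxy for 'external'
    factors, such as relevance, that resist direct measurement."""
    factors = {}
    for response in responses:
        for factor, rating in response.items():
            factors.setdefault(factor, []).append(rating)
    return {factor: mean(ratings) for factor, ratings in factors.items()}

survey = [
    {"relevance": 4, "timeliness": 5, "ease_of_use": 3},
    {"relevance": 5, "timeliness": 4, "ease_of_use": 4},
    {"relevance": 3, "timeliness": 4, "ease_of_use": 2},
]
scores = satisfaction_scores(survey)
```

Such a score is only as meaningful as the questionnaire behind it, which is Bernstein's point: the average tells a manager something, but it is not a quantitative measure of relevance itself.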

If methods of measurement cannot be found, it is suggested that this be more openly admitted to the user community. Tell them we currently lack all the tools and skills necessary to perform effective software quality control and management.

In the 4GT environment the user has the opportunity to become more involved in the system development process (and perhaps to build the system without relying on analysts or programmers). However, in this regard at least two problem areas can be identified.

First, because the potential of computer technology may not be fully understood, users may be satisfied too easily by new systems. If the opportunity is not taken to benefit from an analyst - acting as a change agent to help to identify the users' needs - there is a danger that system requirements will be specified just in terms of user wants. To prevent this underutilization of technology, it is suggested that a specialist is needed who understands the user environment, the technology and how to determine user requirements.

Second, as users become more familiar with system development tools and achieve success in systems development, there is the risk that they themselves will develop into 'technocrats' - and the errors of the enthusiastic amateur, like those of the programmers and analysts of the 1960s, will be repeated.

Conclusion

Obviously changes in application development methods must continue to be introduced, and more sophisticated methods of building computer-based systems must continue to be researched. However, although the development technology may be different and the users' and application developers' roles may change, the potential of computer technology cannot be exploited unless software quality can be defined and measured, controlled and managed.

References

1 Arthur, J 'Software Quality Management' Datamation (Dec. 1984) pp 115-120
2 Fagan, M Design and Code Inspections IBM TR 00.2763 (June 1976)
3 Halloram, D et al 'Systems Development Quality Control' MIS Quarterly (Dec. 1978) pp 1-13
4 Halstead, M H Elements of Software Science Elsevier North Holland, NY (1977)
5 Harrison, W A 'Software Complexity Metrics' J. Systems Mgt. (July 1984) pp 28-30
6 McCall, J A 'An Introduction to Software Quality Metrics' Software Quality Management, Petrocelli, NY (1979) p 127
7 Presser, L 'Reversing the Priorities' Datamation (Sept. 1981) p 208
8 Duffy, N et al Fourth Generation Languages: The Quiet Revolution in Information Systems Fact and Opinion Paper No. 16, Graduate School of Business Administration, Univ. of the Witwatersrand (Oct. 1982)
9 Jenkins, A M 'Meeting the Challenge for Information Systems in the 80s' Informatica No. 2, Indiana University, Bloomington (1982)
10 Crossman, T D 'Software Quality' Systems (May 1982) pp 11-19
11 Roman, D 'How to keep the User Satisfied' Comput. Dec. (Jan. 1982) p 79
12 Bernstein, M I 'Productivity or Quality' Readers Forum, Datamation (Apr. 1979) pp 227-229

Division of Business Information Systems, University of the Witwatersrand, Johannesburg, South Africa.
