Quality assurance in higher education: theoretical considerations and empirical evidence

Studies in Educational Evaluation, Vol. 22, No. 2, pp. 115-137, 1996
Copyright © 1996 Elsevier Science Ltd. Printed in Great Britain.

Mien Segers* and Filip Dochy**

*Department of Educational Development and Research, University of Limburg, Maastricht, The Netherlands
**Centre for Educational Technology, Open University, Heerlen, The Netherlands

Introduction

Over the last decade, American as well as Western European higher education has made considerable progress towards quality enhancement. During the nineties, attention continues to be focused on quality. The United States has a long tradition of accreditation, involving the evaluation of the quality of institutions or programs. As a number of European countries moved to a more market-oriented steering policy for higher education with an emphasis on accountability through external quality assurance systems, there was a growing interest in the American accreditation system. The crucial questions were: What are the merits of the accreditation system? What can we learn from it? In Western Europe, the Netherlands played an important role in the discussion on the merits and caveats of various ways of implementing quality assurance systems. In the Netherlands, since the beginning of the eighties, there has been a shift from a government-directed to a more market-directed steering conception in higher education. The main goal was to improve the planning system of higher education. Economy, efficiency and effectiveness, devolved budgeting and accountability were high on the agenda of the government and therefore of the Association of Co-operating Dutch Universities. These developments were due to the economic crisis, financial constraints and changing demands. They involved making choices between mutually exclusive alternatives, each with its own combination of inputs, outputs, impacts and benefits (Sizer, 1990). Therefore, performance indicators and management information were systematically introduced. Their importance for planning as well as for quality assurance was stressed.


In the planning of Dutch higher education, the Higher Education and Research Plan, published every two years, plays an important role. Since 1988, a quality assurance system which uses indicators has been in place to monitor the functioning of the institutions. The example of the accreditation system in the US played a significant role in the development of this system. This article presents the main results of a study on the use of performance indicators within a quality assurance system in higher education in the Netherlands. The first section describes the context of the development of a quality assurance system in the Netherlands. The second section defines the main concepts within the central theme of quality assurance. In the third section, the Dutch external quality assurance system is discussed on the basis of the case of the schools of economics and business administration. Comparisons are made with US experiences with the accreditation system.

The Context of the Development of a Quality Assurance System

The Dutch higher education system differs distinctly from that of the United States. According to Ferris: "Higher education in the Netherlands is essentially a public sector monopoly with multiple institutions. The United States, on the other hand, is more complex with each of the fifty states having a diverse set of public institutions that operate parallel to a diverse set of private institutions" (1991, p. 94). The Dutch universities are all primarily government-funded, with government contributing more than 80% in almost all cases. "This put the universities to some extent in a position of dependence with regard to the government" (Dutch Higher Education and Research, 1988, p. 16). Since the seventies, higher education has been increasingly subjected to a variety of demographic, social, economic and technological changes, which require a new direction of management. In 1985, after decades of strong government involvement, the Secretary of Education and Sciences issued a report announcing a new approach to the management of higher education. The Higher Education, Autonomy and Quality report was subtitled "Another way of management". In this report the Secretary announced a more detached role on the part of the government and a more market-directed policy in higher education, emphasizing the quality of research and instruction, and accountability. The university, unlike before, is required to justify itself, its purposes, its methods of attaining those purposes, its allocation of precious resources, its priorities and its responsibilities to the individual and to society. Institutions of higher education are expected to be accountable internally as well as externally. The Ministry of Education and Sciences explains the shift as follows:

A condition for obtaining funding is that the establishments must adhere to the national statutory regulations and conform to the planning and funding system. The WWO is the legislation applying specifically to the universities. A variety of subjects is embodied in the legislation. For example, there are exhaustive regulations governing the procedures to be followed prior to the establishment of curricula, the internal administrative organisation is laid down in full, and the rights and duties of both the government and the establishments are specified. The WWO is supplemented by national regulations of a constitutionally lower order in the form of the University Statute, an Implementation Decree and a few regulations concerning the budget and investments. The government is endeavouring to reduce the number of regulations and to increase the autonomy of universities...The abolition of regulations with which educational establishments must comply in advance goes hand in hand with stringent output control. A quality control system will have to be devised both in respect of quantity (results, research output, etc.) and quality. (Dutch Higher Education and Research, 1988, p. 16)

In summary, the new steering conception advocates:

- loosening of governmental regulations;
- greater emphasis on institutional responsibilities;
- higher priority to the development and adaptation of an adequate system for quality assurance;
- more attention to the realization of an appropriate co-ordination of decision-making processes.

Table 1: The Change Toward the New Steering Conception in Higher Education

Regulation                                | Deregulation
Detailed previous steering                | Control afterwards
No quality control                        | Quality assurance
Government-directed                       | Market-directed
Fading of institutional responsibilities  | Emphasis on institutional responsibilities
Institutional equality                    | Institutional differentiation and competition
Constancy                                 | Flexibility

The new conception is expected to lead to a high-quality system of higher education by means of greater differentiation and competition between deregulated institutions of higher education. Nevertheless, this steering conception only provides the main outlines of the necessary changes. In 1988, the first Higher Education and Research Plan was published. This was the first planning document based on the new steering philosophy, which entails an increased autonomy of the institutions through the abolition of government regulations and the development of a system of retrospective quality control.

The plan proposes the managerial agenda for the dialogue between the government, the institutions and the labor market. It discusses the developments within the field of higher education and within society, and formulates new directions for the management of higher education and its different disciplines. Performance indicators are described as tools that play a significant role in this new thinking on government control. They can help to facilitate the dialogue between the government and the institutions for higher education with respect to mutual responsibilities. This dialogue is a necessary condition for autonomous policy-making. Since 1988, the Higher Education and Research Plan has been published three times. Alongside the Higher Education and Research Plan, an external quality assurance system has been implemented since 1988 within the Dutch universities by the Association of Co-operating Dutch Universities (Ferris, 1991; Daalder, 1982; Goedegebuure, Maassen, & Westerheijden, 1990; Bijleveld, 1989; Dutch Higher Education and Research, 1988). This quality assurance system will be described and analysed in the next section. It is a development based on the revised legislation for higher education of 1985.

Some Main Concepts

In the discussion on the evaluation of higher education, the concepts of quality, quality assurance systems and performance indicators are widely used. The following section, which draws on a research study by Segers (1993) on quality assurance, brings together some of the different concepts in use in the current arguments about how quality assurance might be implemented.

Quality

In higher education, as in all other sectors of society, quality is an important issue for students, parents, employers, researchers, the academic staff, industry, taxpayers as well as governmental officials. Although it is a commonly used concept, there is no widely accepted definition or description. Various aspects of the concept of quality are described. It refers to something good, beautiful, valuable, and, in the context of evaluation, it suggests that a prior survey was done by experts. Quality is related to the aim of the higher education institution: it is defined as fitness for purpose or as effectiveness in achieving institutional goals. According to Green (1994), "A high quality institution is one that clearly states its mission (or purpose) and is efficient and effective in meeting the goals that it has set itself" (p. 15). Moreover, quality refers to something dynamic (Batten & Trafford, 1985; Conrad & Pratt, 1985; Findlay, 1990; Levine, 1982; Ross & Mählck, 1990): thus, for the quality of higher education, change is important and must be gradual. Quality has multiple dimensions that remain in constant motion. It is a profile based on multiple measures. It embraces, in general, three broad aspects: the goals defined (mission or purpose), the process for achieving the goals, and the output, i.e., the extent to which the goals are achieved (Frazer, 1994). Finally, different groups have different views on the quality of education. Each of them brings different priorities to discussions of quality. For example, the focus of attention of students and lecturers might be on the process of education, while the focus of employers might be on the quality of the graduates. For that reason, quality is also a relativistic concept (Green, 1994; Harvey, Green & Burrows, 1993; Ross & Mählck, 1990; Westerheijden, 1990).

This aspect of quality is described by Green (1994): "Its definition varies according to who is making the assessment, which aspect of the higher education process is being considered and the purpose for which the assessment is made" (p. 114).

Quality Assurance Systems

As there is no general agreement concerning the meaning of quality in higher education, it is not surprising that there is confusion about the terms used to describe the systematic procedures aimed at monitoring and enhancing quality. These procedures are often called quality assurance systems (QAS). Quality assurance provides users of the system with a guarantee that institutions, courses and graduates meet certain standards (Melia, 1994). Barnett (1992) defines quality assurance as follows: "Quality assurance looks to developing processes so rigorous that imperfections are heavily reduced and ideally eradicated" (p. 117). This definition has been taken from the context of industrial processes. However, it is clear that education is a human process where error cannot be eliminated as in industrial processes. Therefore, in educational settings, the concept of quality assurance refers to the intention and activities planned to assure quality. In industry, quality control refers to a system checking whether the raw material used, the products made, or the services provided meet minimum predefined standards. The measurement here is the core activity (Frazer, 1994). But for quality assurance to be effective, any QAS must include three interdependent components: monitoring, measurement and improvement (Jessee, 1984). However, a major problem of most quality assurance programs is the underdevelopment of the monitoring and improvement aspects of the system. Systematizing quality assurance implies establishing a regular cycle of review (Barnett, 1992), and we propose the cycle for quality assurance presented in Figure 1.

Figure 1: Cycle for Quality Assurance. The cycle runs from the choice of evaluative levels and forms, through the determination of goals and the stipulation of valid indicators, to assessment and the formulation of consequences, the construction and execution of an innovative plan, and the evaluation of its implementation, which starts the cycle anew.

A first set of questions to be answered is: On which level will the evaluation process take place, the institutional level, the program level or the course level? Will we choose input-output measurement and/or peer review? What are the main goals (functions) of the evaluation?

Quality assurance should not only be associated with decisions about funding. It can serve validation or accreditation purposes or constitute an early warning system (monitoring) (Dochy, Segers, & Wijnen, 1990). The above questions refer to the choice of the evaluative level(s), form(s) and functions. Secondly, quality can only be understood in the context of an institution's aims and objectives. Therefore, these have to be defined clearly as a reference for the choice of valid indicators (Green, 1994). After this stage of making choices and decisions concerning the level, evaluative form, objectives and indicators, evidence must be collected. This leads to the description of the strengths and weaknesses of the institution, program or course and the development of a plan to enhance quality. Next, the implementation of the proposed innovations is evaluated. Finally, the conclusions are also the start of a new process of quality assurance. The proposed quality assurance process must be composed of at least three separate stages: a stage of monitoring activities, a measurement stage and a stage of improvement. Monitoring activities aim at enhancing quality during the process. They must serve as an early warning system for triggering comprehensive assessment of the causes of deficiencies, which are either immediately observable or are revealed early on in the process. The assessment stage implies the collection of empirical evidence about the previously established indicators. We chose the term assessment because measurement implies a greater degree of precision than intended and than is realistic to aim for (Nutall, 1994). Subsequently, conclusions (judgements) are drawn and recommendations are formulated in accordance with the function of the evaluation (Green, 1994). If planning is the main function of the evaluation, reallocation of funds can be a recommendation; if monitoring is the intention, warnings can be formulated. The improvement stage refers to the planning and implementation of innovative activities in order to reduce the deficiencies.

Performance Indicators, Management Statistics and Management Information

With the increased attention paid to quality and quality assurance in higher education, indicator systems were developed in many European countries. The term performance indicator itself is still the subject of a lively debate. A literature review (Segers, 1993) indicates that performance indicators provide information about the activities of an institution. They provide a profile of performance levels attained by a particular institution at a particular time (Allsop, Findlay, McVicar, & Wright, 1989; Cave, Hanney, Kogan, & Trevett, 1988; Dochy, Wijnen, & Segers, 1987; Frackmann & Muffo, 1988; Kells, 1976; Linke, 1992; Nutall, 1994). Lucier (1992) points out that "Performance indicators introduce a new element of objectivity into a world which, otherwise, could easily become a closed and self-perpetuating world" (p. 168). The following questions have received a variety of answers: Are performance indicators quantitative and/or qualitative? Are performance indicators signals or guides rather than absolute measures? Are they synonyms of the concepts of management statistics and management information? In this section, we analyze the concept of performance indicators by answering these questions. We start with the latter question.

It is clear that not all the information that can be gathered within an institution for higher education is useful to assess the quality, i.e., the effectiveness with which it achieves its objectives. Although for cost analysis and budgetary purposes the amount of paperclips used has to be calculated, this statistic is of minor relevance when gathering information on the quality of the higher education institution concerned. This is why performance indicators are distinguished from management statistics and management information (Nutall, 1994; Sizer, 1992). Management statistics are numerical data, i.e., numerical characteristics of empirical descriptions. Therefore, they are absolute measures. Mostly, statistics are used in comparison with other numerical data. For instance, an analysis of the trends in student enrollment by discipline and academic level may be used to determine how the proportion of the funding that is based on these trends should be allocated. The data are used for a basic management task. In that sense, a combination of management statistics is used as management information. It is the use to which management statistics and management information are put that determines whether or not they function as indicators (Selden, 1994). If they give relevant information on the achievement of institutional goals, they are defined as performance indicators: "Statistics qualify as indicators only if they serve as yardsticks of the quality of education" (Shavelson, McDonnell, Oakes, Carey, & Picus, 1987, p. 5). Figure 2 illustrates the definition of the three concepts.

Management statistics: quantitative data.
Management information: quantitative -or qualitative- data which are related to each other and structured as management information.
Performance indicators: empirical data -quantitative and qualitative- which describe the functioning of an institution, the way the institution pursues its goals. This implies that they are context- and time-related. Indicators are more general signals which can be translated into a combination of more specific characteristics, called variables.

Figure 2: Definition of the Three Concepts

An example may clarify this scheme. The total number of student enrollments is a quantitative datum and therefore a management statistic. The total number of graduates in relation to the total number of student enrollments can be described as management information. This ratio is a performance indicator if improving it is pursued as an institutional goal. If not, it is a management statistic. The existence of a relationship between performance indicators and goals implies that certain data turn out to be management statistics or information for one evaluator while they are interpreted as performance indicators or variables by another evaluator. For instance, from a governmental point of view the men/women ratio within a student population is, in terms of gender equality goals, a clear indication of the performance of an institution (Higher Education and Research Plan, 1988).

This governmental goal is not necessarily an institutional goal; the ratio can therefore be seen as a management statistic or information, i.e., a quantitative datum which in itself is not an indicator of the performance or functioning of an institution. Many data relating to costs are considered to be management statistics. They can be used for management and control functions but do not pretend to assess performance in a comprehensive sense (Cave et al., 1988). Performance indicators are empirical data of either a qualitative or a quantitative nature. The quality of higher education cannot be adequately described in numbers or quantitative objective scores only. Data of a qualitative nature are essential if performance indicators are to address the heart of the matter, i.e., the quality of the educational process (Barnett, 1992; Findlay, 1990; Frazer, 1994; Lucier, 1992; Sizer, 1989). For instance, the congruence between the educational program and the institutional goals may be expressed in numbers, but the assessment of its quality is still a subjective judgement (Green, 1994; Findlay, 1990). Data (quantitative or qualitative) become significant as performance indicators if they express the contemplated goals (Cave et al., 1988; Findlay, 1990; Green, 1994; Lindsay, 1981; Sizer, 1990). This means they have a contextual as well as a temporal importance. First, performance indicators have to be interpreted within the context of the institution concerned (Stibbs, 1984). For instance, in order to interpret the (amount of) information on graduates, it is important to know when the school started. The quality of institutional performance is not simply a function of output, regardless of how well this may reflect the particular objectives and priorities of the institution concerned. It is also influenced by a variety of input constraints such as the level of financial resources provided and the inherent abilities of staff and students. Any judgement making use of performance indicators should acknowledge and, if possible, take explicit account of these constraints. Linke (1992) advises defining in operational terms the major background or context characteristics that are likely to influence institutional performance and mapping as accurately as possible their individual and collective relationship to particular performance. Besides, "since a performance indicator is a communicative device, an analytic structure for conveying relevant information, an essential feature of its definition and nature will be its manner of use in a communication process - 'who is reporting what to whom and why'" (Findlay, 1990, p. 128). Second, performance indicators must not only be discussed contextually but also as time-dependent. In education (as in society at large), change is important and happens gradually. The accumulation of indicators over a considerable period of time will highlight the quality of education as an ongoing process. This implies that performance indicators must reveal developments instead of presenting snapshots. Indicators are general signals. They have to be operationalized by a set of variables (Cullen, 1986; Johnstone, 1982). These are specific characteristics. Compare, for example, a doctor who asks his patients about their medical history. The concentration of plasma in the blood of a cardiac patient is not to be confused with the indicator 'the state of the blood'. If one wants to conclude anything about this state, other variables have to be taken into account, like, for example, the concentration of corpuscles.
Tables 2 and 3 provide more examples of the difference in level of abstraction between indicators and variables.
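To make the distinction concrete, the sketch below (not part of the original study; all names, goals and figures are hypothetical) encodes the Figure 2 scheme as a small classification rule in Python: a datum counts as a performance indicator only when it serves as a yardstick for a stated institutional goal, data that merely relate several statistics to each other remain management information, and everything else is a management statistic.

```python
# Illustrative sketch only (not from the original study): the Figure 2 scheme
# expressed as a small classification rule. All names, goals and numbers below
# are hypothetical.

from dataclasses import dataclass
from typing import Optional, Tuple


@dataclass
class Datum:
    name: str
    value: float
    combines: Tuple[str, ...] = ()        # other data this datum relates and structures, if any
    expresses_goal: Optional[str] = None  # institutional goal the datum is meant to express, if any


def classify(datum: Datum, institutional_goals: set) -> str:
    """Classify a datum along the lines of Figure 2."""
    if datum.expresses_goal is not None and datum.expresses_goal in institutional_goals:
        # Data that serve as yardsticks for a stated institutional goal act as indicators.
        return "performance indicator"
    if datum.combines:
        # Data related to each other and structured count as management information.
        return "management information"
    return "management statistic"


goals = {"improve the graduation ratio"}

enrolments = Datum("total student enrolments", 2450.0)
graduation_ratio = Datum("graduates per enrolment", 0.62,
                         combines=("graduates", "enrolments"),
                         expresses_goal="improve the graduation ratio")
gender_ratio = Datum("men/women ratio", 1.1,
                     combines=("male students", "female students"),
                     expresses_goal="gender equality")  # a governmental, not institutional, goal here

for d in (enrolments, graduation_ratio, gender_ratio):
    print(f"{d.name}: {classify(d, goals)}")
# total student enrolments: management statistic
# graduates per enrolment: performance indicator
# men/women ratio: management information
```

Re-running the classification with a different set of institutional goals changes the outcome for the same datum, which mirrors the point that one evaluator's management statistic can be another evaluator's performance indicator.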


The functioning of an institution for higher education (like any other social institution) can hardly be expressed in absolute measures. Therefore, performance indicators must be interpreted as signals or guides. This implies that they can be of great help in reaching a judgment but can never be seen as a substitute for it. Performance indicators act as a warning system that something may be going wrong, in the same way as the display on the dashboard of a car can alert the driver to a problem or reassure him or her that everything is functioning smoothly (Nutall, 1994). Linke (1992) explains this as follows:

In the case of student progress and graduation rates, as the indicator values increase beyond what might be regarded as a 'normal range' (which in turn is likely to depend on academic entry standards and other student background characteristics), there comes a point where suspicions are aroused about the standards of intellectual rigour applied to student assessment. Similar suspicions may also be raised if the graduation rates appear too low for students with a relatively good record of prior achievement. While both situations might well have a reasonable explanation, such as transfers into and out of the course, changes in student motivation relating to graduate employment prospects, exceptionally good or bad teaching staff and so on, there is no necessary inference that higher or lower indicator values mean better or worse performance respectively. (p. 126)

The Dutch External Quality Assurance System

The main goal of the Dutch external quality assurance system (QAS) is twofold. It intends to help universities, i.e., faculties, to gain insight into the way they pursue their goals and the extent to which they reach them. Secondly, for the purpose of accountability, the results of the quality assurance system inform society about the way public money is spent (Acherman, 1988). This also implies that the clients (students, labor market) are informed about the mission of every school/faculty and the way they operationalize this mission. The result of the QAS is not accreditation, but a description of the functioning of the faculty/school, including both its strong and weak aspects. The monitoring function of the QAS is stressed. Experience has shown that the warning and improvement phases of QASs tend to be underdeveloped (Jessee, 1984). It is very important that faculty members be informed about the main goal since this enhances an efficient, effective and reliable QAS (Kells et al., 1991). This point is often stressed as a particular weakness of accreditation (Kells et al., 1991; Miller, 1981). The Dutch QAS for higher education can be described with reference to the following characteristics (Vroeijenstijn & Acherman, 1990):

- External and internal quality assurance are complementary: the self-study report is the pivot between external and internal quality assessment.

- The visiting committee is formed by external experts who have no relationship with the school or faculty visited. In this way, the Association of Universities, which initiated the QAS, tried to avoid criticism concerning self-service (Millard, 1983a, b).
- The university is not visited, but rather the faculties or schools within a university. One committee visits all Dutch schools of one discipline. In this way, comparison is possible. Since it is hard to formulate sets of standards, especially for qualitative performance indicators like motives of drop-outs (Stauffer, 1981), presenting data of all faculties/schools with respect to a set of indicators is a useful way of informing the clients as well as the faculty about their functioning. Especially insofar as the QAS is considered as a monitoring instrument, it is important for the faculty/school not only to be assessed as either "good or bad" ('accredited or not'), but to receive a concrete description of weaknesses and strong points in its functioning.
- Only the teaching process is envisaged. The research process is currently undergoing a separate but similar QAS.
- The QAS is nationwide. All Dutch universities have committed themselves to the QAS as initiated by the Association of Co-operating Dutch Universities. This implies they all take part in the QAS: thus there is no need to turn to each university separately with a request for a visit, as in the case of the regional or specialized accreditation in the US.
- The Association of Universities alone is responsible for organizing the external QAS. By doing this, the Ministry of Education and Sciences and the Association of Co-operating Dutch Universities tried to avoid a chaotic situation with many autonomous agencies and schools being visited forty times over a period of three years (Young, Chambers & Kells, 1983).
- It is a cyclical process of 5 to 6 years; the 8-year process sometimes used in the accreditation system does not fit the idea of a cyclical evaluation. With such a long process, it is quite common for a period of non-activity to occur between two cycles (Jessee, 1984; Kells, 1976; Miller, 1981).
- This QAS covers the whole university: all faculties of one discipline and all disciplines take part in the QAS during the 5 (or 6) year period.
- The report of the visiting committee is made public by the Association of Co-operating Dutch Universities; the accreditation reports, in contrast, are not made public. If the QAS intends to attain its goals, the committee reports should not be confidential.

Considering the experiences with quality assurance systems in the USA and the Netherlands, it can be argued that each system has its own characteristics but that there are a number of similar elements that can be combined into the core of a general higher education quality assurance system. A comparison with Semrow's (1977) description of accreditation reveals the similarities between the Dutch and the US system:

Accreditation decision making is built upon a two-tiered system which encompasses a peer review process. On the one hand the institution is expected to conduct a study and analysis of its operations in the light of its mission statement, its goals and purposes, and its program objectives...On the other hand, the Commission provides for an on-site evaluation review by a team of persons independent and outside the institution. This constitutes the peer review which utilizes as its basis the institution's self-study. This process is carried out both substantively and procedurally... (p. 3)

The Dutch Quality Assurance System Revised: A Case Study

The research presented here is part of a project which aimed to find performance indicators that are valid operationalizations of the effectiveness of an institution for higher education in achieving its goals. For this purpose, different studies were conducted (Segers, 1993). First, the different contexts of the development of performance indicators in different Western European countries, the USA and Australia were summarized. Second, a conceptual framework was developed, summarized in a previous section. Third, potential indicators were collected, described and categorized. Fourth, the face-validity of the extended inventory of performance indicators was measured. As a final phase in the development of performance indicators (Linke, 1990), the practical experiences with performance indicators were analyzed. A case study was conducted to collect and analyze empirical data. The results of this study are presented in this section. Some recommendations for the use of indicators are proposed. We refer to the main problems as described with respect to the American accreditation system (Cook, 1989; Crow, 1994; Harcleroad, 1980; Jessee, 1984; Kells, 1979, 1991; Mills, 1984).

Research questions

The core questions of the present study were:

- What data does the Dutch quality assurance system collect and use, especially in the case of the Dutch schools of economics and business administration?
- What is their face-validity?
- In what way are the indicators reported: analytically or descriptively? As independent elements of the educational process or in a comprehensive, coherent manner?

Methodology

A case study was conducted. We chose this research strategy for two reasons. First, this was the first implementation of the Dutch QAS. Secondly, the case study method makes repeated review possible, is unobtrusive and results in exact information (Yin, 1994). As the researchers had access to all relevant documents, there was no biased selectivity. Multiple embedded cases were used, in which each individual case is a revelatory case. Data collection relied on the self-study reports of the seven Dutch economics faculties visited in 1991, and on the report of the visiting committee. The schools of economics and business administration were chosen as a unit of analysis because the researchers conducting the analysis were familiar with the context of these schools, which was a prerequisite for reading the documents. The analysis relied on the theoretical assumptions formulated in previous studies that were part of the project (the face-validity study and the conceptual framework developed).


The content analysis of the documents was conducted by two independent researchers in order to avoid bias (Yin, 1994). The analysis was first conducted within each case (the self-studies of the seven schools of economics and business administration and one committee report). The patterns for each of the cases were then compared across cases, using the replication mode for multiple cases. Finally, the conclusions drawn for the multiple cases served as the conclusions for the overall study. For the content analysis of each case, four categories were defined (Meerling, 1984; Segers, 1977): the level of operationalization (indicators or variables), face-validity (more or less valid according to the face-validity study), manner of description (analytic or descriptive), and manner of presentation (as a set of independent data or in a coherent and comprehensive way). The content analysis was carried out in six steps (Meerling, 1984):

1. Compiling the inventory of indicators and variables presented in each self-study report and in the report of the visiting committee.
2. Analysis of the documents on the basis of the criterion of face-validity.
3. Analysis of the style of description of the indicators and variables: descriptive (summary of quantitative and qualitative data) or analytic description (summary of data with an argumentation of possible causes, context influences, planned innovations).
4. Analysis of the manner of presentation: a summary of independent data with or without argumentation, or presentation in a coherent and comprehensive way (relating different sources of information to each other and relating conclusions to the information presented).
5. Comparison of results across cases.
6. Formulation of conclusions and recommendations.

Results

Table 2 summarizes the main results of the content analysis of the self-study reports of the seven Dutch economics faculties. For each indicator, the variables presented in more than half of the self-study reports are listed, together with the variables which are valid according to our validity study (see Segers, Wijnen, & Dochy, 1990). We also noted whether the data for the indicators and variables were presented in a descriptive (D) or analytic (A) way. It can be concluded that most self-study reports present data in a descriptive way, without a critical analysis. Kells (1991) stated that this is also the case with the accreditation system in the US: in almost 80% of the accreditations, a critical self-analysis is missing. Most Dutch economics faculties present data individually, neither in a coherent way, nor by discussing the strong points and areas of concern in their functioning. Valid and invalid indicators and variables are presented in the same way, alongside each other, without indicating their relevance to the quality assurance system. Three indicators are reported most extensively: student inflow, student through-flow and the curriculum.

Table 2: Indicators and Variables as Presented in the Self-study Reports in Comparison with Valid Indicators and Variables

Student inflow (students starting a program)
Variables presented: total number of enrollments by faculty, full-time and part-time; preceding educational and professional experience of students; percentage distribution of male/female students; geographical origin of students.

Student flow (how do students perform in the program)
Variables presented: percentage of students passing the first-year final exam; percentage of students passing the 'doctoral' examination; through-flow; drop-out percentage; percentage of switches by field of study; ratio of normative to empirical length of study.
Valid variables: percentage of students passing the 'doctoral' exam; through-flow; drop-out percentage; drop-out motives; ratio of normative to empirical length of study.

Student outflow (graduates)
Variables presented: relevance of education in relation to professional activities x years after graduation.
Valid variables: relevance of education in relation to professional activities x years after graduation.

Staff
Variables presented: educational qualifications of staff through teacher training, etc.; staff recruitment policy; number of teaching hours by tutor; staff policy.
Valid variables: number of staff with completed dissertations (PhD); percentage of staff with a given number of years of working experience outside the present institution; staff's motivation to teach; average number of publications; staff recruitment policy.

Curriculum
Variables presented: clarity in and clear description of the goals of the program; relevance of the goals of the program with regard to professional reality; structure of the educational process in terms of subject matter, teaching methods and teaching and learning facilities; correctness, tenability, completeness and level of the goals of the curriculum; correspondence between goals and content of the curriculum; overlap or gaps, less fortunate sequence and planning of the courses in the program; realization of the curriculum functions (orientation and selection during the first year); average size of classes (lectures, tutorials); examination procedures; structure of the curriculum; assessment methods.
Valid variables: clarity in and clear description of the goals of the program; relevance of the goals of the program with regard to professional reality; structure of the educational process in terms of subject matter, teaching methods and teaching and learning facilities; correctness, tenability, completeness and level of the goals of the curriculum; correspondence between goals and content of the curriculum; distribution of time over theory, practical skills and (academic) research skills; congruence between teaching methods and assessment methods.

Educational innovation
Variables presented: degree of innovation or degree of innovative climate; amount of time, means and results of institutionalized activities towards educational innovation.
Valid variables: degree of innovation or degree of innovative orientation; academic and personal freedom for all individuals at the institute; atmosphere at the institution; educational philosophy, especially concerning the institutional goals and general structure.

Infrastructure and material facilities
Variables presented: average square teaching space by faculty; amount of hardware by faculty.

Educational supply
Variables presented: range of graduating options; scope of the freedom-of-choice curriculum (modules, courses) in relation to the volume of the compulsory curriculum; possibilities for part-time training, in-service training, contract education and post-academic training.
Valid variables: range of graduating options; supply of fields of study (regular, free, experimental); scope of the freedom-of-choice curriculum (modules, courses) in relation to the volume of the compulsory curriculum; possibilities for part-time training, in-service training, contract education and post-academic training.

Policy
Variables presented: institutional quality assurance system; managerial structure of the faculty as an organization; relationship between institutional board and faculty.
Valid variables: institutional quality assurance system; agreements with other institutions concerning distribution and concentration of tasks.


It is surprising how much attention is paid to data concerning student intake, although this is not a valid indicator. Probably, faculties' student administrations gather this information and, because the indicator is on the list of guidelines of the Association of Co-operating Dutch Universities, the faculties offer the data, whether they are valid or not. Although this indicator can provide relevant context information for the interpretation of, for instance, problems with the student flow, it does not directly indicate the extent to which the institution pursues its goals. There are some striking differences between the variables presented in the self-study reports and those which seem to be valid according to the validity study. The motives of drop-outs are a valid variable but are seldom presented in the reports. Probably the faculties do not gather data concerning this variable. The variable "percentage of staff with a given number of years of working experience outside the present institution" is another valid variable which is not presented in the reports. Educational qualifications of staff members through teacher training is a variable often mentioned in the self-study reports, but, strangely enough, it is not a valid variable according to the validity study. Staff motivation seems to be a valid variable for the quality of the staff. This variable is never mentioned in the self-study reports. Of course, this variable is hard to measure, or even to describe, in a qualitative way. Table 3 summarizes the indicators and variables presented in the report of the visiting committee and the way they are presented (descriptive or analytic). Comparison of the content of the self-study reports with the comments of the visiting committee shows some differences. The indicator "curriculum", and especially the description and discussion of the goals, are important aspects of the self-study reports but are only marginally mentioned in the committee report. The Inspectorate of Higher Education, which conducted a meta-evaluation of the external QAS as implemented so far, stressed the importance of concentrating on the main goals of the curriculum and the way they are pursued ("the heart of the matter"). Only in this way can the visiting committee reach its goal of giving a comprehensive description of the state-of-the-art of the teaching process in the Dutch economics faculties. Although the Association of Universities was familiar with the criticism of accreditation concerning the acceptance of institutional goals instead of a critical analysis, the visiting committee of the economics faculties did not succeed in avoiding this criticism. On the other hand, the committee looked into the subject of structural and organizational aspects of the curriculum and of policy aspects like the existence of committees with clear and well-defined responsibilities. These are not the most relevant indicators for the purpose of this QAS, which is to provide insight into the state-of-the-art of education in the Dutch faculties of economics. The emphasis on these variables in the committee report is partly due to the way the visiting process was conducted. The committee of external experts mainly discussed the self-study report with members of faculty committees. The teachers who were most experienced with the actual teaching and learning process seldom took part in the dialogue with the external experts.
The visiting committee did not pay attention to the indicator “graduates”, although most self-study reports presented data on this indicator. This is striking for different reasons. The results of the validity study indicate that this indicator is valid.


Table 3: Indicators and Variables Presented in the Report of the Visiting Committee of the Dutch Economics Faculties

Student intake
- total number of enrollments by institution (A)

Student flow
- number of students passing (the first-year final examination and the 'doctoral' examination) (A)
- flow (A)
- percentage distribution of normative versus empirical length of study (A)

Staff
- number of teaching hours by tutor (A)
- staff policy (A)

Curriculum
- structure of the instructional process: literature, teaching methods
- distribution of time over theory, practical skills and (academic) research skills
- realization of the curriculum function (orientation and selection for the first year) (A)
- examination procedures
- assessment methods
- structure of the curriculum (A)

Financial matters
- allocation of funds (A)

Infrastructure and material facilities
- average square teaching space (A)
- infrastructure of staff offices (A)

Education supply
- range of graduating options (D)
- scope of the freedom-of-choice curriculum in relation to the volume of the compulsory curriculum (D)

Policy
- internal quality assurance system
- democratic implementation of policy
- structure of the faculty as an organization
- management structure
- evaluation systems for student flow

Cooperation between institutions
- contacts between staff of different institutions
- exchange of students

(A - analytic; D - descriptive)

The Association of Co-operating Dutch Universities described it as a major task of the visiting committees to comment on the level of the graduates. For this reason, two representatives from the labor market participated in the visiting committee. Although in the self-study reports little attention was paid to the indicators "financial matters" and "cooperation between institutions", and they do not seem to be valid according to the validity study, the visiting committee explicitly comments on them. These indicators are important as background information in order to interpret valid indicators such as "curriculum" or "graduates".


In comparison with the faculties' self-study reports, the way the committee presents its comments is much more comprehensive, although the extent of the report and the relevance of discussions about governmental policy might be open to question. Finally, the visiting committee criticized the faculties for not being critical enough; the self-study reports were in most cases descriptive instead of analytic. On the basis of these results, some recommendations can be formulated. The internal process of quality assurance and the visiting process could be more effective and efficient if the faculties described their strengths and areas of concern in a more comprehensive, coherent way and indicated the relevance of the presented data as either performance indicators or context data. This would enable the visiting committee to concentrate immediately on "the heart of the matter", i.e., the goals and content of the curriculum. In this respect it is important to note that if universities experience the real significance and the consequences of the justification of their functioning to society through this QAS, there may be a greater willingness to present a critical analysis. The further development of institutional information systems will have a positive influence on this. The report of the committee could be more functional in the context of a rolling review process if it contained more detailed comments with a short summary of the main comments for every faculty visited. A stronger focus on indicators such as the curriculum and on variables such as its goals and content could be useful for attaining the main goal of this QAS. For this purpose, a dialogue with the teachers who are responsible for the daily teaching and learning process is necessary. Finally, only comments which the faculties can directly or indirectly influence are relevant topics in the committee report.

Conclusions

Quality assurance is an important topic for everyone who is involved in education: the academic staff, the students, as well as the taxpayers. In Western Europe (and, for instance, in Australia), there is a growing interest in effective quality assurance systems in the context of accountability and as a move towards more efficiency. Some countries, like the Netherlands, tried to transfer some of the ideas of the US accreditation system to their situation. They tried to learn from the main problems with and pitfalls of the accreditation system. The Netherlands have introduced an external quality assurance system: a cyclical process (5 to 6 years), nationwide, on the level of disciplines, with self-study and peer review (visiting committees) as complementary phases. They stress the monitoring function of the quality assurance system, i.e., to hold up a mirror to faculties/schools and to make the functioning of faculties/schools transparent to society. The quality assurance system resulted in two reports: a self-study report and a report of the visiting committee with its main comments. No quality mark, in the sense of accreditation, is given. Comparison of these reports with the results of the validity study shows some remarkable findings. One striking finding is the faculties' stress on data on student inflow, although the validity study reveals that this is not a performance indicator. It can be used as valuable background information but does not express the way a faculty pursues its goals.

The curriculum, its goals and the way they are translated into the curriculum design are extensively described variables in the self-study reports. Given the main goals of the quality assurance system, it is strange that the visiting committee paid more attention to structural aspects of the curriculum and to policy aspects than to those variables. In that sense, it missed an opportunity to offer insight into "the heart of the matter". The way the data are presented in the self-study reports resembles most American self-study reports (Kells, 1991): they are descriptive instead of analytic. Besides, the data are not presented in a coherent and comprehensive way with respect to the main strong and weak points of the faculty/school. In contrast, the report of the visiting committee describes its comments in a coherent way, even if some of its contents are open to debate.

References

Acherman, J.A. (1988). Quality assessment by peer review. Utrecht: VSNU.

Allsop, P., Findlay, P., McVicar, M., & Wright, P. (1989). Towards identifying performance indicators in an English polytechnic. International Journal of Educational Management, 14(3), 110-117.

Barnett, R. (1992). Improving higher education: Total quality care. Milton Keynes: The Society for Research into Higher Education/Open University Press.

Batten, C., & Trafford, V. (1985). Evaluation: An aid to institutional management. In G. Lockwood & J. Davies (Eds.), Universities: The management challenge. Guildford: SRHE & NFER-Nelson.

Bijleveld, R.J. (1989). The two-tier structure in university education. In P.A.M. Maassen & F.A. van Vught (Eds.), Dutch higher education in transition. Culemborg: Lemma.

Cave, M., Hanney, S., Kogan, M., & Trevett, G. (1988). The use of performance indicators in higher education: A critical analysis of developing practice. London: Jessica Kingsley.

Conrad, C.F., & Pratt, A.M. (1985). Designing for quality. Journal of Higher Education, 56, 601-622.

Cook, C.M. (1989). Reflections on the American experience. In Verslag van de conferentie kwaliteitsbewaking hoger onderwijs [Proceedings of a conference on quality control in higher education]. Zoetermeer: Ministerie van Onderwijs en Wetenschappen.

Crow, S. (1994). Changing emphases in the USA. In A. Craft (Ed.), International developments in assuring quality in higher education. London/Washington, DC: The Falmer Press.

Cullen, B.D. (1986). Performance indicators in UK higher education: Progress and prospects. International Journal of Institutional Management in Higher Education, 11(2), 102-124.

Daalder, H. (1982). The Netherlands: Universities between the 'new democracy' and the 'new management'. In H. Daalder & S. Shils (Eds.), Universities, politicians, and bureaucrats (pp. 214-236). Cambridge: Cambridge University Press.

Dochy, F.J.R.C., Wijnen, W.H.F.W., & Segers, M.S.R. (1987). Over de relatieve betekenis, functies en keuze van performance indicatoren voor kwaliteitsbewaking van het onderwijs [Functions and choices of performance indicators for quality assurance in education]. Universiteit & Hogeschool, 34, 73-83.

Dochy, F.J.R.C., Segers, M.S.R., & Wijnen, W.H.F.W. (1990). Preliminaries to the implementation of a quality assurance system based on management information and performance indicators: Results of a validity study. In F. Dochy, M. Segers, & W. Wijnen (Eds.), Management information and performance indicators: An international issue. Assen: Van Gorcum.

Dutch Higher Education and Research (1988). Ministry of Education and Sciences, Directorate General for Higher Education and Research. The Hague: State Printing Office.

Findlay, P. (1990). Developments in the performance indicator debate in the United Kingdom. In L.C.J. Goedegebuure, P.A.M. Maassen, & D.F. Westerheijden (Eds.), Peer review and performance indicators: Quality assessment in British and Dutch higher education. Utrecht: Lemma.

Ferris, J.M. (1991). Competition and regulation in higher education: A comparison of the Netherlands and the United States. Higher Education, 22, 93-108.

Frackmann, E., & Muffo, J.A. (1988). Quality control, hierarchies and information: The context of rankings in an international perspective. In Proceedings of the 10th EAIR Conference. Bergen: University of Bergen.

Frazer, M. (1994). Quality in higher education: An international perspective. In D. Green (Ed.), What is quality in higher education? Milton Keynes: The Society for Research into Higher Education/Open University Press.

Goedegebuure, L.C.J., Maassen, P.A.M., & Westerheijden, D.F. (1990). Quality assessment in higher education. In L.C.J. Goedegebuure, P.A.M. Maassen, & D.F. Westerheijden (Eds.), Peer review and performance indicators: Quality assessment in British and Dutch higher education. Utrecht: Lemma.

Green, D. (Ed.) (1994). What is quality in higher education? Milton Keynes: The Society for Research into Higher Education/Open University Press.

Harcleroad, F.F. (1980). Accreditation: History, process, and problems. Higher Education Research Report No. 6. American Association for Higher Education.

Harvey, L., Green, H., & Burrows, A. (1993). Assessing quality in higher education: A transbinary research project. Assessment and Evaluation in Higher Education, 18(2), 143-148.

Hoger Onderwijs en Onderzoek Plan (HOOP) [Higher Education and Research Plan] (1988). Ministerie van Onderwijs en Wetenschappen. Den Haag: Staatsuitgeverij.

Jessee, W.F. (1984). Quality assurance systems: Why aren't there any? Quality Review Bulletin, 408-411.

Johnstone, J.N. (1982). Three useful but often forgotten education system indicators. Socio-Economic Planning Sciences, 16(4), 163-166.

Kells, H.R. (1976). The reform of institutional accreditation agencies. Educational Record, 57(1), 24-28.

Kells, H.R., Maassen, P.A.M., & Haan, J. de (1991). Kwaliteitsmanagement in het hoger onderwijs [Quality management in higher education]. Utrecht: Lemma.

Levine, A.E. (1982). Quality in baccalaureate programs: What to look for when David Riesman can't visit. Educational Record, 63, 13-18.

Lindsay, A. (1981). Assessing institutional performance in higher education: A managerial perspective. Higher Education, 10, 687-706.

Linke, R.D. (1992). Some principles for application of performance indicators in higher education. Higher Education Management, 4(2), 120-129.

Lucier, P. (1992). Performance indicators in higher education: Lowering the tension of the debate. Higher Education Management, 4(2), 164-175.

Melia, T. (1994). Inspecting quality in the classroom: An HMI perspective. In D. Green (Ed.), What is quality in higher education? Milton Keynes: The Society for Research into Higher Education/Open University Press.

Meerling, R. (1984). Methoden en technieken van psychologisch onderzoek. Deel 1 [Methods and techniques in psychological research]. Amsterdam: Meppel-Boom.

Millard, R.M. (1983a). The accrediting association: Ensuring the quality of programs and institutions. Change, 15(4), 32-46.

Millard, R.M. (1983b). Accreditation. In J.R. Warren (Ed.), Meeting the new demand for standards. San Francisco: Jossey-Bass.

Miller, R.I. (1981). The assessment of college performance: A handbook of techniques and measures for institutional self-evaluation. San Francisco: Jossey-Bass.

Nutall, D.L. (1994). Choosing indicators. In K.A. Riley & D.L. Nutall (Eds.), Measuring quality: Education indicators - United Kingdom and international perspectives. London/Washington, DC: Falmer.

Ross, K.N., & Mählck, L. (1990). Planning the quality of education: The collection and use of data for informed decision-making. Oxford: Pergamon.

Segers, J.H.G. (1977). Sociologische onderzoeksmethoden: Inleiding tot de structuur van het onderzoeksproces en tot de methoden van dataverzameling [Research methodology in sociology]. Assen: Van Gorcum.

Segers, M.S.R. (1993). Kwaliteitsbewaking in het hoger onderwijs. Een exploratieve studie naar prestatie-indicatoren in theorie en praktijk [Quality assurance in higher education]. Utrecht: Lemma.

Segers, M.S.R., Wijnen, W.H.F.W., & Dochy, F.J.R.C. (1990). Performance indicators: A new management technology for higher education? The case of the United Kingdom, the Netherlands and Australia. In F.J.R.C. Dochy, M.S.R. Segers, & W.H.F.W. Wijnen (Eds.), Management information and performance indicators: An international issue. Assen: Van Gorcum.

Selden, R. (1994). How indicators have been used in the USA. In K.A. Riley & D.L. Nutall (Eds.), Measuring quality: Education indicators - United Kingdom and international perspectives. London/Washington, DC: Falmer.

Semrow, J.J. (1977). Institutional assessment and evaluation for accreditation. Tucson, Arizona: Center for the Study of Higher Education, University of Arizona.

Shavelson, R.J., McDonnell, L., Oakes, J., Carey, N., & Picus, L. (1987). Indicator systems for monitoring mathematics and science education. Santa Monica, CA: Rand.

Sizer, J. (1981). European perspectives suggest other criteria. In R.I. Miller (Ed.), Institutional assessment for self-improvement. San Francisco: Jossey-Bass.

Sizer, J. (1990). Performance indicators and the management of universities in the UK: A summary of developments with commentary. In F.J.R.C. Dochy, M.S.R. Segers, & W.H.F.W. Wijnen (Eds.), Management information and performance indicators: An international issue. Assen: Van Gorcum.

Sizer, J. (1992). Performance indicators in government-higher education institutions relationships: Lessons for government. Higher Education Management, 4(2), 156-163.

Stauffer, Th.M. (Ed.) (1981). Quality: Higher education's principal challenge. Washington, DC: American Council on Education.

Vroeijenstijn, T.I., & Acherman, J.A. (1990). Control oriented versus improvement oriented quality assessment. In L.C.J. Goedegebuure, P.A.M. Maassen, & D.F. Westerheijden (Eds.), Peer review and performance indicators. Utrecht: Lemma.

Westerheijden, D.F. (1990). Peers, performance and power. In L.C.J. Goedegebuure, P.A.M. Maassen, & D.F. Westerheijden (Eds.), Peer review and performance indicators. Utrecht: Lemma.

Young, K.E., Chambers, C.M., Kells, H.R., & Associates (1983). Understanding accreditation. San Francisco: Jossey-Bass.

Yin, R.K. (1994). Case study research: Design and methods. London: Sage.

The Authors

MIEN SEGERS is Associate Professor at the Department of Educational Development and Research, School of Economics and Business Administration, University of Maastricht, The Netherlands. She received her PhD in the field of quality assurance in higher education. She has been a member of the Dutch Research Group on Performance Indicators. She is currently investigating alternative assessment methods within problem-based curricula.


FILIP DOCHY is Research Manager at the Centre for Educational Technology of the Open University, Heerlen, The Netherlands. His main field of research is assessment and evaluation, in particular assessment software and evaluation systems. He is a member of the Executive Committee of the European Association for Research on Learning and Instruction (EARLI) and co-ordinator of the Special Interest Group on Assessment & Evaluation of EARLI.