Evaluation and Program Planning, Vol. 14, pp. 113-122, 1991. Printed in the USA. Copyright © 1991 Pergamon Press plc. 0149-7189/91 $3.00 + .00
A DISCREPANCY-BASED METHODOLOGY FOR NUCLEAR TRAINING PROGRAM EVALUATION

JEFFREY A. CANTOR

Lehman College, City University of New York

Requests for reprints should be sent to Jeffrey A. Cantor, Associate Professor of Corporate Training and Business Education, CUNY, Lehman College, 25 Judith Drive, Danbury, CT 06811.
ABSTRACT

This paper describes a process for training program evaluation based upon Provus' Discrepancy Evaluation Model and the principles of instructional design. The Discrepancy-Based Methodology for Nuclear Training Program Evaluation was developed for use in nuclear power utility technician/operator training programs. It facilitates systematic and detailed analyses of multiple training program components in order to identify discrepancies between program specifications, actual outcomes, and industry guidelines for training program development. The evaluation is a three-phased process. Phase One analyzes the utility program standards which define the program (what should be). Phase Two analyzes the programmatic data (what is). Phase Three synthesizes the multiple discrepancy analyses, culminating in interpretation and reporting of the evaluation findings.
INTRODUCTION

This paper describes a comprehensive process for commercial nuclear power training program evaluation. I initially developed this process for use at the Three Mile Island (TMI) nuclear power facility to review and evaluate the training department and its training programs, which were redesigned to meet the facility's personnel training needs after the accident in 1979. Prior to the utility's start-up of the affected reactor, I was employed as a consultant to conduct a program evaluation of the critical technician/operator training programs (including licensed control room operator, chemical technician, etc.). That accident highlighted a need for rigorous review and evaluation of the utility's personnel training programs for licensed and nonlicensed control room operators and related critical operations technician training.
NEED

Nuclear power utility technician/operator performance and competence is a subject of enormous importance. Recent events, including Three Mile Island and Chernobyl, attest to the devastating consequences of a single operator error. Accidents such as Bhopal and the oil spill in Prince William Sound, Alaska, underscore the need for technician training evaluation in other critical performance areas as well, areas which potentially affect the health and safety of the public. Training program assessment is of interest to evaluators not only as professionals but also as citizens. It is for these reasons that a description of a process for analysis of complex technician training programs and program outcomes, based upon a matching of empirical observations to program standards, is necessary.

At TMI a need existed to document technician/operator readiness for duty and to provide for ongoing formative program evaluation to ensure technician competence and the utility's compliance with Pennsylvania, federal, and Institute for Nuclear Power Operations (INPO) guidelines, INPO being the industry's self-regulating body. Such formative program evaluation needed to identify areas where technician competence, the training program design, or its outcomes were deficient in meeting the utility's needs. This evaluation process also had to be capable of producing data for use by external
review teams for periodic TMI utility accreditation reviews (as was the case for which this process was originally designed). The process which I ultimately developed to assess TMI's technician/operator training program needed to parallel the instructional systems procedures already in place in the utility in order to effectively and adequately assess its many component processes and outcomes. It was for this reason that I chose the Discrepancy Evaluation Model (Provus, 1971) as a conceptual basis for the evaluation design.
THE MODEL - A CONCEPTUAL FRAMEWORK

The ultimate objective of training program evaluation in the nuclear power industry is to ensure that the program produces competent technician/operators capable of safe reactor operation. To meet this objective an evaluation process must: provide a means to systematically collect, review, and analyze critical employee (technician/operator) performance data; match the data against utility personnel procedures and engineering specifications, and against industry (INPO) and government (NRC) guidelines and policy; and, ultimately, match the data against operator performances and the utility's own overall performance record.

Nuclear Utility Training
Technicians need to be familiar with the overall control room function and be competent in their respective technical specialties. They need to be able to react instantly to emergency signals and indications. The instructor is the primary professional in nuclear power training. Thus, the following discussion concerning training program evaluation centers around the instructor, and is provided to highlight the training function in the utility and the manner in which training decisions are made using evaluation data from utility plant operations and technician performances.

The TMI training program is an integral part of the overall utility operation. All training staff, including instructors, are appointed from senior technical and engineering ranks within the utility. The training staff makes all decisions about what kinds of training will be provided. Training needs assessment data are continually collected to determine formal training requirements. Data are collected (and computer archived) from incumbent technician/operators, engineering documents and requirements, and utility operating procedures to determine each required job/task function. For instance, if evidence exists that training is needed on the operation of a control valve, engineering specifications data are provided to the training department by an engineering operations technician. Together, the engineering technician and instructor analyze the data to determine the knowledge and performance requirements needed to operate the valve. This forms the basis for instruction on valve operation.

Feedback regarding technician/operator on-the-job performance problems is also obtained from personnel such as senior technicians and operators, utility operations managers, and technician supervisors. This occurs in regularly scheduled committee meetings composed of engineering, management, and training personnel. The training and development process also includes review of technician/operator performance records, which are analyzed and compared to existing job performance requirements. At these meetings, problems related to technician errors on the job are surfaced for discussion, with recommendations for training as necessary. Data are also obtained from the training program records concerning student (technician/operator) performance within formal courses. Utility training is designed as a closed-loop process in which data derived from all of these sources form the input to a systematic procedure for both technician/operator and training program review and evaluation.
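To make this closed-loop idea concrete, the sketch below shows one way such feedback records might be pooled into training-needs flags. It is a minimal illustration in Python; the record fields, threshold, and function name are my own assumptions, not taken from the TMI system.

```python
# Minimal sketch of a closed-loop training-needs check.
# Field names ("source", "type", "task") and the threshold are hypothetical.
from collections import Counter

def flag_training_needs(feedback_records, threshold=3):
    """Pool performance-problem reports from operators, supervisors, and
    engineering reviews; flag any job/task cited at least `threshold` times."""
    counts = Counter(record["task"] for record in feedback_records
                     if record["type"] == "performance_problem")
    return sorted(task for task, n in counts.items() if n >= threshold)

# Example: three independent reports converge on the same control-valve task.
records = [
    {"source": "shift_supervisor", "type": "performance_problem", "task": "operate_control_valve"},
    {"source": "engineering_review", "type": "performance_problem", "task": "operate_control_valve"},
    {"source": "course_records", "type": "performance_problem", "task": "operate_control_valve"},
]
print(flag_training_needs(records))  # ['operate_control_valve']
```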
The Instructional Systems Design Process
It is important to understand that TMI utility technician/operator training is designed and developed by utility instructors using an instructional systems design (ISD) approach (Cantor, 1986a) and according to specific objectives which describe the responsibilities, personnel requirements, and training design requirements and methods. These ISD processes include a structure for training needs analysis, training program design, development, implementation, and evaluation. ISD is a logical, systematic process for defining worker competency requirements and developing worker performance objectives, instructional delivery media, and training and trainee evaluation strategies (Fig. 1).

Within the nuclear power industry, ISD is detailed in an industry-promulgated standard for instructional development and training program operation which is endorsed and used by all licensed nuclear power utilities. The Institute for Nuclear Power Operations (INPO) maintains these industry-developed guidelines and procedures to complement the Nuclear Regulatory Commission's statutes and regulations. Included are specific performance requirements for the personnel who are responsible for the operation and maintenance of the reactor, control room, peripheral controls, and instrumentation. These ISD standards were used as a foundation for development of the comprehensive discrepancy-based evaluation process.

In the case of TMI, a multifaceted training evaluation was conducted. This process had to provide both formative evaluation findings for immediate program attention, and summative findings for periodic review
and reporting to external governmental and policymaking organizations. Initially, the process needed to be implemented by my external review team for the accreditation visit, and later institutionalized into an ongoing internal personnel evaluation process.

Figure 1. The instructional systems design process.
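Read as an algorithm, the ISD loop of Figure 1 can be paraphrased in a few lines of Python. This is only a reading aid under my own naming assumptions; the stage labels are paraphrased from the figure and the text, and the stage functions are placeholders supplied by the caller.

```python
# Illustrative paraphrase of the ISD loop in Fig. 1; stage labels are
# paraphrased from the text, and all function names are hypothetical.

ISD_STAGES = [
    "analyze training requirements",
    "develop objectives and tests",
    "plan, develop, and validate instruction",
    "implement instruction",
    "evaluate trainees and program",
]

def run_isd(program, stage_funcs, max_revisions=5):
    """Run the stages in order; if evaluation still reports deficiencies,
    feed them back through the curriculum-feedback loop and cycle again."""
    for _ in range(max_revisions):
        for stage in ISD_STAGES:
            program = stage_funcs[stage](program)  # each stage transforms the program
        if not program.get("deficiencies"):        # validated: exit the loop
            return program
    return program  # return the latest revision even if still deficient
```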
The Discrepancy Evaluation Model

I believe that a primary purpose of program evaluation is to determine whether to maintain, improve, or terminate a program or portion thereof. The training evaluation process described here incorporates the conceptual bases of decision-oriented evaluation models, including the Discrepancy Evaluation Model (DEM) (Fig. 2) (Provus, 1971; Stufflebeam et al., 1971), with the concepts and elements of instructional design. Decision-oriented evaluation is appropriate for nuclear power training evaluation because, as a product-oriented process, it can be applied within the framework of ISD. Furthermore, it permits analysis of relationships between program outcomes and program objectives, and relationships between these outcomes and the context, input, and process evaluation data. Thus, it facilitates: (a) stipulating program standards; (b) determining whether a discrepancy exists between some aspect of the nuclear training program and the standards governing that aspect of the program; and (c) using "discrepancy" information to identify the weaknesses of the training program or its components ("SPD"). Discrepancy analysis provides information leading to appropriate decisions, both immediate and formative as well as long-range and summative (Rog & Bickman, 1984).

By way of review, DEM is conceptualized as a five-stage process with three corresponding major content categories. The stages are:

1. Design;
2. Installation;
3. Process;
4. Product; and
5. Program Comparison.

Figure 2. The discrepancy evaluation model.

The content categories - Input, Process, and Output - provide for an in-depth review and analysis of the respective stages in terms of:

1. Design Adequacy;
2. Installation Fidelity;
3. Process Adjustment;
4. Product Assessment; and
5. Cost-Benefit Analysis.

The flowchart (Fig. 3) illustrates this process. Here S is Standard, P is Program Performance, C is Compare, D is Discrepancy Information, A is Change in Program Performance or Standards, and T is Terminate. Stage 5 represents the Cost-Benefit option available to the evaluator only after the first four stages have been negotiated. The use of discrepancy information always leads to a decision to either:

1. go on to the next stage;
2. recycle the stage after there has been a change in the program's standards or options;
3. recycle to the first stage; or
4. terminate the project.
Figure 3. Comparison at stages flow chart.
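The cycle in Figure 3 is essentially a small state machine, and can be sketched in Python as below. The dictionaries, callbacks, and return strings are my own illustrative choices, not part of Provus' formulation.

```python
# Hypothetical sketch of the DEM comparison-at-stages cycle (Fig. 3).
# S = standard, P = program performance, C = compare, D = discrepancy
# information, A = change in program or standard, T = terminate.

STAGES = ["design", "installation", "process", "product"]

def compare(standard, performance):
    """C: return discrepancy information (D) - items where P misses S."""
    return {k: v for k, v in standard.items() if performance.get(k) != v}

def run_dem(standards, observe, decide):
    """`observe(stage)` collects performance data; `decide(stage, d)` returns
    'adjust' (A: recycle this stage), 'restart', or 'terminate' (T)."""
    i = 0
    while i < len(STAGES):
        stage = STAGES[i]
        d = compare(standards[stage], observe(stage))
        if not d:
            i += 1                   # no discrepancy: go on to the next stage
            continue
        choice = decide(stage, d)
        if choice == "adjust":
            continue                 # recycle after S or P has been changed
        if choice == "restart":
            i = 0                    # recycle to the first stage
        else:
            return "terminated"      # T: terminate the project
    return "stage 5: cost-benefit analysis available"
```

The "adjust" branch assumes the caller's callbacks actually change the program or the standard before the stage is re-observed, which mirrors Provus' point that discrepancy information must always lead to a decision.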
DEM permits a stipulation of desired program outcomes in sufficient detail to be recognizable and measurable. According to Provus (1971):

At each stage of the model, performance information is obtained and compared with a standard that serves as the criterion for judging the adequacy of that performance. At stage 1, a description of the program's design is obtained as "performance" information. This performance is compared with the design criteria postulated as a standard. . . . At Stage 2 the standard for judging performance is the program design arrived at in Stage 1. Program performance information consists of field observations of the program's installation. Discrepancy information may be used by the program manager to redefine the program or change installation procedures. At Stage 3 performance information is collected on a program's interim products. The standard used is part of the program design that describes the relationship between program processes and interim products. Discrepancy information is used either to redefine process and relationship of process to interim product or to improve control of the process being used in the field. Standard in Stage 4 is the part of the program design that refers to terminal objectives. Program performance information consists of criterion measures used to estimate the terminal effects of the project. At this point in time, if decision makers have more than one project with similar outcomes available to them for analysis, they may elect to do a cost-benefit analysis to determine program efficiency. (p. 185)

An important characteristic of the DEM, and the reason for its selection as the conceptual base for design of the TMI evaluation methodology, is its ability to provide information addressing several layers of training program design and operation. Data are needed which will permit training program managers to immediately correct individual operator training courses or component parts (simulator, on-the-job training, laboratory, etc.) of the program; to assess overall outcomes of the program (in terms of technician job performance) over time; and/or to redefine aspects of the program's conceptual framework (instructional design procedural standards, performance objective formats, item writing standards, etc.). Therefore, both formative and summative evaluation components are necessary.
THE DISCREPANCY-BASED METHODOLOGY FOR NUCLEAR TRAINING PROGRAM EVALUATION

The Discrepancy-Based Methodology for Nuclear Training Program Evaluation is designed in three phases. Phase One of the methodology permits an evaluator to systematically review the overall TMI utility training process, including training standards, and pinpoint any lack of congruence with accepted industry standards (INPO). Phase Two of the methodology assesses individual TMI training components and their outcomes in order to determine congruence with their respective program development standards. Phase Three permits a synthesis of the Phase One and Two discrepancy analysis findings, and a description and discussion of the overall indications.

In operation, the evaluation team at TMI consisted of a nuclear power engineer, training evaluators, and a nuclear power technician. They visited the utility and conducted the evaluation. In the case of The Discrepancy-Based Methodology for Nuclear Training Program Evaluation, identification of discrepancy information might lead to a change in the operation of the nuclear training program or in the training and development specification under which the utility training program operates (Braun, 1981; Cantor, 1986a). Each stage of the program goes through a series of "SPD" cycles in attempting to provide the necessary information to address these questions:

1. Is the program defined?
2. Is the program installed?
3. Are the enabling objectives being met?
4. Are the terminal products achieved?
Discrepancy information can also be used to redesign the utility training standard and process, its relationship to the utility organization as a whole, or to better control the process in the training environment (Cantor, 1988; Cantor & Hobson, 1981; Montague, Ellis, & Wulfeck, 1983). The TMI methodology will specify:

1. discrepancies between industry (INPO) training program development standards/guidelines and the utility's (TMI) own training and development process and standard;
2. discrepancies between the utility's training standard and the utility's actual training program; and
3. discrepancies between the utility's training program goals and objectives and the nuclear control operator performances.

Phase One - A Utility Standard

TMI's training program standards are based on INPO specifications. These standards define the training design processes and procedures as well as the training organization's relationships and authority with respect to utility management. Evaluators first analyzed the TMI utility's management and operation standards against INPO guidelines and criteria and NRC procedures. In actual operation, this process was conducted by the review team prior to arriving on the scene at TMI. At this phase of the review process, the team members were not known to each other. The team independently and individually examined evidence of procedures developed
by the utility training organization against INPO/NRC guidelines and noted any and all discrepancies. Each member of the team was familiar with INPO and NRC guidelines, statutes, and procedures. Individual and independent review is aimed at capitalizing on each team member's expertise and ability to arrive at appropriate conclusions without external influence from the other team members (Cantor, 1986b).

These standards include a training program management plan which stipulates policy and procedures for training management decision making, the composition of the training organization, and interorganizational relationships. The standards specify elements of program administration to be reviewed. They include assignments and responsibilities, numbers, qualifications, training and retraining of instructors, support personnel, length of program, program entry criteria, facility and media requirements, simulator and on-the-job training, training schedules for initial and continuing training, course descriptions, lesson plans, and so forth. These standards provide a very specific process for a training needs assessment, including job and task analysis specifications, performance objective development, procedures for data collection, taxonomy and coding, and specific frequencies of updates and sign-off protocols. They serve as a guideline for TMI training development and training evaluation. A major part of this documentation is the maintenance of records, both as an evaluative database source and as a legal requirement. Evidence of such a record-keeping system for trainee records and instructional files should be in place.

Once the independent review was completed, the team assembled on site at the utility and began a round-table discussion, reviewing and deliberating on their findings. The evaluation team also interviewed training instructors and managers to determine their individual assessments of the training standards used by the utility, and their suggestions for improvement or changes to make the standards more useful. Moderated by the team chair, the objective of this 2-day session was to come to a group consensus on the Phase One review findings of the utility training standard. It should be noted that, depending on whether this is to be a routine review or an external accreditation visit review, the report generated will contain recommendations for formative changes or, alternatively, be part of a summative report on the state of the utility training organization's training standard.

Phase One of The Discrepancy-Based Methodology for Nuclear Training Program Evaluation verifies the existence of these utility standards very precisely, noting any deviations from the industry requirements. The INPO/NRC criteria remain a functional baseline over the course of the evaluation, together with the utility's own standard. Discrepancies which are identified in the utility's standard will be noted in accordance with the INPO/NRC criteria.

Phase Two - A Utility Review

Phase Two of the evaluation process reviews individual training courses, instructional and classroom processes, training plans, and student records, including samples of trainee performance outcomes as well as other training program components, against both the utility standard and the INPO standard (if a Phase One discrepancy exists). Figure 4 graphically describes the Phase One and Two procedures.

Figure 4. The phase one and two procedure.

To assess the effects of a utility's training processes, this evaluation tool also must be capable of analyzing the engineering operations data for indications of personnel problems and of needs for training to alleviate technician/operator performance deficiencies, and of providing prescriptive formative findings of program outcomes (Cantor, 1988). It also must be capable of providing summative assessments of overall program success in order to address reporting requirements to utility management as well as state, court, and federal agencies.

A significant aspect of this phase is conducted through the individual training course evaluations and meetings and discussions with technical instructors and course developers. At this point, the evaluator listens to the plant instructor and notes concerns expressed, constraints, suggestions, and so forth.

To carry out the objectives of this phase, The Discrepancy-Based Methodology for Nuclear Training Program Evaluation incorporates aspects of the Instructional Quality Inventory (IQI) (Montague et al., 1983), a tool developed for U.S. Navy courses. Also built upon the principles of instructional systems development, the IQI provides an empirical methodology for both instructor and course evaluation using preidentified and stated behavioral objectives. This process permits a comprehensive auditing of courses (and programs) against the learning requirements for the course(s), within the conceptual framework of DEM. Data are collected in the classroom visit.

A Task Evaluation Algorithm (Cantor, 1985) ensures that curricula include up-to-date job analysis data (Fig. 5). This process systematically codes all tasks by grouping them into departmental requirements and reviewing them against existing lesson plans. This model component provides a means for auditing curricula and lesson plans against specific tasks required on the job; a hedged sketch of such an audit follows the Phase Two checklist below.

Figure 5. The task evaluation algorithm.

Comprehensive auditing of the entire training program, including multidimensional learner requirements such as appropriate training media, and multiple performance objectives across numerous courses, is accomplished through the Training Effectiveness Algorithm (Cantor, 1988). Complementary to the IQI, it permits
identification of the cause(s) of reported training problems within the training system and the ongoing utility organization. Figure 6 presents the Training Effectiveness Algorithm. It uses technician/operators and supervisors in a formal process to analyze and identify training-related problems in a two-step procedure.

Figure 6. The training effectiveness algorithm.

In Phase Two, we analyzed each of the areas of the TMI utility training process and program against the established standards. We then analyzed each of the following areas:

1. Needs assessment data, to determine whether the processes are followed, files are maintained, key personnel are involved, and so forth.
2. Processes for the development of performance objectives (including writing styles, formats, etc.).
3. Curricula development, lesson plans, media selection methods, and so forth, verified using lesson plan files selected randomly from program files.
4. Facilities, classrooms, labs, and simulators, all personally observed.
5. Instruction, formally observed and evaluated.
6. Staff qualifications and development, reviewed through randomly selected staff records.
7. The training evaluation program, carefully studied, with individual data trails followed to determine how data are used for program revision.
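The promised sketch of a task-coverage audit follows. It is only a loose Python approximation of what the Task Evaluation Algorithm checks (the actual algorithm is specified in Cantor, 1985); the record fields and function name are invented for illustration.

```python
# Hypothetical task-vs-lesson-plan coverage audit, loosely in the spirit of
# the Task Evaluation Algorithm (Cantor, 1985); all field names are invented.
from collections import defaultdict

def audit_task_coverage(job_tasks, lesson_plans):
    """Group job-analysis tasks by department, then report tasks with no
    covering lesson plan and lessons teaching tasks no longer required."""
    by_dept = defaultdict(set)
    for task in job_tasks:                 # e.g. {"id": "CV-012", "dept": "chemistry"}
        by_dept[task["dept"]].add(task["id"])

    required = {tid for ids in by_dept.values() for tid in ids}
    taught = {tid for plan in lesson_plans for tid in plan["task_ids"]}

    return {
        "untrained_tasks": sorted(required - taught),   # job requires, no lesson covers
        "obsolete_lessons": sorted(taught - required),  # lesson covers, job no longer requires
        "tasks_by_department": {d: sorted(ids) for d, ids in by_dept.items()},
    }
```

Both mismatch directions matter here: an uncovered task is a training gap, while an obsolete lesson signals curricula that have drifted from current job analysis data.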
The outcomes of Phase Two are written and presented as indications for corrective action (formative findings) and later presented as summative findings in court, state, and federal assessments.

Phase Three - A Synthesis

Phase Three of The Discrepancy-Based Methodology for Nuclear Training Program Evaluation involves synthesizing the findings of the Phase Two reviews and analyses. Figure 7 graphically represents Phase Three of this comprehensive discrepancy-based program evaluation.

Figure 7. The phase three procedure.

In Phase Three, all data supporting the in-utility analyses are compiled and reviewed. At this point, we noted the discrepancies in each of the evaluation component categories. Careful attention was given to review of data concerning course and program congruence to
standards requirements and to course and program outcomes. Data analyses of course and lesson plans are reviewed, lesson observation notes are studied, facilities reviews are considered, and final conclusions are reduced to writing.

The final TMI report was actually two-fold: (1) a report of discrepancies between INPO/NRC standards and utility-specific standards; and (2) a report of discrepancies between utility-specific standards and actual utility training operations and outcomes. Emphasis was placed on those areas and problems which utility management should prioritize and incorporate into program revision. Inasmuch as these evaluations were to become evidence in licensing hearings for the utility, as well as court-subpoenaed information, detailed findings, discrepancies, and problems needed to be provided.

The reporting procedure involved an extensive briefing to corporate officials, training management, and public utility and INPO/NRC officials. These briefings, generally days long, were accompanied by visual presentations. A full written report was also prepared and provided.
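As an illustration of this synthesis step, the fragment below assembles the two-fold report from the phase findings. The report structure and field names are assumptions of mine, not the TMI report format.

```python
# Hypothetical assembly of the two-fold Phase Three report; the structure
# and field names are invented for illustration.
from operator import itemgetter

def synthesize_report(phase_one_findings, phase_two_findings):
    """Split discrepancies into the two reports described above, each ranked
    by the priority assigned during the round-table review."""
    return {
        "standards_vs_inpo_nrc": sorted(phase_one_findings, key=itemgetter("priority")),
        "program_vs_utility_standard": sorted(phase_two_findings, key=itemgetter("priority")),
    }

report = synthesize_report(
    [{"item": "needs-assessment sign-off protocol missing", "priority": 1}],
    [{"item": "simulator lesson plans not updated to current procedures", "priority": 2}],
)
```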
CONCLUSION

This article has discussed The Discrepancy-Based Methodology for Nuclear Training Program Evaluation developed for use in the nuclear power industry. This methodology was commissioned, financed, and designed to solve an immediate problem - the need for an empirical, multifaceted evaluation tool capable of use in a complex, highly technical, and politically visible organization - a public utility. As a result of the design and development of the Discrepancy-Based Methodology for Nuclear Training Program Evaluation, a systematic methodology for identifying standards against which to assess program operation and successes, and a formative-summative discrepancy-based process to review ongoing programs, was installed in the TMI utility.

This methodology now offers the evaluation community at large a new and refreshing tool with which to make positive inroads in the world of large-scale organizational evaluation as well as other multifaceted policy-setting environments. Further, this methodology allows an evaluator to gain a useful and unambiguous "big picture" in heretofore difficult kinds of organizational evaluations, including those of large and complex engineering organizations such as nuclear power utilities, and in climates affecting large budgets and sociopolitical constituencies.

Applicability to Policy Development

From a public policy perspective, The Discrepancy-Based Methodology for Nuclear Training Program Evaluation provides sound data for decision making affecting "big picture" critical issues such as nuclear power utility location, construction, organization, licensing, and regulation. This evaluation process has proven to provide a framework from which to use large-scale program evaluation data to make these "big picture" decisions about: the cost-effectiveness of multimillion dollar programs; the overall manpower and budgetary needs of large-scale organizations; management and administrative competence; and, in the case of organizations such as nuclear power utilities, societal well-being and needs as well.

Within a power utility organization, decision making based on training program evaluation data includes indications for revisions in manpower planning and development
activities, including manpower logistics decision making relating to work crew planning, staffing, reorganizing, and so forth. Discrepancy-based evaluation provides program managers and policymakers an empirical basis for decision making about these critical employee performances.

Applicability to the Evaluation Discipline

I have found that all too often training program reviews amount to nothing more than cursory notations of individual perceptions and biases. However, as has been seen in the nuclear power-generating industry, significant policy decisions are based on the findings of substantive personnel and training outcomes data. The Discrepancy-Based Methodology for Nuclear Training Program Evaluation described here is a fresh use of an evaluation paradigm: an empirical approach to training evaluation. It is a rather unique blend of both positivist and naturalistic methods, intended to make evaluation productive and to ensure a rigorous and systematic framework for analyzing individual program components against recognized standards of measurable program objectives and constructs.

The use of an expert team approach to the process is another plus. Experience at TMI suggests that no one voice can unduly influence the program evaluation outcomes and findings. The process incorporates human research activities, such as in-depth interviews and on-site observations, linked together to provide a counterpart to hard engineering data and theoretical procedures. In essence, this provides a service to engineering and training managers as well as policymakers.

While this process was designed for nuclear utility training evaluation, it holds promise for any critical-skills training area in either military or paramilitary environments. It can even prove useful for new and emerging large-scale evaluation needs such as teacher certification. I submit that The Discrepancy-Based Methodology for Nuclear Training Program Evaluation has the potential to prove useful in many other organizational applications affecting the hard and soft sciences. I welcome comments and inputs from other evaluators and researchers who attempt to implement the methodology in various environments.
REFERENCES

BRAUN, F. (1981, November). The strategic weapon system training program part I - description. Paper presented at the 23rd Annual Conference of the Military Testing Association, Washington, DC.

CANTOR, J.A. (1985). Task evaluation: Comparing existing curricula to job analysis results. Journal of Educational Technology Systems, 14, 157-163.

CANTOR, J.A. (1986a). The strategic weapon system training program. Journal of Educational Technology Systems, 14, 229-238.

CANTOR, J.A. (1986b). The Delphi as a job analysis tool. Journal of Instructional Development, 9, 16-19.

CANTOR, J.A. (1988). The training effectiveness algorithm. Journal of Educational Technology Systems, 16, 201-229.

CANTOR, J.A., & HOBSON, E. (1981, November). The strategic weapon system training program part IV - executive steering group's role: SWS personnel and training evaluation program. Paper presented at the 23rd Annual Conference of the Military Testing Association, Washington, DC.

MONTAGUE, W.E., ELLIS, J.A., & WULFECK, W.H. (1983). The instructional quality inventory (IQI): A formative evaluation tool for instructional systems development. Monograph. San Diego, CA: Navy Personnel Research and Development Center.

PROVUS, M. (1971). Discrepancy evaluation for educational program improvement and assessment. Berkeley, CA: McCutchan Publishing Corporation.

ROG, D., & BICKMAN, L. (1984). The feedback research approach to evaluation. Evaluation and Program Planning, 7, 169-175.

STUFFLEBEAM, D.L., FOLEY, W.J., GEPHART, W.J., GUBA, E.G., HAMMOND, R.L., MERRIMAN, H.O., & PROVUS, M.M. (1971). Educational evaluation and decision-making. Itasca, IL: F.E. Peacock.