THE MYSTERY OF EXECUTIVE EDUCATION

Effectiveness requires evaluation

HERBERT H. HAND
Mr. Hand is a faculty member in the School of Business, Indiana University. He wishes to express appreciation to Paul J. Gordon for criticism in the preparation of the article.
Every practicing manager is engaged in a race "between his obsolescence and retirement." To help him, industry, government, and nonprofit institutions spend millions annually on executive and management development programs. Thought must be given to the value of these programs. There are three major problems: defining their parameters, verbalizing the basic assumptions, and evaluating the results. For evaluation, objective criteria should replace subjective criteria, and multiple raters should replace self-evaluation. Such a program should provide an opportunity to define objectives with respect to change, a systematic evaluation of the value of executive development expenditures, and a basis for revising future programs.

A major problem facing all executives and managers in complex organizations has been accurately and succinctly stated by Mee:

Changing conditions are confronting practicing managers in economic, political, social, and technological areas. No manager who lives a normal life span will die in the same historical era in which he was born as a result of the rapidity and impact of change. Consequently, it is easier to get educated than to stay educated. Change results in the obsolescence of products, methods, markets, and ways of thinking. Because of
changing conditions, every practicing manager is engaged in a career race between his obsolescence and his retirement. Usually the most one can expect is a photo finish if he is enrolled in a 'lifetime learning league' and attends professional programs.1
Each year millions of dollars are spent by industry, government, and nonprofit institutions on both executive and management development programs. The cost of such programs may be measured not only in terms of actual dollar costs, but also in the intrinsic costs associated with the manager being off the job throughout the process. Imbued as we are with cost-benefit analysis, some thought must be given to the value of executive and management development programs.2 If we can, in fact, identify and measure those benefits anticipated from each program, we will have taken a major step. A thorough analysis of the benefits and costs associated with development and training would provide a basis for, first, estimating the effectiveness with which the organization, in the long run, is adapting to corporate obsolescence and, second, estimating the efficiency with which individuals in the organization are being prepared, in the short run, to cope with the needed organizational change.
1. John F. Mee, "Renewal for Practicing Managers," Miami Business Review, XXXVII (April, 1966).
2. James E. Barrett, "The Case for the Evaluation of Training Expenses," Business Horizons, XII (April, 1969), pp. 67-72.
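To make the cost side concrete, a minimal sketch in Python follows (purely illustrative; it is not part of the original article, and the function name and all figures are hypothetical). The point is simply that the opportunity cost of managers being off the job can rival or exceed the out-of-pocket price of a program.

    def program_cost(direct_cost, participants, days_off_job, daily_value_per_manager):
        # Total cost of a development program: out-of-pocket expense plus the
        # opportunity cost of having managers off the job during the program.
        opportunity_cost = participants * days_off_job * daily_value_per_manager
        return direct_cost + opportunity_cost

    # Hypothetical figures: a two-week program for 20 managers.
    total = program_cost(direct_cost=40_000, participants=20,
                         days_off_job=10, daily_value_per_manager=150)
    print(total)  # 40,000 direct + 30,000 opportunity = 70,000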
[Figure: General Definition of Professional Education Program. Six classification variables, each a continuum between polar end points: Development vs. Training; Short-Time Duration vs. Long-Time Duration; Cognitive Instruction vs. Experiential Instruction; Conceptually-Oriented Content vs. Technique-Oriented Content; On-the-Job Training vs. Off-the-Job Training; In-House Instructors vs. Out-of-House Instructors.]
Livingston notes that

Many business organizations are cutting back their expenditures for management training just at the time when they most need managers who are able to do those things that will keep them competitive and profitable. . . . It is a belated recognition by top management that formal management training is not paying off in improved performance.3

This article takes the position that executive development programs are needed, but that we must evaluate their effectiveness. Further, it assumes that any organization can satisfactorily resolve the complexity of the problem.

3. J. Sterling Livingston, "Myth of the Well-Educated Manager," Harvard Business Review, XLIX (January-February, 1971), pp. 78-89.
THE PROBLEMS

Unraveling the mysteries of the value of executive and manager development presents several formidable obstacles. The initial problem is to define the parameters of professional education programs. Second, it is necessary to state explicitly the assumptions underlying the purposes of such programs. Last, we must deal with the problems associated with evaluating the degree to which the educational program's purpose is attained.

A general definition of professional education programs must consider several variables simultaneously (see accompanying figure). Each variable may operate on a continuum between polar points. One variable has as its end points Development and Training. Briefly, management development attempts to improve performance within the executive's present position and should provide vertical impetus within the organization.4 Development is thought to provide knowledge through which the attitudes of executives may be formulated. Management training, in contrast, is generally intended to promote an improvement in levels of a specific skill. Stoltz points out a concurrent method of classifying professional education programs.5 On-the-Job, Job Changes, and Off-the-Job types of educational programs compose this variable. While On-the-Job normally refers to the training end of the first continuum, it could also be applied to the Development portion. Job Changes and Off-the-Job types of professional education can fall at almost any point on the Development-Training continuum.

4. Wendell French, The Personnel Management Process (Boston: Houghton Mifflin Company, 1964), pp. 195-96; Dale Yoder, Personnel Management and Industrial Relations (Englewood Cliffs, N.J.: Prentice-Hall, Inc., 1962), p. 386.

5. Robert K. Stoltz, "Executive Development: the New Perspective," Harvard Business Review, XLIV (May-June, 1966), pp. 133-43.
A third classification of professional education programs is an In-House versus Out-of-House presentation. Often, this is a dichotomous variable, although many organizations use some combination of the two. Three additional variables of professional education programs are Content, Method of Instruction, and Training Time. Content, while often subjective, varies with the preceding three variables. Method ranges from purely cognitive to purely experiential, and Time varies from a few hours to weeks and, occasionally, months.
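As an illustrative restatement (not from the original article), these six classification variables can be modeled as a small data structure; the class and field names below are hypothetical, with each continuum expressed as a position between its polar end points.

    from dataclasses import dataclass

    @dataclass
    class EducationProgram:
        # Each continuum is a 0.0-1.0 position between its polar end points;
        # the In-House variable is often simply dichotomous.
        development_vs_training: float    # 0.0 = pure development, 1.0 = pure training
        on_vs_off_the_job: float          # 0.0 = on-the-job, 1.0 = off-the-job
        in_house: bool                    # True = in-house, False = out-of-house instructors
        conceptual_vs_technique: float    # content orientation
        cognitive_vs_experiential: float  # method of instruction
        duration_days: int                # training time: a few hours to months

    # Hypothetical example: a two-week, off-site, largely experiential
    # development program run by outside instructors.
    program = EducationProgram(0.2, 0.9, False, 0.3, 0.8, 10)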
The second problem associated with determining the value of professional education programs is the verbalizing of basic assumptions regarding the objectives of a specific program. This is a major problem area because it represents a commitment on the part of those responsible for allocating the organization's funds for the educational program. Commitment, in this area, is a precarious avocation in view of the statement by Livingston. In addition, a recent research study found that nearly 40 percent of the chief executives sampled were either doubtful or downright skeptical of the "impact in practice" of the "best" professional education programs available.6 The findings of Livingston and Mant notwithstanding, the key to evaluating the value of professional education programs is a clear statement of purpose. Unfortunately, experts in semantic jungle warfare exist and must be recognized at this juncture. Clearly defined objectives must be stated in terms of anticipated changes in knowledge, attitudes, skills, and/or performance levels. Success or failure of the program can then be measured in terms of the change that occurred. The alternative to establishing program goals is a frank admission that the educational program is not goal oriented.

The third area associated with appraising the efficacy of professional education programs is that of establishing a program of evaluative action. The first subproblem is whether or not to evaluate, a major decision which produces discord from several quarters. Such a decision may be made on the basis of the trade-off between the additional cost of evaluation to the organization and the anticipated value of the program to the company. "What" to evaluate would, in practice, be answered by the previously stated objectives of the program. The question of "who" should evaluate is critical. Barrett reflects that "the improvement of results and a better way of quantifying training value are the responsibility of the professionals in the education field." Burke states that managers must "demand rigorous research in industrial training," but also notes that the "trainer, if left alone, will not voluntarily perform this service." Obviously, conflicting opinions are reflected in these statements.7 Most would agree, however, that an objective observer is more likely to make an objective evaluation. A recommendation of this article is, therefore, to select an evaluator who is associated with neither the organization for which the educational program is operated nor those conducting the training.
6. Alistair Mant, The Experienced Manager: a Major Resource (London: British Institute of Management, 1969), p. 42.
7. Ronald J. Burke, "A Plea for a Systematic Evaluation of Training," Business Horizons, XII (April, 1969), pp. 67-72.
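The evaluate-or-not trade-off noted above can be put in rough arithmetic terms. The following sketch is a hypothetical decision rule, not drawn from the article, and every figure in it is invented: evaluate when the expected gain from acting on the findings exceeds the added cost of evaluation.

    def evaluation_worthwhile(evaluation_cost, program_budget,
                              p_useful_findings, expected_improvement):
        # Crude expected-value rule: the evaluation pays for itself when the
        # chance of actionable findings, times the improvement they would
        # yield in future program spending, exceeds the evaluation's cost.
        expected_gain = p_useful_findings * expected_improvement * program_budget
        return expected_gain > evaluation_cost

    # Hypothetical figures: a $10,000 evaluation of a $100,000 program, with a
    # 50 percent chance of findings that improve future spending by 25 percent.
    print(evaluation_worthwhile(10_000, 100_000, 0.5, 0.25))  # True: 12,500 > 10,000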
A PROGRAM OF ACTION

At this point, one is reminded of the cliché, "It works fine in theory, but how does it work in practice?" Few field research designs using well-controlled conditions with which to evaluate development or training programs have been published. A distinction should be made, at this point, between a "study" and "research." For purposes of this discussion, a study might be a historical recounting of facts and/or opinions associated with a particular program or series of programs. Andrews' study of management development programs is probably the most comprehensive yet available.8
Unfortunately, the executives rated the programs themselves. It is now suggested that objective criteria replace subjective criteria and that the use of multiple raters replace self-evaluation.

Two terms need to be defined. The variables to be changed by a professional education program will be called dependent variables. They depend, hopefully, on the program. Thus, the program is one of several independent variables available for analysis. In order to isolate the change effected by the program, it is necessary to control factors that may distort a critical view of the desired results (the dependent variables). Therefore, it is necessary to define a comparable counterpart of the executive or manager being trained. If third-level production managers are being trained, comparable third-level production managers who were not trained should serve as a basis of comparison for the dependent variables. Measures of the selected dependent variables for both the trained and the nontrained managers must be taken both prior to training and at some specified period after the program. Numerous statistical techniques are readily available to analyze the effect of the development or training. Two precautions are suggested, however. First, the evaluator should, in practice, be an individual thoroughly familiar with field research design. Second, adequate time must be allowed for change to occur. The value of educational programs is far from clear, and instant results are unlikely.

This design has been tested successfully in research in industrial firms.9

8. Kenneth R. Andrews, The Effectiveness of University Programs (Boston: The Harvard Graduate School, 1966).

9. Herbert H. Hand and John W. Slocum, Jr., "Human Relations Training for Middle Management: a Field Experiment," Academy of Management Journal, XIII (December, 1970), pp. 403-10, and "A Longitudinal Study of the Effect of a Human Relations Training Program on Managerial Effectiveness," in press.
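Before turning to that research, here is a minimal sketch of the comparison logic in Python (an anachronistic illustration, not the authors' actual analysis). All scores are invented, and the two-sample t test on change scores stands in for the "numerous statistical techniques" mentioned above.

    from scipy import stats

    # Invented before/after scores on one dependent variable (say, a human
    # relations knowledge test) for trained managers and untrained controls.
    trained_pre  = [62, 55, 70, 58, 66, 61, 59, 64]
    trained_post = [74, 60, 79, 65, 71, 70, 63, 72]
    control_pre  = [60, 57, 68, 63, 65, 59, 62, 61]
    control_post = [61, 56, 70, 64, 64, 62, 61, 63]

    # Change scores isolate each manager's shift; comparing trained against
    # untrained managers controls for factors other than the program itself.
    trained_change = [b - a for a, b in zip(trained_pre, trained_post)]
    control_change = [b - a for a, b in zip(control_pre, control_post)]

    t, p = stats.ttest_ind(trained_change, control_change)
    print(f"t = {t:.2f}, p = {p:.4f}")  # a small p suggests a genuine program effect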
A field experiment was conducted in a large specialty steel plant to evaluate the effectiveness of a middle-manager human relations training program. Two groups were selected for the evaluation process. One group (the experimental group) was composed of 21 line-and-staff managers who were to receive human relations training. Another group of 21 line-and-staff managers was selected from the same hierarchical level in the organization. The second group (known as the control group) did not receive training. The research design required the measurement of human relations knowledge, attitudes, and behavior at three different points in time: one prior to training, one shortly after training, and one eighteen months after training. In addition, corporate performance records of the 42 managers were compared as of the first and third measurements. Knowledge of human relations and specific relevant attitudes toward human relations were measured by questionnaires administered to the 42 managers. Behavior of the managers was measured by a questionnaire administered to their subordinates. Performance of the managers was, of course, evaluated by their bosses. A statistical comparison of the experimental and the control group for the various time periods on each of the dependent variables (Knowledge, Attitudes, Behavior, Performance) produced substantive evidence regarding the effectiveness of this professional education program.

Several pay-offs would appear to be associated with the suggested program of action. First, an opportunity would be presented to define objectives clearly with respect to organizational change. Second, the value of executive development expenditures could be evaluated on a systematic basis. Third, and perhaps most important, thorough evaluations would provide the agenda for revising the content and/or process of specific future programs.

Each passing year makes it more apparent that we are truly in a lifetime learning league. Thus, it would appear increasingly worthwhile to assure ourselves that our human resources expenditures are effectively spent.