Journal of Evaluation and Program Planning, Vol. 1, pp. 31-40, 1978. Pergamon Press. Printed in U.S.A.

MENTAL HEALTH PROGRAM EVALUATION: WHERE YOU START?

DONALD A. LUND
New York State Department of Mental Hygiene

A version of this paper was presented to the Florida Mental Health Program Evaluation Training Workshop, Tampa, Florida, July 20, 1976. Requests for reprints should be sent to Donald A. Lund, Assistant Commissioner for Mental Health, NYS Department of Mental Hygiene, 44 Holland Avenue, Albany, NY 12229.

ABSTRACT

Seduced by the elegance, sophistication and jargon characteristic of new developments in applied research, practicing evaluators are in danger of becoming distracted from achievement of their evaluative goal: that of providing timely, reliable and useful data to program management to facilitate rational, data-based decision-making. To avoid such seduction, the author advocates adaptation of simple, inexpensive and efficient evaluative methods tailored to organizational needs and presentation of findings in language readily understood by constituent groups. Likewise, insistence by evaluators upon maintenance of a strict role boundary - that is, limiting role functioning exclusively to those tasks considered evaluation - may be dysfunctional. Rather than detracting from accomplishment of the evaluator's goals, the author maintains, participation in the broader work of the organization allows development of skills, insights and staff relationships which enhance the evaluator's effective functioning within the organizational context and contribute to organizational acceptance and use of his evaluative results.

INTRODUCTION

The opportunity to discuss "first steps" in the development of a program evaluation service allowed me the luxury of indulging my reminiscences. It was with great pleasure that I reexamined those early experiences of struggling to meet ambiguous expectations and to create a clear and vital role within a developing community mental health center. I remember, vividly, the conflict and frustration of dealing with an emerging specialty, evaluation; of trying to make it distinct from research; of attempting to meet clearly incompatible data requirements of a number of external constituencies; of being asked to plan, to develop grant proposals, to write continuation applications, to examine clinical records, to count client services and movements, to aggregate data concerning staff activity in order to determine rates of productivity, and to develop program budgets, when I was supposed to be "evaluating." And I kept saying, "As soon as I get the program budget done, I'll begin to evaluate!!!" To supplement these reminiscences about my early experience as an "evaluator," I sought out others whose experiences might contribute insight to this discussion of "first steps" or "auspicious beginnings." I determined to catalog their successes and failures, and the factors contributing to them, to provide the valuable insights that come only from the experience of doing. Allow me to share the fruits of my excursions and of my discoveries.

IN THE COMMUNITY MENTAL HEALTH CENTER

After making several appointments, only to have them broken by the press of other business, I arrived at the first site that I had proposed to visit - a community mental health center in western New York - and asked the receptionist, who greeted me quite cordially, to guide me to the program evaluation offices. She acquiesced to my request and led me, enthusiastically, through a maze of corridors. We entered a large room and, when I saw the autographed photographs of Broskowski and Sorensen and the poster depicting Attkisson, McIntyre and Hargreaves on the wall, I knew that I had arrived. The receptionist directed me to a chair and politely asked whether I would like an "evalu-treat" while I waited. The "evalu-treat" sounded fattening and, being on a diet, I declined. As my gaze wandered about the room, I noticed the most recent dymaxion (20-sided) projection of evaluative levels, substantive areas, roles of the evaluator, and utility of data for decision making on a table across the room. Needless to say, I was impressed, for the most elaborate model I had previously examined was a mere cube.

After a brief wait, the Director of Program Evaluation Services appeared, offering me a tour of the premises. I accepted and he quietly escorted me past closed doors, indicating in hushed, knowing tones that the rooms beyond contained ATGON, SCL-90, and GAS. I knew what GAS was and suspected that SCL-90 and ATGON were highly volatile additives to increase the potency or octane of the GAS. But why would these dangerous substances be stored at a community mental health center? I thought that I had much to learn. As we passed another room, my guide mentioned "multi-attribute utilities." I knew about utilities - telephone and electric - but had never heard them referred to as multi-attribute before. Then he led me to the computer room, his pride and joy, where I was introduced to the technician responsible for his center's "System 3" computer. Its capabilities, augmented by virtual memory and a process of dynamic reallocation, were explained. I scratched my head, looked about and acknowledged my substantial confusion. "But how," I stuttered, "are these capabilities related to decision-making about program effectiveness and efficiency in this center?" My guide blanched. Had I made a faux pas or asked a question bordering on heresy? He was silent. Taking his silence as an ominous sign, and being highly fearful that the tanks of GAS, SCL-90 and ATGON would explode, I determined to leave this center as rapidly as possible, hopeful that guidance for the beginning evaluator would be found elsewhere.

"MENDING MINDS" HOSPITAL

Less than enthusiastic about my experience in the community mental health center, I decided to try another type of facility: a state mental hospital. I had heard that the program evaluation service at "Mending Minds" Hospital was excellent and that it had made many innovations. Asked what level of demonstration would be appropriate, I answered that I would like to witness all of their activity. Mending Minds' program evaluation staff responded that they would be happy to oblige. They asked that I be seated in the bleachers on the ninth hole of the golf course precisely at 10 a.m. I did as I was told and arrived in plenty of time. A pert, attractive hostess escorted me to a seat of honor and the music began. A convoy, led by a large van with "Mending Minds Program Evaluation Service" emblazoned on its side, pulled onto the golf course. Members of its crew activated the van's computer as others began unloading complex and sophisticated mental health program evaluation gear. To one side, I observed the PATS team preparing its PEP procedures and a squad of program auditors marshalling their forces to undertake a PASS review. Meanwhile, back at the van, several large hoses were removed from their racks and connected to receptacles in nearby wards. The equipment began to whir and grind as vacuum suction pulled in some ward atmosphere. Immediately, complex analyses were performed, producing a score on the Ward Atmosphere Scale. Highly calibrated sensors provided measures of organizational climate and levels of staff empathy.

Quietly and efficiently, "Mending Minds'" patients were led to the entrance of interconnected semi-trailers in which they passed through an optical scan device (laser-based and similar to those new-fangled checkout counters in the supermarket) which quickly read their identity from the small lines printed upon their wrist bracelets. They were then seated in comfortable arm chairs attached to a conveyor belt (just like in the Haunted Mansion at Florida's Disney World). Gradually, they moved through a series of stations at which their physical health and mental status were examined. As they exited, results of the multiphasic screen, measures of their functional status, adaptive behavior skills, level of psychopathology and an instant diagnosis were miraculously produced. I was overwhelmed. The Director of Program Evaluation Services at Mending Minds and her staff approached - all wearing identical T-shirts - and presented me with a complete evaluation, 278 pages of printout, of the "Mending Minds" ward examined. "What am I to do with this?" I asked. "Oh, that's not our problem. We just produce comprehensive reports," was the answer. I realized immediately that the experience of visiting "Mending Minds," like that of visiting the community mental health center, would not provide generalizable lessons to illuminate those evaluative starting points. Or would it? Perhaps both experiences provide insights worthy of sharing for, in point of fact, each methodology or technique subjected to this parody is useful and worthy in its own right. The techniques aren't to blame. Rather, it is what we as evaluators have done with them. Seduced by their elegance and sophistication, we have forgotten the essence of our role and mission.

JARGON

Let us take a brief look at the parody's targets. GAS, for instance, is an acronym used with reference to the Global Assessment Scale (Spitzer, Gibbon and Endicott, 1975) or as a generic label applied to a number of goal attainment scaling strategies, including Kiresuk's (1973) "Goal Attainment Scaling," Wilson's (1974) "ATGON" (Automated Tri-Informant Goal Oriented Progress Note) and "Eval-U-Treat," developed by Benedict (1973). Rather than being a highly volatile substance, "SCL-90" is a 90-item symptom checklist (Derogatis, Lipman and Covi, 1973).

PATS (Psychiatric Audit Team) members perform the JCAH's Performance Evaluation Procedure for Auditing and Improving Patient Care (PEP) (Joint Commission on Accreditation of Hospitals, 1975), while other teams utilize Wolfensberger's Program Analysis of Service Systems methodology (Wolfensberger and Glenn, 1973). Certainly, these are all appropriate and well-developed methodologies. Likewise, examination of ward atmosphere (Moos, 1974) and its consequences, or of the impact of organizational climate (Levinson, 1972) on treatment outcome, are important elements of program evaluation. Cubistic models of evaluation strategy, including those published by Schwab, Warheit and Fennell (1975), Bell (1973), and Attkisson and Hargreaves (1976), also have their value. This value may, however, be hidden behind the jargonistic verbiage and cliquish nature of the acronyms used in their descriptions. Especially distracting is Edwards, Guttentag and Snapper's (1975) use of "multi-attribute utility analysis" as a description of their concept labeled "A Decision Theoretic Approach to Evaluation Research." For the beginning evaluator to be faced with jargon, acronyms and the wisdom of the "gurus" who pull in their chins and say, "I know what evaluation is all about" is frightening, foreboding, overwhelming and senseless.

This is not to say that the techniques and methodologies explicitly or implicitly referenced in this parody are not useful, practical or reliable. Rather, they are, with due recognition of both their assets and deficits, respectable components of the evaluative technology. It is the obfuscation created by their acronyms and the technical complexity with which the techniques are presented which detract from their ready implementation. They may be applicable to many issues that clinical managers wish to address and can be utilized without reference to the aura of reverence and complexity which surrounds them. In point of fact, evaluators should be encouraged to utilize existing instrumentation and technology rather than to expend valuable resources in development of what is often duplicative. While evaluators have damaged the credibility of individual techniques by surrounding them with an aura of complexity, sophistication, and uniqueness that militates against their use, trappings, jargon and inordinate concern with issues more appropriate to clinical research distract from the practitioner's commitment to his central role and from his view of evaluation as "determining the degree to which a program is meeting its objectives, the problems it is encountering and the side effects it is creating" (Southern Regional Education Board, 1973).

McCullough (1975) has provided a dynamic definition of evaluation as ". . . the systematic collection of information about programs in order to provide data about their operation, effectiveness and efficiency. The information should be useful in decision-making and program development. In this context, evaluation is a management and organizational tool, not simply a research activity." Smith (1974) maintains that "in practice, evaluation services usually go beyond this definition, to concern themselves with the need for the program, its scope in relation to the needs, and the internal processes of program functioning. This extension is necessitated by the fact that program effectiveness often can be discerned only through examination of the program's context and its operational processes."

Edwards, Guttentag and Snapper (1975) emphasize that "evaluations exist (or perhaps only should exist) to facilitate intelligent decision-making." Simply, evaluation is a tool which can be used to allow the clinical manager to make rational, data-based decisions and to alert him to situations in which management intervention may be necessary. As Van Maanen (1973) puts it, "evaluation may provide data which will reduce uncertainty as to what's really happening inside the program and begin to clarify the pluses and minuses of various decisions."

The common thread in all of these definitions is the use of evaluation for decision-making. It is vital, to assure that use, to make evaluation consumable by targeting presentation of both methods and results toward the intended constituency. Disguised and jargonized, revered and touted, masked and obfuscated, or made overly complex, the findings will not be used and the evaluator will not meet the criterion explicit in McCullough's (1975) definition: "The information should be useful in decision-making and program development."

How is it done, and where does one begin? Allow me to address these issues from my own experience - the substance of my reminiscences - and later to return to more global issues of use. I shall begin descriptively and then generalize prescriptively to "where you start." My arrival as Director of Research and Evaluation at the Alachua County Mental Health Clinic - which within several days, by virtue of the funding of a staffing grant, would become the North Central Florida Community Mental Health Center - was greeted with great enthusiasm. The great State of Florida, through its Division of Mental Health, had just required each clinic and center to provide admission and discharge data concerning individual clients to a central information management system. (Forgive the play on words, but I refuse to dignify that Florida system by calling it a Management Information System, for in my two years with the North Central Florida Community Mental Health Center, no management information was received from that system.) The mandated forms were not designed for direct collection of these data. Rather, the core information had to be collected using another document - not provided by the State - converted to numeric code and then transcribed onto the state forms for transmittal to Tallahassee. And guess who got the job. Right! The new Director of Research and Evaluation spent his first days designing a new intake document and a procedure for insuring accurate and timely transmittal of required information to the State.

Following that chore, I thought about the tasks at hand: Research and Evaluation. How should I begin? No sooner had I begun to contemplate that issue than I was assigned another task. Our Director wanted to submit a grant to the National Institute on Alcohol Abuse and Alcoholism for the purpose of serving the alcoholics of the catchment area. Who got the assignment? Right! The Director of Research and Evaluation, who knew little about grant writing and less about alcoholism. I quickly learned, and with the help of staff from Avon Park (Florida's model alcoholism facility), that process was begun.

I soon discovered much about the ten-county catchment area through examination of census statistics and more through visits to various service delivery sites. Finally, that grant was submitted. Next came the problem of the client record!!! A new community mental health center just had to have an adequate clinical record keeping system. And Lund had designed the intake form, so why not give him a crack at the rest of the record? I could have inscribed the totality of my knowledge about clinical records on the head of the proverbial pin - in very large letters. Again, however, quick learning and the help of several very competent colleagues resulted in a "treatment contract" and a "goal oriented progress note." Perhaps after completion of this chore, I could begin to evaluate? No way! Just as the records project reached a degree of completion - record-oriented problems never quite disappear - I received a call from our new Director (the second Director within the first month of the Center's existence): "Would you help the business manager get things under control? We need new equipment, we must make arrangements for health insurance and retirement benefits, and we must develop some personnel policies!" At that point, I could see my dream of evaluating services provided by this new community mental health center begin to fade. As we purchased equipment and arranged employee benefits, I received a call from the state Division of Mental Health in Tallahassee. The voice at the other end of the phone line informed me of the requirement to provide a monthly report of client movement. "Oh," I responded, "I thought the admission, discharge and modality transfer forms would provide those data."

No, I was informed, it would be necessary to provide the aggregate monthly statistics until the "system" was operational. O.K., I thought, how will I do that? The first monthly report was fudged. So was the second and the third. I became quite proficient at completing these reports through use of a sophisticated and elegant new technique devised at the Center: the Data Acquisition and Retrieval Technique. Eventually, I knew that I would have to provide more than an approximation. But for now . . .

Other diversions, preventing my pursuit of evaluation, presented themselves. Continuation grant applications, new applications for Children's Services and for "growth" had to be written. The NIMH inventory - affectionately known as Rosalyn Bass's "green monster" - arrived, and I knew I would have to go to the D.A.R.T. board again, since the information required for completion of that document was quite different from that demanded by the state. And then there was the budget. Our District (governing) Board requested a "program budget" as a vehicle for allocating resources. (What else?) With a ten-county catchment area served by five clinics and associated satellites, the program budget was seen as a vehicle for allocating dollars on the basis of both past productivity and future commitment to productivity objectives. The idea was fantastic - the workload even more so - but the experience made clear the need to develop an information system which would capture staff activity and allow empirical derivation of productivity standards against which to compare actual performance.

INFORMATION SYSTEM

And so, I began the task of designing an information system, frustrated with my inability to get to the task of evaluating and unaware that I was in the process of developing a basic tool vital in enabling my future evaluative endeavors. As might be expected, the first steps I took in my new role as systems designer were false and led down a blind alley. Trained as a researcher, rather than as an evaluator, I attempted to design a staff activity system that captured every "nice to know" datum concerning the content of a staff member's day and the services he provided to the clients in his care. I examined a number of systems and chose to elaborate on the most complex, one designed by VanHoudnos (1971) in Illinois. The resulting form was a researcher's dream and a clinician's nightmare. It required continuous entry of detailed information and encoding by use of an accompanying manual. I quickly learned about the clinician's wrath, about editing procedures and about the logistics of data preparation for processing. To staff the system, I had envisioned a single clerk editing and keying data to a computer file. Editing took incredible time, and by the time we were ready to key these data we had a file cabinet full of rapidly aging forms. I became overwhelmed with difficulties in programming and quietly abandoned the project. Not one form was keyed and not one report generated!!! I continued to fudge reports and to hope that I would not be caught.

In order to regain my pride, I launched a study of consumer satisfaction and sought to collect follow-up data concerning "no shows," or service dropouts, in order to feed back to clinicians the reasons for the failure of their clients to reappear. Working on a sample basis, I discovered that I could mount a simple project, gather, analyze and feed back data in a short period of time - and that this process could have an impact in modification of professional and organizational behavior. I realized that the KISS (Wilson, 1976) principle, Keep It Simple Stupid, had validity and, even more significantly, that there were substantial personal and professional rewards in doing quick, simple and timely projects that addressed issues of importance to managers or clinicians and that had an impact on organization structure and process. Another discovery was also significant. I realized that untapped sources of existing data were available and that they could be exploited simply and beneficially. From these existing data, I could determine much about the state hospital experience of residents of our catchment area and could examine longitudinal trends in admissions and readmissions. For example, I found that state hospital admission trends for nine of our ten counties were dropping, and that I could suggest a change in our pattern of service delivery as a possible intervention toward reversal of that aberrant county's experience. (That intervention worked!!!)
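The kind of simple secondary analysis described above needs nothing more than existing admission counts and a few lines of arithmetic. The sketch below is only illustrative - the county figures are hypothetical, and the original work was done by hand from state records - but it shows how little machinery is required to spot the one catchment-area county whose state hospital admissions are moving against the regional trend.

```python
# Illustrative sketch: flag counties whose state hospital admissions are rising
# while the catchment-wide trend is falling. All counts below are hypothetical.

admissions = {                       # county -> yearly admission counts
    "Alachua":   [310, 280, 255],
    "Columbia":  [90, 84, 77],
    "Dixie":     [25, 22, 20],
    "Gilchrist": [18, 17, 15],
    "Hamilton":  [40, 46, 53],       # the "aberrant" county in this example
}

def slope(counts):
    """Average year-to-year change in admissions."""
    changes = [b - a for a, b in zip(counts, counts[1:])]
    return sum(changes) / len(changes)

for county, counts in sorted(admissions.items()):
    trend = slope(counts)
    label = "RISING - review service pattern" if trend > 0 else "falling"
    print(f"{county:<10} {counts}  avg change/yr = {trend:+.1f}  ({label})")
```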

Sociodemographic characteristics of residents of the catchment area, derived from census data, could be compared with the sociodemographic characteristics of clients receiving service to determine which population groups were underserved by the center. For example, I learned that our case load was composed of larger proportions of the elderly and of minority group members, and smaller proportions of children, than were represented in the catchment area population. Believe it or not, useful data could even be abstracted from the carbon copies of those "state" admission/termination forms.

Needless to say, other crises intruded into the time available for my functioning as an evaluator. Suddenly, an inspector from the Board of Pharmacy decided that the Center's storage procedures for state-provided medications were inadequate.

As a result, several weeks were spent by the "evaluator" contracting for an alternative means of dispensing medications through local pharmacies. The ultimate crisis, however - the one that catapulted me to the status of full-fledged evaluator - was one of billing. Emphasis had been placed upon the need to collect fees for service from the very beginning, but issues concerning grant requirements and matching funds increased awareness of the need to bill and resulted in our realization that fee collections were quite poor. Members of the clinical staff were quite resistant to performing fee collection functions, maintaining that they were employed to provide service and that collection of fees was an unnecessary intrusion into their professional relationship with clients.

[Exhibit 1. Porta-punch input documents: prepunched computer cards for recording direct and indirect services, with fields for agency/group, date of service, service performed, and billing, and check-off positions for the catchment-area counties (Alachua, Columbia, Dixie, Gilchrist, Hamilton, and others).]

As the process of developing a billing system began, I recognized its similarity to my previous attempt to develop an information system. Indeed, the mechanism devised to gather billing information could be linked to a method for gathering information concerning both direct and indirect services delivered. By so doing, we would have an information system that could serve multiple purposes. I greeted that prospect enthusiastically and tried to recall the "lessons" I had learned through that previous experience. I knew that I had to keep the system simple, that the computer programs had to be developed and in place before receipt of the first data forms, that feedback to clinicians and administration had to be both useful and timely in order to assure continued cooperation, and that the input document itself had to be simple, parsimonious and efficient to complete. That input document would have to contain all necessary information, for clinicians could not be expected to carry manuals and to look up codes each time they made an entry on the form.

The result of this experience and knowledge was the "porta-punch" system, which met our multiple purposes and the criteria of simplicity, efficiency and parsimony. The basic documents in the system were two computer cards (see Exhibit 1) with prepunched tabs. One card was designed to record direct services; the other, indirect services such as consultation and education. Preprinted on the cards were the relevant bits of information to be recorded. The information was recorded by knocking out the appropriate "tab" with a ball point pen. For example, if training was provided to the local police department, the staff member would punch out the date, his number, and the nature of the service ("training"), and would write "Gainesville Police" on the card. The code for the Gainesville Police would later be punched by program evaluation staff. The same principle applied to the direct service card, which had space for the client's number, the staff member's number, the charge for the service, the type of service, the date of service and its duration. Following simple editing for completeness and consistency, these cards were fed into a card reader and the information stored in computer files for instant analysis. Within a short time, reports were no longer fudged. The Center had developed the ability to generate accurate and timely data for all of its relevant constituencies. Management knew what was happening! That was a big step forward.

Development and operationalization of this system, which gave us valuable data, generated bills and contributed to "accountability" within the Center, was not the conclusion of the evaluative effort. With the financial and collegial assistance of the Florida Consortium for the Study of Community Mental Health Evaluation Techniques (1974), the Center had resources to experiment with Schwab and Warheit's needs assessment methodologies (Lund & Josephson, 1974), with Nancy Wilson's ATGON system (Lund & Aiken, 1974) and with Ciarlo's Multi-Dimensional Outcome Measure (Thompson, Thompson & Lund, 1974). But these more sophisticated efforts are beyond consideration of beginning steps in mental health program evaluation, for they require greater investment of program resources, access to computer systems and greater technical expertise.
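The porta-punch cards themselves are long gone, but the underlying logic - a minimal service record, a simple edit for completeness and consistency, and immediate aggregation into activity and billing totals - is easy to sketch. The example below is not the Center's original software; the record fields and sample values are hypothetical, chosen only to mirror the card layout described above.

```python
# Minimal sketch of the porta-punch idea: edit each service record for
# completeness/consistency, then aggregate activity and billable charges.
# Field names and sample data are hypothetical.
from collections import defaultdict

VALID_SERVICES = {"individual", "group", "intake", "consultation", "training"}

def edit(record):
    """Return a list of edit failures (an empty list means the record passes)."""
    problems = []
    if not record.get("staff_id"):
        problems.append("missing staff id")
    if record.get("service") not in VALID_SERVICES:
        problems.append("unknown service type")
    if record.get("direct") and not record.get("client_id"):
        problems.append("direct service without client id")
    if record.get("charge", 0) < 0:
        problems.append("negative charge")
    return problems

def aggregate(records):
    """Tally accepted records into simple activity and billing totals."""
    activity, charges, rejected = defaultdict(int), defaultdict(float), []
    for rec in records:
        errs = edit(rec)
        if errs:
            rejected.append((rec, errs))
            continue
        activity[rec["service"]] += 1
        charges[rec["staff_id"]] += rec.get("charge", 0)
    return activity, charges, rejected

records = [
    {"staff_id": "S07", "client_id": "C123", "direct": True,
     "service": "individual", "charge": 15.0, "date": "1975-03-04"},
    {"staff_id": "S07", "direct": False, "service": "training",
     "agency": "Gainesville Police", "date": "1975-03-04"},
    {"staff_id": "", "direct": True, "service": "group", "charge": 5.0},
]

activity, charges, rejected = aggregate(records)
print("services delivered:", dict(activity))
print("charges by staff:", dict(charges))
print("records needing correction:", len(rejected))
```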

SUMMARY

To summarize the principal learnings of this beginning experience, I can reflect upon a number of diverse assignments I received and upon the consequences of those developments implemented and of those aborted before their completion. Principal among the learnings from these experiences was acknowledgment of the diverse roles I was called upon to play. I spent less of my time doing evaluation and much more engaged in the roles of planner, grant writer, trouble shooter and manager. When a problem or systemic difficulty existed, I was called upon to solve it or to devise a means for its solution. These factors delayed implementation of a full-fledged evaluation process for more than a year. But the advantages outweighed the disadvantages. Based upon the consequences of this experience, it would seem that a periodic crisis could have a beneficial effect on the functioning of the "would be" evaluator. As a key figure in Center administration, I recognized the Center's data and information needs. Being involved in multiple roles, I could insure that all available data were considered as decisions were being made. That involvement also allowed me to assure that evaluative considerations were built into new plans and into the implementation of developing programs. Performance in a multiplicity of roles gave me legitimacy within the Center, kept me involved rather than isolated, and allowed me to evolve a meaningful role as an evaluator. While diversion from evaluative tasks was frustrating, the opportunity to impact upon Center management, policy and decision-making was invaluable.

Several conclusions can be drawn as a consequence of these experiences. Evaluators should be encouraged to be involved and to accept diverse responsibilities as offered to them. I believe it the norm, rather than the exception, for the evaluator to provide a multiplicity of services in the Center, program or facility rather than to maintain the strict role boundary of evaluator. Also vital is the realization, derived only in retrospect, that much that seems diversionary and extraneous is really vital to the evaluative endeavor. An evaluator does not simply arrive within an organization and begin to evaluate. He must evolve those tools which allow him to have sufficient information to do so. Those assignments that seemed diversionary when I received them - clinical records, billing, information systems, state and Federal reporting, even grant writing - were not. They were valuable opportunities to develop the tools that I later needed in order to evaluate. I needed the clinical record to allow me to abstract data for subsequent analysis. The format designed and incorporated into that record made that process more efficient. New evaluation requirements for community mental health centers (Windle & Ochberg, 1975) require the implementation of a process of utilization review. Much of this process is dependent upon the content of the clinical record, and thus the evaluator's involvement with the clinical record - as a basic tool of his trade - seems almost inevitable.

Information systems, too, are basic tools of the evaluator's trade. To allow those to develop without the evaluator's input seems to deny his need for those data in examination of client movement (body counts), indicators of process, and cost allocation and, ultimately, in examination of the outcome of treatment intervention.

Beginning steps, then, in my experience, include performance by the evaluator in a number of service roles within his program and the development of the basic tools necessary to more sophisticated applications. As important is his increasing involvement on the management team. Only through that involvement can the uses of evaluative information in decision-making be assured. There are multiple levels of evaluation, which have been well described by Attkisson and colleagues (Attkisson, McIntyre, Hargreaves, Harris & Ochberg, 1974). Evaluations at many levels can be accomplished without high resource expenditure, computer access, or sophisticated technical expertise. The apparatus and dollars are not necessary, desirable or even important, especially when they remove the basis of responsibility for decision-making from the clinical manager and evaluation from the context of the service delivery system.

That cost is too high! To insure its use in ongoing program management, evaluation - its results and the corrective interventions prescribed - should be directed to the level of sophistication of those making programmatic, managerial or clinical decisions. Biegel (1974), in his paper "Evaluation on a Shoestring," articulates this well: "The inability of a majority of . . . directors and administrators to utilize these complex computer-based evaluation systems does not negate, however, the importance of 'evaluation'." Thus far, I have endeavored to strip the jargon and mystical aura from evaluation, to encourage simplicity, to reassure evaluators diverted from pure evaluation to other administrative and management tasks of the value of their experience and of the similarity of that diversion to the experience of other evaluators, and to encourage attention to the basic building blocks of the evaluative enterprise. Allow me to turn from elaboration of my own experience to a more prescriptive approach. I will take the role of advocate (Gardner, 1975) and encourage consideration of several "beginning steps." It is my contention that the availability of technical experts, with the infusion of state-of-the-art technology and almost unlimited resources, can often allow achievement of the evaluative "nirvana" (as a state of self-delusion). However, the lack of technocrats, computer access, and unlimited resources is no excuse for failure to evaluate.

OBJECTIVES

Basic to the capability to evaluate is a context in which program objectives (Morrisey, 1970), either explicit or implicit, exist and are understood. These can, and should, be stated in terms of structure, process and outcome (Donabedian, 1969). A rather simple "verb-noun" approach to setting clear, concise, and easily understood goals can be utilized; for example: "reduce length of stay." This goal can be operationalized by first determining the present level or baseline (i.e., current length of stay is 30 days), determining a target neither too easily reached nor too difficult to attain (i.e., reduce length of stay by 4 days) and specifying a deadline (i.e., within 6 months). Remember, these objectives are not cast in bronze; they can and should be revised with experience.

Objectives addressing structural program issues, such as adequacy of the physical plant, of the staff/patient ratio and/or of staff qualifications as related to program needs, while the subject of much external review in the form of inspection for compliance with standards, can be the basis for rigorous periodic self-evaluation. The evaluator can compare the current program environment and staffing against his organization's objectives or J.C.A.H. (Joint Commission on Accreditation of Hospitals, 1972) standards relevant to his particular facility, and recommend intervention or corrective action where appropriate. Common structural goals, such as decreasing the patient/direct service staff ratio, humanizing wards, and developing outreach clinics accessible to the population at risk, can be specified in ways which will make them measurable and allow the evaluator to provide clinical management with real information concerning program structure. Objectives written based on these structural goals sound like this:

- Decrease the existing patient/psychologist ratio (100 to 1) to 50 to 1 by the end of the current fiscal year,
- Hang drapes meeting 1974 federal fire-resistant regulations in all residential living units within 3 months, and
- Open a storefront satellite clinic within one year in the identified high-risk area of "Freud's Station."

Within the realm of examination of process, it is possible to employ information already available at the service level. Systematic examination of these data, or their recombination into indices or ratios, can provide an insight into attainment of process objectives. "Process evaluation looks at the operations of the program, the practices within it (the patterns which develop), and the ways that the program approaches and interacts with the mental health system. . . . It is useful as a quality measure when process results are compared against baseline data, norms, standards and realistically determined objectives. Since in most cases process objectives are indirect measures of quality, why look at process? Process evaluation is immediately attainable. It relies on a basic data set . . . which may already exist or can, with little effort (and additional cost), be developed within a program. Measurable and observable process objectives can be set for areas and population groups to be served, modalities of service to be used, equity and availability of services, and continuity of care within the system, based on needs assessment, census data and baseline information on current operations." (Koroluk, 1975)
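The baseline-target-deadline pattern described above is concrete enough to be written down and checked mechanically. The sketch below is only an illustration of that pattern; the objective names, figures and status rules are hypothetical and are not drawn from any system described in this paper.

```python
# Illustrative sketch of "verb-noun" objectives with a baseline, a target,
# and a deadline. All names and figures are hypothetical.
from dataclasses import dataclass
from datetime import date

@dataclass
class Objective:
    statement: str          # verb-noun goal, e.g. "reduce length of stay"
    baseline: float         # present level when the objective was set
    target: float           # level to be reached
    deadline: date          # date by which the target should be met
    higher_is_better: bool = False

    def status(self, current: float, today: date) -> str:
        met = current >= self.target if self.higher_is_better else current <= self.target
        if met:
            return "met"
        return "in progress" if today <= self.deadline else "missed deadline"

objectives = [
    Objective("reduce length of stay (days)", baseline=30, target=26,
              deadline=date(1976, 6, 30)),
    Objective("decrease patient/psychologist ratio", baseline=100, target=50,
              deadline=date(1976, 12, 31)),
]

# Hypothetical current values reported by the program.
current = {"reduce length of stay (days)": 27,
           "decrease patient/psychologist ratio": 80}

for obj in objectives:
    print(f"{obj.statement}: baseline {obj.baseline}, target {obj.target}, "
          f"now {current[obj.statement]} -> "
          f"{obj.status(current[obj.statement], date(1976, 5, 1))}")
```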

Within an inpatient setting, for example, a goal of "reducing injury-producing incidents" has been set. This has been operationalized as "reduce by 50% the frequency of injury (average of 8 per month) to patients over the next 6 months." This objective can then be monitored by using the mandated incident reports which are completed when an injury-causing event occurs. Typically, these incident reports are reviewed on a case-by-case basis. Rarely, however, does the clinical manager maintain a continuous record of these incidents to determine whether patterns emerge as to time, place, type of incident or individuals involved. Yet this is exactly the information he needs to determine the achievement of this stated objective. These data can be provided, simply and inexpensively, during the "beginning steps" of an evaluative process. Similar uses can be made of aggregate morbidity and mortality data, of medication profiles, or even of changing patterns of utilization of isolation rooms, restraints, emergency ambulance services, and intramuscular injections (especially when used for behavior management).

Still within the context of readily accessible data sources are treatment documents yielding information about the process of patient care. A system of explicit objectives, or criteria, is necessary to focus review of treatment documentation and to select information elements to be abstracted; e.g., "each therapist will develop at least 3 observable behavioral objectives for each of his patients within one month of the patient's admission." Simple examination to ascertain whether documentation is completed within prespecified time constraints, whether indicated risks are managed in accordance with criteria for risk management, whether services are provided within a temporally appropriate sequence, and whether adequate justification and documentation exist for continued maintenance at the current level or frequency of care is possible based upon accepted prespecified criteria. A methodology for such review using existing treatment documentation, variously termed psychiatric audit or medical care evaluation studies, has been proposed by the J.C.A.H. (1975) under the title Performance Evaluation Procedure. Such studies are required under mandates of the evaluation amendments to the Community Mental Health Centers Act (PL 94-63) and PSRO (PL 92-603).

In ambulatory settings, aggregation of data concerning "no shows" and "service dropouts" in terms of frequency of occurrence, and time of day and day of week of occurrence, can reveal patterns of utilization and accessibility. To achieve the goal of "improving temporal accessibility," these data can be used by the clinical manager to schedule services and staff time at high utilization periods and, conversely, to schedule administrative functions, such as staff meetings, during periods of low utilization. Through such analysis, objectives of "move staff meetings to Monday morning by March" and "open the outpatient clinic three nights a week by April" might be established. Simple comparisons of sociodemographic data concerning the population served with distributions present in the general population can provide a wealth of information concerning service consumers and how well the program is doing in terms of its own assumptions and/or its community's expectations of who should be served.
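A comparison of this kind reduces to a few proportions. The sketch below is purely illustrative - the population groups and percentages are hypothetical, not the figures of any center discussed here - but it shows how a caseload profile can be set against the catchment-area distribution to flag groups that appear underserved.

```python
# Illustrative sketch: compare the sociodemographic profile of clients served
# with the catchment-area population to flag apparently underserved groups.
# All percentages are hypothetical.

catchment = {"children": 0.28, "adults": 0.58, "elderly": 0.14}   # census-based
caseload  = {"children": 0.15, "adults": 0.61, "elderly": 0.24}   # clients served

for group in catchment:
    ratio = caseload[group] / catchment[group]   # 1.0 = served in proportion
    if ratio < 0.75:
        note = "possibly underserved"
    elif ratio > 1.25:
        note = "overrepresented in caseload"
    else:
        note = "roughly proportional"
    print(f"{group:<9} population {catchment[group]:.0%}  "
          f"caseload {caseload[group]:.0%}  ratio {ratio:.2f}  {note}")
```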

Readily available sources for these data include publications of the Bureau of the Census and the National Institute of Mental Health Series C publications (including such titles as "A Model for Estimating Mental Health Needs Using 1970 Census Socioeconomic Data," Rosen, 1974), which provide working models for the clinical manager concerning evaluation, accounting, cost finding and information systems. As technocrats and resources become available, sociodemographic comparisons can be made not only between the population served and distributions within the general population, but also between the population served and the population-at-risk as defined using some sociotechnologic approximation of the prevalence of treatable disorder.

Outcome evaluation is generally considered the highest level of evaluation, and therefore it is assumed to require the greatest infusion of resources for its support. A major reason for its cost is that it is often geared toward assessing the condition of the individual and attempts to measure change in that condition over time. Measuring change of the human psychopathologic, symptomatic, or functional condition generally requires standardized instruments or techniques. These, in turn, require training to administer, the ability to analyze, and at least two administrations (pre and post) for purposes of interpretation. (Ciarlo [1972] has proposed a "post-only" methodology which may be effective as well.) These techniques generally require analysis of multiple items or of specific individualized goals. The goal attainment scaling techniques previously mentioned rely, in one form or another, upon development of observable, patient-specific objectives and upon monitoring progress toward achievement of those objectives on a prespecified continuum.

However, it is possible to examine a limited set of conditions or goals to which the program is specifically directed. For instance, in a mental retardation program, a program goal can be "to toilet train the residents." Following review of baseline performance data, this goal can be operationalized by stating, "75% of the residents on unit X will toilet independently after completing a six week bowel and bladder training program." In other settings, such as a chronic geriatric unit, a maintenance goal may be established which targets maintenance (rather than progress or regression) of a functional state as a successful outcome. An example might be "maintain present level of functioning for 97% of patients on unit Y over the next six months." Others, like Binner (1974), advocate use of a simple unidimensional measure of "level of impairment" at admission which can be compared to a "level of response" at discharge. An easily implemented process, it has the benefits of parsimony and simplicity and provides valuable data concerning program effectiveness.

The effects of utilization review and the development of PSROs have already led to a more well-defined data base in inpatient programs which may be useful for outcome evaluation. Within the utilization review process, it is necessary to establish, through a consensual process involving peer judgment, explicit criteria with which to determine whether admission and continued stay are appropriate. It is also possible to measure symptom relief, improvement or change in function based upon those same criteria.
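Even this level of outcome evaluation can begin modestly. The sketch below illustrates the simple pre/post logic described above - comparing a rating at admission with a rating at discharge and summarizing the results for a unit. The rating scale and patient records are hypothetical and are not drawn from Binner's Output Value Analysis or from any instrument cited in this paper.

```python
# Illustrative pre/post outcome summary: compare an impairment rating at
# admission with a response rating at discharge. Ratings (1 = minimal,
# 5 = severe) and patient records are hypothetical.

patients = [
    {"id": "P01", "admission": 4, "discharge": 2},
    {"id": "P02", "admission": 3, "discharge": 3},
    {"id": "P03", "admission": 5, "discharge": 2},
    {"id": "P04", "admission": 2, "discharge": 3},
]

improved = sum(1 for p in patients if p["discharge"] < p["admission"])
maintained = sum(1 for p in patients if p["discharge"] == p["admission"])
worsened = len(patients) - improved - maintained
avg_change = sum(p["admission"] - p["discharge"] for p in patients) / len(patients)

print(f"improved:   {improved}/{len(patients)}")
print(f"maintained: {maintained}/{len(patients)}")
print(f"worsened:   {worsened}/{len(patients)}")
print(f"average reduction in rated impairment: {avg_change:.1f} points")
```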

At this level of evaluation, instruments such as the Global Assessment Scale or the SCL-90 become useful in measuring progress along predetermined and standardized dimensions.

These examples have not addressed the universe of available techniques. Rather, they represent an attempt to divert the evaluator's attention from highly sophisticated approaches to those practically conceived as "beginning steps." It is possible to conduct evaluation without becoming mired in the jargon of the field; it is possible to do so without expending an inordinate proportion of the service delivery system's resources; and it is possible to do so without technologic sophistication. Common sense, coupled with an understanding of the management process, may suffice.

I have referred to program evaluation as a "service." I mean that quite literally. The evaluator is, and should be, in a service role, providing needed information to clinicians, managers and other program constituencies. If he fails to actively provide accurate, timely and interpretable information to management, he fails in the achievement of his own core mission as an evaluator. Gardner (1975), in his discussion of the "rights and responsibilities" of the evaluator, states it this way:

"The evaluator should be responsible for providing a comprehensible interpretation of the assessment to all relevant groups: program personnel, funding sources, clients, and the appropriate community or the general public. The publication of journal articles, books, or the distribution of reports is not adequate. The evaluative findings must be disseminated in a manner that assures the attention of the appropriate groups and must be understood by all parties. This too assumes an active educative role for the evaluator. The failure of most program evaluators to assume this responsibility for diffusion of research findings has been a major deterrent to realization of the potential influence of evaluative research. The dissemination of evaluation findings overlaps with the translation of the assessment into recommendations for program change. As an active participant in the organization or system . . . the evaluator also should be responsible for translating the assessment into recommendations for necessary program change."

Guttentag (1976) puts it even more strongly: "If evaluative results are not used, the failure is clearly the evaluator's."

ANALYSIS

If I convey any message, I hope it is that we as evaluators do ourselves and our clients a disservice by allowing technical shorthand to infiltrate our communications to the detriment of our concern for rational, data-based management toward the goal of effective human service delivery. We must learn to communicate with our constituencies and take an active role in enabling their use of our findings. This may require the addition of a number of roles, beyond the traditional, to the evaluator's repertoire. It also calls for reorientation away from the symbols of academic training.

Talking about chi square and analysis of variance with consumer groups just doesn't facilitate communication of the evaluator's message. Another point bears reemphasis. We, as evaluators, should expect to be diverted from evaluation to meet the ongoing needs of our organization or system. Such diversion is both annoying and frustrating. It may seem that administrators and clinicians invent crises as a means to co-opt the evaluator and thus deal with the threat he poses. Take heart! These conditions are benign and can be turned to advantage. Don't give up! Sooner or later even you will get your chance to evaluate.

REFERENCES

ATTKISSON, C. C. and HARGREAVES, W. A. A Conceptual Model for Program Evaluation in Health Organizations. In H. C. Schulberg and F. Baker (Eds.), Program Evaluation in the Health Fields (Volume II). New York: Behavioral Publications, 1976.

ATTKISSON, C. C., MCINTYRE, M. H., HARGREAVES, W. A., HARRIS, M. R. and OCHBERG, F. M. A Working Model for Mental Health Program Evaluation. American Journal of Orthopsychiatry, 1974, 44, 741-753.

BELL, R. Systematic Dimensions of a Comprehensive Evaluation Model. In Southern Health and Family Life Studies. Winter Haven, FL: Community Mental Health Center, 1973.

BENEDICT, W. Eval-U-Treat, A Unified Approach to Evaluation and Treatment in a Residential Treatment Center. Stoughton, WI: Lutheran Social Services of Wisconsin and Upper Michigan, 1973.

BIEGEL, A. Evaluation on a Shoestring: A Suggested Methodology for Evaluation of Community Mental Health Services Without Budgetary and Staffing Support. In W. A. Hargreaves, C. C. Attkisson, L. M. Siegel, M. H. McIntyre and J. E. Sorensen (Eds.), Resource Materials for Community Mental Health Program Evaluation (Part 1). San Francisco: National Institute of Mental Health, 1974.

BINNER, P. R. Output Value Analysis: An Overview. Paper presented to the Information and Feedback Conference, York University, Toronto, Canada, 1974.

CIARLO, J. A Multi-Dimensional Outcome Measure for Evaluating Community Mental Health Programs. Unpublished manuscript. Denver, CO: Denver General Community Mental Health Center, 1972.

DAVIS, H. Four Ways to Goal Attainment. Evaluation, 1973, 1, 43-48.

DEROGATIS, L., LIPMAN, R. and COVI, L. SCL-90: An Outpatient Psychiatric Rating Scale (preliminary report). Psychopharmacology Bulletin, 1973.

DONABEDIAN, A. (Ed.) Medical Care Appraisal: A Guide to Medical Care Administration (Vol. 2). New York: American Public Health Association, 1969.

EDWARDS, W., GUTTENTAG, M. and SNAPPER, K. A Decision-Theoretic Approach to Evaluation Research. In E. L. Struening and M. Guttentag (Eds.), Handbook of Evaluation Research (Volume 1). Beverly Hills, CA: Sage Publications, 1975.

Florida Consortium for the Study of Community Mental Health Evaluation Techniques. Final Report. Washington, DC: National Institute of Mental Health, 1974.

GARDNER, E. A. Responsibilities and Rights of the Evaluator in Evaluation of Alcohol, Drug Abuse, and Mental Health Programs. In J. Zusman and C. R. Wurster (Eds.), Program Evaluation: Alcohol, Drug Abuse and Mental Health Services. Lexington, MA: Lexington Books, 1975.

GUTTENTAG, M. Future of Evaluation in Human Services. Presented to the Summer Institute on Program Planning and Evaluation of Human Services, State University of New York at Buffalo, 1976.

Joint Commission on Accreditation of Hospitals. Accreditation Manual for Psychiatric Facilities. Chicago, IL: Joint Commission on Accreditation of Hospitals, 1972.

Joint Commission on Accreditation of Hospitals. The PEP Primer for Psychiatry. Chicago, IL: Joint Commission on Accreditation of Hospitals, 1975.

KIRESUK, T. J. Goal Attainment Scaling at a Community Mental Health Service. Evaluation, 1973, Special Monograph No. 1, 12-18.

KOROLUK, I. Process Evaluation. Ripple, 1975, 2, 6. (Newsletter of the Bureau of Program Evaluation, New York State Department of Mental Hygiene, Albany, NY.)

LEVINSON, H. Organizational Diagnosis. Cambridge, MA: Harvard University Press, 1972.

LUND, D. A. and JOSEPHSON, S. L. Needs Assessment: Practicality, Utility and Reliability. In Florida Consortium for the Study of Community Mental Health Evaluation Techniques, Final Report. Washington, DC: National Institute of Mental Health, 1974.

LUND, D. A. and AIKEN, J. F. The ATGON System at the New Dawn Partial Hospitalization Program of the North Central Florida Community Mental Health Center. In Florida Consortium for the Study of Community Mental Health Evaluation Techniques, Final Report. Washington, DC: National Institute of Mental Health, 1974.

MCCULLOUGH, P. Training for Evaluators. In J. Zusman and C. R. Wurster (Eds.), Program Evaluation: Alcohol, Drug Abuse and Mental Health Services. Lexington, MA: Lexington Books, 1975.

MOOS, R. Evaluating Treatment Environments. New York: John Wiley, 1974.

MORRISEY, G. Management by Objectives and Results. Reading, MA: Addison-Wesley, 1970.

ROSEN, B. A Model for Estimating Mental Health Needs Using 1970 Census Socioeconomic Data. Department of Health, Education and Welfare Publication No. (ADM) 74-63. Washington, DC: U.S. Government Printing Office, 1974.

SCHWAB, J., WARHEIT, G. and FENNELL, E. An Epidemiologic Assessment of Needs and Utilization of Services. Evaluation, 1975, 2, 65-67.

SMITH, J. Manual of Principles and Methods for Program Evaluation. In W. A. Hargreaves, C. C. Attkisson, L. M. Siegel, M. H. McIntyre and J. E. Sorensen (Eds.), Resource Materials for Community Mental Health Program Evaluation (Part 1). San Francisco: National Institute of Mental Health, 1974.

Southern Regional Education Board. Definition of Terms in Mental Health, Alcohol Abuse, and Mental Retardation. Department of Health, Education and Welfare Publication No. (ADM) 74-38. Washington, DC: U.S. Government Printing Office, 1973.

SPITZER, R., GIBBON, M. and ENDICOTT, J. Global Assessment Scale. In W. A. Hargreaves, C. C. Attkisson, L. M. Siegel, M. H. McIntyre and J. E. Sorensen (Eds.), Evaluating the Effectiveness of Services. Rockville, MD: National Institute of Mental Health, 1975.

THOMPSON, L. E., THOMPSON, B. and LUND, D. A. Evaluation Report on the Managers Mean Rating of the Importance of Selected Program Priorities. In Florida Consortium for the Study of Community Mental Health Evaluation Techniques, Final Report. Washington, DC: National Institute of Mental Health, 1974.

VANHOUDNOS, H. M. An Automated Community Mental Health Information System. Decatur, IL: State of Illinois Department of Mental Health, 1971.

VAN MAANEN, J. The Process of Program Evaluation: A Guide for Managers. Washington, DC: National Training and Development Service Press, 1973.

WILSON, N. An Information System for Clinical and Administrative Decision-Making, Research and Evaluation. In J. Crawford, D. Morgan and D. Gianturco (Eds.), Progress in Mental Health Information Systems. Cambridge, MA: Ballinger, 1974.

WILSON, N. The KISS Principle: State Level Program Evaluation in Colorado, or What It's Like to Move From Service Based Evaluation to the State Level. Presented to the HEW/NIMH State Level Program Evaluation Conference, Windsor, CT, 1976.

WINDLE, C. and OCHBERG, F. M. Enhancing Program Evaluation in the Community Mental Health Centers Program. Evaluation, 1975, 2, 30-36.