Evaluation and Program Planning, Vol. 16, pp. 295-304, 1993. Printed in the USA. All rights reserved. Copyright © 1993 Pergamon Press Ltd. 0149-7189/93 $6.00 + .00
USING MULTIATTRIBUTE UTILITY THEORY AS A PRIORITY-SETTING TOOL IN HUMAN SERVICES PLANNING

MICHAEL J. CAMASSO and JANET DICK

Rutgers University, New Brunswick, New Jersey

ABSTRACT

Multiattribute Utility Theory (MAUT) is a normative theory of choice that facilitates the segmentation of complex decisions into value dimensions and alternative positions and guides the recombination of these values and alternatives into an overall judgement. MAUT has proven to be helpful in resolving difficult public policy problems under conditions of uncertain outcome. This paper examines the feasibility of applying the approach to the needs assessment and services priority setting activities of county-wide human services planning councils. MAUT was applied to the 1989-1991 cycle of the Essex County (Newark), New Jersey Comprehensive Human Services Plan with some success. The decision-making and information-filtering processes uncovered by the approach are discussed, as are the questions about community planning which it raises.

INTRODUCTION

Since its inception in the late 1950s, human services planning has been conceptualized as a process that necessarily requires the inclusion of community and client values as well as professional assessment. The emphasis on broad-based participation is especially evident in what many social work texts view as the landmark attempts at planned social change, namely: Mobilization for Youth, Community Mental Health Centers, and the Model Cities Program (e.g., see Gilbert & Specht, 1977; Kahn, 1969; Mayer, 1972). This mixture of values and data has proven to be a volatile one, however, and at no time is this more apparent than in the goal setting phase of the planning process. It is here that problems, services, and at-risk populations are ranked by "those [individuals and groups] who can best reflect the community's values" (York, 1982, p. 92). Needless to say, eliciting a community value structure from a diverse group of stakeholders has proven to be one of the most difficult and controversial components in planning for human services.

Although a variety of approaches have been applied to priority setting, the nominal group technique (NGT) has received the most extensive coverage in human services planning. The nominal group as portrayed by Delbecq, Van de Ven, and Gustafson (1975) is a blend of group interaction and the mathematical aggregation of values. Simply put, individuals identify and rank their values, then rerank them after "round-robin" group discussion. The group interaction stage is used ostensibly to facilitate the coalescence of values which, in turn, leads to a simpler and more manageable set of priorities. The attainment of clarification through NGT is not without apparent cost. Theoreticians of decision-making processes have long noted two unattractive features of group interaction involving individuals of unequal social status, that is, the voice that dominates by virtue of position or persuasiveness and the group pressure for conformity (Keeney & Raiffa, 1976; Von Winterfeldt & Edwards, 1986). Seaver's (1978) experimental work with over a half dozen group assessment procedures, including NGT, has prompted him to warn that the interaction among assessors produces only a feeling of satisfaction and not any overall improvement in the quality of the assessment probabilities. He remarks that "the result of interaction among assessors is quite clear, it produces more extreme and less well-calibrated assessments. If all of the members of the group agree . . . , the individual assessments tend to become more extreme. Apparently, subjects treat the information provided by other group members' assessments as somewhat independent of their own information, rather than as redundant" (1978, p. 52). Planners, moreover, like Lauffer (1978), describe the value blurring that often results from applications of NGT in community settings, and York (1982) bemoans the method's limited potential for resolving group conflict. More recently, the technique of focus groups has been employed by planners to obtain information from community residents and agency clients regarding services utilization and perceptions of need (Krueger, 1988; Morgan, 1988). Though designed as a data-generating tool, focus groups have been used in human services planning to build consensus. When employed in this way, they share all of the problems inherent in NGT and exhibit some additional ones as well. The most important of these is the inability to easily analyze the relative importance of individual and group effects (Krueger, 1988). Notwithstanding these criticisms, few in the human services call for abrogating or even limiting the role of the group in priority setting. Such a call would be ideologically inadmissible and politically naive. And groups do possess some real advantages over the unitary decision maker. If carefully formed, groups can expand the factual and value bases of goal setting. There are also some compelling mathematical benefits. Von Winterfeldt and Edwards (1986) point out that the averages of individuals' probability assessments are more accurate than individual assessments, and they offer as proof the considerable data which has accumulated on strictly proper scoring rules.¹ The charge to human services planners seems quite clear: Apply a priority-setting method which makes use of groups but which also minimizes the potential hazards of group interaction.

Work reported in this paper was completed under State of New Jersey, Department of Human Services Contract No. G0433C. Requests for reprints should be sent to Michael J. Camasso, Assistant Professor of Social Work, Rutgers University, Bldg. 4087, Livingston Campus, New Brunswick, NJ 08903.
It is to one of these methods that the discussion now turns.

Multiattribute Utility Theory as a Planning Tool

Human services planning as we have characterized it is as much a form of community participation as it is an attempt to rationally allocate public and private resources. Community involvement provides a kind of adhesive which appends local definitions of need and want to state-mandated programs and to the social indicators used by planners to describe populations at risk. The success of such planning hinges on the capability of local,
state and professional stakeholders to make decisions by incorporating multiple and, very often, competing criteria into their deliberations. As Steuer (1986) notes, multiple criteria decision making has two distinctive components. The first, multiple criteria optimization, is most often applied to deterministic problems in which the number of feasible alternatives is large and controversy over selection is minimal (Malakooti, 1989; Steuer, 1986). The second component, multiattribute decision analysis, has been most useful in addressing difficult public policy problems (nuclear power plant siting, location of a shopping center) where the number of alternatives is small and contention is widespread. Multiattribute Utility Theory (MAUT) as it has been developed by Edwards and his associates (Edwards & Newman, 1982; Guttentag & Snapper, 1974; Von Winterfeldt & Edwards, 1986) is one approach to multiattribute decision analysis that has proven helpful in policy evaluation and program planning. The theory is based on the assumption that decision makers want to follow optimal decision rules but are unable to because of insufficient knowledge or because of the information overload that often arises in complex situations. Hence, MAUT is a rational or normative theory of choice, that is, if you accept MAUT axioms you should follow the preferences specified by the method even if they conflict with intuitive preferences (Carroll & Johnson, 1990; Wright, 1984). The principles of MAUT (i.e., connectivity, transitivity, dominance, solvability²) are embedded in a nine-step process:

1. Identify the organization whose utilities (values) are to be maximized.
2. Identify the issue or issues to which the utilities needed are relevant.
3. Identify the entities (alternatives) that are to be evaluated or prioritized.
4. Identify the relevant dimensions of value (attributes).
5. Rank the dimensions (of value) in order of importance.
6. Rate dimensions in importance, preserving ratios.
7. Sum the importance weights; divide each by the sum.
8. Measure the position or location of the entity being evaluated on each dimension.
9. Calculate utilities for entities using

U_i = Σ_j w_j u_ij

where U_i is the aggregate utility for the ith entity, w_j is the importance weight of the jth dimension, and u_ij is the location of the ith entity on the jth dimension.

¹Proper scoring rules refer to probability assessments which possess this mathematical property: As an assessment gets smaller for an event which actually occurs, the penalty score to the assessor for not having assigned the event a probability of 1.0 increases in a nonlinear fashion. Averages of probabilities then represent true centers while averages of actual scores do not. As Von Winterfeldt and Edwards (1986) note, the property underlies much of the thinking about using groups rather than individuals to perform probability evaluations.

²For a detailed discussion of these axioms, see Von Winterfeldt and Edwards (1986) or Wright (1984).
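The additive SMART aggregation at the heart of the nine-step process (Steps 5 through 9) can be sketched in a few lines. The dimension names, importance ratings, and entity locations below are hypothetical illustrations, not data from the study described in this paper.

```python
# Minimal sketch of SMART-style additive aggregation (Steps 5-9).
# All dimension names, ratings, and entities are hypothetical.

raw_importance = {           # Step 6: ratio-preserving importance ratings
    "economic opportunity": 40,
    "health": 25,
    "education": 20,
    "crime/delinquency": 15,
}

# Step 7: normalize so the weights sum to 1.0.
total = sum(raw_importance.values())
weights = {dim: r / total for dim, r in raw_importance.items()}

# Step 8: location of each entity (service alternative) on each
# dimension, here on a 0-100 scale.
locations = {
    "job training":   {"economic opportunity": 80, "health": 10,
                       "education": 60, "crime/delinquency": 30},
    "drug treatment": {"economic opportunity": 20, "health": 90,
                       "education": 15, "crime/delinquency": 70},
}

# Step 9: aggregate utility U_i = sum over j of w_j * u_ij.
utilities = {
    entity: sum(weights[dim] * u for dim, u in locs.items())
    for entity, locs in locations.items()
}

# Entities ranked by aggregate utility, highest first.
for entity, u in sorted(utilities.items(), key=lambda kv: -kv[1]):
    print(f"{entity}: {u:.1f}")
```

With these illustrative numbers, "job training" scores 0.4(80) + 0.25(10) + 0.2(60) + 0.15(30) = 51.0, edging out "drug treatment" at 44.0; the weighted sum is what makes trade-offs across dimensions explicit.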

Premised on the recognition that complexity in value dimensions or alternatives leads to uncertainty and expedient solutions in problem solving, MAUT adopts a divide et impera strategy. The task of prioritizing, in effect, is decomposed into manageable parts. Value dimensions are often arrayed in hierarchical form using the mechanisms of value trees and phased weighting procedures (Edwards & Newman, 1982; Von Winterfeldt & Edwards, 1986). Alternatives, too, are broken down using the device of location matrix presentation. Documented consequences of this decomposition include reduction in selective information processing (Pitz, Heerboth, & Sachs, 1980; Larichev & Moshkovich, 1988), diminution of disagreements (Eils & John, 1980), and greater accuracy than holistic procedures (Adelman, Sticha, & Donnell, 1984; Eils & John, 1980; Von Winterfeldt & Edwards, 1986). The decision technologies emanating from MAUT have been designed primarily to meet the needs of individual decision makers. Despite such emphasis, MAUT has on occasion been utilized by groups to resolve contention over planning and policy direction. Gardiner and Edwards (1975), for example, demonstrate how the approach was used to create a group value model for evaluating coastal zone development permits. Beach and Barnes (1983) describe the way in which a MAUT-based preference questionnaire was used by a group of 35 respondents in the planning of a large county's park development. Stillwell, Seaver, and Edwards (1981) illustrate the approach's applicability to the deliberations of the Los Angeles School Board as that body grappled with a series of school desegregation proposals. In each of these instances an aggregate utility for the group was found by computing mean importance weights and then evaluating each project or program alternative on the basis of these averaged values.
Other examples of MAUT as a group level decision tool are documented by Guttentag and Snapper (1974) and Von Winterfeldt and Edwards (1986). Three versions of MAUT have appeared in the decision-making literature. Each is distinctive in the techniques used for weighting and in the models underlying the aggregation of single-attribute evaluations. The Simple Multi-Attribute Rating Technique (SMART) creates attribute weights by direct numerical estimation methods (e.g., ranking, direct rating, or ratio estimation) and aggregates by employing a weighted additive model (Gardiner & Edwards, 1975; Edwards & Newman, 1982). The conjoint measurement approach, described by Louviere (1988), computes weights from strength of preference judgements and combines attribute evaluations with either additive or multiplicative operations. The Subjective Expected Utility (SEU) method employs variable probability (or certainty equivalent) weighting and multiplicative aggregation (Pitz & McKillip, 1984; Von Winterfeldt & Edwards, 1986; Wright, 1984). Comparisons between the more complex multiplicative or nonlinear procedures and additive models such as SMART reveal that real differences are much more the exception than the rule (Seaver, 1978; Von Winterfeldt & Edwards, 1986). Moreover, simple additive aggregation of weights and utilities has a documented criterion validity that is superior to more intuitive or holistic methods (Meehl, 1954; Pitz et al., 1980; Stillwell et al., 1981).

While MAUT has demonstrated its utility as a sectoral or strategic planning mechanism, there is little evidence of the model's performance under conditions of too many values, too many alternatives, and too many status and interest groups, that is, the conditions inherent in county-wide human services planning. On its face, MAUT would appear to provide a practical structure for the needs assessment and priority-setting activities that have become synonymous with such planning. McKillip (1987) has advocated the use of MAUT in just such complex planning settings. Entities (Step 3) in such plans may be conceived of as problems, target populations, or geographic areas. In most United Way or county planning endeavors, however, the principal entities facing prioritization are existing services and planned interventions. Dimensions of value (Step 4), on the other hand, are typically a set of social problems requiring some kind of ameliorative action. The importance weight, w_j (Step 7), can be utilized to capture a community's level of problem preoccupation, while the location coefficient, u_ij (Step 8), can be employed to describe the community's assessment of service performance in critical problem areas.

Case Application: Essex County Comprehensive Human Services Plan

Every 3 years the New Jersey Department of Human Services requires that each of the state's 21 counties engage in "coordinated and comprehensive planning for the vulnerable populations served by both the county and state" (New Jersey Department of Human Services, 1987a, p. 1).
The primary vehicle for community input into this process is an advisory council comprised of the general public, service consumers, and providers of service. The number of providers on the council is prohibited by executive order from exceeding 49% of the total group membership. Among its tasks, the advisory council is charged with conducting a needs assessment of at-risk (target) populations and with preparing a list of service priorities that have relevancy for community problems and target populations. In July, 1988, the Department of Human Services asked Rutgers University School of Social Work to assist the state's most populous county, Essex, in the formulation of its human services plan. A county with many affluent suburbs, Essex contains a number of the state's poorest municipalities, namely: Newark, East Orange, and Irvington. Both the state and the county were
especially interested in improving the methods being used to elicit community problems and to prioritize services for funding consideration. In this latter instance, NGT, the state's recommended mechanism, had proven inadequate in several respects. First, planning officials expressed a concern over what they viewed as the advisory council's utilization of only a small part of the available information. These officials were also critical of the accounting procedures that were employed to track decisions. As one county planner put it, "we are hardly given a clue as to what information has been used or discarded, the relative importance of information, and the manner in which decision stalemates are broken." Of the several MAUT versions available, Rutgers staff elected to use SMART. The principal objective of the application was the formulation of a service priority listing for Essex County. Specifically, the council was given the task of delineating and ranking the 15 service alternatives most likely to enhance the health and well-being of 18 target populations.³ This listing, the council was informed, would be used by the state and county planning agencies as one input, along with budgeting and statutory considerations, into the formulation of county service initiatives. Fifty-one advisory council members were introduced to SMART at an all-day session designed to establish community services priorities for the 1989-1991 planning cycle. Twenty-two (43%) of the participants described themselves as service providers and represented such diverse interests as mental health, primary and secondary medical care, nursing homes, alcohol/drug treatment, child care, vocational education, public welfare, child welfare, family/juvenile court, information and referral, and housing. Nineteen (37%) were categorized as interested citizens, although about half of these individuals were either former providers or clients. Four participants listed themselves as clients, while six individuals did not choose to characterize their status. Each participant was randomly assigned to a small group of six to eight members and was instructed to follow the direction given by the staff facilitator assigned to their group. Participants were guided through a five-step process.

Step 1. As a first step, participants were given the results from three opinion surveys that are mandated in the planning process, namely: surveys of informed citizens, clients, and human services professionals. These instruments asked the respondents, among other things, to assign a priority to 62 problems/issues facing 18 at-risk/target populations.⁴ Preferences were indicated on a five-point scale that ranged from much lower priority to much higher priority. The sampling frame employed in the citizen survey was the Essex County Suburban and Essex County Newark household directory purchased from the National Telephone Directory Corporation. Residences were clustered by zip code, and proportion-to-size random samples were then drawn. Questionnaires were mailed to each of the selected residences (N = 665), resulting in a 40% response rate. The sampling frame for the human services professional and client surveys consisted of a list of 712 public and private agencies serving Essex County. A multistage, purposive sampling design was used to select samples of 376 professionals and 1110 clients. Large agencies, that is, agencies with 20 or more full-time staff, were each mailed six questionnaires for professionals and 25 client instruments. Smaller agencies received three questionnaires for professionals and eight client survey forms. The response rate for the professional survey was 37%; for the client survey it was 26%. Inasmuch as the questions on these questionnaires permitted magnitude scaling, it was a relatively simple matter to calculate rankings from the score sums of the items. By taking the transpose of these data matrices and then ranking the column vector of sums, it was possible to produce the top 20 rankings, which appear in Table 1. In addition, participants received summary information from a social indicator analysis that organized indicators into five major groupings, groupings that are similar to the composite indices employed by Ross, Bluestone, and Hines (1979) in their study of well-being in United States counties: economic well-being, health status, family functioning, crime, and education. Data for each measure were presented at the municipality level and, whenever possible, were displayed in a time series format. In order to place these data into context, measures were then contrasted with statewide rates, and a county ranking was calculated. A Dictionary of Standardized Target Problem Definitions (New Jersey Department of Human Services, 1986) was also provided; this, too, was organized by the five composite indices.

Step 2. Participants were given 2 h to study the survey information, the social indicator data, and the problem definitions, after which time they were presented with the Problem Cluster Tree that appears as Figure 1. The five base branches of the tree stem from the classification scheme employed in the indicator analysis. Subbranches were restricted to six in an effort to set limits on task complexity.
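The ranking procedure used on the survey data (summing each problem's column of scores and ranking the sums) can be sketched as follows. The respondents-by-problems matrix below is a hypothetical four-respondent, three-problem example, not the actual survey data.

```python
# Sketch of ranking problems by column score sums, as described above.
# Rows = respondents, columns = problems; entries are 1-5 priority
# scores. The data are hypothetical, not from the Essex County surveys.

problems = ["neighborhood crime", "medical care", "childcare"]

scores = [
    [5, 3, 2],
    [4, 4, 1],
    [5, 2, 3],
    [3, 5, 2],
]

# Sum each problem's column of scores.
column_sums = [sum(row[j] for row in scores) for j in range(len(problems))]

# Rank problems from highest to lowest total score.
ranking = sorted(zip(problems, column_sums), key=lambda kv: -kv[1])
for rank, (problem, total) in enumerate(ranking, start=1):
    print(rank, problem, total)
```

Applied to each of the three survey matrices in turn, this yields ranked lists like those in Table 1.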

³The target population groups included children (ages 0-11), youth (ages 12-17), the elderly, persons with developmental disabilities, victims of abuse and neglect, persons with serious emotional disorders, substance abusers, the visually impaired, the physically disabled, low income families, low income individuals, the homeless, single parents, blacks, Hispanics, veterans, juveniles in crisis, and families in crisis.

⁴Problems listed on the instruments were derived from the Department of Human Services' Dictionary of Standardized Target Problem Definitions (New Jersey Department of Human Services, 1986), while the target populations were taken directly from the Department's Comprehensive Human Services Planning Guidelines (New Jersey Department of Human Services, 1987a).

TABLE 1
TOP TWENTY PROBLEM RANKINGS IN ESSEX COUNTY CITIZEN, PROFESSIONAL, AND CLIENT SAMPLES

Citizen Sample (N = 265):
1. Crime prevention
2. Attention by government to citizen concerns
3. Environmental protection
4. AIDS prevention
5. Communication of government decisions
6. Repair of rundown houses
7. Equality among people of all races
8. Teenage drug and alcohol abuse
9. Recreation programs for youth
10. Police services
11. Job opportunities for adults
12. Child abuse and neglect
13. Literacy programs
14. Better police/community relations
15. Repair of local streets
16. Services for senior citizens
17. Citizen participation in government
18. Job opportunities for youth
19. After-school programs for youth
20. Housing for the elderly

Professional Sample (N = 140):
1. Drug abuse
2. Overcrowded/substandard housing
3. Alcohol abuse
4. Neighborhood crime
5. Not enough money for basics
6. Homelessness
7. Childcare
8. Teenage pregnancy
9. Unwanted pregnancy
10. Abuse/neglect of child/others
11. Emotional problems
12. Difficulties at school
13. Difficulty with spouse/partner
14. Lack of nutritious food
15. Unemployment/underemployment
16. Difficulty caring for elderly
17. Medical care
18. Dental care
19. Speaking/writing English
20. Lack of affordable legal aid

Client Sample (N = 284):
1. Neighborhood crime
2. Not enough money for basics
3. Medical care
4. Recreation
5. Dental care
6. Unemployment/underemployment
7. Emotional problems
8. Overcrowded/substandard housing
9. Physical illness
10. Lack of affordable legal aid
11. Inadequate public transportation
12. Childcare
13. Lack of nutritious food
14. Homelessness
15. Difficulties at school
16. Difficulty with spouse/partner
17. Drug abuse
18. Trouble with other people
19. Physical disability
20. Abuse/neglect of child or others

Instructions for completing the Tree took this form:

First, divide 100 points among the five principal interest areas making sure that the most points are used to label the issue(s) that you view as most critical or important. This done, use your social indicator data, survey rankings, listing of 18 target populations, and summary list of target problem definitions to fill in up to six subbranch entries for each issue. Please note that the subbranch entries can include problem aspects and affected target populations. Divide five points among subbranch entry sets, again making sure that the most points label the item(s) you believe are the most important.

This scoring scheme was used to insure that the total weights did not exceed 500, an arbitrary ceiling that the council had applied in some earlier planning iterations to rank problem importance. Examples of how to fill in the subbranches were discussed with the group. Participants, however, were cautioned to complete the Tree without group discussion and to return it to the facilitator. It should be noted that the "points approach" (Beach & Barnes, 1983; Carter, Beach, & Inui, 1986) was selected over the more complex ratio method recommended by Edwards (Edwards & Newman, 1982; Von Winterfeldt & Edwards, 1986).⁵ Seaver (1978) and Stillwell et al. (1981) have demonstrated that most of the information in ratio weights can be captured by simpler rank orders.

⁵In ratio weighting, attributes are first placed in rank order of importance so that the most important is at the top of the list, and the least important is at the bottom of the list. The least important attribute is then assigned a value of ten. The rater then assigns numerical weights so that the next from the bottom gets a value depicting how much more important that attribute is relative to the least important. The rater works up through the list assigning numerical values in the same fashion (Edwards & Newman, 1982).
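The ratio-weighting procedure described in footnote 5 reduces to anchoring the least important attribute at 10, expressing every other attribute's value relative to that anchor, and normalizing. The attribute names and rater values below are hypothetical.

```python
# Sketch of the ratio-weighting procedure from footnote 5.
# Attributes listed from least to most important; the least important
# is anchored at 10, and the others carry rater-assigned ratio values.
# All names and values are hypothetical.

ratio_values = [("recreation", 10), ("education", 30), ("housing", 60)]

# Normalize the raw ratio values so the weights sum to 1.0.
total = sum(v for _, v in ratio_values)
normalized = {attr: v / total for attr, v in ratio_values}

for attr, w in normalized.items():
    print(attr, w)
```

Here "housing" carries 60/100 = 0.6 of the total importance, and the 6:3:1 ratios among the raw values are preserved in the normalized weights.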

Step 3. During an extended lunch break, staff computed total weights for each participant by multiplying down the branches of the Problem Cluster Tree. For example, the total weight for subbranch 1.1 is obtained by multiplying the value of economic opportunity issues by the value for the problem aspects and target population listed for 1.1.

Step 4. Once the staff calculations were completed, the Problem Cluster Trees were returned to the participants along with a matrix for assessing county services⁶ (Figure 2). Participants were given this guidance for filling out the Service Ranking Matrix:

Take the total weight computed for you on the Problem Cluster Tree and transfer that weight to the corresponding column space on the Service Ranking Matrix. For example, if the total weight for branch 1.1 is 35 on your Problem Cluster Tree, write in 35 above 1.1 on the Service Matrix. If a weight for a subbranch on the Problem Cluster Tree has not been calculated, then leave the corresponding column blank on the Service Matrix. Now using a scale of 1 to 100, with 1 signifying that the listed services meet the problem completely and 100 signifying that services do not at all meet the problem, describe how well 30 existing county services meet the challenges of each problem aspect.

Staff reviewed each service matrix to insure correctness.

⁶Services listed here were taken from the Department of Human Services' Dictionary of Service Definitions (New Jersey Department of Human Services, 1987b). All participants had received a copy of the definitions prior to the planning meeting.
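The Step 3 computation (multiplying down the branches of the Problem Cluster Tree) can be sketched as follows. The issue areas, subbranch labels, and point allocations are hypothetical; a participant allocates 100 points across the five issue areas and 5.0 points across each area's subbranches.

```python
# Sketch of the Step 3 total-weight computation: each subbranch's total
# weight is the product of the points given to its principal issue area
# (out of 100) and the points given to the subbranch itself (out of 5.0).
# All labels and allocations below are hypothetical.

tree = {
    "1. economic opportunity issues": {
        "points": 35,                                   # out of 100
        "subbranches": {"1.1 unemployment / low-income adults": 3.0,
                        "1.2 job training / youth": 2.0},  # out of 5.0
    },
    "2. health issues": {
        "points": 25,
        "subbranches": {"2.1 AIDS prevention / at-risk youth": 5.0},
    },
}

total_weights = {}
for branch in tree.values():
    for sub, sub_points in branch["subbranches"].items():
        total_weights[sub] = branch["points"] * sub_points

for sub, w in total_weights.items():
    print(sub, w)
```

Because each area's subbranch points sum to at most 5.0 and the area points to 100, a participant's total weights cannot exceed the 500-point ceiling mentioned earlier.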


Figure 1. Problem cluster tree used to elicit attributes and attribute weights. (The form presents five problem clusters: 1. Economic Opportunity Issues, 2. Health Issues, 3. Education Issues, 4. Family/Child Development Issues, and 5. Crime & Delinquency Issues. Each cluster offers up to six subbranches, 1.1 through 5.6, with blanks for problem aspects, target populations affected, and total problem weights; 100 points are divided among the clusters and 5.0 points among each cluster's subbranches, for a maximum total weight of 500.)

The use of a 1 to 100 scale to locate alternatives is purely conventional (e.g., see Guttentag & Snapper, 1974, or Edwards & Newman, 1982). Scaling aside, this stage in the process proved to be the most arduous, taking the average individual approximately 2.75 h. It also raised questions among the group facilitators about the internal validity of the ranking process. Especially nettlesome were the issues of instrumentation bias due to fatigue or carelessness. Efforts were made to combat these potential sources of nonrandom error by instructing participants to take a short break after each 90 min of work if they felt this rest period was necessary.

Step 5. Completed Service Matrices were returned to staff who then computed a service score (utility) by multiplying the total weight of a problem aspect/target population by the appropriate service impact rating (i.e., location assessment).
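The Step 5 service score can be sketched as a weighted sum across the problem columns of the Service Ranking Matrix. Note the direction of the rating scale in this application: 1 means services meet the problem completely and 100 means they do not, so a higher score flags a service area with greater weighted unmet need. The weights and ratings below are hypothetical.

```python
# Sketch of the Step 5 service score: for each service, multiply each
# problem aspect's total weight by the participant's 1-100 impact rating
# (1 = services meet the problem completely, 100 = not at all) and sum
# across the problem columns. Weights and ratings are hypothetical.

problem_weights = {"1.1": 105.0, "1.2": 70.0, "2.1": 125.0}

impact_ratings = {   # service -> {subbranch: 1-100 rating}
    "employment/vocational training": {"1.1": 40, "1.2": 30, "2.1": 90},
    "medical treatment services":     {"1.1": 85, "1.2": 80, "2.1": 20},
}

service_scores = {
    service: sum(problem_weights[b] * r for b, r in ratings.items())
    for service, ratings in impact_ratings.items()
}

# Services ranked by weighted unmet need, highest first.
for service, score in sorted(service_scores.items(), key=lambda kv: -kv[1]):
    print(service, score)
```

With these illustrative numbers, employment/vocational training scores 105(40) + 70(30) + 125(90) = 17,550 against 17,025 for medical treatment, largely because of the heavily weighted but poorly met 2.1 subbranch.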

The five-step process outlined here differs in two respects from what has been the typical application of MAUT to services or program priority settings (e.g., see Beach & Barnes, 1983; Edwards & Newman, 1982; McKillip, 1987). In Step 2 participants are instructed to identify, organize and weight subattributes after weighting more general attributes. McKillip (1987) initiates the weighting process at the lowest (most specific) level of the value tree with aggregated values defined by or with the help of experts. The revised structure was deemed prudent given the multiplicity of problem areas and target populations generated from mandated data collection sources and planning guidelines. At Step 4 participants were allowed to use their own attribute weights to rate service options. More commonly, aggregate (averaged) weights computed from individual utilities by experts are applied in an attempt to reduce confounding between weights and attributes (Pitz


[Figure 2 appears here: a Service Ranking Matrix whose rows list the 30 core service types (Employment/Procurement Services, Employment/Vocational Training, Income Maintenance, Financial Services, Medical Treatment, Mental Health Treatment, Substance Abuse Treatment, Home Care, Emergency Basic Needs, Food, Housing, Transportation, Alternative Living Arrangements, Basic Life Skills Education, Parent Education, Community Education, Formal Education, Alternative Education, Environmental Protection, Natural Resource Conservation, Protective Services, Legal Assistance, Law Enforcement, Day Care, Social/Group Support, Information & Referral, Assessment/Case Management, Community Organization/Advocacy, Counseling, Companionship Services) and whose columns correspond to the weighted problem aspects (1.1 through 5.6); participants entered a service location score for each service type.]

Figure 2. Utility location matrix used by participants to rate county human services.

& McKillip, 1984; McKillip, 1987). Since advisory council members represent constituency groups, further grouping was perceived by the consultant staff as a serious impediment to participant ownership of an unfamiliar process.

Results of the Application

Eighty-four distinctive problem aspects and target populations were enumerated by advisory council members in the Problem Cluster Trees. In an effort to consolidate the listings, Rutgers staff performed a content analysis, grouping cognate aspects into 33 categories. For example, the problem selections of "housing for mentally ill," "repair of rundown houses," "substandard dwellings," "overcrowding," "landlord problems," and "abandoned firetraps" were all combined under the rubric of housing. In order to calculate an overall council (group) value structure, importance weights for each category were averaged across all 51 participants. The resulting 33 average weights are presented in rank order of importance in Table 2. The problems of housing, unemployment, homelessness, job training, and school dropouts head the list, followed by the plight of low-income individuals, special education needs, mental illness, AIDS, and other economic problems. Mean importance weights were also summarized for the five principal problem clusters. Economic opportunity issues yielded the highest mean weight (11.12), followed by health issues (8.32), education issues (8.24), child/family development (7.51), and crime/delinquency (6.82).

The ranking of problem importance was, of course, a penultimate, albeit critical, product of this MAUT application. The principal objective of the application, however, was the formulation of a ranked set of 15 service alternatives. To obtain a council-level service priority ranking, the service location score (adjusted for problem weight) for each of the 30 service types was averaged over the 51 participants. The average scores for the top 15 services are displayed in Table 3. It is these scores that were presented to state and county planning officials by the advisory council and Rutgers as representations of how well county services were meeting identified problems.¹ As noted earlier, the averaging of individual responses to form a group response has been employed in several MAUT planning applications

¹Five participants did not complete the Service Matrix in a fashion that permitted analysis. Hence, the scores of 46 participants on the 30 core services specified by the state planning guidance formed the basis for the service rankings.
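The two aggregation steps just described (averaging importance weights across participants, then averaging the problem-weight-adjusted service location scores) can be sketched in a few lines. The sketch below is a minimal illustration with hypothetical category and service names and made-up ratings, not the project's actual instrument or data:

```python
from collections import defaultdict

def mean_importance_weights(participant_weights):
    """Average each problem category's importance weight across
    participants and return the categories ranked by mean weight."""
    totals, counts = defaultdict(float), defaultdict(int)
    for weights in participant_weights:      # one dict per participant
        for category, w in weights.items():
            totals[category] += w
            counts[category] += 1
    means = {c: totals[c] / counts[c] for c in totals}
    return sorted(means.items(), key=lambda kv: kv[1], reverse=True)

def service_priorities(responses, top_n=15):
    """Average each service's problem-weight-adjusted location score
    across participants and return the top_n services in rank order."""
    services = responses[0].keys()
    averages = {s: sum(r[s] for r in responses) / len(responses)
                for s in services}
    ranked = sorted(averages.items(), key=lambda kv: kv[1], reverse=True)
    return ranked[:top_n]

# Hypothetical importance weights from three participants
weights = mean_importance_weights([
    {"housing": 140, "unemployment": 120, "crime": 60},
    {"housing": 120, "unemployment": 110, "crime": 80},
    {"housing": 130, "unemployment": 121, "crime": 64},
])

# Hypothetical weight-adjusted service location scores from two raters
services = service_priorities([
    {"employment": 18.0, "housing": 14.0, "counseling": 2.5},
    {"employment": 15.4, "housing": 16.4, "counseling": 1.5},
], top_n=2)
```

In the Essex County application the first step ran over 51 participants and 33 consolidated categories, and the second over the 46 usable responses and 30 core service types.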


TABLE 2
MEAN IMPORTANCE WEIGHTS FOR PROBLEM ASPECTS AND TARGET POPULATIONS IN CONSOLIDATED CATEGORY FORM*

Problem Aspect                Weight
Housing                       12.933
Unemployment                  11.750
Homelessness                  11.578
Job training                  11.321
Dropouts                      10.812
Low income individuals        10.080
Special education              9.384
Mental illness                 9.333
AIDS                           9.250
Other-economic                 9.200
Other-family development       9.200
Childcare                      9.176
Drug abuse                     8.978
Institutions                   8.636
Teenage pregnancy              8.181
Recreation after school        8.181
Health insurance               8.100
Employee assistance            8.000
Adult crime                    8.000
Language barriers              7.857
Jails                          7.785
Education/high school          7.767
Transportation                 7.750
Single parent families         6.727
Other-education                6.571
Other-health                   6.555
Police/citizen relations       6.500
Child abuse                    6.382
Physical health                6.258
Foster placements              6.000
Juvenile crime                 5.421
Courts                         5.200
Teacher pay                    3.515

*The weights are the original weights divided by 10. The division was performed to ease interpretation.

TABLE 3
SERVICE PRIORITY LISTING FOR ESSEX COUNTY 1989-91 PLANNING CYCLE

Priority Rank   Service Grouping                   Average Service Location × Problem Weight Score*
 1              Employment/procurement             16.7
 2              Housing                            15.2
 3              Employment/vocational training     15.0
 4              Income maintenance                 13.1
 5              Substance abuse                    12.8
 6              Transportation                     12.7
 7              Day care                            8.4
 8              Formal education                    8.0
 9              Medical treatment                   7.7
10              Mental health treatment             7.2
11              Home care                           5.6
12              Law enforcement                     4.8
13              Information and referral            3.6
14              Emergency basic needs               2.2
15              Counseling/therapy                  2.0

*Scores in the table are standardized on base 1000.

(Beach & Barnes, 1983; Gardiner & Edwards, 1975). Controlled comparisons of MAUT models with holistic, group interaction models, moreover, demonstrate consistency in evaluations, with the MAUT models tending to reduce the impact of disagreement on controversial value dimensions (Gardiner & Edwards, 1975; Von Winterfeldt & Edwards, 1986). If the target problem rankings in Table 2 are employed as a referent, it would appear that the service priority listing in Table 3 is quite reflective of need as defined by the council. Seven of the top 10 target problems identified by the advisory council are closely mirrored in the 15 service priorities listed. To help gain some insight into which source(s) of data might have influenced Advisory Council decisions, problem ratings of the Council were compared with problem ranks obtained from the three opinion surveys and from the social indicator data. The 33 problem aspects listed in Table 2 were correlated with the top 33 rankings from these sources using Pearson correlation coefficients. The pertinent associations, in order of magnitude, are as follows:

Council - Professional Survey    .527
Council - Client Survey          .269
Council - Citizen Survey         .132
Council - Social Indicators      .096
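The rank comparison above can be reproduced in a few lines. The sketch below computes Pearson's r from scratch so it stands alone, and uses short illustrative rank vectors rather than the study's 33-item lists (on untied ranks, Pearson's r computed on the ranks coincides with Spearman's rho):

```python
import math

def pearson_r(xs, ys):
    """Pearson product-moment correlation between two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Illustrative: council ranks vs. one survey's ranks for five problems
council = [1, 2, 3, 4, 5]
survey = [2, 1, 3, 5, 4]
r = pearson_r(council, survey)  # ≈ 0.8: similar but not identical orderings
```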

The only significant relationship is found between the rankings of the council and those of the professional survey. This relatively high level of concordance is not surprising when the disproportionate number of professionals and citizen providers on the Council is considered.

SUMMARY AND CONCLUSIONS

Multiattribute Utility Theory was used on the project as a means of structuring the important work being undertaken by a human services advisory council in a large, urban county. Specifically, the SMART methodology was selected as a means for helping the council integrate a great deal of information from multiple data sources. The SMART assumption of riskless, additive utility values was accepted on the basis of extant theoretical and experimental work (Carroll & Johnson, 1990; Von Winterfeldt & Edwards, 1986). The methodology was useful in these respects: 1. SMART increased the council’s access to systematically presented information on community problems, target populations, and public opinion.

2. By segmenting the task of service prioritization into problem valuation and service utility evaluation, SMART also controlled the flow of information. Hence, the method helps protect the decision maker from information overload.

3. The method provided a means of tracking participants' information use and gauging the relative importance of data sources.

Despite an almost total group unfamiliarity with the principles of normative decision processes, SMART proved easy to administer, firmly holding the attention of the participants throughout the application. The SMART approach was not without difficulties, however. Several council members would not participate, terming MAUT constraining and confusing. Nor did the process totally eliminate conflict. The use of problem clusters to organize the application was received by a few participants as a means of manipulating the decision process. Perhaps the most serious weakness of SMART was the method's apparent failure to increase the amount of information used in the decision process. Council problem rankings correlated significantly with professional survey rankings only, raising the specter of selective information filtering or of inattention to citizen and client interests. Some examples will serve to illustrate these concerns. In both citizen and client opinion surveys, crime was listed as the number one problem (Table 1). Social indicator data also portray Essex County as a high-crime county with respect to both property and person crimes. These data obviously exerted little impact on the advisory committee, however. Likewise, opinion and social indicator data in the area of drug abuse apparently were not compelling enough to place this problem on the group's top ten agenda of problems.
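The segmentation noted in point 2 rests on SMART's riskless, additive utility assumption: an alternative's overall worth is the weighted sum of its single-attribute values. A generic sketch of that additive form follows, with hypothetical attribute names and scores rather than anything from the Essex County instrument:

```python
def additive_utility(weights, scores):
    """Riskless additive multiattribute utility: normalize the importance
    weights to sum to 1, then take the weighted sum of the alternative's
    single-attribute scores."""
    total = sum(weights.values())
    return sum((w / total) * scores[attr] for attr, w in weights.items())

# Hypothetical service alternative scored on two problem dimensions
u = additive_utility(
    {"economic_opportunity": 60, "health": 40},   # importance weights
    {"economic_opportunity": 80, "health": 50},   # 0-100 attribute scores
)
# u = 0.6 * 80 + 0.4 * 50 = 68.0
```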
The authors are reminded here of Stanton’s (1970) description of the elaborate facades that are often projected by formal organizations to make human services delivery seem more participatory than it actually is. We are not saying here that “clients come last,” only that in this instance clients and citizens have provided what appears to be less compelling information. A restructuring of the council to ensure more client and citizen participation could minimize such selectivity in the future. In summary, the use of MAUT in this case demonstrates how community participation and rational planning can be employed as complementary planning mechanisms. Yet a certain wariness exists in the human services planning community around quantitative methods of decision analysis. A recent social work text by Chambers, Wedel, and Rodwell (1992) summarizes this viewpoint:


There is a certain degree of razzle-dazzle to this (MAUT) approach which can be a source of suspicion for some constituents in the need assessment arena. MAUT appears to promise more than it can deliver. The model does have the capacity to integrate relevant variables in order to enhance need assessment outcomes. However, like other rationally based techniques, it is important to remember that the model does not integrate all relevant variables in any needs situation. Finally, there is a dearth of literature to substantiate the successful application of MAUT (p. 101).

It is hoped that the approach outlined here will allay fears that quantitative decision analysis takes planning out of the hands of individual citizens, clients, and professionals. Structuring participation for the efficient use of information need not place limits on the vital energy of democratic process. In point of fact, such structuring provides insight into ways in which the process can be improved.

REFERENCES

ADELMAN, L., STICHA, P.J., & DONNELL, M.L. (1984). The role of task properties in determining the relative effectiveness of multiattribute weighting techniques. Organizational Behavior and Human Performance, 33, 243-262.

BEACH, L.R., & BARNES, V. (1983). Approximate measurement in a multiattribute utility context. Organizational Behavior and Human Performance, 32, 417-424.

CARROLL, J.S., & JOHNSON, E.J. (1990). Decision research: A field guide. Newbury Park, CA: Sage Publications.

CARTER, W.B., BEACH, L.R., & INUI, T.S. (1986). The flu shot study: Using multiattribute utility theory to design a vaccination intervention. Organizational Behavior and Human Decision Processes, 38, 378-393.

CHAMBERS, D.E., WEDEL, K.R., & RODWELL, M.K. (1992). Evaluating social programs. Boston: Allyn and Bacon.

DELBERCQ, A.L., VAN DE VEN, A.H., & GUSTAFSON, D.H. (1975). Group techniques for program planning: A guide to nominal group and delphi processes. Glenview, IL: Scott, Foresman.

EDWARDS, W., & NEWMAN, J.R. (1982). Multiattribute evaluation. Beverly Hills, CA: Sage Publications.

EILS, L.C., & JOHN, R.S. (1980). A criterion validation of multiattribute utility analysis and of group communication strategy. Organizational Behavior and Human Performance, 25, 268-288.

GARDINER, P.C., & EDWARDS, W. (1975). Public values: Multiattribute-utility measurement for social decision making. In M.F. Kaplan & S. Schwartz (Eds.), Human judgment and decision processes (pp. 1-37). New York: Academic Press.

GILBERT, N., & SPECHT, H. (1977). Planning for social welfare. Englewood Cliffs, NJ: Prentice Hall.

GUTTENTAG, M., & SNAPPER, K. (1974). Plans, evaluations, and decisions. Evaluation, 2, 58-74.


KAHN, A.J. (1969). Theory and practice of social planning. New York: Russell Sage.


KEENEY, R.L., & RAIFFA, H. (1976). Decisions with multiple objectives: Preferences and value tradeoffs. New York: Wiley.

KRUEGER, R.A. (1988). Focus groups. Newbury Park, CA: Sage Publications.

LARICHEV, O.I., & MOSHKOVICH, H.M. (1988). Limits to decision-making ability in direct multiattribute alternative evaluation. Organizational Behavior and Human Decision Processes, 42, 217-233.

LAUFFER, A. (1978). Social planning at the community level. Englewood Cliffs, NJ: Prentice Hall.

LOUVIERE, J.J. (1988). Analyzing decision making: Metric conjoint analysis. Newbury Park, CA: Sage Publications.

MALAKOOTI, B. (1989). Identifying nondominated alternatives with partial information for multiple-objective discrete and linear programming problems. IEEE Transactions, 19, 95-107.

MAYER, R.R. (1972). Social planning and social change. Englewood Cliffs, NJ: Prentice Hall.

McKILLIP, J. (1987). Need analysis: Tools for the human services and education. Newbury Park, CA: Sage Publications.

MEEHL, P. (1954). Clinical versus statistical prediction. Minneapolis, MN: University of Minnesota Press.

MORGAN, D.L. (1988). Focus groups as qualitative research. Newbury Park, CA: Sage Publications.

NEW JERSEY DEPARTMENT OF HUMAN SERVICES (1986). Dictionary of standardized target problem definitions. Trenton, NJ: Author.

NEW JERSEY DEPARTMENT OF HUMAN SERVICES (1987a). Comprehensive human services planning guidelines. Trenton, NJ: Author.

NEW JERSEY DEPARTMENT OF HUMAN SERVICES (1987b). Dictionary of service definitions. Trenton, NJ: Author.

PITZ, G.F., HEERBOTH, J., & SACHS, N.J. (1980). Assessing the utility of multiattribute utility assessments. Organizational Behavior and Human Performance, 26, 65-80.

PITZ, G.F., & McKILLIP, J. (1984). Decision analysis for program evaluators. Beverly Hills, CA: Sage Publications.

ROSS, P.L., BLUESTONE, H., & HINES, F.K. (1979). Indicators of social well-being for U.S. counties (Rural Development Report No. 10). Washington, DC: U.S. Department of Agriculture, Economics, Statistics & Cooperatives Service.

SEAVER, D.A. (1978). Assessing probability with multiple individuals: Group interaction versus mathematical aggregation (Research Report 78-3). Los Angeles: University of Southern California.

STANTON, E. (1970). Clients come last. Newbury Park, CA: Sage Publications.

STEUER, R.E. (1986). Multiple criteria optimization: Theory, computation and application. New York: John Wiley.

STILLWELL, W.G., SEAVER, D.A., & EDWARDS, W. (1981). A comparison of weight approximation techniques in multiattribute utility decision making. Organizational Behavior and Human Performance, 28, 62-77.

VON WINTERFELDT, D., & EDWARDS, W. (1986). Decision analysis and behavioral research. Cambridge, MA: Cambridge University Press.

WRIGHT, G. (1984). Behavioral decision theory. Beverly Hills, CA: Sage Publications.

YORK, R.O. (1982). Human service planning: Concepts, tools and methods. Chapel Hill, NC: University of North Carolina Press.