Socio-Econ. Plan. Sci. Vol. 1, pp. 3-18 (1967). Pergamon Press Ltd. Printed in Great Britain

A MATHEMATICAL THEORY OF COST-EFFECTIVENESS
HERMAN ZABRONSKY
INFRET Corporation, 80 Wall Street, New York, N.Y.

(Received 11 April 1967)
Abstract - Planning-programming-budgeting systems are applied where more precise comparison of alternate plans and programs is desired than can be achieved by traditional methods. Cost-effectiveness analysis is used to make such comparisons by relating resource input and useful output of a program relative to stated planning goals. Proper application of the techniques of cost-effectiveness depends on their mathematical foundation. In this paper a rigorous mathematical theory of cost-effectiveness, based on the Gradient Method, is formulated and developed. Computation of cost-effectiveness measures is considered and procedures for determining optima are derived.

1. INTRODUCTION
THE CHALLENGE of planning and attaining rational objectives in the field of socio-economics has increased interest in a general decision-making framework known as the planning-programming-budgeting system. Planning identifies and relates over-all goals. Programming formally establishes how activities are organized to meet planning objectives over a period of time. Budgeting is the process of relating the program to fiscal requirements. One of the major values of a planning-programming-budgeting system is that it encourages a more precise comparison of plans and programs than can be achieved through traditional budgetary devices. This is accomplished by using a technique known as cost-effectiveness analysis. Cost-effectiveness analysis is a procedure for systematically relating resource input and useful output of a program relative to specific goals. It is applied to assess the relative merits (input versus output) of different methods for achieving the desired objectives. The cost-effectiveness analysis consists of three model categories: the cost model, the effectiveness model, and the cost-effectiveness model. The analysis comprises the following: (a) options to evaluate; (b) criteria for judging; and (c) parametric models for predicting and measuring the cost-effectiveness of each option. Despite the tremendous growth of interest in the cost-effectiveness technique, no work of rigorous mathematical nature has appeared in print. The purpose of this paper is to develop a rigorous mathematical theory for the cost-effectiveness technique. This theory, based on the Gradient Method, is aimed at the development and application of numerical techniques for quantitative solutions to decision problems arising in the program planning context.

2. DESCRIPTION OF THE GRADIENT METHOD APPLICATION

The development of a cost-effectiveness model consists essentially of three distinct steps: a description of the cost function, a description of the effectiveness function, and the derivation of an optimum relationship between these two via some or all of their common variables in order to derive a cost-effectiveness criterion. A cost function is most conventionally expressed as a statistical, numerical or analytic function of a number of objectively
definable and measurable 'design' parameters characterizing the given program. The effectiveness or performance aspects of the program relate various performance measures to some or all of the same 'quality' variables used in the cost model. These measures have one common characteristic: a number, understood to represent a probability, that may be written as a function of some of the variables characterizing the cost function. Thus, the effectiveness of a system may be described by a set of equations. The connection between cost and effectiveness must be derived via their common variables by elimination and optimization. An approach, which may be called the gradient method, affords a means of accomplishing these two objectives. To illustrate this use of the gradient method, consider the case where the common variables may be eliminated, and the cost represented as a function of n probabilities. The function, cost equal to a constant, represents a family of hypersurfaces intersecting the n-dimensional unit hypercube whose edges represent the values of the probabilities between 0 and 1. If the n performance levels are required to exceed certain values, then the minimum cost is given by the value of the cost function at these values. This is the point at which the hypersurface for this cost intersects the n-parallelepiped contained within the n-cube and bounded by unit values and the given values of performance. Hypersurfaces for greater values of cost lie on one side, and hypersurfaces for smaller values of cost lie on the other side, of a given cost surface. Furthermore, the cost surface for values greater than the minimum allowable cost has points in common with the n-parallelepiped described above. Consider two neighboring surfaces, each with points in common with the parallelepiped, i.e. for which allowable performance levels are feasible. Let the second be described by an incrementally smaller cost than the first. For a given point along the first surface, the gradient in the direction pointing toward the second gives the minimum degradation in performance for a fixed decrease in cost, or, alternately, a fixed degradation in performance gives the maximum decrease in cost. Now the cost increment is equal to the product of the n-dimensional Cartesian distance from a point on the first surface to the point where the gradient terminates on the second surface and the magnitude of the gradient evaluated at the point on the first surface. Since, for a given cost, the choice of a point on the cost surface lying within the parallelepiped is feasible, the optimum point may be found by requiring the gradient to be a maximum over points on the cost surface lying within the parallelepiped. The optimum point is the required set of performance values, from which the design parameters may be obtained by an inversion of the original performance equations. The gradient condition is, moreover, easily expressed in terms of the original functions by standard transformations of partial derivatives, without actually performing any inversions, which may be difficult or impossible in practical cases. The method is also easily adaptable to the case where the equations are presented in the form of data, since the first derivatives which appear in the gradient condition are easily written as difference quotients. The gradient method described above is also generalized to cases where the number of performance variables is greater or less than the number of 'design' parameters.
The latter case, in particular, requires an initial optimization resulting in a functional relationship between cost and 'maximum' performance variables. This case is, subsequently, reduced to the n by n case.
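Since the first derivatives appearing in the gradient condition may be replaced by difference quotients, the criterion can be evaluated directly from tabulated cost data. The following is a minimal sketch of such a finite-difference gradient in Python (an illustration added here, not part of the original treatment); the function grad_fd, the step h and the sample cost surface are all illustrative assumptions.

    import numpy as np

    def grad_fd(F, p, h=1e-5):
        """Central-difference approximation to the gradient of a cost
        function F at the performance point p (all coordinates in (0, 1))."""
        p = np.asarray(p, dtype=float)
        g = np.zeros_like(p)
        for i in range(p.size):
            e = np.zeros_like(p)
            e[i] = h
            g[i] = (F(p + e) - F(p - e)) / (2.0 * h)
        return g

    # Illustrative cost surface (not from the paper): cost grows without
    # bound as any performance probability approaches 1.
    F = lambda p: np.sum(p / (1.0 - p))

    print(grad_fd(F, [0.5, 0.8]))   # components 1/(1 - p_i)^2 -> approx. [4.0, 25.0]

The same difference quotients apply when the cost function is available only as data on a grid of performance values.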
2.1. One variable case
A certain 'design' characteristic represented by the value of the parameter x may be accomplished at a cost C. The relationship between C and x may be expressed by the
equation C = f(x). The parameter x may be associated with a certain level of performance p by means of the equation p = G(x). That is, x is required to have a certain value in order that a desired objective may be obtained with probability p. The relationship between p and x may also be written x = G⁻¹(p) = g(p), where G⁻¹ is the inverse function of G. C may, therefore, be expressed as a function of p:

C = f(x) = f[g(p)] = F(p);  0 ≤ p ≤ 1.   (1)

The function F(p) has the following properties:

(a) F(p) is an increasing function of p, 0 ≤ p ≤ 1;
(b) F(0) = 0;
(c) F(1) = ∞.

A typical graph of C = F(p) is presented in Fig. 1.
FIG. 1.
Differentiating the cost function C = F(p), we obtain

dC = F'(p) dp

or, ΔC = F'(p)Δp, approximately. A problem of choice arises at this point:

1. The cost may be too great to achieve a minimum level of performance.
2. The cost is reasonable for very high levels of performance.
3. In the most interesting and typical case, there is a critical zone of moderate levels of performance at moderate costs running to improved levels at sharply increasing costs.

The decision maker faces a crucial choice at this point. The equation ΔC = F'(p)Δp may guide him in deciding if an incremental improvement in performance is worth the added cost as he moves along the performance axis, and at what point runaway costs begin to take over.
Consider the simple example,

C = p/(1 − p) = 1/(1 − p) − 1,

satisfying the requirements for a cost function. Then,

ΔC = Δp/(1 − p)².

This last relationship is very revealing, for it indicates the rapid uptrend of cost for high performance levels. It may also be useful to consider the relative increment of cost and performance, i.e.

ΔC/C = Δp/[p(1 − p)].   (2)

Since F(p) and F'(p) are rapidly increasing functions of p near p = 1, it is often more convenient to work with the logarithmic form of the cost function. Consider the example,

C = e^(1/(1−p)) − e,

satisfying the conditions for a cost function. Then,

ln(C + e) = 1/(1 − p)

and

Δ ln(C + e) = Δp/(1 − p)².

In general,

Δ ln C = [F'(p)/F(p)] Δp.

2.2. Two variable case

Suppose that cost is a function of two 'design' parameters x₁, x₂,

C = f(x₁, x₂),

which are related to two independent performance parameters p₁, p₂ via pᵢ = Gᵢ(x₁, x₂), or inversely xᵢ = gᵢ(p₁, p₂), i = 1, 2. Therefore,

C = f[g₁(p₁, p₂), g₂(p₁, p₂)] = F(p₁, p₂);  0 ≤ pᵢ ≤ 1.
It would, perhaps, have been simpler to assume that pᵢ = Gᵢ(xᵢ), i = 1, 2, i.e. that pᵢ depends on xᵢ only. The given formulation permits us to consider more complicated possibilities. The cost function F(p₁, p₂) has the following properties (Fig. 2):
FIG. 2.

(a) F(p₁, p₂) is an increasing function of p₁ for fixed values of p₂, and an increasing function of p₂ for fixed values of p₁;
(b) F(0, p₂) = F(p₁, 0) = 0  (p₁, p₂ ≠ 1);
(c) F(1, p₂) = F(p₁, 1) = ∞  (p₁, p₂ ≠ 0);
(d) F(p₁, p₂) is discontinuous at (p₁, p₂) = (0, 1) and (1, 0).
If the performance levels p₁, p₂ are required to exceed p₁', p₂' (respectively), then the minimum cost C₁ is given by the equation C₁ = F(p₁', p₂'). This is the point at which the curve F(p₁, p₂) = C₁ touches the rectangle bounded by the lines p₁ = p₁', p₁ = 1, p₂ = p₂', p₂ = 1 (Fig. 3).
FIG. 3.
Curves F(p₁, p₂) = C₂ for C₂ > C₁ lie to the right of F(p₁, p₂) = C₁ and have points in common with the aforementioned rectangle in the upper right-hand corner (rectangle A). Improved levels of performance are available along the part of the curve F(p₁, p₂) = C₁ lying in rectangle A. Once the decision maker has decided that the cost C₁ is feasible, the choice of a point (p₁, p₂) along the curve F = C₁ within rectangle A is indifferent. For a given point (p₁, p₂) along F = C₁, the gradient in the direction pointing toward F = C₁ − ΔC gives the minimum degradation in performance for a fixed decrease in cost, or, alternately, a fixed degradation in performance gives the maximum decrease in cost. More precisely, for any change

ΔC = F_{p₁} Δp₁ + F_{p₂} Δp₂,

where √(Δp₁² + Δp₂²) is fixed, ΔC is a maximum (−ΔC is a minimum) when Δp₁ and Δp₂ are the components of a vector in the direction of the vector with components F_{p₁}, F_{p₂} (Fig. 4); this is simply the Cauchy-Schwarz inequality applied to the increment vector and the gradient.
FIG. 4.
The problem of locating the 'best' point (p₁, p₂) along the part of the curve F(p₁, p₂) = C lying within region A may be solved in the following manner:

ΔC = F_{p₁} Δp₁ + F_{p₂} Δp₂ = ∇F · dS,

∇F = i ∂F/∂p₁ + j ∂F/∂p₂,  dS = i dp₁ + j dp₂;

now ΔC = |∇F| |dS|, since ∇F is parallel to dS for the optimal choice. Therefore,

ΔC = [F_{p₁}² + F_{p₂}²]^(1/2) dS(p₁, p₂).

Thus, for each fixed C, the problem reduces to that of finding the solution of

F_{p₁}² + F_{p₂}² = maximum,  or  dS(p₁, p₂) = minimum,

along F(p₁, p₂) = C and within A.
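A numerical version of this criterion can be sketched as follows: parametrize the isocost curve F(p₁, p₂) = C by p₁, solve for p₂, and scan for the maximum of F_{p₁}² + F_{p₂}² inside the admissible rectangle. The Python sketch below (an added illustration, not part of the original) uses the cost surface F = p₁p₂/[(1 − p₁)(1 − p₂)] treated as a worked example later in this section; the values of C and of the performance floors are illustrative assumptions.

    import numpy as np

    # Example cost surface from the text: F = p1*p2 / ((1 - p1)*(1 - p2)).
    Fp1 = lambda p1, p2: (p2 / (1 - p2)) / (1 - p1) ** 2     # dF/dp1
    Fp2 = lambda p1, p2: (p1 / (1 - p1)) / (1 - p2) ** 2     # dF/dp2

    C, p1_min, p2_min = 4.0, 0.55, 0.60      # illustrative cost level and floors

    def p2_on_curve(p1):
        # Solve p1*p2 / ((1 - p1)(1 - p2)) = C for p2.
        return C * (1 - p1) / (p1 + C * (1 - p1))

    best = None
    for p1 in np.linspace(p1_min, 0.999, 2000):
        p2 = p2_on_curve(p1)
        if not (p2_min <= p2 < 1.0):          # stay inside rectangle A
            continue
        crit = Fp1(p1, p2) ** 2 + Fp2(p1, p2) ** 2
        if best is None or crit > best[0]:
            best = (crit, p1, p2)

    print("max of Fp1^2 + Fp2^2 = %.3f at (p1, p2) = (%.3f, %.3f)" % best)

With these numbers the maximum is found at the boundary point p₁ = p₁', in agreement with the analysis of this example given below.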
Conversely, the question may be posed in the following manner: what is the 'best' choice of a point (p₁, p₂) on F(p₁, p₂) = C within A, so that an increase in cost to C + ΔC gives the best result? If (p₁', p₂') is a point on F(p₁, p₂) = C and the point (p₁'', p₂'') on F = C + ΔC is taken along the gradient from (p₁', p₂'), then the equation

ΔC = |∇F| dS = (F_{p₁}² + F_{p₂}²)^(1/2) √[(p₁'' − p₁')² + (p₂'' − p₂')²]

shows that the 'best' point on F = C within A would occur when F_{p₁}² + F_{p₂}² = minimum. This is in sharp contrast to the previous criterion for attaining a reduction in cost with minimum degradation in performance, where the condition is F_{p₁}² + F_{p₂}² = maximum. It must be emphasized that it is not necessary to proceed along the gradient in order to achieve an increase in performance level. Indeed, a different direction may lead to an increased performance for one of the variables, albeit at the expense of performance of the other variable. The new criterion for obtaining increased performance via increased cost merely states that if directions along the gradient are considered exclusively, then F_{p₁}² + F_{p₂}² = minimum gives the best result. In contrast to the case in the first part, where a decrease in cost was a critical item and, consequently, a certain sacrifice in performance was required, an increase in cost is now permitted in order to achieve an increase in performance. A diagram (Fig. 5) is helpful in understanding this distinction.
FIG. 5. dS(1) = minimum degradation in performance for a fixed decrease in cost; |∇F(1)| = maximum. dS(2) = maximum improvement in performance for a fixed increase in cost; |∇F(2)| = minimum.

The following example is presented in order to clarify the method:

F(p₁, p₂) = p₁p₂/[(1 − p₁)(1 − p₂)] = [1/(1 − p₁) − 1][1/(1 − p₂) − 1].

The equation F(p₁, p₂) = C is equivalent to

(1 − p₁)(1 − p₂) = p₁p₂/C

or,

(p₁ − C/(C − 1))(p₂ − C/(C − 1)) = C/(C − 1)²

in standard hyperbolic form.
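The reduction to hyperbolic form can be checked symbolically. The following sketch uses sympy (an assumed tool, not part of the original treatment) to verify that the hyperbolic form above, multiplied by (C − 1), reproduces F(p₁, p₂) = C cleared of denominators, so that for C ≠ 1 the two equations describe the same curve.

    import sympy as sp

    p1, p2, C = sp.symbols('p1 p2 C')
    a = C / (C - 1)

    hyper = (p1 - a) * (p2 - a) - C / (C - 1) ** 2     # hyperbolic form minus its constant
    curve = C * (1 - p1) * (1 - p2) - p1 * p2          # F(p1, p2) = C cleared of denominators

    print(sp.simplify((C - 1) * hyper - curve))        # prints 0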
It will be easier to see what is happening by making the change of variables:

x = p₁/(1 − p₁) = 1/(1 − p₁) − 1;  y = p₂/(1 − p₂) = 1/(1 − p₂) − 1;

then,

F_{p₁} = ∂F/∂p₁ = y(x + 1)²;  F_{p₂} = ∂F/∂p₂ = x(y + 1)²

and the equation F(p₁, p₂) = C may be written

xy = C.

F_{p₁}² + F_{p₂}² = y²(x + 1)⁴ + x²(y + 1)⁴
= y²[x⁴ + 4x³ + 6x² + 4x + 1] + x²[y⁴ + 4y³ + 6y² + 4y + 1]
= x²y²(x² + y²) + 4x²y²(x + y) + 12x²y² + 4xy(x + y) + x² + y²
= C²[(x + y)² − 2C] + 4C²(x + y) + 12C² + 4C(x + y) + (x + y)² − 2C
= (C² + 1)[(x + y)² + 2(x + y)·2C(C + 1)/(C² + 1)] − 2C[C² − 6C + 1]
= (C² + 1)[x + y + 2C(C + 1)/(C² + 1)]² − 2C[C² − 6C + 1] − 4C²(C + 1)²/(C² + 1),

where the equation xy = C has been used extensively to carry out the reductions. Thus, max{F_{p₁}² + F_{p₂}²} along F = C is equivalent to max(x + y) along xy = C, subject to certain boundary restrictions.

The curve F = C intersects the rectangle A (whose sides are the required performance floors p₁ = p₁' and p₂ = p₂'') in the points P₁ = (p₁', p₂') and P₂ = (p₁'', p₂''), where

p₂' = C(1 − p₁')/[p₁' + C(1 − p₁')],  p₁'' = C(1 − p₂'')/[p₂'' + C(1 − p₂'')].

Therefore, the region of interest is

p₁' ≤ p₁ ≤ C(1 − p₂'')/[p₂'' + C(1 − p₂'')],  p₂'' ≤ p₂ ≤ C(1 − p₁')/[p₁' + C(1 − p₁')]

in the p₁p₂ plane, and

p₁'/(1 − p₁') ≤ x ≤ C(1 − p₂'')/p₂'',  p₂''/(1 − p₂'') ≤ y ≤ C(1 − p₁')/p₁'

in the xy plane. It is clear from Fig. 6 that x + y is a maximum along xy = C between P₁ and P₂ at one of the endpoints P₁ or P₂. The line of slope −1 through P₁ intersects xy = C in the second point (C(1 − p₁')/p₁', p₁'/(1 − p₁')).
FIG. 6.
Thus x + y assumes a maximum at P₁ if

C(1 − p₁')/p₁' > C(1 − p₂'')/p₂''

and at P₂ in the contrary case. However, C(1 − p₁')/p₁' > C(1 − p₂'')/p₂'' if and only if p₂'' > p₁'. Thus, finally, along xy = C between P₁ and P₂,

max(x + y) = p₁'/(1 − p₁') + C(1 − p₁')/p₁'   if p₂'' > p₁',
max(x + y) = p₂''/(1 − p₂'') + C(1 − p₂'')/p₂''   if p₁' > p₂''.

Returning to the original p₁p₂ plane, it is seen that

max{F_{p₁}² + F_{p₂}²} is attained at P₁ = (p₁', p₂') if p₂'' > p₁',
max{F_{p₁}² + F_{p₂}²} is attained at P₂ = (p₁'', p₂'') if p₁' > p₂''.
In many cases it will not be possible to invert the equations expressing the performance parameters as functions of the design parameters. It will, therefore, be necessary to rewrite the basic condition,

F_{p₁}² + F_{p₂}² = maximum along F(p₁, p₂) = C (within A),

in terms of the design parameters x₁ and x₂. Returning to the notations previously used, and differentiating the identities p₁ = G₁(x₁, x₂), p₂ = G₂(x₁, x₂), with x₁ = x₁(p₁, p₂) and x₂ = x₂(p₁, p₂), with respect to p₁ and to p₂, the following equations are obtained:

1 = (∂G₁/∂x₁)(∂x₁/∂p₁) + (∂G₁/∂x₂)(∂x₂/∂p₁),  0 = (∂G₂/∂x₁)(∂x₁/∂p₁) + (∂G₂/∂x₂)(∂x₂/∂p₁);   (3)

0 = (∂G₁/∂x₁)(∂x₁/∂p₂) + (∂G₁/∂x₂)(∂x₂/∂p₂),  1 = (∂G₂/∂x₁)(∂x₁/∂p₂) + (∂G₂/∂x₂)(∂x₂/∂p₂).   (4)

The equations (3) may be solved for ∂x₁/∂p₁ and ∂x₂/∂p₁, and the equations (4) for ∂x₁/∂p₂ and ∂x₂/∂p₂, by Cramer's Rule.
The following results are obtained:

∂x₁/∂p₁ = (∂G₂/∂x₂)/[∂(G₁, G₂)/∂(x₁, x₂)],   ∂x₂/∂p₁ = −(∂G₂/∂x₁)/[∂(G₁, G₂)/∂(x₁, x₂)],

∂x₁/∂p₂ = −(∂G₁/∂x₂)/[∂(G₁, G₂)/∂(x₁, x₂)],  ∂x₂/∂p₂ = (∂G₁/∂x₁)/[∂(G₁, G₂)/∂(x₁, x₂)],
where ∂(G₁, G₂)/∂(x₁, x₂) is the Jacobian of the functions G₁ and G₂ with respect to x₁ and x₂, in conventional notation:

∂(G₁, G₂)/∂(x₁, x₂) = (∂G₁/∂x₁)(∂G₂/∂x₂) − (∂G₁/∂x₂)(∂G₂/∂x₁).

Substituting the expressions obtained above for ∂xᵢ/∂pⱼ,

F_{p₁} = f_{x₁}(∂x₁/∂p₁) + f_{x₂}(∂x₂/∂p₁) = [f_{x₁}(∂G₂/∂x₂) − f_{x₂}(∂G₂/∂x₁)]/[∂(G₁, G₂)/∂(x₁, x₂)],

F_{p₂} = f_{x₁}(∂x₁/∂p₂) + f_{x₂}(∂x₂/∂p₂) = [f_{x₂}(∂G₁/∂x₁) − f_{x₁}(∂G₁/∂x₂)]/[∂(G₁, G₂)/∂(x₁, x₂)].

Therefore, the basic condition may be written entirely in terms of the design parameters:

{[f_{x₁}(∂G₂/∂x₂) − f_{x₂}(∂G₂/∂x₁)]² + [f_{x₂}(∂G₁/∂x₁) − f_{x₁}(∂G₁/∂x₂)]²}/[∂(G₁, G₂)/∂(x₁, x₂)]² = maximum,

subject to f(x₁, x₂) = C.
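The transformation just described is easy to mechanize: given f, G₁ and G₂ in the design variables, the partials F_{p₁}, F_{p₂} follow from the Jacobian formula without inverting the performance equations. The Python sketch below does this with difference quotients; the particular f and Gᵢ are assumptions chosen only to make the sketch runnable.

    import numpy as np

    h = 1e-6
    def d(fun, x, i):
        """Central-difference partial derivative of fun with respect to x[i]."""
        e = np.zeros(2); e[i] = h
        return (fun(x + e) - fun(x - e)) / (2 * h)

    # Illustrative design-variable functions (assumed for the sketch).
    f  = lambda x: x[0] ** 2 + x[1] ** 2                        # cost
    G1 = lambda x: x[0] / (1 + x[0])                            # performance 1
    G2 = lambda x: x[0] * x[1] / ((1 + x[0]) * (1 + x[1]))      # performance 2

    def F_partials(x):
        fx1, fx2   = d(f, x, 0),  d(f, x, 1)
        g1x1, g1x2 = d(G1, x, 0), d(G1, x, 1)
        g2x1, g2x2 = d(G2, x, 0), d(G2, x, 1)
        J = g1x1 * g2x2 - g1x2 * g2x1                           # Jacobian d(G1, G2)/d(x1, x2)
        Fp1 = (fx1 * g2x2 - fx2 * g2x1) / J
        Fp2 = (fx2 * g1x1 - fx1 * g1x2) / J
        return Fp1, Fp2

    print(F_partials(np.array([1.0, 2.0])))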
2.3. n variable case

The discussion in 2.2 for the case of two variables can be readily extended to n variables. Let

C = f(x₁, ... xₙ),
pᵢ = Gᵢ(x₁, ... xₙ);  i = 1, ... n,
xᵢ = gᵢ(p₁, ... pₙ);  i = 1, ... n,
C = f(g₁(p₁ ... pₙ), ... gₙ(p₁ ... pₙ)) = F(p₁, ... pₙ).

The cost function F(p₁, ... pₙ) has the following properties:

(a) F(p₁, ... pₙ) is an increasing function of pᵢ (1 ≤ i ≤ n) for fixed values of pⱼ (j ≠ i);
(b) F(p₁, ..., pᵢ, ... pₙ) = 0 for pᵢ = 0; pⱼ ≠ 1, j ≠ i;
(c) F(p₁, ..., pᵢ, ... pₙ) = ∞ for pᵢ = 1; pⱼ ≠ 0, j ≠ i;
(d) F(p₁, ... pₙ) is discontinuous at points (p₁ ... pₙ) where pᵢ = 0 for some i and pⱼ = 1 for some j.

Briefly, F is defined as a function in the n-dimensional unit cube 0 ≤ pᵢ ≤ 1, i = 1, ... n, which is 0 or ∞ on the (n − 1)-dimensional bounding hypersurfaces according to the conditions given above. If the performance levels pᵢ are required to exceed pᵢ' (i = 1, ... n), then the minimum cost C₁ is given by the equation C₁ = F(p₁', ... pₙ'). This is the point at which the hypersurface F(p₁ ... pₙ) = C₁ touches the n-dimensional rectangular parallelepiped pᵢ' ≤ pᵢ ≤ 1 (i = 1, ... n).
The argument employed in the two-dimensional case may also be applied here, giving the following result: the best performance level on the hypersurface F(p₁ ... pₙ) = C may be obtained by determining the solution of

F_{p₁}² + F_{p₂}² + ... + F_{pₙ}² = maximum, subject to F(p₁, ... pₙ) = C, pᵢ' ≤ pᵢ ≤ 1.

This solution gives the minimum degradation in performance for a fixed decrease in cost, or, alternately, the maximum decrease in cost for a fixed degradation in performance. Conversely, the 'best' improvement in performance for a fixed increase in cost is given by

F_{p₁}² + F_{p₂}² + ... + F_{pₙ}² = minimum, subject to F(p₁, ... pₙ) = C, pᵢ' ≤ pᵢ ≤ 1.
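These two constrained problems are directly amenable to standard numerical optimization. The sketch below uses scipy.optimize.minimize (an assumed tool, not part of the original treatment) on an illustrative separable cost function F(p) = Σ pᵢ/(1 − pᵢ); maximization of the sum of squared partials is carried out by minimizing its negative, and the cost level and performance floors are assumptions.

    import numpy as np
    from scipy.optimize import minimize

    F  = lambda p: np.sum(p / (1.0 - p))        # illustrative n-variable cost function
    dF = lambda p: 1.0 / (1.0 - p) ** 2         # its partial derivatives F_{p_i}

    C      = 6.0                                # fixed cost level (assumed)
    p_min  = np.array([0.5, 0.6, 0.7])          # required performance floors (assumed)
    bounds = [(pm, 0.999) for pm in p_min]
    cons   = {'type': 'eq', 'fun': lambda p: F(p) - C}

    def neg_criterion(p):                       # minimizing the negative maximizes sum F_{p_i}^2
        return -np.sum(dF(p) ** 2)

    res = minimize(neg_criterion, p_min + 0.05, method='SLSQP',
                   bounds=bounds, constraints=cons)
    print(res.x, F(res.x))

Replacing neg_criterion by its negative gives the converse problem, the best improvement in performance for a fixed increase in cost.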
2.4. The general case

The following is a more general formulation than that considered in the first part:

C = F(x₁, ... xₙ),
pᵢ = Gᵢ(x₁, ... xₙ);  i = 1, ... m.
Three cases may arise: (1) m = n; (2) m < n; (3) m > n. Case (1) has been treated extensively in part 2. There is, however, no reason why this case alone will arise. Consider the special case of (2), n = 2, m = 1, as a prelude to achieving a further generalization:

C = F(x₁, x₂),
p = G(x₁, x₂).

There are two approaches to the above problem, which turn out to be equivalent: (a) for a given cost, maximize performance; (b) for a given performance, minimize cost.

Consider (a). Employing the method of Lagrange multipliers to G + λ[F − C], and leaving aside the annoying possibility that this method may not always work, the following equations are obtained:

G_{x₁} + λF_{x₁} = 0,  G_{x₂} + λF_{x₂} = 0,

subject to F(x₁, x₂) = C. Therefore,

λ = −G_{x₁}/F_{x₁} = −G_{x₂}/F_{x₂},

and the two equations in the two unknowns are:

F_{x₁}G_{x₂} − F_{x₂}G_{x₁} = 0,  F = C.

Solving for x₂ as a function of x₁ in the first equation, x₂ = x₂(x₁), and substituting in the second,

C = F(x₁, x₂(x₁)) = H(x₁).
We may also write,

p = G(x₁, x₂(x₁)) = K(x₁)

and, by elimination of x₁ (x₁ = K⁻¹(p)),

C = H(x₁) = H[K⁻¹(p)] = f(p).

This is an equation relating the cost C to the maximum performance p. It must be emphasized that C is by no means a function of p alone, but may be written as a function of maximum performance, as above. It would be better, perhaps, to underline this distinction by writing

C = f(p*).

The method of Section 2.1 (for C = f(p)) may now be applied here. Similarly, in (b), minimize

F + λ[G − p]:

F_{x₁} + λG_{x₁} = 0,  F_{x₂} + λG_{x₂} = 0,

subject to G(x₁, x₂) = p.

λ = −F_{x₁}/G_{x₁} = −F_{x₂}/G_{x₂}.

Two equations in two unknowns may be written:

F_{x₁}G_{x₂} − F_{x₂}G_{x₁} = 0 (the same equation as before),  G(x₁, x₂) = p.

Therefore,

p = G(x₁, x₂(x₁)) = I(x₁) and C = F(x₁, x₂(x₁)) = J(x₁),

C = J{I⁻¹(p)} = g(p).

The last equation gives the minimum cost for a given performance, and is better written

C* = g(p).

If C = F(x₁, x₂(x₁)) and p = G(x₁, x₂(x₁)) are one-to-one functions, then C = C* if and only if p = p*, and the equations C = f(p*), C* = g(p) are equivalent.

Another method leading to the same result: (a) C = F(x₁, x₂), C constant; (b) p = G(x₁, x₂), maximize p = G[x₁, x₂(x₁)] via (a):

dp/dx₁ = G_{x₁} + G_{x₂}(dx₂/dx₁) = 0.

Furthermore,

F_{x₁}dx₁ + F_{x₂}dx₂ = 0 since F = C = constant.

Therefore

dx₂/dx₁ = −G_{x₁}/G_{x₂} = −F_{x₁}/F_{x₂},

giving F_{x₁}G_{x₂} − F_{x₂}G_{x₁} = 0 and F(x₁, x₂) = C as before.
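In practice the pair of equations F_{x₁}G_{x₂} − F_{x₂}G_{x₁} = 0 and F(x₁, x₂) = C can simply be handed to a root finder. The sketch below does so with scipy.optimize.fsolve (an assumed tool) for the example treated next, with the cost level C = 8 chosen purely for illustration.

    import numpy as np
    from scipy.optimize import fsolve

    C = 8.0                                               # illustrative cost level

    F  = lambda x: x[0] ** 2 + x[1] ** 2
    Fx = lambda x: np.array([2 * x[0], 2 * x[1]])
    G  = lambda x: x[0] * x[1] / ((1 + x[0]) * (1 + x[1]))
    Gx = lambda x: np.array([x[1] / ((1 + x[0]) ** 2 * (1 + x[1])),
                             x[0] / ((1 + x[0]) * (1 + x[1]) ** 2)])

    def system(x):
        fx, gx = Fx(x), Gx(x)
        return [fx[0] * gx[1] - fx[1] * gx[0],            # F_x1 G_x2 - F_x2 G_x1 = 0
                F(x) - C]                                 # F(x1, x2) = C

    x_opt = fsolve(system, [1.0, 2.0])
    print(x_opt, G(x_opt))    # expect x1 = x2 = sqrt(C/2) = 2 and p* = 4/9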
Consider the following example, illustrating the foregoing analysis:

C = x₁² + x₂²,
p = x₁x₂/[(1 + x₁)(1 + x₂)] = [1 − 1/(1 + x₁)][1 − 1/(1 + x₂)].

Maximize p subject to x₁² + x₂² = C. Differentiating with respect to x₁ and x₂, respectively, the following system of equations is obtained:

x₂/[(1 + x₁)²(1 + x₂)] + 2λx₁ = 0,
x₁/[(1 + x₁)(1 + x₂)²] + 2λx₂ = 0,
x₁² + x₂² = C.

Eliminating λ,

x₂²(1 + x₂) = x₁²(1 + x₁).

Since the function x²(1 + x) is monotonic for positive values of x, x₁ = x₂. Thus, x₁ = x₂ = √(C/2),

p* = (C/2)/[1 + √(C/2)]².

Solving for C in terms of p*,

C = 2p*/(1 − √p*)²,

and thus the functional relationship between the cost C and the maximum performance p* is obtained.
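The closed-form relation just obtained is easy to sanity-check numerically; the target value p* = 0.49 below is an illustrative choice.

    import math

    p_star = 0.49                                     # illustrative required maximum performance
    C = 2 * p_star / (1 - math.sqrt(p_star)) ** 2     # minimum cost from the derived formula

    x = math.sqrt(C / 2)                              # optimal design: x1 = x2 = sqrt(C/2)
    p_check = (x / (1 + x)) ** 2                      # performance actually achieved
    print(C, p_check)                                 # p_check reproduces p_star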
In general, let

C = F(x₁, ... xₙ),
pᵢ = Gᵢ(x₁, ... xₙ);  i = 1, ... m (m < n).

Let pᵢ (i = 1, ... m) be fixed allowable performance values and attempt to minimize C. The set of m equations pᵢ = Gᵢ may be solved implicitly for x_{n+1−m}, ... xₙ in terms of x₁ ... x_{n−m}:

x_s = x_s(x₁ ... x_{n−m});  s = n + 1 − m, ... n.

Substituting in the first equation:

C = F(x₁, ... x_{n−m}, x_{n+1−m}(x₁ ... x_{n−m}), ..., xₙ(x₁ ... x_{n−m})).

Differentiating with respect to x_r, r = 1, ... n − m, and setting the derivatives equal to zero,

∂F/∂x_r + Σ_s (∂F/∂x_s)(∂x_s/∂x_r) = 0;  s = n + 1 − m, ... n;  r = 1, ... n − m.

These are n − m equations in the n − m unknowns x₁ ... x_{n−m}. The solutions are functions of the m parameters p₁, ... p_m. Therefore,

xᵢ = xᵢ(p₁ ... p_m);  i = 1, ... n − m

and also

x_s = x_s{x₁(p₁ ... p_m), ... x_{n−m}(p₁ ... p_m)};  s = n + 1 − m, ... n.

Substituting in the expression for C,

C = F(... xᵢ(p₁ ... p_m) ...) = f(p₁, ... p_m).

Thus the minimum cost C has been expressed as a function of p₁ ... p_m. The method of part 1 may now be applied to this equation.

The 'overdetermined' case, i.e. m > n, leads to a set of competing problems. Consider, first, the trivial example: n = 1, m = 2.
C = F(x),
p₁ = G₁(x),  p₂ = G₂(x).

This reduces to

C = F[G₁⁻¹(p₁)] = f₁(p₁),
C = F[G₂⁻¹(p₂)] = f₂(p₂),

and must be treated as two independent problems following the method of part 1. If the cost C satisfies

C_i(1) ≤ C ≤ C_i(2)

on the basis of the equation C = fᵢ(pᵢ), i = 1, 2, a simultaneous realization of both conditions requires that

max_i C_i(1) ≤ C ≤ min_i C_i(2)  (i = 1, 2).
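For instance, with two performance requirements tied to a single design variable and cost curves of the admissible type, the common feasible cost band follows directly from this rule; the fᵢ and the numerical requirements below are illustrative assumptions.

    # Illustrative one-design-variable, two-performance case (m > n).
    f1 = lambda p: p / (1.0 - p)              # C = f1(p1)
    f2 = lambda p: 2.0 * p / (1.0 - p)        # C = f2(p2)

    # Acceptable cost interval for each sub-problem, C_i(1) <= C <= C_i(2):
    # here C_i(1) is the cost of the required floor and C_i(2) that of a
    # desired ceiling on performance (all numbers assumed).
    C1 = (f1(0.60), f1(0.90))
    C2 = (f2(0.50), f2(0.80))

    C_lo = max(C1[0], C2[0])                  # max over i of C_i(1)
    C_hi = min(C1[1], C2[1])                  # min over i of C_i(2)
    print("simultaneously feasible costs:", (C_lo, C_hi) if C_lo <= C_hi else "empty")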
The next case in order of complexity is given by the system

C = F(x₁, x₂),
pᵢ = Gᵢ(x₁, x₂);  i = 1, 2, 3.

Consider the two derived subsystems,

(1)  C = F(x₁, x₂),  p₁ = G₁(x₁, x₂),  p₂ = G₂(x₁, x₂);
(2)  C = F(x₁, x₂),  p₁ = G₁(x₁, x₂),  p₃ = G₃(x₁, x₂);

or

C = f₁(p₁, p₂),  C = f₂(p₁, p₃),

which are obtained by eliminating the design variables xᵢ in (1) and (2) respectively. Let C be an allowable cost which is consistent with values of the performance parameters pᵢ (i = 1, 2, 3), i.e. suppose that arcs of the curves C = f₁(p₁, p₂) and C = f₂(p₁, p₃) lie within the rectangles p₁ ≥ p₁', p₂ ≥ p₂' and p₁ ≥ p₁', p₃ ≥ p₃' respectively (Fig. 7).
c=‘2’sIP3’ \
(b)
(4 FIG. I.
Suppose that the curve C = f₁(p₁, p₂) intersects the line p₂ = p₂' in (p₁'', p₂') and C = f₂(p₁, p₃) intersects the line p₃ = p₃' in (p₁''', p₃'), where p₁'' > p₁'''. Then only the arc α of f₁ = C lying between the vertical lines p₁ = p₁' and p₁ = p₁''' need be considered, since the only acceptable values of p₁ satisfy p₁' ≤ p₁ ≤ p₁'''. The arc β in Fig. 7(b) is the same as before. Consider the problem of finding the maximum decrease in cost for fixed degradation in performance for curves α and β respectively. It has been seen that this amounts to solving

(∂f₁/∂p₁)² + (∂f₁/∂p₂)² = maximum, subject to f₁ = C (curve α),
(∂f₂/∂p₁)² + (∂f₂/∂p₃)² = maximum, subject to f₂ = C (curve β).

Let the maxima be attained at the points x and y, respectively, and suppose that the maximum attained on curve α exceeds the maximum attained on curve β. Then the point x = (p₁*, p₂*) is preferred, and p₃* is determined by the equation

C = f₂(p₁*, p₃*).