P. R. Krishnaiah and C. R. Rao, eds., Handbook of Statistics, Vol. 7 © Elsevier Science Publishers B.V. (1988) 175-213

11

Reliability Ideas and Applications in Economics and Social Sciences

M. C. Bhattacharjee*

0. Introduction and summary

0.1. In recent times, reliability theoretic ideas and methods have been used successfully in several other areas of investigation, with a view towards exploiting concepts and tools which have their roots in Reliability Theory in other settings to draw useful conclusions. For a purely illustrative list of some of these areas and corresponding problems which have been so addressed, one may mention: demography (bounds on the 'Malthusian parameter', reproductive value and other related parameters in population growth models--useful when the age-specific birth and death rates are unknown or subject to error: Barlow and Saboia (1973)), queueing theory (probabilistic structure of and bounds on the stationary waiting time and queue lengths in single server queues: Kleinrock (1975), Bergmann and Stoyan (1976), Kollerström (1976), Daley (1983)) and economics ('inequality of distribution' and associated problems: Chandra and Singpurwalla (1981), Klefsjö (1984), Bhattacharjee and Krishnaji (1985)). In each of these problems, the domain of primary concern and immediate reference is not the lifelengths of physical devices/systems of such components or their failure-logic structure per se, but some phenomenon, possibly random, evolving in time and space. Nevertheless, the basic reason behind the successful cross-fertilization of ideas and methods in each of the examples listed above is that the concepts and tools which owe their origin to traditional Reliability theory are in principle applicable to non-negative (random) variables and (stochastic) processes generated by such variables.

0.2. Rather than attempt to provide a bibliography of all known applications of Reliability in widely diverse areas, our purpose in this paper is more modest. We review recent work on such applications to some problems in economics and social sciences--work that is illustrative of the non-traditional applications of Reliability ideas that are finding increasing use.
* Work done while the author was visiting the University of Arizona.

In Section 1, 'social choice functions' and the celebrated 'impossibility theorem' of Arrow (1951) are considered as an application of 'monotone-structure' ideas. Section 2 considers 'voting games' and 'power indices', which are among the best known quantitative models of group behavior in political science, to show that they can be modeled via the theory of structure functions. Besides providing new viewpoints and alternative proofs of well known classic results, which these situations illustrate, reliability ideas can also lead to new insights. Sections 3 and 4, which exploit appropriate parametric and nonparametric 'life distribution' ideas, are in the latter category. Section 3 considers alternatives to the traditional Lorenz-coefficient and Gini-index for measuring 'inequality of distribution' in economics by exploiting mean residual life and TTT-transform concepts. Section 4 describes an approach to modeling some aspects of the 'economics of innovation and R & D rivalry' by considering the 'reliability characteristics' of the time to innovation of a technologically feasible product or process among a competing group of entrepreneurs or firms which are in the race to be the first to innovate. In each of the four themes, a summary of the problem formulation and basic results of interest precedes the reliability analogies and arguments which can be brought to bear on the problems. No detailed proofs are given except for Arrow's theorem (Section 1.2), from an unpublished technical report whose succinct arguments are reviewed to illustrate how the reliability approach can be constructive in clarifying the role of underlying assumptions and in providing an alternative insight. The role of interpretation of appropriate reliability theoretic concepts and results for such an interplay cannot be minimized; such interpretations are interspersed throughout our presentation. The format is mainly expository in nature, although some results are new.
In each section, we also indicate some possible directions of further development that would be interesting from the point of view of the themes addressed and that of reliability theory and applications.

1. The 'Impossibility Theorem' of Arrow

1.1. Arrow (1951) considered the problem of aggregating 'individual preference orderings' to form a 'social preference ordering'. In the conceptual framework of social decision making, and particularly in the context of voting theory, his celebrated 'impossibility theorem' is a landmark result which essentially states that there is no social preference ordering which obeys two reasonable axioms and four conditions that one would expect all reasonable ways of aggregating individual preferences to a collective one to satisfy. Pechlivanides (1975), in a paper investigating some aspects of social decision structures, has given an alternative proof of Arrow's theorem using coherent-structure arguments of reliability theory, which appears to have remained unpublished and which we believe is a very apt illustration of the reliability arguments for many modeling problems in the social sciences. His arguments are somewhat succinct; we will review and amplify them. Before reviewing Pechlivanides' proof, we take up a brief description and formal statement of Arrow's theorem, which may not be entirely familiar to reliability researchers.

Central to this is the idea of a preference ordering R among the elements x, y, ... of a finite set F. R is a relation among the elements of F such that for any x, y ∈ F, we say: x R y iff x is at least as preferred as y. Such a relation R is required to satisfy the two axioms:

(A1) Transitivity: For all x, y, z ∈ F; x R y and y R z ⇒ x R z.
(A2) Connectedness: For all x, y ∈ F; either x R y or y R x or both.

Technically R is a complete pre-order on F; it is analogous to a relation such as 'at least as tall as' among a set of persons. Notice that we can have both x R y and y R x but x ≠ y. For a given F, it is sometimes easier to understand the relation R through two other relations P, I defined as: x P y ⇔ x is strictly preferred to y, while x I y ⇔ x and y are equally preferred (indifference). Then note, (i) x R y ⇔ ¬(y P x), i.e., x R y is the negation of y P x, and (ii) the axiom (A2) says: either x P y or y P x or x I y.

Now consider a society S = {1, 2, ..., n} of n individuals (voters), n ≥ 2, and a finite set A of alternatives consisting of k choices (candidates/policies/actions), k > 2. Each individual i ∈ S has a personal preference ordering Rᵢ on A satisfying the axioms (A1) and (A2). The problem is to aggregate all the individual preferences into a choice for S as a whole. To put it another way, since Rᵢ indicates how i 'votes', an 'election' ℰ is a complete set of 'votes' (formally, ℰ = {Rᵢ : i ∈ S}), and since the result of any such election must amalgamate its elements (i.e., the individual voter-preferences) in a reasonable manner into a well-defined collective preference of the society S, such a result can be thought of as another relation R* on A which, to be reasonable, must again satisfy the same two axioms (A1) and (A2) with F = A. Arrow conceptualizes the definition of a 'voting system' as the specification of a social preference ordering R* given S, A.
There are many possible R* that one can define, including highly arbitrary ones such as R* = Rᵢ for some i ∈ S (such an individual i, if it exists, is called a 'dictator'). To model real-world situations, we require to exclude such unreasonable voting systems and confine ourselves to those R* which satisfy some intuitive criteria of fairness and consistency. Arrow visualized four such conditions, namely:

(C1) (Well-definedness). A voting system R* must be capable of a decision. For any pair of alternatives a, b, there exists an 'election' for which the society prefers a to b. [R* must be defined on the set of all n-tuples ℰ = (R₁, ..., Rₙ) of individual preferences, and for all a, b in A there exists an ℰ for which a R* b but not b R* a.]

(C2) (Independence of Irrelevant Alternatives). R* must be invariant under addition or deletion of alternatives. [If A′ ⊂ A and ℰ = {Rᵢ : i ∈ S} is any election, then R*|A′ should depend only on {Rᵢ|A′ : i ∈ S}, where Rᵢ|A′ (R*|A′, respectively) is the restriction of Rᵢ (R*, respectively) to A′.]

(C3) (Positive Responsiveness). An increasing (i.e., nondecreasing) preference for an alternative between two elections does not decrease its social preference. [Formally, given S and A, let ℰ = {Rᵢ : i ∈ S} and ℰ′ = {R′ᵢ : i ∈ S} be two elections. If there exists an a ∈ A such that


(i) a Rᵢ a′ ⇒ a R′ᵢ a′ for all i ∈ S and all a′ ≠ a, and (ii) for all pairs (a′, b′) ∈ A × A with a′ ≠ a, b′ ≠ a, a′ ≠ b′, {(a′, b′): a′ Rᵢ b′} = {(a′, b′): a′ R′ᵢ b′}; then a R* a′ ⇒ a R*′ a′ for all a′ ≠ a. In other words, if each voter looks on a ∈ A at least as favorably under ℰ′ as he does under ℰ, and if the individual preferences between any other pair of alternatives remain the same under both elections, then the society looks on a at least as favorably under ℰ′ as it does under ℰ.]

(C4) (No Dictator). There is no individual whose preference ('vote') always coincides with the social preference regardless of the other individual preferences. [There does not exist i ∈ S with R* = Rᵢ, i.e., such that for all (a, b), a Rᵢ b ⇒ a R* b and ¬(a Rᵢ b) ⇒ ¬(a R* b).]

Call a voting system (social preference ordering) R* admissible iff it satisfies the axioms (A1), (A2) and the conditions (C1)-(C4). Arrow's impossibility theorem then claims that for a society of at least two individuals and more than two alternatives, an admissible voting system does not exist.

1.2. The 'reliability' argument. The traditional proof of Arrow's theorem depends heavily on the properties of complete pre-orders. To see the relevance of reliability ideas for proving Arrow's theorem, Pechlivanides imagines the society S as a system and each voter i ∈ S as one of its components. For every pair (a, b) of alternatives with a ≠ b, associate a binary variable xᵢ: A₂ → {0, 1}, where A₂ = {(a, b): a ∈ A, b ∈ A, a ≠ b} is the set A × A devoid of its diagonal, by

xᵢ(a, b) = 1 if a Rᵢ b,
         = 0 if ¬(a Rᵢ b).    (1.1)

Relative to b, every xᵢ(a, b) is a vote for a if xᵢ(a, b) = 1 and is a vote against a if it equals zero. Thus xᵢ defines i's vote and is an equivalent description of his individual preference ordering Rᵢ. The vote-vector x = (x₁, ..., xₙ): A₂ → {0, 1}ⁿ is an equivalent description of an election ℰ = (R₁, ..., Rₙ). A voting system (social preference ordering) R* is similarly equivalent to specifying a social choice function F_A: A₂ → {0, 1} such that

F_A(a, b) = 1 if a R* b,
          = 0 if ¬(a R* b).    (1.2)

Each xᵢ(a, b) = 1 or 0 (F_A(a, b) = 1 or 0, respectively) according as the individual i (society S, respectively) does not/does prefer b to a. Formally, Arrow's result is then:

IMPOSSIBILITY THEOREM (Arrow). There does not exist a social choice function F_A satisfying (A1), (A2) and (C1)-(C4).


To argue that the two axioms and four conditions are collectively inconsistent, the first step is to show:

LEMMA 1. (C1)-(C3) hold ⇔ F_A = φ(x) for some monotone structure function φ.

PROOF. Recall that a monotone structure function in reliability theory is any function φ: {0, 1}ⁿ → {0, 1} such that φ is non-decreasing in each argument and φ(0) = 0, φ(1) = 1, where 0 = (0, ..., 0) and 1 = (1, ..., 1) (viz., Barlow and Proschan, 1975). First note (C2) ⇒ F_A(a, b) depends only on (a, b) and not on all of A; hence we will simply write F for F_A. The condition (C1) ⇒ F(a, b) = φ(x(a, b)) for all (a, b) ∈ A₂, for some binary structure function φ. Next, (C3) ⇒ this φ(x) is monotone non-decreasing in each coordinate xᵢ. Finally, (C1) and (C3) together ⇒ φ(0) = 0, φ(1) = 1; viz., since by (C1) there exist vote-vectors x₀ and x₁ such that φ(x₀) = 0, φ(x₁) = 1, by the monotonicity hypothesis (C3) for φ we get

0 ≤ φ(0) ≤ φ(x₀) = 0,    1 = φ(x₁) ≤ φ(1) ≤ 1.

Thus the conditions (C1)-(C3) imply F = φ(x) for some monotone structure function φ. The converse is trivial. □

The axioms (A1) and (A2) for voting systems, translated to requirements on the social choice function F(a, b) = φ(x(a, b)), become

(A1) Transitivity: F(a, b) = 1 = F(b, c) ⇒ F(a, c) = 1.
(A2) Connectedness: F(a, b) = 1 or 0.

Consider a pair of alternatives (a, b) ∈ A₂ such that F(a, b) = φ(x(a, b)) = 1. Borrowing the terminology of reliability theory, we will say

P(a, b) =: {i ∈ S: xᵢ(a, b) = 1} = {i ∈ S: a Rᵢ b}    (1.3)

is an (a, b)-path. Similarly, if F(a, b) = 0, call the set of individuals

C(a, b) =: {i ∈ S: xᵢ(a, b) = 0} = {i ∈ S: b Pᵢ a}    (1.4)

an (a, b)-cut. Thus an (a, b)-path ((a, b)-cut, respectively) is a coalition, i.e., a subset of individuals, whose common 'non-preference of b relative to a' ('preference of b over a', respectively) is inherited by the whole society S. Obviously such paths (cuts) always exist, since the whole society S is always a path as well as a cut for every pair of alternatives. When the relevant pair of alternatives (a, b) is clear from the context, we drop the prefix (a, b) for simplicity and just refer to (1.3) and (1.4) as path and cut. A minimal path (cut) is a coalition of which no proper subset is a path (cut).
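The path and cut notions just defined can be enumerated mechanically for any small monotone structure. A minimal Python sketch (the 3-component structure φ(x) = x₁ ∧ (x₂ ∨ x₃) is our illustrative assumption, not an example from the text):

```python
from itertools import combinations

# Illustrative (assumed) 3-component monotone structure: phi = x1 AND (x2 OR x3).
n = 3
def phi(x):
    return x[0] & (x[1] | x[2])

def vec(coal, val):
    # coalition members vote val, everyone else votes 1 - val
    return tuple(val if i in coal else 1 - val for i in range(n))

coalitions = [set(c) for r in range(1, n + 1)
              for c in combinations(range(n), r)]
# By monotonicity, a coalition's unanimous 'yes' against everyone else's 'no'
# is the worst case: if phi = 1 there, the coalition is a path; dually for cuts.
paths = [c for c in coalitions if phi(vec(c, 1)) == 1]
cuts = [c for c in coalitions if phi(vec(c, 0)) == 0]
min_paths = [p for p in paths if not any(q < p for q in paths)]
min_cuts = [c for c in cuts if not any(d < c for d in cuts)]
print(min_paths, min_cuts)
```

Note that every minimal path meets every minimal cut, the fact used in the well-definedness argument.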


To return to the main proof, notice that Lemma 1 limits the search for social choice functions F = φ(x) to those monotone structure functions φ which satisfy (A1), (A2) and (C4). A social choice function satisfies the connectedness axiom (A2) iff for every pair of alternatives (a, b) there exists either a path or a cut, according as F(a, b) = 1 or 0, whose members' common vote agrees with the social choice F(a, b). The transitivity axiom (A1), that F(a, b) = 1 = F(b, c) ⇒ F(a, c) = 1 for each triple of alternatives (a, b, c), can similarly be translated as: for each of the pairs (a, b), (b, c), (a, c) there exists a path, not necessarily the same, which allows the cycle of alternatives a, b, c to pass. Let ℳ be the class of monotone structure functions and set

ℱ =: {φ ∈ ℳ: no two paths are disjoint},
ℱ* =: {φ ∈ ℳ: the intersection of all paths is nonempty},
𝒟 =: {φ ∈ ℳ: φ = φᵈ},

where φᵈ is the dual structure function φᵈ(x) =: 1 − φ(1 − x). ℱ (ℱ*, respectively) are those monotone structures for which there is at least one common component shared by any two paths (all paths, respectively). 𝒟 is the class of self-dual monotone structures, for which every path (cut) is also a cut (path). Clearly ℱ* ⊂ ℱ. Also 𝒟 ⊂ ℱ; for if not, then there exist two paths P₁, P₂ (which are also cuts by self-duality) which are disjoint, so that we then have a cut P₁ disjoint from a path P₂. This contradicts the fact that any two coalitions of which one is a path and the other a cut must have at least one common component, for otherwise it would be possible for a structure φ to fail (φ(x) = 0) and not fail (φ(x) ≠ 0) simultaneously, violating the well-definedness condition (C1). Thus

𝒟 ⊂ ℱ,    ℱ* ⊂ ℱ.    (1.5)

To see if there is an admissible social choice function F, we are asking if there exists a φ ∈ ℳ satisfying (A1), (A2) and (C4). To check that the answer is no, the underlying argument is as follows. First check

(A2) ⇒ φ ∈ 𝒟    (1.6)

and hence φ ∈ ℱ by (1.5). Which are the structures in ℱ that satisfy (A1)? We show this is precisely ℱ*, i.e., claim

ℱ ∩ (A1) = ℱ*    (1.7)

so that any admissible F = φ(x) ∈ ℱ*. The final step is to show that the property defining ℱ* and the no-dictator hypothesis (C4) are mutually inconsistent.


The following outlines the steps of the argument. For any pair (a, b) of alternatives, the society S, obeying axiom (A2), must either decide 'b is not preferred to a' (F(a, b) = φ(x(a, b)) = 0) or its negation 'b is preferred to a' (F(a, b) = φ(x(a, b)) = 1). If the individual votes x(a, b) result in either of these two social choices, as they must, the dual response 1 − x(a, b) (which changes every individual vote in x(a, b) to its negation) must induce the other; i.e., for each x,

φ(x) = 0 (1, resp.)  ⇔  φ(1 − x) = 1 (0, resp.)  ⇔  φᵈ(x) = 0 (1, resp.) = φ(x).

Thus (A2) restricts us to 𝒟, which is (1.6). To argue (1.7), consider a φ ∈ ℱ*. If i₀ is a component (individual) common to all paths for all pairs of alternatives, then {i₀} is necessarily a cut; i.e., systems in ℱ* have a singleton cut {i₀}. Since this component i₀ obeys the transitivity axiom, so does φ. Thus systems in ℱ* satisfy (A1), so that together with ℱ* ⊂ ℱ we see that ℱ* is contained in ℱ ∩ (A1). One thus has only to argue the reverse inclusion: systems in ℱ obeying transitivity must be in ℱ*. Consider any such system φ ∈ ℱ and the set of all of its paths for all alternative pairs (a, b). Now (i) if there is only a single path, then φ ∈ ℱ* trivially and hence satisfies (A1), since ℱ* does. (ii) If there are exactly two paths in all, then ℱ = ℱ*; so again φ ∈ ℱ* satisfying (A1). (iii) If there are at least three paths, choose any three, say P¹, P², P³. Let i*(1, 2) be a component in P¹ ∩ P². Suppose i*(1, 2) ∉ P³, if possible. Then there exist distinct components i*(2, 3), i*(1, 3) in P² ∩ P³ and P¹ ∩ P³, respectively. Choose the component-votes (individual preference orderings) of these components, and the system-votes (social choices) by appropriate choices of the votes for the remaining components in the three paths, for an arbitrary but fixed cycle of alternatives (a, b, c) as shown in Table 1 (for simplicity, the component preferences and votes are generically denoted by P and x(·, ·), suppressing the individual identity subscript; thus for i*(1, 2), the preference P = P_{i*(1,2)}, x(a, b) = x_{i*(1,2)}(a, b), etc.).

Table 1

Paths    Common      Individual   Equivalent              Suitable choices of     Corresponding
         component   preference   component-vote          votes for other         social choice
                                                          components in
P¹, P²   i*(1, 2)    a P b P c    x(c, b) = x(b, a) = 0   P¹                      F(c, b) = 0
P², P³   i*(2, 3)    c P a P b    x(b, a) = x(a, c) = 0   P²                      F(b, a) = 0
P¹, P³   i*(1, 3)    b P c P a    x(a, c) = x(c, b) = 0   P³                      F(a, c) = 0


Since F = φ(x) is self-dual, we have

F(a, b) = 1 − F(b, a),    all (a, b) ∈ A₂;

viz., xᵢ(a, b) = 1 − xᵢ(b, a), all i ∈ S, all (a, b); hence F(a, b) = φ(x(a, b)) = φᵈ(x(a, b)) = 1 − φ(1 − x(a, b)) = 1 − φ(x(b, a)) = 1 − F(b, a). Hence, for the cycle of alternatives (a, b, c), from the last column of the above table we have F(b, c) = 1 = F(c, a), but F(b, a) = 0, thus contradicting the transitivity axiom (A1). Hence all three paths must share a common component. In the spirit of the above construction, an inductive argument can now similarly show that if there are (j + 1) paths in all and if every set of j paths has a common component, then so does the set of all (j + 1) paths, j = 1, 2, ..., if (A1) is to hold. Thus there is a component common to all paths, i.e., φ ∈ ℱ*. Let i* be such a component. Since i* belongs to every path, it is a one-component cut. It is also a one-component path, by the self-duality of φ. That {i*} is both a path and a cut says

x_{i*} = 1 (0)  ⇒  φ(x) = 1 (0),

irrespective of the votes xᵢ of all other individuals i ∈ S, i ≠ i*. Hence i* is a dictator. But this contradicts (C4). □

While the problem of aggregation is vacuous unless there are at least two individual components (n ≥ 2), notice the role of the assumption that there are at least three choices (k > 2 alternatives), which places the transitivity axiom in perspective. There are real-life voting systems (social choice functions) which do not satisfy (A1). One such example is the majority system R* such that

a R* b  ⇔  N(a, b) ≥ N(b, a),

where N(a, b) = #{voters i ∈ S with a Rᵢ b} = Σᵢ₌₁ⁿ xᵢ(a, b). Since each individual is a one-component self-dual system (viz., xᵢ(a, b) = 1 − xᵢ(b, a), all (a, b)), the social choice function F corresponding to the majority voting system R* is

F(a, b) = φ(x(a, b)) = 1 (0)  ⇔  Σᵢ₌₁ⁿ xᵢ(a, b) ≥ (<) ½n.

Thus F is the so-called (m, n)-structure φ in reliability theory, where

m = [½n] + 1 if n odd,
  = ½n       if n even.
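A three-voter illustration of the (m, n)-structure just defined, using the classic cyclically opposed rankings (the specific rankings are an assumed example), shows how majority voting can fail transitivity:

```python
# Three voters with cyclically opposed strict rankings (assumed example).
# Here x_i(a, b) = 1 iff voter i does not prefer b to a, and the majority
# structure gives F(a, b) = 1 iff at least half the voters have x_i(a, b) = 1.
rankings = [['a', 'b', 'c'],
            ['b', 'c', 'a'],
            ['c', 'a', 'b']]

def x(r, a, b):
    # individual vote: 1 iff a precedes b in the strict ranking r
    return 1 if r.index(a) < r.index(b) else 0

def F(a, b):
    # the (m, n) majority structure
    return 1 if sum(x(r, a, b) for r in rankings) >= len(rankings) / 2 else 0

print(F('a', 'b'), F('b', 'c'), F('a', 'c'))   # 1 1 0: a cycle, so (A1) fails
```

Transitivity would demand F(a, c) = 1 given F(a, b) = F(b, c) = 1; the cyclic profile defeats it.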


This F = φ(x) is monotone, indeed a coherent structure; but F and the corresponding voting system R* are not transitive, since with three choices (a, b, c) we may have a majority (≥ n/2) of voters not preferring 'c to b' and 'b to a' but strictly less than a majority not preferring 'c to a'. Formally, Σᵢ₌₁ⁿ xᵢ(a, b) ≥ n/2 and Σᵢ₌₁ⁿ xᵢ(b, c) ≥ n/2 but Σᵢ₌₁ⁿ xᵢ(a, c) < n/2; correspondingly F(a, b) = F(b, c) = 1 but F(a, c) = 0. The non-transitivity of majority systems is a telling example of the impossibility of meeting conflicting requirements each of which is desirable by itself. Pechlivanides (ibid.) also shows that if we replace axiom (A1) by symmetry of components (i.e., require φ(x) to be permutation-invariant in the coordinates of x) but retain all other assumptions in Arrow's theorem, the only possible resulting structures are the odd-majority systems. In this sense, a majority voting system with an odd number (n = 2m + 1) of voters is a reasonable system. While transitivity is essentially a consistency requirement, the symmetry hypothesis is an assumption of irrelevance of the identity of individuals, in that any mutual exchange of their identities does not affect the collective choice. One can ponder the implications of the trade-off between these assumptions for any theory of democratic behavior for social decision making.

1.3. The monotone structures φ in Lemma 1 are referred to as coherent structures in Pechlivanides (1975). In accepted contemporary use (viz., Barlow and Proschan, 1975), however, coherence requires substituting the assumption φ(x) = x for x = 0, 1 of monotone structures by the assumption that all components are 'relevant'. A component (voter) i ∈ S is irrelevant if its (the person's) functioning or non-functioning (individual preference for or against an alternative) does not affect the system's performance (social choice), i.e., φ(x) is constant in all xᵢ; equivalently,

φ(1ᵢ, x) − φ(0ᵢ, x) = 0,    all x,

where (0ᵢ, x) := (x₁, ..., xᵢ₋₁, 0, xᵢ₊₁, ..., xₙ) and (1ᵢ, x) is defined similarly. Hence φ(·ᵢ, x) is the social choice given i's vote, i ∈ S. Thus, i ∈ S is relevant

⇔  φ(1ᵢ, x) − φ(0ᵢ, x) ≠ 0, some x
⇔  φ(1ᵢ, x(a, b)) − φ(0ᵢ, x(a, b)) ≠ 0, some (a, b),

when relevance is translated in terms of the social choice given i's vote; while i ∈ S is a dictator iff φ(1ᵢ, x(a, b)) = 1, φ(0ᵢ, x(a, b)) = 0, all (a, b). Let

S_{a,b} = {i ∈ S: φ(1ᵢ, x(a, b)) − φ(0ᵢ, x(a, b)) = 0}.


Then the set of dictators, if any, is

D = {i ∈ S: φ(1ᵢ, x) − φ(0ᵢ, x) ≠ 0, all x} = ⋂_{(a,b) ∈ A₂} (S ∖ S_{a,b}),

while the set of irrelevant components is

D₀ = {i ∈ S: φ(1ᵢ, x) − φ(0ᵢ, x) = 0, all x} = ⋂_{(a,b) ∈ A₂} S_{a,b}.

Note, φ is coherent ⇔ φ is coordinatewise monotone nondecreasing and D₀ = ∅; while the 'no dictator' hypothesis holds ⇔ D = ∅. In the context of the social choice problem, we may call D₀ the set of 'dummy' voters: those whose individual preferences are of no consequence for the social choice. An assumption of no dummies (D₀ empty), which together with (C1)-(C3) then leads to a coherent social choice function F = φ(x), would require that for every individual there is some pair of alternatives (a, b) for which the social preference agrees with his own. By contrast, Arrow's no-dictator hypothesis is the other side of the coin: i.e., for every individual there is some (a, b) for which his preference is immaterial as a determinant of the society's choice. While the coherence assumption of reliability theory has yielded rich dividends for modeling aging/wear and tear of physical systems, it is also clear that the 'no dummy' interpretation of the 'all components are relevant' assumption is certainly not an unreasonable one to require of social choice functions. What are the implications, for traditional reliability theory, of replacing the condition of relevance of each component for coherent structures by the no-dictator hypothesis? Conversely, in the framework of social choice, it may be interesting to pursue the ramifications of substituting the no-dictator hypothesis (C4) by the condition of 'no dummy voters'--themes which we will not pursue here, but which may lead to new insights.
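The sets D and D₀ above are straightforward to compute for a given structure. A sketch with an assumed 3-voter structure in which one voter is a dummy (the structure is our illustration, not from the text):

```python
from itertools import product

# Assumed example: phi = x1 AND x2 never consults voter index 2, so that
# voter is a dummy; nobody's vote alone decides every configuration.
n = 3
def phi(x):
    return x[0] & x[1]

def delta(i, x):
    # phi(1_i, x) - phi(0_i, x), the marginal effect of voter i's vote
    return phi(x[:i] + (1,) + x[i + 1:]) - phi(x[:i] + (0,) + x[i + 1:])

votes = list(product((0, 1), repeat=n))
D0 = [i for i in range(n) if all(delta(i, x) == 0 for x in votes)]  # dummies
D = [i for i in range(n) if all(delta(i, x) != 0 for x in votes)]   # dictators
print(D0, D)
```

Here D₀ is nonempty (so this φ is monotone but not coherent) while D is empty (the no-dictator hypothesis holds).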

2. Voting games and political power

We turn to 'voting games' as another illustration of the application of reliability ideas in other fields. Of interest to political scientists, these are among the better known mathematical models of group behavior which attempt to explain the processes of decision for or against an issue in the social setting of a committee of n persons and formalize the notion of political power. For an excellent overview of literature and recent research in this area, see Lucas (1978), Deegan and Packel (1978), and Straffin (1978)--all in Brams, Lucas and Straffin (1978a).

2.1. The model and basic results. Denote a committee of n persons by N. Elements of N are called players. We can take N = {1, 2, ..., n} without loss of generality. A coalition is any subset S of players, S ⊂ N. Each player votes yes or no, i.e., for or against the proposition. A winning (blocking) coalition is any coalition whose individual yes (no)-votes collectively ensure that the committee passes (fails) the proposition. Let W be the set of winning coalitions and v: 2^N → {0, 1} the binary coalition-value function

v(S) = 1 if S ∈ W (S winning),
     = 0 if S ∉ W (S is not winning).    (2.1)

Formally, a simple voting game G (also referred to as a simple game) is an ordered pair G = (N, W) such that

(i) ∅ ∉ W, N ∈ W   and   (ii) S ∈ W, S ⊂ T ⇒ T ∈ W

(if everyone votes 'no' ('yes'), the proposition fails (wins); and any coalition containing a winning coalition is also a winning coalition) or, equivalently, by an ordered pair (N, v) where

(i) v(∅) = 0, v(N) = 1   and   (ii) v is nondecreasing.

The geometry and analysis of winning coalitions in voting games, as conceptual models of real-life committee situations, provides insights into the decision processes involved within a group-behavior setting for accepting or rejecting a proposition. The theoretical framework invoked for such analysis is that of multiperson cooperative games, of which the games G are a special class. To formulate notions of political power, we view a measure of an individual player's ability to influence the result of a voting game G as a measure of such power. Two such power indices have been advanced. To describe these we need the notions of a pivot and a swing. For any permutation ordering π = (π(1), ..., π(n)) of the players N = {1, ..., n}, let Jᵢ(π) = {j ∈ N: π(j) precedes π(i)} be the set of predecessors of i. The player i is a pivot in π if Jᵢ(π) ∉ W but Jᵢ(π) ∪ {i} ∈ W; i.e., player i is a pivot if i's vote is decisive in the sense that, given the votes are cast sequentially in the order π, his vote turns a losing coalition into a winning one. A coalition S is a swing for i if i ∈ S, S ∈ W but S∖{i} ∉ W; i.e., if by changing his vote, his vote is critical in turning a winning coalition into a losing one. Then we have the following two power indices for each player i ∈ N:

(Shapley-Shubik)

Φᵢ =: P(i is pivotal when all permutations are equiprobable) = Σ (s − 1)!(n − s)!/n!,    (2.2)

where s =: |S| = the number of voters in S and the sum is over all S such that S is a swing for i.


(Banzhaf)

βᵢ =: proportion of swings for i among all coalitions in which i votes 'yes' = γᵢ/2ⁿ⁻¹,    (2.3)

where γᵢ is the number of swings for i (the normalized variant γᵢ/Σ_{j∈N} γⱼ is also used). The Banzhaf power index also has a probability interpretation that we shall see later (Section 2.4). If the indicator variable

xᵢ = 1 if player i votes 'yes',
   = 0 if player i votes 'no',    (2.4)

denotes i's vote and C₁(x) = {i: xᵢ = 1} is the coalition of assenting players for a realization x = (x₁, ..., xₙ) of the 2ⁿ voting configurations, then the outcome function ψ: {0, 1}ⁿ → {0, 1} of the voting game is

ψ(x) = v(C₁(x)),

where v is as defined in (2.1); ψ tells us whether the proposition passes or fails in the committee. Note that ψ models the decision structure in the committee given its rules, i.e., given the winning coalitions. In the stochastic version of a simple game, the voting configuration X = (X₁, ..., Xₙ) is a random vector whose joint distribution determines the voting function v =: Eψ(X) = P{ψ(X) = 1}, the win probability of the proposition in the voting game. Sensitivity of v to the parameters of the distribution of X captures the effects of individual players' and their different possible coalitions' voting attitudes on the collective committee decision for a specified decision structure ψ. When the players act independently with probabilities p = (p₁, ..., pₙ) of voting 'yes', the voting function is

v = h(p)    (2.5)

for some h: [0, 1]ⁿ → [0, 1]. The function h is called Owen's multilinear extension and satisfies (Owen, 1981):

h(p) = pᵢ h(1ᵢ, p) + (1 − pᵢ) h(0ᵢ, p),
hᵢ(p) =: ∂h/∂pᵢ = h(1ᵢ, p) − h(0ᵢ, p),    (2.6)

since the outcome function can be seen to obey the decomposition


ψ(x) = xᵢ ψ(1ᵢ, x) + (1 − xᵢ) ψ(0ᵢ, x),    (2.7)

where (·ᵢ, x) is the same as x except that xᵢ is specified, and Eψ(·ᵢ, X) = h(p₁, ..., pᵢ₋₁, ·, pᵢ₊₁, ..., pₙ) =: h(·ᵢ, p). These identities are reminiscent of

well known results in reliability theory on the reliability function of coherent structures of independent components, a theme we return to in Section 2.2. If, as a more realistic description of voting behavior, one wants to drop the assumption of independent players, the modeling choices become literally too wide to draw meaningful conclusions. The problem of assigning suitable joint distributions to the voting configuration X = (X₁, ..., Xₙ) which would capture and mimic some of the essence of real-life voting situations has been considered by Straffin (1978a) and others. Straffin assumes the players to be homogeneous in the sense that they have a common 'yes'-voting probability p chosen randomly in [0, 1]. Thus, according to Straffin's homogeneity assumption, the players agree to collectively, or through a third party, select a random number p in the unit interval and then, given the choice of p, vote independently. The fact that p has a prior, in this case the uniform distribution, makes (X₁, ..., Xₙ) mutually dependent, with joint distribution

P(X_{π(1)} = ⋯ = X_{π(k)} = 1, X_{π(k+1)} = ⋯ = X_{π(n)} = 0) = k!(n − k)!/(n + 1)!    (2.8)

for any permutation (π(1), ..., π(n)) of the players. (2.8) is a description of homogeneity of the players which Straffin uses to formulate (i) a power index and (ii) an agreement index, which is a measure of the extent to which a player's vote and the outcome function coincide. He also considers the relationship between these indices corresponding to the uniform prior and the prior f(p) = const·p(1 − p); results we will find more convenient to describe in a more general format in the next section.
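The indices (2.2) and (2.3) can be computed by direct enumeration of swings. A sketch for an assumed weighted-majority game (the weights and quota are our illustration, not from the text):

```python
from itertools import combinations
from math import factorial

# Assumed example: three players with voting weights 50, 30, 20 and quota 51.
N = [1, 2, 3]
w = {1: 50, 2: 30, 3: 20}
q = 51
def win(S):
    return sum(w[i] for i in S) >= q

def swings(i):
    # coalitions S with i in S, S winning, and S \ {i} not winning
    others = [j for j in N if j != i]
    return [set(c) | {i} for r in range(len(N))
            for c in combinations(others, r)
            if win(set(c) | {i}) and not win(set(c))]

def shapley_shubik(i):
    # (2.2): sum of (s - 1)!(n - s)!/n! over the swings for i
    n = len(N)
    return sum(factorial(len(S) - 1) * factorial(n - len(S))
               for S in swings(i)) / factorial(n)

def banzhaf(i):
    # (2.3): gamma_i / 2^(n - 1)
    return len(swings(i)) / 2 ** (len(N) - 1)

for i in N:
    print(i, shapley_shubik(i), banzhaf(i))
```

For these weights, player 1 swings in {1,2}, {1,3} and {1,2,3}, while players 2 and 3 each swing only with player 1; the Shapley-Shubik values sum to 1 over the players, as (2.2) requires.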

2.2. Implications of the reliability framework for voting games. From the above discussion it is clear that voting games are conceptually equivalent to systems of components in reliability theory. Table 2 is a list of the dual interpretations of several theoretical concepts in the two contexts:

Table 2

Voting games                    Reliability structures
player                          component
committee                       system
winning (losing) coalition      path (cut)
blocking coalition              complement of a cut
outcome function                structure function
voting function                 reliability function
multilinear extension           reliability function with independent components


Thus every voting game has an equivalent reliability network representation and can consequently be analysed using methods of the latter. As an illustration consider the following:

EXAMPLE. Consider the simple game (N, W) with five players N = {1, 2, 3, 4, 5} and winning coalitions W given by the sets

(1, 3, 5), (1, 4, 5), (2, 3, 5), (2, 4, 5), (1, 2, 3, 5), (1, 2, 4, 5), (1, 3, 4, 5), (2, 3, 4, 5), (1, 2, 3, 4, 5).

This voting game is equivalent to a coherent structure

[network diagram: components 1 and 2 in parallel, followed by components 3 and 4 in parallel, followed by component 5]

of two parallel subsystems of two components each and a fifth component, all in series. We see that to win in the corresponding voting game, a proposition must pass through each of two subcommittees with a '50% majority wins' voting rule and then also be passed by the chairperson (component 5). The voting function of this game when committee members vote 'yes' independently with a probability p (i.e., the version of Owen's multilinear extension in the i.i.d. case) is thus given by the reliability function

h(p) = p³(2 − p)²

of the above coherent structure. The minimal path sets of this structure are the smallest possible winning coalitions, which are the four 3-player coalitions in W. Since the minimal cut sets are (1, 2), (3, 4) and (5), their complements

(3, 4, 5), (1, 2, 5), (1, 2, 3, 4)

are the minimal blocking coalitions, which are the smallest possible coalitions B with veto-power in the sense that their complements N\B are not winning coalitions. To pursue the reliability analogy further, we proceed as follows. Although it is not the usual way, we may look at a voting game (N, W) as the social choice problem of Section 1 when there are only two alternatives A = {a, b}. Set a = fail the proposition, and b = pass the proposition. Player i's personal preference ordering R_i is then defined by

a R_i b (a R̄_i b)  ⟺  i does not (does) prefer b to a  ⟺  i votes no (yes).

If x_i is i's 'vote' as in (2.4) and y_i = y_i(a, b) = 1 or 0 according as a R_i b or a R̄_i b (as in Section 1) is the indicator of preference, then y_i = 1 − x_i, i ∈ N, and clearly ψ(x) = 0 (1) ⟺ proposition fails (passes) ⟺ φ(1 − x) = φ(y) = 1 (0), where φ is the social choice and ψ the outcome function. Hence ψ(x) = 1 − φ(1 − x) = φ^d(x) = φ(x) since φ is self-dual. Thus ψ = φ and hence ψ is also self-dual. The latter in particular implies the existence of a player who must be present in every winning coalition (viz. (1.7)). With the choice set restricted to two alternatives, Arrow's condition (C1) is trivial, condition (C2) of irrelevant alternatives is vacuously true and so is the transitivity axiom (A1). Since ψ = φ, the condition (C1) says ψ(x) must be defined for all x while axiom (A2) says ψ is binary. The condition of positive responsiveness (C3) holds ⟺ all supersets of winning coalitions are winning, which is built into the definition of a voting game. Lemma 1 thus implies:

LEMMA 2. The outcome function ψ of a voting game is a monotone structure function. ψ is a coherent structure iff there are no 'dummies'.

The first part of the above result is due to Ramamurthy and Parthasarathy (1984). The social choice function analogy of the outcome function and its coherence in the absence of dummies is new. A dummy player is one whose exclusion from a winning coalition does not destroy the winning property of the reduced coalition, i.e., i ∈ N is a

dummy  ⟺  (i ∈ S, S ∈ W  ⟹  S\{i} ∈ W).

Equivalently, i is not a dummy iff there is a swing S for i. The coherence conclusion in Lemma 2 holds since in a voting game the 'no dummy hypothesis' says all components are relevant in the equivalent reliability network, viz. for any i ∈ N,

i is relevant  ⟺  there exists x⁰ such that ψ(1_i, x⁰) − ψ(0_i, x⁰) ≠ 0  ⟺  S⁰ := {j ∈ N: j ≠ i, x⁰_j = 1} ∪ {i} is a swing for i  ⟺  player i is not a dummy.

An equivalent characterization of a dummy i ∈ N is that i belongs to no minimal winning coalition. On the other hand, in the social choice scenario of Section 1, a player i ∈ N is a dictator if {i} is a winning as well as a blocking coalition. When the players act independently in a stochastic voting game, we recognize the identities (2.6), (2.7) on the outcome function and Owen's multilinear extension as reproducing standard decomposition results in coherent structure

theory, as they must. The voting function h(p), being a monotone (coherent) structure's reliability function, must be coordinatewise monotone: p ≤ p′ ⟹ h(p) ≤ h(p′), which has been independently recognized in the voting game context (Owen, 1982). The Banzhaff power index (2.3) is none other than the structural importance of components in ψ. Since research in voting games and reliability structures has evolved largely independently, this general lack of recognition of their dualism has been the source of some unnecessary duplication of effort. Every result in either theory has a dual interpretation in the other, although they may not be equally meaningful in both contexts. The following are some further well known reliability ideas in the context of independent or i.i.d. components which have appropriate and interesting implications for voting games. With the exception of 2.2.1 below, we believe the impact of these ideas has not yet been recognized in the literature on voting games with independent or i.i.d. players.
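The dualism is easy to exercise on the 5-player example above. The sketch below (our own; `phi` encodes that example's structure) enumerates all 2⁵ voting patterns to recover the winning coalitions, the minimal winning coalitions (minimal path sets), and the voting function h(p) = p³(2 − p)²:

```python
from itertools import product

def phi(x):
    # structure/outcome function of the example: subcommittees {1,2} and {3,4}
    # (each passes on any single yes) in series with the chairperson 5
    return (x[0] | x[1]) & (x[2] | x[3]) & x[4]

def h(p):
    # voting (reliability) function: sum over all 2^5 voting patterns
    return sum(phi(x) * p**sum(x) * (1 - p)**(5 - sum(x))
               for x in product((0, 1), repeat=5))

winning = [tuple(i + 1 for i, xi in enumerate(x) if xi)
           for x in product((0, 1), repeat=5) if phi(x)]
minimal = [S for S in winning if not any(set(T) < set(S) for T in winning)]

assert len(winning) == 9
assert sorted(minimal) == [(1, 3, 5), (1, 4, 5), (2, 3, 5), (2, 4, 5)]
for p in (0.1, 0.3, 0.5, 0.9):
    assert abs(h(p) - p**3 * (2 - p)**2) < 1e-12
```

Brute-force enumeration is feasible for small committees; for large ones, the fault-tree/event-tree algorithms mentioned in 2.2.2 are the practical route.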

2.2.1. The reliability importance

v_i = E{ψ(1_i, X) − ψ(0_i, X)}  (2.9)

measures how crucial i's vote is in a game with outcome function ψ and random voting probabilities. As an index of i's voting power, v_i is defined for any stochastic voting configuration X and has been used by Straffin within the homogeneity framework ((X_1, ..., X_n) conditionally i.i.d. given p). We may call v_i the voting importance of i. If the players are independent, then v_i = h_i(p) in the notation of Section 2.1 (viz. (2.6)). Thus e.g., in the stochastic unanimity game where all players must vote yes to pass a proposition, the player least likely to vote in favor has the most voting importance. Similarly in other committee decision structures, one can use v_i to rank the players in order of their voting importance. For a game with i.i.d. players, i's voting importance becomes v_i = h_i(p), where h_i(p) = h(1_i, p) − h(0_i, p) and h(·_i, p), h(p) denote the corresponding versions of h(·_i, p), h(p) when p = (p, ..., p). Since in this case h′(p) = Σ_{i∈N} h_i(p), one can also use the proportional voting importance

v_i* = v_i / Σ_{j∈N} v_j = h_i(p)/h′(p)

as a normalized power index in the i.i.d. case.
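For the same 5-player example, v_i = h_i(p) can be computed by brute force; a sketch of our own (`phi` is redefined so the snippet runs on its own). The chair's importance is (1 − (1 − p)²)², the probability that both subcommittees pass:

```python
from itertools import product

def phi(x):
    # the example's structure: parallel pairs {1,2} and {3,4}, chair 5 in series
    return (x[0] | x[1]) & (x[2] | x[3]) & x[4]

def voting_importance(i, p, n=5):
    # v_i = h(1_i, p) - h(0_i, p): probability that player i's vote is decisive
    diff = 0.0
    for x in product((0, 1), repeat=n):
        if x[i]:
            continue
        k = sum(x)                       # yes-votes among the other players
        y = list(x); y[i] = 1
        diff += (phi(tuple(y)) - phi(x)) * p**k * (1 - p)**(n - 1 - k)
    return diff

p = 0.5
v = [voting_importance(i, p) for i in range(5)]
v_star = [vi / sum(v) for vi in v]        # proportional voting importance v_i*
assert abs(v[4] - (1 - (1 - p)**2)**2) < 1e-12   # chair: 0.5625 at p = 1/2
assert abs(v[0] - v[3]) < 1e-12                  # players 1-4 are symmetric
assert abs(sum(v_star) - 1.0) < 1e-12
```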

2.2.2. The fault-tree-analysis algorithm of reliability theory will systematically enumerate the smallest cut sets, and hence the minimal blocking coalitions of a voting game, through its reliability network representation. The dual event tree algorithm will similarly produce all minimal winning coalitions, the Banzhaff power indices and the voting importances.

2.2.3. S-shapedness of the voting function for i.i.d. players with no dummies. This follows from the Moore–Shannon inequality (Barlow and Proschan, 1965)

p(1 − p) dh/dp ≥ h(p)(1 − h(p))

for the reliability function of a coherent structure with i.i.d. components. The implications of this fact in the voting game context are probably not well known. In particular the S-shapedness of the voting function implies that among all committees of a given size n, the k-out-of-n structures (100k/n% majority voting games) have the sharpest rate of increase of the probability of a committee of n i.i.d. players passing a bill as the players' common yes-voting probability increases.
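For the example's voting function h(p) = p³(2 − p)², the Moore–Shannon inequality can be checked numerically on a grid (our own sketch; the derivative is taken in closed form):

```python
def h(p):
    # voting function of the 5-player example (coherent, i.i.d. players)
    return p**3 * (2 - p)**2

def dh(p):
    # h'(p) = 3p^2(2-p)^2 - 2p^3(2-p) = p^2 (2-p) (6 - 5p)
    return p * p * (2 - p) * (6 - 5 * p)

for k in range(1, 100):
    p = k / 100
    # Moore-Shannon: p(1-p) h'(p) >= h(p)(1 - h(p))
    assert p * (1 - p) * dh(p) >= h(p) * (1 - h(p)) - 1e-12
```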

2.2.4. Component duplication is more effective than system duplication. This property of a structure function implies: replicating committees is less effective, in the sense of resulting in a smaller outcome/voting function, than replicating committee members by subcommittees (modules) which mimic the original committee structure ψ. This may be useful in the context of designing representative bodies when such choices are available.

2.2.5. Composition of coherent structures. Suppose a voting game (N, W) has no dummies and is not a unanimity game (series structure) or its dual (any single yes vote is enough: parallel structure). Suppose each player in this committee N with structure ψ is replaced by a subcommittee whose structure replicates the original committee, and this process is repeated k times, k = 1, 2, .... With i.i.d. players, the voting function h_k(p) of the resulting expanded committee is then the reliability function of the k-fold composition of the coherent structure ψ, which has the property

h_k(p) ↓ 0, = p₀, ↑ 1  according as p <, =, or > p₀, as k ↑ ∞

(Barlow and Proschan, 1965), where p₀ is the unique value satisfying h(p₀) = p₀, guaranteed by S-shapedness. When we interpret the above for voting games, the first conclusion is perhaps not surprising, although the role of the critical value p₀ is not fully intuitive. The other two run counter to crude intuition; particularly the last one, which says that by expanding the original committee through enough repeated compositions, one can almost ensure winning any proposition which is sufficiently attractive individually. The dictum 'too many cooks spoil the broth' does not apply here.
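The compounding effect is easy to see numerically for a 2-out-of-3 simple majority committee, h(p) = 3p² − 2p³, with interior fixed point p₀ = ½ (our own sketch; note the 5-player example above has its chair in series, so h(p) < p there and its compositions would instead drive h_k(p) to 0 for every p < 1):

```python
def h(p):
    # 2-out-of-3 simple majority committee: h(p) = 3p^2 - 2p^3, h(1/2) = 1/2
    return 3 * p**2 - 2 * p**3

def h_k(p, k):
    # k-fold composition: each member replaced by a copy of the committee, k times
    for _ in range(k):
        p = h(p)
    return p

assert abs(h(0.5) - 0.5) < 1e-15   # p0 = 1/2 is the fixed point
assert h_k(0.45, 50) < 1e-9        # p < p0: the compounded committee almost never passes
assert h_k(0.55, 50) > 1 - 1e-9    # p > p0: it almost surely passes
```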


2.2.6. Compound voting games and modular decomposition. If (N_j, W_j), j = 1, 2, ..., k, are simple games with pairwise disjoint player sets and (M, V) is a simple game with |M| = k players, the compound voting game (N, W) is defined as the game with N = ∪_{j=1}^k N_j and

W = {S ⊆ N : {j ∈ M : S ∩ N_j ∈ W_j} ∈ V}.

(M, V) is called the master-game and the (N_j, W_j) the modules of the compound game (N, W). The combinatorial aspects of compound voting games have been extensively studied. Considering the equivalent reliability networks, it is clear however that if the component games (N_j, W_j) have structures ψ_j, j = 1, ..., k, and the master game (M, V) has structure φ, then the compound voting game (N, W) has structure

ψ = φ(ψ₁, ..., ψ_k).
Conversely the existence of some φ, ψ₁, ..., ψ_k satisfying this representation for a given ψ can be taken as an equivalent definition of the corresponding master game and component subgames, with the accompanying player sets as the modular sets of the original voting game. E.g., in the 5-player example at the beginning of this section, clearly both subcommittees J₁ = {1, 2}, J₂ = {3, 4} are modular sets and the corresponding parallel subsystems are the subgame modules. Ramamurthy and Parthasarathy (1983) have recently exploited the results on modular decomposition of coherent systems to investigate voting games in relation to their component subgames (modules) and to decompose a compound voting game into its modular factors (player sets obtained by intersecting maximal modular sets or their complements with each other). Modular factors decompose a voting game into its largest disjoint modules. The following is typical of the results which can be derived via coherent structure arguments (Ramamurthy and Parthasarathy, 1983).

THREE MODULES THEOREM. Let J_i, i = 1, 2, 3, be coalitions in a voting game (N, W) with a structure ψ such that J₁ ∪ J₂ and J₂ ∪ J₃ are both modular. Then each J_i is modular, i = 1, 2, 3, and ∪_{i=1}^3 J_i is either itself modular or the full committee N. The modules (J_i, ψ_i), i = 1, 2, 3, which appear in (N, ψ) are either in series or in parallel, i.e., the three-player master game is either a unanimity game, or a trivial game where the only blocking coalition is the full committee.

2.3. The usual approach in modeling coherent structures of dependent components is to assume the components are associated (Barlow and Proschan, 1975). By contrast, the prevalent theoretical approach in voting games, as suggested by Straffin (1978) when the players are not independent, assumes a special form of dependence according to (2.8). One can show that (2.8) implies X₁, ..., X_n are associated.
Thus voting game results under Straffin's model and its generalized version suggest an approach for modeling dependent coherent structures. These results are necessarily stronger than those that can be derived under the associatedness hypothesis alone. The remarkable insight behind Straffin's homogeneity assumption is that it amounts to the voting configuration X being a finite segment of a special sequence of exchangeable variables. The effect of this assumption is that the probability of any voting pattern x = (x₁, ..., x_n) depends only on the size of the assenting and dissenting coalitions and not on the identity of the players, as witness (2.8). One can reproduce this homogeneity of players through an assumption more general than Straffin's. Ramamurthy and Parthasarathy (1984) exploit appropriate reliability ideas to generalize many results of Straffin and others by considering the following weakening of Straffin's assumption.

GENERAL HOMOGENEITY HYPOTHESIS. The random voting configuration X = (X₁, ..., X_n) is a finite segment of an infinite exchangeable sequence.

Since X₁, X₂, ... are binary, by de Finetti's well known theorem the voting configuration's joint distribution has a representation

P(X_{π(1)} = ⋯ = X_{π(k)} = 1, X_{π(k+1)} = ⋯ = X_{π(n)} = 0) = ∫₀¹ p^k (1 − p)^{n−k} dF(p)  (2.10)

for some prior distribution F on [0, 1]; and the votes X₁, ..., X_n are conditionally independent given the 'yes' voting probability p. Straffin's homogeneity assumption corresponds to a uniform prior for p, leading to (2.8). For a stochastic voting game defined by its outcome (structure) function ψ, consider the power index

v_i := E{ψ(1_i, X) − ψ(0_i, X)},

defined in (2.9), and the agreement indices

A_i := P{X_i = ψ(X)},  ρ_i := cov(X_i, ψ(X)),  σ_i := ∫₀¹ cov(X_i, ψ(X) | p) dF(p).

Also, let b := cov(P, h(P)). Here P is the randomized probability of voting 'yes' with prior F in (2.10). Note that b and σ_i are defined only under the general homogeneity assumption, while v_i, A_i and ρ_i are well defined for every joint distribution of the voting configuration X. Recall


that a power index measures the extent of change in the voting game's outcome as a consequence of a player's switching his vote, and an agreement index measures the extent of coincidence of a player's vote and the final outcome. Thus any measure of mutual dependence between two variables reflecting the voting attitudes of a player and the whole committee respectively qualifies as an agreement index. An analysis of the interrelationships of these indices provides an insight into the interactions between players' individual levels of command over the game and the extent to which they are in tune with the committee decision and ride the decisive bandwagon. The agreement index A_i is due to Rae (1979). Under (2.8), v_i becomes Straffin's power index and σ_i is proportional to an agreement index also considered by Straffin. Note all the coefficients are non-negative. This is clear for v_i and A_i, and follows for ρ_i, σ_i and b from standard facts for associated r.v.s (Barlow and Proschan, 1975); association is weaker than the general homogeneity (GH) hypothesis. The interesting results under the assumption of general homogeneity (Ramamurthy and Parthasarathy, 1984) are

ρ_i = σ_i + b,

2b ≤ Σ_{i∈N} σ_i,  Σ_{i∈N} σ_i ≥ ∫₀¹ h(p)(1 − h(p)) dF(p),

A_i = 2σ_i + 2b + ½  (if E X_i = ½).  (2.11)

The equality in the second assertion holds only under Straffin's homogeneity (SH) assumption. This assertion follows by noting σ_i = ∫₀¹ p(1 − p)h_i(p) dF(p) under GH and h′(p) = Σ_i h_i(p), using termwise integration by parts in Σ_i σ_i with the uniform prior to conclude the equality, and invoking the S-shapedness of h(p) for the bound. The above relations in particular imply:

(i) Under GH, i is a dummy ⟺ σ_i = 0. If the odds of each player voting yes and no are equal under GH, i.e., if the marginal probability P(X_i = 1) = ½, then we also have: i dummy ⟺ ρ_i = b ⟺ A_i = 2b + ½. Thus since b is in a sense the minimal affinity between a player's vote and the committee's decision, Straffin suggests using 2σ_i (= A_i − 2b − ½) as an agreement index.

(ii) Let w and l = 2ⁿ − w be the numbers of winning and losing coalitions. Since h_i(½) = β_i (structural importance = Banzhaff power index) and h(½) = w/2ⁿ, taking F as a point-mass at ½, (2.11) gives

Σ_{i∈N} β_i ≥ 2^{−2(n−1)} wl.

Without the equal odds condition, the last relation in (2.11) has a more general version that we may easily develop. Let π_i := ∫₀¹ p dF(p) = E X_i be the marginal probability of i voting yes under general homogeneity. Then

A_i = Σ_{j=0}^{1} P(X_i = ψ(X) = j) = E X_i ψ(1_i, X) + E(1 − X_i)(1 − ψ(0_i, X))
= E X_i ψ(X) + E(1 − X_i)(1 − ψ(X))
= 2 cov(X_i, ψ(X)) + E ψ(X){2E X_i − 1} + 1 − E X_i
= 2ρ_i + v(2π_i − 1) + (1 − π_i) = 2ρ_i + {π_i v + (1 − π_i)(1 − v)},

where v = E ψ(X), which reduces to the stated relationship whenever π_i = ½. Notice that the convex combination term in braces, which measures the marginal contribution to A_i of a player's voting probability π_i, depends on the game's value v via an interaction term unless π_i = ½.

2.4. Influence indices and stochastic compound voting games. There are some interesting relationships among members of a class of voting games via their power and agreement indices. In the spirit of (2.10), consider a compound voting game consisting of the two game modules (i) a voting game G = (N, W) with N = {1, ..., n}, and (ii) a simple majority voting game G_m = (N_m, W_m) of (2m + 1) players with

N_m = {n + 1, ..., n + 2m, n + 2m + 1},  W_m = {S ⊆ N_m : |S| ≥ m + 1},  (2.12)

i.e., any majority (at least (m + 1) players) coalition wins. Replacing the player (n + 2m + 1) in the majority game by the game G = (N, W), define the compound game G_m* = (N*, W*), where

N* = N ∪ N_m \ {n + 2m + 1} = {1, ..., n, n + 1, ..., n + 2m},

W* = {S ⊆ N* : either |S\N| ≥ m + 1, or |S\N| ≥ m and S ∩ N ∈ W}.  (2.13)

G_m* models the situation where the player (n + 2m + 1) in the majority game G_m is bound by the wishes of a constituency N, as determined by the outcome of the constituency voting game G = (N, W), which he represents in the committee N_m. The winning coalitions in the composite game G_m* are those which either have enough members to win the majority game G_m, or are at most a single vote short of winning G_m when the player representing the constituency N is not counted but contain a winning coalition for the constituency game G = (N, W). The winning coalitions in the latter category are precisely those S such that (i) |S\N| = m, i.e., for any i ∉ S\N, {i} ∪ (S\N) is a swing for every such player i in the majority game G_m, and (ii) S also wins the constituency voting game G using appropriate players in S. With an i.i.d. voting configuration, if h_i(p) and h_i*(p)


respectively denote the voting importances of i ∈ N in G and G*, then clearly

h_i*(p) = C(2m, m) p^m (1 − p)^m h_i(p),  i ∈ N.  (2.14)

Under general homogeneity, the class of priors

F_{a,b}(p) = [(a + b − 1)!/((a − 1)!(b − 1)!)] ∫₀^p u^{a−1}(1 − u)^{b−1} du,  a > 0, b > 0,

which leads to the voting configuration distribution

P(X₁ = ⋯ = X_k = 1, X_{k+1} = ⋯ = X_n = 0) = a^{(k)} b^{(n−k)} / (a + b)^{(n)}  (with x^{(k)} := x(x + 1)⋯(x + k − 1)),  (2.15)

can reflect different degrees of mutual dependence (tendency of alignments and formation of voting blocks) of players for different choices of a, b. Player i's vote X_i in the model (2.15) is described by the result of the i-th drawing in the well known Polya-urn model, which starts with a white and b black balls and adds a ball of the same color as the one drawn in successive random drawings. For any voting game G with a Polya-urn prior F_{a,b}, denote the associated influence indices of power/agreement by writing v_i = v_i(G: a, b), etc. Notice that Straffin's original homogeneity assumption corresponds to the prior F_{1,1}. Using v_i(G: a, b) = ∫₀¹ h_i(p) dF_{a,b}(p) and (2.14), Ramamurthy and Parthasarathy (1984) have shown:

v_i(G: 1, 1) = φ_i,

σ_i(G: a, b) = [ab/((a + b)(a + b + 1))] v_i(G: a + 1, b + 1),

and, in the framework of the compound voting game G* in (2.13),

v_i(G: m + 1, m + 1) = (2m + 1) v_i(G*: 1, 1),  i ∈ N,  (2.16)

extending the corresponding results of Straffin (1978), which can be recovered from the above by setting a = b = m = 1. The second assertion above shows that the apparently distinct influence notions of 'agreement' and 'power' are not unrelated: one can capture either one from the other by modifying the degree of dependence among the voters, as modeled by (a, b), to (a + 1, b + 1) or (a − 1, b − 1) as may be appropriate. The first assertion states the equivalence of the Shapley-Shubik index with voting importance under the uniform prior (Straffin's power index), while the third assertion shows a relationship between voting importances in the compound game in (2.13) and the corresponding constituency game under appropriate choice of voter-dependence in the two games. Notice v_i(G: m + 1, m + 1) → β_i, the Banzhaff power-index in the constituency game, since the case of players voting yes or no independently with equal odds (p = ½) can be obtained by letting m → ∞ in the prior F_{m+1, m+1}. Hence by (2.16), in the composite game G* with (2m + 1) players,

(2m + 1) v_i(G*: 1, 1) → β_i  as m → ∞,  i ∈ N,

i.e., Straffin's power-index in the compound game G*, multiplied by the number of players, approaches the Banzhaff power index (structural importance) in the constituency game G = (N, W). The priors F_{a,b}, under the general homogeneity hypothesis, reflect progressively less and less voter interdependence with increasing (a, b); in this sense a = b = 1, the minimal values for a Polya-urn, models the maximum possible such dependence under Straffin's homogeneity. To emphasize the conceptual difference as well as similarity of the Shapley-Shubik and Banzhaff indices of power, we may note that they are the two extreme cases of the voting importance v_i (viz. (2.9)) corresponding to a = b = 1 and the limiting case a = b → ∞. It is interesting to contrast the probability interpretations of the Shapley-Shubik and Banzhaff power indices. A player i ∈ N is crucial if, given the others' votes, his vote makes the difference between winning or losing the proposition in the committee. While the Shapley-Shubik index φ_i in (2.2) is the probability that i ∈ N is crucial under Straffin's homogeneity (players' votes are conditionally i.i.d. given p), the Banzhaff index β_i in (2.3) is the probability that i is crucial when the players choose 'yes'-voting probabilities p_i, i ∈ N, independently and the p_i, i ∈ N, are uniformly distributed. The probability of individual-group agreement under this independence assumption is β_i·(1) + (1 − β_i)·(½) = ½(1 + β_i). The right hand side can be used as an agreement index. These results are due to Straffin (1978).

2.5. While we have argued that several voting game concepts and results are variants of system reliability ideas in a different guise, others, in particular the general homogeneity assumption and its implications, may contain important lessons for reliability theory.
For example, in systems in which the status of some or all components may not be directly observable except via perfect or highly reliable monitors (such as hazardous components in a nuclear installation), the agreement indices can serve as alternative or surrogate indices of reliability importance of inaccessible components. The general homogeneity assumption in system reliability would amount to considering coherent structures of exchangeable components, a strengthening of the concept of associatedness as a measure of component dependence; an approach which we believe has not been fully exploited and which should lead to more refined results than under associatedness of components alone.
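The Polya-urn description of the priors F_{a,b} in Section 2.4 is easy to verify in exact arithmetic: the draw-by-draw urn probabilities reproduce (2.15), depend only on the number of yes-votes (exchangeability), and reduce to Straffin's uniform-prior formula (2.8) at a = b = 1. A sketch of our own:

```python
from fractions import Fraction
from math import factorial

def rising(x, k):
    # rising factorial x^(k) = x (x+1) ... (x+k-1)
    out = Fraction(1)
    for j in range(k):
        out *= x + j
    return out

def pattern_prob(a, b, n, k):
    # (2.15): probability of any fixed pattern with k yes-votes under F_{a,b}
    return rising(a, k) * rising(b, n - k) / rising(a + b, n)

a, b = Fraction(2), Fraction(3)
# urn with 2 white (yes), 3 black (no): P(y,y,n) and P(n,y,y) draw by draw
p_yyn = Fraction(2, 5) * Fraction(3, 6) * Fraction(3, 7)
p_nyy = Fraction(3, 5) * Fraction(2, 6) * Fraction(3, 7)
assert p_yyn == p_nyy == pattern_prob(a, b, 3, 2)   # order does not matter

# a = b = 1 is Straffin's homogeneity: k!(n-k)!/(n+1)! as in (2.8)
n, k = 5, 2
assert pattern_prob(Fraction(1), Fraction(1), n, k) == \
    Fraction(factorial(k) * factorial(n - k), factorial(n + 1))
```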

3. 'Inequality' of distribution of wealth

3.1. One of the chief concerns of development economists is the measurement of inequality of income or other economic variables distributed over a population, reflecting the degree of disparity in ownership of wealth among its members. The usual tool kit used by economists to measure such inequality of distribution is the well known Lorenz curve and the Gini index for the relevant distribution of income or other similar variables, traditionally assumed to follow a log-normal distribution, for which there is substantial empirical evidence and some theoretical argument. Some studies however have questioned the universality of the lognormal assumption; see e.g. Salem and Mount (1974), MacDonald and Ransom (1979). Mukherjee (1967) has considered some stochastic models leading to gamma distributions for wealth variables such as landholding. Bhattacharjee and Krishnaji (1985) have considered a model for the landholding process across generations, allowing for acquisition and disposal of land in each generation and where ownership is inherited, to argue that the equilibrium distribution of landholding, when it exists, must be NWU ('new worse than used') in the sense of reliability theory, i.e., the excess residual holding X − t | X > t over any threshold t stochastically dominates the original landholding variable X in the population. The NWU property is a fairly picturesque description of the relative abundance of 'rich' landowners (those holding X > t) compared to the total population of landowners across the entire size scale. In practice, even stronger evidence of disparity has been found. In an attempt to empirically model the distribution of landholdings in India, it has been found (Bhattacharjee and Krishnaji, 1985) that either the log-gamma or/and the DFR gamma laws provide a better approximation to the landholding data for each state

Table 3
Landholding in the State of W. Bengal, India (1961-1962) and model estimates

Landholding size (acres)   NSS    Lognormal   DFR gamma   Loggamma on (1, ∞)
0-1                        1896   2285        1832        -
1-5                        1716   1350        1745        1794
5-10                       482    333         515         422
10-20                      164    189         165         132
>20                        39     138         40          52


in India based on National Sample Survey (NSS) figures. Table 3 is typical of the relatively better approximations provided by the gamma and the log-gamma on (1, ∞) relative to the log-normal. While the log-gamma is known to have an eventually decreasing failure rate, the estimated shape parameters of the gammas were all less than one, typically around ½, for every state, and hence all had decreasing failure rates. For landholdings, the NWU argument and the empirical DFR evidence above (everywhere with gammas, or in the long range as with the log-gamma) are suggestive of the possibility of exploiting reliability ideas. If X ≥ 0 is the amount of wealth, such as land, owned, with distribution F, it is then natural to invoke the appropriate life-distribution concepts for the holding distribution F in an attempt to model the degree of inequality present in the pattern of ownership of wealth. The residual holding X − t | X > t in excess of t, with distribution F_t(x) = 1 − F̄(t + x)/F̄(t), and the mean residual holding g(t) := E(X − t | X > t) correspond respectively to the notions of the residual life and the mean residual life in reliability theory. In particular the extent of wealth which the 'rich' command is described by the behavior of g(t) for large values of t. More generally, the nature of F_t and the excess average holding g(t) over an affluence threshold t, as functions of the threshold, provide a more detailed description of the pattern of ownership across different levels of affluence in the population. Using the above interpretations of F_t and g(t), the notion of skew and heavy tailed distributions of wealth as being symptomatic of the social disparity of ownership can be captured in fairly picturesque ways, with varying degrees of strength, by the different anti-aging classes (DFR, IMRL, NWU, NWUE) of 'life distributions' well known in reliability theory.
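The mean residual holding g(t) is straightforward to estimate from holding data. A small sketch of our own, with simulated exponential data, for which g(t) is constant by memorylessness (so the index I₁ = g*/μ introduced below equals 1):

```python
import random

def mean_residual(data, t):
    # g(t) = E(X - t | X > t), estimated from a sample of holdings
    excess = [x - t for x in data if x > t]
    return sum(excess) / len(excess) if excess else float("nan")

random.seed(1)
mu = 2.0
data = [random.expovariate(1 / mu) for _ in range(200_000)]
# memorylessness of the exponential: g(t) is flat at mu for every threshold t
for t in (0.0, 1.0, 3.0):
    assert abs(mean_residual(data, t) - mu) < 0.1
```

For real landholding data one would plot t ↦ g(t); an increasing estimate is the IMRL signature of disparity described below.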
For example, a holding distribution F is DFR (decreasing failure rate: F_t stochastically increasing in t) if the proportion of the progressively 'rich' with residual holding in excess of any given amount increases with the level of affluence. The other weaker anti-aging hypotheses, IMRL (increasing mean residual life: g(t)↑), NWU (new worse than used: F_t ≥_st F, all t) and NWUE (new worse than used in expectation: g(t) ≥ g(0+)), can be similarly interpreted as weaker descriptions of disparity. Motivated by these considerations, Bhattacharjee and Krishnaji (1985) have suggested using

I₁ = g*/μ,  where g* = lim_{t→∞} g(t), μ = g(0+),

I₂ = lim_{t→∞} E(X/t | X > t) = 1 + lim_{t→∞} g(t)/t,  (3.1)

when they exist, as indices of inequality in the distribution of wealth. They also consider a related measure I₀ = g* − μ = μ(I₁ − 1), which is a variant of I₁ but is not dimension free as I₁, I₂ are. The assumption that the limits in (3.1) exist is usually not a real limitation in practice. In particular the existence of g* ≤ ∞ is free under the IMRL and DFR assumptions, with g* finite for reasonably nice subfamilies such as the DFR gammas. More generally, the holding distributions for which g* ≤ ∞ (g* < ∞ respectively) exists form the family of 'age-smooth' life distributions, which are those F for which the residual-life hazard function −ln F̄_t(x) converges in [0, ∞] ((0, ∞] respectively) for each x as t → ∞ (Bhattacharjee, 1986). I₁ and I₂ are indicators of aggregate inequality of the distribution of wealth in two different senses: I₁ measures the relative preponderance of the wealth of the super-rich, while I₂ indicates in a sense how rich they are. The traditional index of aggregate inequality, on the other hand, as measured by the classical Gini-index (Lorenz measure) G, can be expressed as

G = P(Y > X) − P(Y ≤ X) = 1 − 2 ∫₀^∞ F₁(x) dF(x),  (3.2)

where X is the amount of wealth with holding distribution F and Y has the so-called 'share distribution' F₁(x) := μ⁻¹ ∫₀^x t dF(t), the share of the population below x. A somewhat pleasantly surprising but not fully understood feature of the three indices I₁, I₂ and G is that they turn out to be monotone increasing in the coefficient of variation for many holding distributions F. Such is the case with G under log-normal, I₁ under gamma and I₂ under log-gamma (Bhattacharjee and Krishnaji, 1985). Note also that whenever the holding distribution is anti-aging in the DFR, IMRL, NWU or NWUE sense, the coefficient of variation (c.v.) is at least one (Barlow and Proschan, 1975); a skewness feature aptly descriptive of the disproportionate share of the rich. Recently the author has considered other inequality indices which share this monotonicity in c.v. under weak anti-aging hypotheses and has re-examined the appropriateness of I₁, I₂ and G as measures of aggregate inequality to show (Bhattacharjee, 1986a):

(i) The non-trivial case 1 < I₂ < ∞ implies I₁ = ∞ necessarily, and then

I₂ = (1 + η²) lim_{t→∞} I₁^{F₁}(t)/I₁(t),  (3.3)

where η is the coefficient of variation of the holding distribution F, I₁(t) = g(t)/μ = ∫_t^∞ F̄(u) du / (μ F̄(t)) → I₁ = ∞, and I₁^{F₁}(t) is the inequality function I₁(t) computed for the share distribution F₁ associated with F.


(ii) The ratio of the hazard functions of the holding and share distributions converges to I₂:

I₂ = lim_{t→∞} ln(1 − F(t))/ln(1 − F₁(t)).  (3.4)

Clearly I₁ ≥ 1 if the holding distribution F is NWUE, with equality iff F is exponential. Similarly, by (3.1), I₂ ≥ 1 with equality iff g(t) = o(t), or an equivalent condition on hazard functions via (3.4). The question when I₁ and I₂ are finite, so as to be meaningful for purposes of comparison across populations, has the following answers (Bhattacharjee, 1986a):

(iii) I₁ < ∞ ⟺ 1 − F(ln x) is (−ρ)-varying for some ρ ∈ (0, ∞]. F strictly NWUE ⟹ I₁ > 1.

(iv) For any holding distribution F, 1 ≤ I₂ ≤ ∞. The different possibilities are characterized by:

(a) If F is DFR, then I₂ = 1 ⟺ the residual holding scaled by its mean converges to exponential, i.e., P(X > t + x g(t) | X > t) → e^{−x}. This condition is necessary for I₂ = 1 even without the DFR hypothesis.

(b) 1 < I₂ < ∞ ⟺ the 'excess holding factor' over an affluence threshold t converges to a Pareto distribution:

P(flX>

with ~ = & l ( & -

t)~x

-~,

1).

(c) I₂ = ∞ ⟺ P(X - t > x | X > t) ~ t/(t + x) as t → ∞. Notice that the distribution on the right hand side is DFR with infinite mean.

The n.s.c. in (iii) is the condition of generalized regular variation (Feller, 1966; Seneta, 1976): a real valued function h(x) on the half-line is regularly-varying if h(xy)/h(y) converges as y → ∞, and then h(xy)/h(y) → x^ρ for some ρ ∈ [-∞, ∞]. With an obvious interpretation of x^ρ when ρ = ±∞, such an h(x) is called ρ-varying.

3.2. The Lorenz curve and TTT-transform. While I₁, I₂ and the classical Gini index are all aggregate measures of inequality, it is also useful to have a more dynamic measure of inequality which will describe the variation of the disparity of ownership with changing levels of affluence. This is classically modeled by the Lorenz curve

L(p) = μ⁻¹ ∫₀^p F⁻¹(u) du,   0 ≤ p ≤ 1,

where μ is the average holding and F⁻¹(u) = inf{t: F(t) ≥ u}. L(p) measures the proportion of total wealth owned by the poorest 100p% of the population, and is thus a variant of the share distribution F₁ in (3.2), namely L(F(t)) = F₁(t). As remarked earlier, the ratio g(t)/μ of the mean residual holding to the average holding can also serve such a purpose. The Lorenz curve L and its inverse L⁻¹ are both distribution functions on the unit interval. The relevance of reliability ideas for modeling inequality, and relationships of the Lorenz curve to some well known functionals of life distributions, was first indicated by Chandra and Singpurwalla (1981) and further studied by Klefsjö (1984). If

W(p) := μ⁻¹ ∫₀^{F⁻¹(p)} F̄(t) dt

is the scaled total time on test (TTT) transform of the holding distribution F, viewed as a life distribution with mean μ, and V := ∫₀¹ W(p) dp is the cumulative TTT-transform, then

L(p) = W(p) - (1 - p) μ⁻¹ F⁻¹(p),   V = 1 - G

(Chandra and Singpurwalla, 1981), where the Gini-index

G = 1 - 2 ∫₀^∞ F₁(t) dF(t) = 1 - 2 ∫₀¹ L(p) dp = 2 ∫₀¹ {p - L(p)} dp    (3.5)

is scale-equivalent to the area bounded by the diagonal and the Lorenz curve, as is well known. Based on a random sample with order statistics X_(1) ≤ X_(2) ≤ ⋯ ≤ X_(n) from F, the estimated sample Lorenz curve L_n and the Gini-statistic

G_n := Σ_{j=1}^{n-1} j(n - j)(X_(j+1) - X_(j)) / {(n - 1) Σ_{j=1}^{n} X_(j)}

are similarly related to the total time on test statistic W_n and its cumulative version V_n = (n - 1)⁻¹ Σ_{j=1}^{n-1} W_n(j/n):

G_n = 1 - V_n,   L_n(j/n) = W_n(j/n) - ((n - j)/n) X_(j)/X̄_n,   j = 1, …, n.
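These sample quantities and the identities linking them can be checked numerically. The sketch below is illustrative: it uses the standard total time on test statistic W_n(j/n) = Σ_{i≤j} (n - i + 1)(X_(i) - X_(i-1))/(n X̄_n), with X_(0) := 0, and the cumulative version V_n = (n - 1)⁻¹ Σ_{j=1}^{n-1} W_n(j/n); the simulated exponential sample is an arbitrary choice.

```python
import numpy as np

rng = np.random.default_rng(7)
x = np.sort(rng.exponential(scale=2.0, size=500))   # order statistics X_(1) <= ... <= X_(n)
n, xbar = x.size, x.mean()

# scaled TTT statistic: W_n(j/n) = sum_{i<=j} (n-i+1)(X_(i) - X_(i-1)) / (n*xbar)
spacings = np.diff(x, prepend=0.0)
W = np.cumsum((n - np.arange(n)) * spacings) / (n * xbar)

# sample Lorenz curve: L_n(j/n) = (X_(1) + ... + X_(j)) / (n*xbar)
L = np.cumsum(x) / (n * xbar)

# Gini statistic computed from the order statistics
j = np.arange(1, n)
G = (j * (n - j) * np.diff(x)).sum() / ((n - 1) * x.sum())

# identities: G_n = 1 - V_n and L_n(j/n) = W_n(j/n) - ((n-j)/n) X_(j)/xbar
V = W[: n - 1].mean()                     # cumulative TTT statistic V_n
jj = np.arange(1, n + 1)
L_from_W = W - (n - jj) * x / (n * xbar)
print(abs(G - (1 - V)), np.abs(L - L_from_W).max())   # both ~ 0
```

Both differences vanish up to floating point error, mirroring the population relations L(p) = W(p) - (1 - p)μ⁻¹F⁻¹(p) and V = 1 - G.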

Chandra and Singpurwalla (1981), Klefsjö (1984) and Taillie (1981) have used partial orderings of life distributions to compare the Lorenz curves of holding distributions which are so ordered. For the partial ordering notions between two life distributions F and H, namely (i) F⁻¹H convex, (ii) F⁻¹H star-shaped (F⁻¹H(x)/x nondecreasing in x > 0), (iii) F⁻¹H superadditive (F⁻¹H(x + y) ≥ F⁻¹H(x) + F⁻¹H(y), all x, y > 0, with equality at x = 0), and (iv) μ_F⁻¹ ∫_t^∞ F̄(u) du ≥ μ_H⁻¹ ∫_t^∞ H̄(u) du, all t ≥ 0; they show that each of these orderings implies a corresponding pointwise domination of the Lorenz curves,

L_F(p) ≤ L_H(p),   0 ≤ p ≤ 1    (3.6)

(in case (i), equivalently stated through the inverse Lorenz curves, L_F⁻¹(p) ≥ L_H⁻¹(p)). In particular, taking H to be exponential, the distribution F in (i) above corresponds to DFR, (ii) to DFRA and (iv) to HNWUE (Klefsjö, 1982). Reversing the roles of H and F leads to the dual aging classes. (3.6) implies that

L(p) ≤ p + (1 - p) ln(1 - p),    (3.7)

the Lorenz curve of the exponential, whenever the holding distribution is HNWUE with a finite mean. This bound obviously remains valid for the smaller classes of NWU and DFR distributions, for which we have earlier found some theoretical and empirical evidence, respectively, as plausible models of landholding distributions. In a more general vein, Klefsjö (1984) remarks that, in the spirit of (3.5), contrasting the Lorenz curve against the uniform distribution on (0, 1), the quantities

J_k := (k + 1) ∫₀¹ p^{k-1} {p - L(p)} dp,   k ≥ 1,

L_k := k(k - 1) ∫₀¹ (1 - p)^{k-2} {p - L(p)} dp,   k ≥ 2,    (3.8)

can be used as generalized indices of inequality. The Gini-index is the special case G = J₁ = L₂. Notice, in view of (3.7), we have J_k ≥ 0, L_k ≥ 0 for all anti-aging holding distributions F or their 'aging' duals; and J_k = L_k = 0 only in the egalitarian case L(p) = p, where everybody owns the same amount of wealth (F is degenerate). By expressing J_k in terms of the mean lives of parallel systems, Klefsjö (1984) implicitly notes that J_k can be interpreted as the excess over k - 1 of the ratio of the mean life of a parallel system of (k + 1) i.i.d. components with life distribution F to that of a similar system with exponential lives. Similarly, we note that

L_k = (k - 1) ∫₀¹ (1 - u)^{k-2} (1 - W(u)) du = 1 - μ⁻¹ ∫₀^∞ F̄^k(t) dt

measures the relative advantage of a component with life F against a series system of k such i.i.d. components, as measured by the difference of the corresponding mean lives as a fraction of the component mean life. These interpretations bring into sharper focus the relationship of the notion of 'inequality of distribution' in economics to measures of system effectiveness in reliability.
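For the exponential holding distribution, whose Lorenz curve is L(p) = p + (1 - p) ln(1 - p), the series-system identity gives L_k = 1 - 1/k; a direct quadrature of definition (3.8) can be used to check this (the midpoint grid size is an arbitrary choice):

```python
import numpy as np

def lorenz_exponential(p):
    # Lorenz curve of the exponential distribution: L(p) = p + (1 - p) ln(1 - p)
    return p + (1.0 - p) * np.log1p(-p)

def L_k(k, m=1_000_000):
    # L_k := k(k-1) * int_0^1 (1-p)^(k-2) {p - L(p)} dp, eq. (3.8), by midpoint rule
    dp = 1.0 / m
    p = (np.arange(m) + 0.5) * dp
    integrand = (1.0 - p) ** (k - 2) * (p - lorenz_exponential(p))
    return k * (k - 1) * integrand.sum() * dp

# series-system identity: L_k = 1 - mu^{-1} int F̄^k dt = 1 - 1/k for the exponential
for k in (2, 3, 5):
    print(k, L_k(k), 1.0 - 1.0 / k)
```

The k = 2 case recovers the familiar fact that the Gini index of the exponential distribution is 1/2.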

3.3. Applications to statistical analysis of lifelengths. The reliability approach to modeling 'inequality of distributions' suggests applications to reliability inference. Using weak convergence of the empirical Lorenz process {L_n(t): 0 ≤ t ≤ 1},

L_n(t) := √n {L_n(j/n) - L(t)}   if (j - 1)/n < t ≤ j/n,  j = 1, …, n,
       := 0                      if t = 0,

to a process related to the Brownian bridge (Goldie, 1977), it is thus possible to construct a test of exponentiality, a theme of central interest in reliability and life testing. However, the difficulty of evaluating the exact distribution of L_n(t) to determine the critical points of the goodness-of-fit test based on the sample Lorenz curve has in practice required simulation even in large samples (Gail and Gastwirth, 1978). In contrast, the critical cut-off values of the corresponding test based on the sampled TTT-process W_n(t) := √n {W_n(j/n) - W(t)}, 0 ≤ t ≤ 1 (Barlow and Campo, 1975), are the usual Kolmogorov-Smirnov statistics; since, under the null hypothesis of exponentiality (W(t) = t), W_n(t) converges weakly to the Brownian bridge itself. If the alternatives belong to a more restricted family, such as the well known non-parametric life distribution classes in reliability, then there are other possibilities. Klefsjö (1983) has used a variant of the aggregate inequality index L_k in (3.8) to construct a test of exponentiality against HNBUE (HNWUE) alternatives. His test statistic is based on an estimate of B_k := kL_k - (k - 1), noting B_k ≤ (≥) 0 if F is HNBUE (HNWUE), with B_k = 0 only if F is exponential. Estimation and tests of monotonicity and of a turning point of the mean residual life function g(t) have been considered by Hollander and Proschan (1975) and Guess and Proschan (1983). Our inequality indices I₁ and I₂ suggest a related open problem: estimation and tests for I₁, I₂, which are parameters descriptive of the tail behavior of the mean residual life. The question of estimating I₁ is well defined within the family of age-smooth life distributions (Bhattacharjee, 1986). On the other hand, the domain of attraction results (Bhattacharjee, 1986a) described earlier, which characterize the possible values of I₂, imply that estimating I₂ and testing I₂ = 1 against 1 < I₂ < ∞ are problems of independent interest for reliability theory.
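A sketch of such a test statistic replaces F by the empirical distribution: L̂_k = 1 - X̄⁻¹ Σ_j ((n - j + 1)/n)^k (X_(j) - X_(j-1)), giving B̂_k = k L̂_k - (k - 1). This is only illustrative; the exact standardization and critical values in Klefsjö (1983) may differ from this version.

```python
import numpy as np

def B_k(sample, k=2):
    # Estimate B_k = k*L_k - (k-1) with L_k = 1 - mu^{-1} int_0^inf Fbar^k dt,
    # replacing F by the empirical distribution of the sample (illustrative sketch).
    x = np.sort(np.asarray(sample, dtype=float))
    n = x.size
    spacings = np.diff(x, prepend=0.0)
    surv = (n - np.arange(n)) / n            # empirical survival on [X_(j-1), X_(j))
    integral = np.sum(surv ** k * spacings)  # int_0^inf Fbar_n^k dt
    L_hat = 1.0 - integral / x.mean()
    return k * L_hat - (k - 1)

rng = np.random.default_rng(11)
n = 4000
b_exp = B_k(rng.exponential(1.0, n))              # ~ 0 under exponentiality
b_unif = B_k(rng.uniform(0.0, 1.0, n))            # < 0 (uniform is IFR, hence HNBUE)
rates = np.where(rng.random(n) < 0.5, 1.0, 0.1)
b_mix = B_k(rng.exponential(1.0 / rates, n))      # > 0 (exponential mixture is DFR, hence HNWUE)
print(b_exp, b_unif, b_mix)
```

With these choices, the statistic separates the three cases in the expected directions; a formal test would compare a standardized version of B̂_k against its null distribution.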


4. R & D rivalry and the economics of innovation

4.1. Innovations and accompanying technological breakthroughs have changed the lot of mankind throughout history, and noticeably more so in the present century at an accelerating pace. Since technological change affects market structure through altering the means of production, economists began to be interested in the subject of technical advance around the fifties. Although there are some earlier references to the economic aspects of technological advance (Taussig, 1915; Hicks, 1932), the stage for serious inquiry on the economics of such advance was set by Schumpeter (1961, 1964, 1975), who emphasized the role of innovation as an economic activity. Since then, the recognition of technical advance as a major source of economic growth has been the subject of many studies, mostly empirical. These studies deal with empirical relationships of industrial innovations to firm size and concentration as indicators of market structure, the 'technology-push' and 'demand-pull' factors (Arrow, 1962) as incentives for innovation, and such other relevant variables. Collectively they point to the need for a conceptual framework, and recently an economic theory of technical advance has begun to emerge (Kamien and Schwartz, 1982). In this view, the economic agents are firms or entrepreneurs, and an act of product- or process-innovation straddles all activities from basic research through invention to development, production, distribution and collection of consequent revenues, against the backdrop of industrial rivalry in the competition to gain market supremacy. Schumpeter recognized that acts of invention and innovational entrepreneurship are distinct, as are the corresponding risks; and it is only the latter which can lead to the diffusion of benefits of invention to its ultimate consumers.
Innovation and entrepreneurship in this framework are viewed as a race to be the first, with the incentive of commanding extraordinary profits at least until imitators appear, when such monopoly profits will begin to be eroded. The 'Schumpeterian hypothesis', that the opportunity to realize monopoly profits spurs invention and that the presence of some monopoly power has a similar effect, the latter also stressed by Galbraith (1952), forms the basis of a modern economic theory of technical advance. The accent is on competition through innovation rather than through price alone, and is thus contrary to the traditional tenets of the western economic doctrine of 'perfect competition', which would eliminate any excess profit of an innovation by immediate imitation.

4.2. The presence of identified or potential rivals who are in the race to be the first to innovate constitutes the major source of uncertainty for an entrepreneur. It is this aspect of innovational (R & D) rivalry, on which reliability ideas can be brought to bear, that is of interest to us. Even within the context of such applications, there are a host of issues in modeling the economics of innovation which can be so addressed within the Schumpeterian framework. Kamien and Schwartz (1982) provide a definitive account of contemporary research on the economics of technical advance, where reliability researchers will recognize the potential to exploit reliability ideas through modeling the uncertainty associated with innovational rivalry and the possible duration of monopoly between successful innovation and rivals' imitation. These ideas do not appear to have been explicitly recognized and are only implicit in Kamien and Schwartz (1982). We will consider one such model to focus on the relevance of reliability concepts in modeling the economics of technical advance, which may lead to deeper insights into the role of innovational rivalry as a determinant of technological progress.

In this simplified model of innovation as an economic activity under the Schumpeter scenario, our entrepreneur or firm has either only one product (economic 'good') or none at all (breaking in as a newcomer), and is competing against rivals to develop an innovation. We assume there is no essential resource constraint and no major uncertainty important enough to warrant stochastic modelling of the entrepreneur's time to complete development. Any desired completion time τ can be achieved by spending a required amount C(τ), representing the net present value of the cost stream incurred to complete development at time τ. Although it is usual to assume that C(τ) > 0 is convex decreasing, for our purposes the latter assumption is unnecessary, and only assuming C(0) sufficiently large to prevent instantaneous development will suffice. Assume a market growth rate γ; γ >, = or < 0 according as the market is growing, stationary or declining. The development process is assumed to be contractual in the sense that the innovation will be seen through to its completion by the entrepreneur as well as the rivals, either as a pioneer or as an imitator. The entrepreneur has only incomplete knowledge about the rivals' introduction time T, reflected by its d.f. H(t) = P(T ≤ t), about which more will be said later.
The current rate of the entrepreneur's return r(t; τ, T) at time t depends not only on when the innovation is introduced in the market but also on whether our entrepreneur is a winner, succeeding first, or an imitator of the rivals. Let this rate be r₀ (receipts on the current good) until introduction of the innovation changes it to r₁ or p₀, according as some rival or the entrepreneur succeeds first. These rates remain in effect until the moment both the innovating pioneer and the imitator appear. Once the entrepreneur and the rivals are both in the market, the former's rate of return changes again. The current value of its contribution to the total return is a function P(τ, T), the current capitalized value of the stream of future receipts, which depends on τ and T, typically through |τ - T|: the lag between innovation and imitation. The structure of P also depends on whether the rivals win (τ ≥ T; correspondingly P =: P₁(·), say) or imitate (T > τ, when P =: P₀(·)). Accordingly,

P(τ, T) = P₀(T - τ)   if τ < T,
        = P₁(τ - T)   if τ ≥ T;


and the flow of receipts can be schematically described as below:

τ < T (rival imitates):  rate r₀ on [0, τ), rate p₀ on [τ, T), then the capitalized value P₀(T - τ) at T;

τ ≥ T (rival precedes):  rate r₀ on [0, T), rate r₁ on [T, τ), then the capitalized value P₁(τ - T) at τ.

The expected net present value of the entrepreneur's returns, with a market interest rate i, as a consequence of the decision to choose an introduction time τ, is

U(τ) = ∫₀^∞ E{e^{-(i-γ)t} r(t; τ, T)} dt + E{e^{-(i-γ) max(τ,T)} P(τ, T)}

     = ∫₀^τ e^{-(i-γ)t} {r₀ H̄(t) + r₁ H(t)} dt + p₀ ∫_τ^∞ e^{-(i-γ)t} H̄(t) dt

       + e^{-(i-γ)τ} ∫₀^τ P₁(τ - t) dH(t) + ∫_τ^∞ e^{-(i-γ)t} P₀(t - τ) dH(t).    (4.1)

The optimal introduction time τ* is of course the solution which maximizes the expected value of profit

V(τ) = U(τ) - C(τ).    (4.2)

While τ* = 0 can be ruled out by taking C(0) to be sufficiently large, it is possible to have τ* = ∞ (best not to undertake development at all), depending on the relative values of the economic parameters. In the remaining cases there is a finite economically best introduction time. It is usual, but not necessary, to have p₀ ≥ r₀ ≥ r₁ and P₀' ≥ 0, P₁' ≤ 0, which are easily interpreted: (i) rival precedence, should it occur, does not increase the rate of return from the old good, which further increases if the entrepreneur succeeds first; (ii) in the post-innovation-cum-imitation period, the greater is the lag of rival entry if we succeed first (the greater is the lag in our following, if the rivals succeed first), the greater (the smaller) is our return from the remaining market. Various special cases may occur within these constraints, e.g., rivals' early success may make our current good obsolete (r₁ = 0); or the entrepreneur may be a new entrant with no current good to be replaced (r₀ = r₁ = 0). Sensitivity of the optimal introduction time to these and other parameters in the model is of obvious economic interest and is easily derived (Kamien and Schwartz, 1982).

4.3. Intensity of rivalry as a reliability idea and its implications. What interests us more is how the speed of development, as reflected by the economic τ*, is affected by the extent of innovational rivalry, which is built into the rivals' introduction time distribution H. Kamien and Schwartz (1982) postulate

H̄(t) := P(T > t) = e^{-hΛ(t)}

and propose h > 0 as a degree of innovational hazard. To avoid confusion with the notion of hazard in reliability theory, we call h the intensity of innovational rivalry. Setting F(t) = 1 - e^{-Λ(t)}, it is clear that

H̄(t) = F̄^h(t),    (4.3)

i.e., the rival introduction time d.f. H belongs to a family of distributions with proportional hazards, which are of considerable interest in reliability. We may think of F as the distribution of rivals' development time under unit rivalry (h = 1), for judging how fast the rivals may complete development as indicated by H. Since the hazard function Λ_H(t) := -ln H̄(t) is a measure of the time-varying innovational risk of rival pre-emption, the proportional hazards hypothesis Λ_H(t) = hΛ(t) in (4.3) says that the effects of time and rivalry on the entrepreneur's innovational hazards are separable and multiplicative. If F has a density and correspondingly a hazard rate (i.e., 'failure rate') λ(t), then so does H, with failure rate hλ(t). It is the innovational rate of hazard at time t from the viewpoint of our entrepreneur; and by the standard reliability theoretic interpretation of failure rates, the conditional probability of rivals' completion soon after t, given completion has not occurred within time t, is

P(T ≤ t + δ | T > t) = hδλ(t) + o(δ).
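Under (4.3) this conditional completion probability, and its scaling with the intensity h, can be verified numerically; the Weibull benchmark F̄(t) = e^{-t^α} below is an illustrative assumption, not a choice made in the text.

```python
import math

alpha = 2.0   # assumed Weibull benchmark: Fbar(t) = exp(-t**alpha), Lambda(t) = t**alpha

def H_bar(t, h):
    # rival-entry survival under the proportional hazards model (4.3): Hbar = Fbar**h
    return math.exp(-h * t ** alpha)

def cond_prob(t, delta, h):
    # P(T <= t + delta | T > t), approximately h * delta * lambda(t) for small delta
    return 1.0 - H_bar(t + delta, h) / H_bar(t, h)

t, delta = 1.0, 1e-4
lam = alpha * t ** (alpha - 1)                       # hazard rate lambda(t) of F
print(cond_prob(t, delta, 1.0), delta * lam)         # ~ h*delta*lambda(t) with h = 1
print(cond_prob(t, delta, 3.0) / cond_prob(t, delta, 1.0))   # ~ 3: scaling in h
```

Tripling h essentially triples the conditional probability of imminent rival completion, as the first-order expansion asserts.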

As the intensity of rivalry increases by a factor c, from h to ch, this probability, for each fixed t and small δ, also increases essentially by the same factor c.

To examine the effect of the intensity of rivalry on the speed of development, assume that having imitators is preferable to being one (P₀ > P₁) and, as a simplifying assumption, that the corresponding rewards are independent of the 'innovation-imitation lag' (P₀' = P₁' = 0). By (4.1) and (4.2), the optimal introduction time τ* is then the implicit solution of

∂V/∂τ = e^{-(i-γ)τ} [{r₀ - p₀ + h(P₁ - P₀)λ(τ)} H̄(τ) + {r₁ - (i - γ)P₁} H(τ)] - C'(τ) = 0,    (4.4)


satisfying the second derivative condition for a maximum at τ*. (4.4) defines τ* = τ*(h) implicitly as a function of the rivalry intensity. Kamien and Schwartz (1982) show that if

λ(t) ↑  and  λ(t)/Λ(t) ↓  in t,    (4.5)

then either (i) τ*(h) ↑, or (ii) τ*(h) is initially ↓ and then ↑ in h. The crux of their argument is the following. If τ₀(h) is implicitly defined by the equation

λ(τ) {1/Λ(τ) - h} = {p₀ - r₀ + r₁ - (i - γ)P₁}/(P₀ - P₁),    (4.6)

i.e., the condition for the left hand side of (4.4) to have a local extremum as a function of h, then τ*(h) is decreasing, stationary or increasing in h according as τ*(h) >, = or < τ₀(h). Accordingly, since (4.5) implies that τ₀(h) is decreasing in h, either τ*(h) behaves according to one of the two possibilities mentioned, or (iii) τ*(h) < τ₀(h) for all h ≥ 0. The last possibility can be ruled out by the continuity of V = V(τ, h) in (4.2), V(0, h) < 0, V(τ*, h) > 0 and the condition P₀ > P₁. Which one of the two possibilities obtains of course depends on the model parameters. In case (i), the optimal introduction time τ*(h) increases with increasing rivalry, and the absence of rivalry (h = 0) yields the smallest such optimal introduction time. The other case (ii), that, depending on the rates of return and other relevant parameters, there may be an intermediate degree of rivalry for which the optimal development is quickest possible, is certainly not obvious a priori and highlights the non-intuitive effects of rivalry on decisions to innovate.
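The behavior of τ*(h) can be explored numerically by maximizing V(τ) = U(τ) - C(τ) on a grid. The parameter values below (rates, constant capitalized values P₀ > P₁ with P₀' = P₁' = 0, cost C(τ) = c₀e^{-aτ}, and the Weibull benchmark Λ(t) = t^α, which satisfies (4.5) for α > 1) are purely illustrative assumptions; this is a sketch of the model, not the authors' computation.

```python
import numpy as np

# assumed illustrative parameters: discount rho = i - gamma, flow rates r0, r1, p0,
# constant capitalized values P0 > P1, and cost C(tau) = c0 * exp(-a * tau)
rho, r0, r1, p0, P0, P1 = 0.05, 1.0, 0.2, 3.0, 30.0, 10.0
c0, a, alpha = 50.0, 1.0, 2.0

t = np.linspace(0.0, 8.0, 4001)   # time grid; exp(-t**2) is negligible beyond ~4
dt = t[1] - t[0]

def V(tau, h):
    Hbar = np.exp(-h * t ** alpha)              # rival-entry survival, eq. (4.3)
    dH = h * alpha * t ** (alpha - 1) * Hbar    # rival-entry density
    disc = np.exp(-rho * t)
    pre = t < tau
    # flow of receipts before and after our introduction at tau, per (4.1)
    flow = np.where(pre, r0 * Hbar + r1 * (1.0 - Hbar), p0 * Hbar)
    U = (disc * flow).sum() * dt
    U += np.exp(-rho * tau) * P1 * (1.0 - np.exp(-h * tau ** alpha))  # rival preceded
    U += P0 * (disc * dH)[~pre].sum() * dt                            # we preceded
    return U - c0 * np.exp(-a * tau)            # V = U - C, eq. (4.2)

taus = np.linspace(0.0, 8.0, 161)
tau_star = {}
for h in (0.5, 1.0, 2.0):
    vals = np.array([V(tau, h) for tau in taus])
    tau_star[h] = taus[vals.argmax()]
print(tau_star)   # a finite, interior optimal introduction time for each intensity h
```

For these parameter values a finite interior τ*(h) exists for each h, with V(0, h) < 0 as required in the argument above; how τ*(h) moves with h depends on the chosen rates, exactly as the text asserts.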

4.4. Further reliability ramifications. From a reliability point of view, Kamien and Schwartz's assumption (4.5) says

F ∈ {IFR} ∩ 𝒞,    (4.7)

and hence so does H, where 𝒞 is the set of life distributions with a log-concave hazard function. The IFR hypothesis is easy to interpret. It says that the composite rivals' residual time to development is stochastically decreasing, so that if they have not succeeded so far, then completion of their development within any additional deadline becomes more and more likely with elapsed time. This reflects the accumulation of efforts positively reinforcing the chances of success in the future. The other condition, that F, and thus H, also has a log-concave hazard function, is less amenable to such interpretation; it essentially restricts the way in which the time-dependent component of the entrepreneur's innovational hazard from competing rivals grows with time t. The proportional hazard model (4.3) can accommodate different configurations of market structure as special cases, an argument clearly in its favor. By (4.3), as h → 0, P(T > t) → 1 for all t > 0, and in the limiting case T is an improper r.v. with all its mass at infinity. Thus h = 0 corresponds to absence of rivalry. Similarly, as h → ∞, P(T > t) → 0 for all t > 0; in the limit the composite rivals' appearance is immediate, and this prevents the possibility of entrepreneurial precedence. If our entrepreneur had a head start with no rivals until a later time when rivals appear with a very large h, then even if our entrepreneur innovates first, his supernormal profits from innovation will very quickly be eliminated by rival imitation, with high probability within a very short time as a consequence of the high rivalry intensity h, which shrinks to instantaneous imitation as h approaches infinity. In this sense the case h = ∞ reflects the traditional economists' dream of 'perfect competition'. Among the remaining, more realistic, possibilities 0 < h < ∞, Barzel (1968) distinguishes between moderate and intense rivalry, the latter corresponding to the situation when the intensity of rivalry exceeds the market growth rate (h > γ). If rivalry is sufficiently intense, no development becomes best (h ≫ γ ⟹ τ*(h) = ∞). In other cases, the intense rivalry and non-rivalrous solutions provide vividly contrasting benchmarks for understanding the innovation process under varying degrees of moderate to intense rivalry.

Our modeling to illustrate the use of reliability ideas has been limited to a relatively simplified situation. It is possible to introduce other variations and features of realism, such as modification of rivals' effort as a result of the entrepreneur's early success, budget constraints, non-contractual development which allows the option of stopping development under rival precedence, and game theoretic formulations which incorporate technical uncertainty.

There is now a substantial literature on these various aspects of innovation as an economic process (DasGupta and Stiglitz, 1980, 1980a; Kamien and Schwartz, 1968, 1971, 1972, 1974, 1975, 1982; Lee and Wilde, 1980; Loury, 1979). It appears to us that there are many questions, interesting from a reliability application viewpoint, which can be profitably asked and would lead to a deeper understanding of the economics of innovation. Even in the context of the present model, which captures the essence of the innovating process under risk of rivalry, there are many such questions. For example, what kind of framework for R & D rivalry and market mechanisms leads to the rival entry model (4.3)? Stochastic modeling of such mechanisms would be of obvious interest. Note that the exponential: H̄(t) = e^{-ht}, λ(t) = 1; the Weibull: H̄(t) = e^{-ht^α}, λ(t) = αt^{α-1}; and the extreme-value distributions: H̄(t) = exp{-h(e^{αt} - 1)}, λ(t) = αe^{αt}, all satisfy (4.3) and (4.7), the latter for α ≥ 1. A related open question is the following. Suppose the rival introduction time satisfies (4.3) but its distribution F under unit rivalry (h = 1) is unknown. Under what conditions, interesting from a reliability point of view with an appropriate interpretation in the context of rivalry, does there exist a finite maximin introduction time τ*(h), and what, if any, is a least favorable distribution F* of time to rival entry? Such a pair (τ*(h), F*), for which

max_τ min_F V(τ, h; F) = min_F max_τ V(τ, h; F) = V(τ*(h), h; F*),

would indicate the entrepreneur's best economic introduction time within any specified regime of rivalry, when he has only incomplete knowledge of the benchmark distribution F. Here V(τ, h; F) is the total expected reward in (4.2) and (4.1) under (4.3). The proportional hazards model (4.3) aggregates all sources of rivalry, from existing firms or potential new entrants. This is actually less of a criticism than it appears, because in the entrepreneur's perception only the distribution of the composite rival entry time matters.

It is possible to introduce technical uncertainty in the model by recognizing that the effort, usually parametrized through cost, required to successfully complete development is also subject to uncertainties (Kamien and Schwartz, 1971). Suppose there are n competitors including our entrepreneur, the rivals are independent, and let G(z) be the probability that any rival completes development with an effort no more than z. If z(t) is the cumulative rival effort up to time t, then the probability that no rival succeeds by time t is {1 - G(z(t))}^{n-1}, i.e., the d.f. of the earliest rival completion time is

P(t) = 1 - {1 - G(z(t))}^{n-1}.

This leads to (4.3) with F(t) = G(z(t)), H = P and intensity h = n - 1, the number of rivals. We note this provides one possible answer to the question of modeling the rivalry described by (4.3). What other alternative mechanisms can also lead to (4.3)? If the effort distribution G has a 'failure rate' (intensity of effort) r(z), then the innovational hazard function and hazard rate are

Λ_H(t) = (n - 1) ∫₀^{z(t)} r(u) du,   λ_H(t) = (n - 1) z'(t) r(z(t)),    (4.8)

which show how technical uncertainty can generate market uncertainty. If our entrepreneur's effort distribution is also G(z) and independent of the rivals, then note that the role of each player in the innovation game is symmetric and each faces the hazard rate (4.8), since from the perspective of each competitor the other (n - 1) rivals are i.i.d. and in series. It would clearly be desirable to remove the i.i.d. assumption, to be more realistic in so far as a rival's effort and spending decisions are often dictated by those of others.

Some of the effects of an innovation may be irreversible. Computers and information processing technology, which have now begun to affect every facet of human life, are clearly a case in point. Are these impacts, or their possible irreversibility, best for the whole society? None of the above formulations can address this issue, a question not in the purview of economists and quantitative modeling alone; nor do they dispute its relevance. What they can and do provide is an understanding of the structure and evolution of the innovating process as a risky enterprise, and it is here that reliability ideas may be able to play a more significant role than hitherto in explaining rivalry and its impacts on the economics of innovation. In turn, the measurable parameters of such models and their consequences can then serve as signposts for an informed debate on the wider questions of social relevance of an innovation.
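The reduction of the technical-uncertainty model to the proportional hazards form (4.3), with h = n - 1, can be checked by simulating the innovation race; the effort distribution G and the common effort schedule z(t) below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 5                                   # competitors; n - 1 = 4 independent rivals
N = 200_000                             # simulated innovation races

# illustrative choices: effort requirement ~ G = Exp(1); effort schedule z(t) = t**2
z = lambda t: t ** 2                    # cumulative effort expended by time t
z_inv = lambda e: np.sqrt(e)            # completion time of a rival needing effort e

efforts = rng.exponential(1.0, size=(N, n - 1))
T = z_inv(efforts).min(axis=1)          # earliest rival completion time

for t0 in (0.5, 1.0, 1.5):
    empirical = (T > t0).mean()
    model = np.exp(-(n - 1) * z(t0))    # Gbar(z(t0))**(n-1) = Fbar**(n-1), h = n - 1
    print(t0, empirical, model)
```

The empirical survival of the earliest rival completion time matches Ḡ(z(t))^{n-1} = F̄^{n-1}(t), i.e., the proportional hazards model with intensity equal to the number of rivals.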

References

Arrow, K. J. (1951). Social Choice and Individual Values. Wiley, New York.
Arrow, K. J. (1962). Economic welfare and the allocation of resources for invention. In: R. R. Nelson, ed., The Rate and Direction of Inventive Activity. Princeton University Press, Princeton, NJ.
Barlow, R. E. and Campo, R. (1975). Total time on test processes and applications to failure data analysis. In: R. E. Barlow, J. Fussell and N. D. Singpurwalla, eds., Reliability and Fault Tree Analysis. SIAM, Philadelphia, PA, 451-481.
Barlow, R. E. and Saboia, J. L. M. (1973). Bounds and inequalities in the rate of population growth. In: F. Proschan and R. J. Serfling, eds., Reliability and Biometry, Statistical Analysis of Lifelengths. SIAM, Philadelphia, PA, 129-162.
Barlow, R. E. and Proschan, F. (1965). Mathematical Theory of Reliability. Wiley, New York.
Barlow, R. E. and Proschan, F. (1975). Statistical Theory of Reliability and Life Testing: Probability Models. Holt, Rinehart and Winston, New York.
Barzel, Y. (1968). Optimal timing of innovation. Review of Economics and Statistics 50, 348-355.
Bergmann, R. and Stoyan, D. (1976). On exponential bounds for the waiting time distribution in GI/G/1. J. Appl. Prob. 13(2), 411-417.
Bhattacharjee, M. C. and Krishnaji, N. (1985). DFR and other heavy tail properties in modeling the distribution of land and some alternative measures of inequality. In: J. K. Ghosh, ed., Statistics: Applications and New Directions. Indian Statistical Institute, Eka Press, Calcutta, 100-115.
Bhattacharjee, M. C. (1986). Tail behaviour of age-smooth failure distributions and applications. In: A. P. Basu, ed., Reliability and Statistical Quality Control. North-Holland, Amsterdam, 69-86.
Bhattacharjee, M. C. (1986a). On Using Reliability Concepts to Model Aggregate Inequality of Distributions. Technical Report, Dept. of Mathematics, University of Arizona, Tucson.
Brams, S. J., Lucas, W. F. and Straffin, P. D., Jr. (eds.) (1978). Political and Related Models. Modules in Applied Mathematics, Vol. 2. Springer, New York.
Chandra, M. and Singpurwalla, N. D. (1981). Relationships between some notions which are common to reliability and economics. Mathematics of Operations Research 6, 113-121.
Daley, D. (ed.) (1983). Stochastic Comparison Methods for Queues and Other Processes. Wiley, New York.
Deegan, J., Jr. and Packel, E. W. (1978). To the (minimal winning) victors go the (equally divided) spoils: a new power index for simple n-person games. In: S. J. Brams, W. F. Lucas and P. D. Straffin, Jr., eds., Political and Related Models. Springer, New York, 239-255.
DasGupta, P. and Stiglitz, J. (1980). Industrial structure and the nature of innovative activity. Economic Journal 90, 266-293.
DasGupta, P. and Stiglitz, J. (1980a). Uncertainty, industrial structure and the speed of R & D. Bell Journal of Economics 11, 1-28.
Feller, W. (1966). An Introduction to Probability Theory and Its Applications. 2nd ed. Wiley, New York.
Gail, M. H. and Gastwirth, J. L. (1978). A scale-free goodness-of-fit test for the exponential distribution based on the Lorenz curve. J. Amer. Statist. Assoc. 73, 787-793.
Galbraith, J. K. (1952). American Capitalism. Houghton Mifflin, Boston.
Goldie, C. M. (1977). Convergence theorems for empirical Lorenz curves and their inverses. Advances in Appl. Prob. 9, 765-791.
Guess, F., Hollander, M. and Proschan, F. (1983). Testing Whether Mean Residual Life Changes Trend. FSU Technical Report M665, Dept. of Statistics, Florida State University, Tallahassee.
Hicks, J. R. (1932). The Theory of Wages. Macmillan, London.
Hollander, M. and Proschan, F. (1975). Tests for the mean residual life. Biometrika 62, 585-593.
Kamien, M. and Schwartz, N. (1968). Optimal induced technical change. Econometrica 36, 1-17.


Kamien, M. and Schwartz, N. (1971). Expenditure patterns for risky R & D projects. J. Appl. Prob. 8, 60-73.
Kamien, M. and Schwartz, N. (1972). Timing of innovations under rivalry. Econometrica 40, 43-60.
Kamien, M. and Schwartz, N. (1974). Risky R & D with rivalry. Annals of Economic and Social Measurement 3, 276-277.
Kamien, M. and Schwartz, N. (1975). Market structure and innovative activity: A survey. J. Economic Literature 13, 1-37.
Kamien, M. and Schwartz, N. (1982). Market Structure and Innovation. Cambridge University Press, London.
Klefsjö, B. (1982). The HNBUE and HNWUE classes of life distributions. Naval Res. Logist. Qrtly. 29, 331-344.
Klefsjö, B. (1983). Testing exponentiality against HNBUE. Scandinavian J. Statist. 10, 65-75.
Klefsjö, B. (1984). Reliability interpretations of some concepts from economics. Naval Res. Logist. Qrtly. 31, 301-308.
Kleinrock, L. (1975). Queueing Systems, Vol. 1: Theory. Wiley, New York.
Kollerström, J. (1976). Stochastic bounds for the single server queue. Math. Proc. Cambridge Phil. Soc. 80, 521-525.
Lucas, W. F. (1978). Measuring power in weighted voting systems. In: S. J. Brams, W. F. Lucas and P. D. Straffin, Jr., eds., Political and Related Models. Springer, New York, 183-238.
Lee, T. and Wilde, L. (1980). Market structure and innovation: A reformulation. Qrtly. J. of Economics 94, 429-436.
Loury, G. C. (1979). Market structure and innovation. Qrtly. J. of Economics 93, 395-410.
Macdonald, J. B. and Ransom, M. R. (1979). Functional forms, estimation techniques and the distribution of income. Econometrica 47, 1513-1525.
Mukherjee, V. (1967). Type III distribution and its stochastic evolution in the context of distribution of income, landholdings and other economic variables. Sankhyā A 29, 405-416.
Owen, G. (1982). Game Theory. 2nd edition. Academic Press, New York.
Pechlivanides, P. M. (1975). Social Choice and Coherent Structures. Unpublished Technical Report ORC 75-14, Operations Research Center, University of California, Berkeley.
Rae, D. (1969). Decision rules and individual values in constitutional choice. American Political Science Review 63.
Ramamurthy, K. G. and Parthasarathy, T. (1983). A note on factorization of simple games. Opsearch 20(3), 170-174.
Ramamurthy, K. G. and Parthasarathy, T. (1984). Probabilistic implications of the assumption of homogeneity in voting games. Opsearch 21(2), 81-91.
Salem, A. B. Z. and Mount, T. D. (1974). A convenient descriptive model of income distribution. Econometrica 42, 1115-1127.
Schumpeter, J. A. (1961). Theory of Economic Development. Oxford University Press, New York.
Schumpeter, J. A. (1964). Business Cycles. McGraw-Hill, New York.
Schumpeter, J. A. (1975). Capitalism, Socialism and Democracy. Harper and Row, New York.
Seneta, E. (1976). Regularly Varying Functions. Lecture Notes in Math. 508, Springer, New York.
Straffin, P. D., Jr. (1978). Power indices in politics. In: S. J. Brams, W. F. Lucas and P. D. Straffin, Jr., eds., Political and Related Models. Springer, New York, 256-321.
Straffin, P. D., Jr. (1978a). Probability models for power indices. In: P. C. Ordeshook, ed., Game Theory and Political Science. New York University Press, New York.
Taillie, C. (1981). Lorenz ordering within the generalized gamma family of income distributions. In: C. Taillie, G. P. Patil and B. A. Baldessari, eds., Statistical Distributions in Scientific Work, Vol. 6. Reidel, Dordrecht/Boston, 181-192.
Taussig, F. W. (1915). Inventors and Money-Makers. Macmillan, New York.