An Algorithm for Finite Conjoint Additivity

B. F. SHERMAN

Department of Education, University of Adelaide, South Australia 5000

Journal of Mathematical Psychology, 16, 204-218 (1977)

A complete solution to the problem of finite conjoint additivity is provided, not only to answer the question of whether a particular order is conjointly additive, but also to detail the possible value assignments which demonstrate conjoint additivity for the order. In the process, a complete solution is given to the problem of solving a finite number of homogeneous linear inequalities in a finite number of variables. While others have reduced the problem to the solution of systems of linear inequalities, the algorithm presented here is based on “basic inequalities,” and it provides considerable computational simplicity over previous algorithms.

Consideration of conjoint additivity (Luce and Tukey, 1964; Luce, 1966; and others) and, in particular, finite conjoint additivity (Adams and Fagot, 1959; Scott, 1964; Tversky, 1964; and others) has been concentrated upon finding conditions under which an order is conjointly additive. In trying to find a method for assigning (to the relevant components) values which produce the required order in the product set, I have been fortunate in finding that difference in approach which makes the problem far more amenable to solution. The major problem at the moment seems to be that those conditions which can be easily tested cannot be combined to give both necessity and sufficiency, while the necessary and sufficient conditions (e.g., those of Scott, 1964 or Tversky, 1964) cannot be readily tested. It is my hope that the algorithm presented here will remedy this situation, providing not only a reasonable test which satisfies both necessity and sufficiency, but also a formula for all possible scales which give the required order. Most of the individual parts of the work have been done previously by others; the only major new idea involved is that of the basic inequalities, which reduces the system of linear inequalities to be solved down to workable proportions. Certainly the conversion of the problem to one in linear inequalities has been done by Scott (1964) and Krantz et al. (1971), while the solution of systems of linear inequalities used can be found in Motzkin et al. (1953); it should be noted that I have used the variation given by Raiffa, Thompson, and Thrall on pp. 63 and 64, rather than that of Motzkin, although his work predates theirs (cf. Motzkin, 1936). However, I believe the major advance offered here is the ready access to a solution of the problem, whether by hand, or by computer for more complicated cases. This means that conjoint additivity can now become the tool it was originally intended to be, as the user no longer has to be mathematically sophisticated.

The conversion of the problem to a system of linear inequalities is carried out in Section 2, and this section also contains the theoretical structure on which the algorithm is based. Section 3 carries us through the working of the algorithm, while in Sections 4 and 5, the necessary adjustments for weak orders and for the multidimensional case are given. While the algorithm can theoretically also be applied to semiorders, the idea of basic inequalities will have to be refined considerably to bring the calculations, in this case, down to manageable proportions.

Copyright © 1977 by Academic Press, Inc. All rights of reproduction in any form reserved. ISSN 0022-2496.

1. PRELIMINARIES

In this paper we are discussing four kinds of order on a given set S. First, a preorder ≥ on S is a relation which satisfies

(i) if a ≥ b and b ≥ c in S, then a ≥ c;

and

(ii) a ≥ a for each a in S.

A partial order has the additional rule

(iii) if a ≥ b and b ≥ a in S, then a = b;

while a weak order (or total preorder) is a preorder with the rule

(iv) either a ≥ b or b ≥ a for each a and b in S.

A total order is an order which satisfies all four of these conditions.

Conjoint additivity is a property of an order on a Cartesian product of a number of disjoint sets; such an order is conjointly additive if there is a function from each component set into the real field such that the original order on the product is determined by the order, within the reals, of the sums of the function values of the components of the elements of the product. In this paper, I shall be considering only finite conjoint additivity, that is, conjoint additivity of a finite product of finite sets, and, in fact, for most of the time, simply the product of two finite sets. For convenience I shall call the elements of the product items.

An order ≥ on the set S × T is said to be independent if, whenever (s, t0) ≥ (s, t1) is true for one s ∈ S, it is true for all s ∈ S, and whenever (s0, t) ≥ (s1, t) is true for one t ∈ T, it is true for all t ∈ T.

Scott (1964) and Tversky (1964) have given necessary and sufficient conditions for finite conjoint additivity, and I shall refer briefly to these conditions in the form: The order ≥ on the set S × T is conjointly additive if and only if there is no list of inequalities containing at least one strict inequality, such that for each component set the list of elements on the higher side of the inequalities is a permutation of the list of elements on the lower side.
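The permutation condition can be checked mechanically for any candidate list of inequalities. The following is a sketch in Python (the code, function name, and data layout are my own, not the paper's): each entry is a triple (lower item, higher item, strict?), with items given as (s, t) pairs.

```python
from collections import Counter

def is_scott_violation(inequalities):
    """Return True if this list certifies NON-additivity: at least one
    inequality is strict, and, for each component set, the elements on the
    higher side are a permutation of those on the lower side."""
    low_s  = Counter(s for (s, t), _hi, _strict in inequalities)
    low_t  = Counter(t for (s, t), _hi, _strict in inequalities)
    high_s = Counter(s for _lo, (s, t), _strict in inequalities)
    high_t = Counter(t for _lo, (s, t), _strict in inequalities)
    some_strict = any(strict for _lo, _hi, strict in inequalities)
    return some_strict and low_s == high_s and low_t == high_t

# Four strict inequalities in which each element of S and of T appears
# equally often on both sides: a certificate that no additive scale exists.
cycle = [(('a', 'y'), ('b', 'x'), True),
         (('c', 'x'), ('b', 'y'), True),
         (('b', 'y'), ('a', 'z'), True),
         (('b', 'z'), ('c', 'y'), True)]
print(is_scott_violation(cycle))  # True
```

Note that this tests one given list; the condition itself quantifies over all possible lists, which is exactly why it is hard to test directly and why the algorithm of this paper is useful.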


A nonnegative linear combination of a set of vectors is a linear combination of them in which all of the relevant scalars are either positive or zero. (Note that "positive" will exclude zero throughout this paper.)

2. CONJOINT ADDITIVITY FOR TOTAL ORDERS

For this section, we shall consider the product S × T, where S has m elements and T has n; on S × T we have a total order ≥ in the form of a list of the items from least to greatest. We say that ≥ is conjointly additive on S × T if there is a function v: S ∪ T → R⁺ (the positive reals with zero) such that (s1, t1) ≥ (s2, t2) in S × T if and only if v(s1) + v(t1) ≥ v(s2) + v(t2). To illustrate the algorithm we shall work through our first example as we explain each procedure. Our example will be a two-dimensional total order: let S be {a, b, c} and T be {x, y, z}, so that we are looking at comparisons between pairs such as (a, y) and (c, z). Let us suppose we have made all comparisons; these can be tabulated (Fig. 1), where the relationships are to be read from left to right across the page; e.g., (a, x) > (a, y), and (a, z) < (b, y). Ranking the items, we obtain (c, y) < (c, z) < (a, y) < (c, x) < (a, z) < (b, y) < (a, x) < (b, z) < (b, x). As this is a totally ordered list of the items, we are now ready to start looking for our function v.

FIG. 1. Comparison of elements of S × T in Example 1.

We continue our preliminary procedures by assigning an order ≥ to S by means of the order in which its elements first appear in our list of items, and we do the same for T. Then we draw a graph of S × T, dots representing items, such that the elements of S are in order from left to right, and those of T in order from bottom to top, and join the dots in order by a sequence of directed lines. The necessary condition of independence (Krantz et al., 1971) is expressed in the following rule: no point may be included in the graph unless all points dominated by it precede it in the graph. Because (c, y) < (a, y) < (b, y), we have c < a < b in S; and, similarly, we have y < z < x in T. Our graph of (S × T, ≥) is then as given in Fig. 2. To apply our independence conditions we consider each point of the graph taken in order, and look at all those points to the left of it, or below it, or both; e.g., for the point (a, z) in the graph below, the points it dominates are (c, z), (a, y), and (c, y), and all of these precede it in the graph.
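The dominance rule is easy to mechanize. A small sketch in Python (the function and argument names are mine, not the paper's): given the ranked list of items and the derived orders on S and T, it checks that every item is preceded in the list by every item it dominates.

```python
def independent(order, S, T):
    """order: list of (s, t) items from least to greatest.
    S, T: component elements listed in their derived orders.
    Returns True iff every item appears after all items it dominates."""
    position = {item: k for k, item in enumerate(order)}
    s_rank = {s: i for i, s in enumerate(S)}
    t_rank = {t: j for j, t in enumerate(T)}
    for (s, t), k in position.items():
        for (s2, t2), k2 in position.items():
            dominated = (s_rank[s2] <= s_rank[s] and t_rank[t2] <= t_rank[t]
                         and (s2, t2) != (s, t))
            if dominated and k2 > k:  # a dominated item appears later
                return False
    return True

# Example 1: the ranked list, with derived orders c < a < b and y < z < x.
order1 = [('c', 'y'), ('c', 'z'), ('a', 'y'), ('c', 'x'), ('a', 'z'),
          ('b', 'y'), ('a', 'x'), ('b', 'z'), ('b', 'x')]
print(independent(order1, ['c', 'a', 'b'], ['y', 'z', 'x']))  # True
```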

FIG. 2. Graph of the order in Example 1.

Using this procedure on each point of the graph, we see that the order in our example is independent on S × T. We now index S and T as follows: s0 < s1 < ... < s_{m-1}, and t0 < t1 < ... < t_{n-1}. (Note that we can use < in place of ≤ because we are dealing with a total order.) Those inequalities of (S × T, ≥) represented by a horizontal or vertical line segment (not necessarily part of our graph) we say are of type (i), while those represented by an oblique line we classify as type (ii). Those of type (ii) which are segments of our graph we call basic; there are also possibly some basic inequalities of type (i), but these are not very common; a type (i) vertical inequality is basic if every horizontal translation of its graph segment is also a segment of the graph, and the horizontal ones are defined similarly. Thus our present example has no type (i) basic inequalities, as the segment (c, y)-(c, z) is not repeated across the graph, i.e., (a, y)-(a, z) and (b, y)-(b, z) are not segments of the graph. We name a type (i) basic vertical (horizontal) inequality by its left-most (lowest) image, i.e., (s0, tj) < (s0, tj+1), or (si, t0) < (si+1, t0). Independence and transitivity guarantee that all other inequalities on S × T are consequences of the basic inequalities. The basic inequalities of our example are (c, z) < (a, y); (a, y) < (c, x); (c, x) < (a, z); (a, z) < (b, y); (b, y) < (a, x); (a, x) < (b, z).


If (S × T, ≥) is to be conjointly additive, we will have to provide each column and each row with a value of our function v; as the left-most column and lowest row (corresponding to s0 and t0) can each be given the value 0 without loss of generality, we have p = (m − 1) + (n − 1) values to find. Second, we let the number of basic inequalities be r; then the set of inequalities can be represented by an r × p matrix A, as follows. We have r basic inequalities, each represented by a row of A. Columns 1 to m − 1 refer to the elements s1 to s_{m-1} of S, respectively, and columns m to p refer to the elements t1 to t_{n-1} of T. s0 and t0 will have value 0 in all cases, so we have no column for them. If the element si (i ≠ 0) of S is on the lower side and not on the higher side of a particular inequality, then we put a −1 in the relevant row in the ith column; if it is on the higher side and not the lower, then we put a 1 in that position; and we put a 0 if it does not appear, or if it appears on both sides. Similarly, the element tj (j ≠ 0) of T determines whether a −1, 0, or 1 is placed in the (m + j − 1)th column in the relevant row according to its role in the inequality. We can thus determine A for our example:

                 a   b   z   x
(c,z) < (a,y)    1   0  −1   0
(a,y) < (c,x)   −1   0   0   1
(c,x) < (a,z)    1   0   1  −1
(a,z) < (b,y)   −1   1  −1   0
(b,y) < (a,x)    1  −1   0   1
(a,x) < (b,z)   −1   1   1  −1
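The sign rules translate directly into code. A sketch in Python (mine, not the paper's) that builds A from a list of basic inequalities, given S and T in their derived orders, so that S[0] and T[0] are the zero elements:

```python
def build_matrix(basic, S, T):
    """basic: list of ((s_lo, t_lo), (s_hi, t_hi)) pairs, lower side first.
    Returns the r x p matrix A: +1 for higher-side elements, -1 for
    lower-side ones, 0 for absent or cancelling elements."""
    column = {e: i for i, e in enumerate(S[1:] + T[1:])}  # s1..s_{m-1}, t1..t_{n-1}
    A = []
    for (s_lo, t_lo), (s_hi, t_hi) in basic:
        row = [0] * len(column)
        for e in (s_hi, t_hi):
            if e in column:
                row[column[e]] += 1
        for e in (s_lo, t_lo):
            if e in column:
                row[column[e]] -= 1  # an element on both sides cancels to 0
        A.append(row)
    return A

# The six basic inequalities of Example 1, with S = [c, a, b], T = [y, z, x].
basic1 = [(('c','z'), ('a','y')), (('a','y'), ('c','x')), (('c','x'), ('a','z')),
          (('a','z'), ('b','y')), (('b','y'), ('a','x')), (('a','x'), ('b','z'))]
print(build_matrix(basic1, ['c', 'a', 'b'], ['y', 'z', 'x']))
```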

A represents a linear transformation from R^p to R^r; we define two orders ≥ and ⊳ componentwise thus:

x ≥ 0    if xi ≥ 0 for each component xi of x,

and

x ⊳ 0    if xi > 0 for each component xi of x.

Furthermore, for any x of R^p we construct a function v: S ∪ T → R by the rules

v(s0) = 0 = v(t0),
v(si) = xi,    for 1 ≤ i ≤ m − 1,
v(tj) = x_{m-1+j},    for 1 ≤ j ≤ n − 1.

Then we obtain

THEOREM 1 (Scott, 1964; Krantz et al., 1971, pp. 430-431). The total order ≥ on S × T is conjointly additive if and only if there is an x in R^p for which Ax ⊳ 0; and, for any such x, the corresponding function v satisfies the conditions for conjoint additivity.

Proof. First suppose (si, tj) < (sk, tl) is a basic inequality. Then it corresponds to a row of A, and the relation Ax ⊳ 0 gives, for that row,

x_k + x_{m-1+l} − x_i − x_{m-1+j} > 0,

and so v(si) + v(tj) < v(sk) + v(tl). (For cases such as i = 0, the relation v(s0) = 0 = v(t0) means that the above inequality stands.)

If (si, tj) < (sk, tl) is not a basic inequality, then by consulting our graph we can obtain a set of basic inequalities which imply it, and we can apply the first part of the proof to each of these, and then add inequalities, cancelling like terms on either side, to obtain

v(si) + v(tj) < v(sk) + v(tl).

Thus, if x satisfies Ax ⊳ 0, then v satisfies conjoint additivity for ≥ on S × T. Reversing the argument of the first part, we obtain the converse. Note that, because we selected (s0, t0) as the least element of S × T, each v(si) and v(tj) will be nonnegative, and so each component of any such x will be nonnegative. We shall call an element x of R^p nonnegative if all its components are nonnegative.

THEOREM 2. All nonnegative solutions x of the inequality Ax ≥ 0 may be found by Motzkin's algorithm, in the form

x = c1 x1 + c2 x2 + ... + ck xk,    (ii)

where the xi are the final vectors obtained from the algorithm, and the ci are nonnegative constants.

However, this does not quite solve our problem, as we need solutions to Ax ⊳ 0. Hence we have:

COROLLARY 1. Ax ⊳ 0 will have a solution if and only if there is no component of the Axi's which is zero for all of the xi's; and, for any x for which Ax ⊳ 0, the subset of the xi's corresponding to the nonzero ci's in (ii) (for this particular x) has this condition.

I’Toof. This follows immediately from the fact that Ax D 0 if and only if Ax > 0 and no component of Ax is zero. This corollary will provide us with a test for conjoint additivity within the algorithm; however, to give our solution a proper representation, we need the following additional corollary:


COROLLARY 2 (cf. Krantz et al., p. 431, Theorem 2). Let z = x1 + x2 + ... + xk; then if Ax ⊳ 0 has a solution, z will be a solution, and Ax ⊳ 0 if and only if there are di's such that x = d1 x1 + ... + dk xk, where each di is strictly positive.

Proof. The fact that z is a solution if there is any solution follows directly from Corollary 1. For the second part, it is clear from Corollary 1 that such an x satisfies Ax ⊳ 0; conversely, if Ax ⊳ 0, then we can easily find a strictly positive c0 such that the minimum component of Ax is greater than c0 times the maximum component of Az; hence Ax > c0 Az, and so A(x − c0 z) > 0. Thus x − c0 z = c1 x1 + ... + ck xk; hence x = d1 x1 + ... + dk xk, where each di = c0 + ci > 0.

3. THE BASIC ALGORITHM

Our problem has thus been reduced to that of finding positive solutions x of the vector inequality Ax ≥ 0. For this, I use a variation of Motzkin's double description method (Motzkin, 1936; Motzkin et al., 1953). We have our r × p matrix A; we write this down on the left side of the page, leaving p rows above it free. We then fill in a p × p identity matrix in the rows above A and in

FIG. 3. Initial layout of the tableau.

the columns to the right of A. In fact, it is probably useful to put the basic inequalities to the far left of the page, and then determine A from these in the one tableau. Our initial layout should thus be of the form shown in Fig. 3.

We then proceed in steps, each step corresponding to a row of A. For our first step the p columns of the unit matrix are live; we form the product of the first row with each of these columns in turn, entering it in the obvious position. Some of the products will be positive, some negative, and some zero. We form new columns by considering each pair of columns where one has positive product and the other negative; if the columns are abutting, a term we shall discuss later, then we form a new column by multiplying each of the two columns by the modulus of the other's product, then adding these two together. Let us carry through the first step for our example. For the first step, all the columns are abutting, so this will not worry us yet.

                   1  0  0  0   1
                   0  1  0  0   0
                   0  0  1  0   1
                   0  0  0  1   0
 1  0 −1  0        1  0 −1  0   0
                   (products)  (new column)

The new column is the sum of the first column and the third; note that the balance of the two in the new column is such that its product with the first row is 0. The columns which go on live to the second step are those with positive or zero product; once a column has had a negative product arise, it is discarded at the end of that step. Let us now carry through the second step (in this case both the relevant pairs of columns are abutting, so we will not yet be troubled by this condition):

                   1  0  0  1   1  1
                   0  1  0  0   0  0
                   0  0  0  1   0  1
                   0  0  1  0   1  1
 1  0 −1  0        1  0  0  0   x  0
−1  0  0  1       −1  0  1 −1   0  0

Here we have generated two new columns, by combining the fourth and the first, and then the fourth and fifth. Note that the product of the first row with the fifth column is simply recorded as an x; this is simply to indicate that it is nonzero. The simple rule for determining these products of earlier rows with new columns is: if both columns being combined have 0 in that row, then we put 0 in that row for the new column; otherwise, we put x. Two columns are abutting if there is no other live column which has zeros in every row (including the rows above the line) in which the two both have a zero. To check for this, we look at the two columns in question to locate the rows where they have common zeros, and then check each other live column for a nonzero entry in at least one of these rows.
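The whole step procedure (products, combination of abutting pairs, discarding) can be sketched in Python. This is my own reading of the procedure, not the paper's code, and it recomputes the exact products where the hand tableau merely records an x; the abutting test is taken over the full tableau column, i.e., the vector entries plus the products of the rows already processed.

```python
import numpy as np

def double_description(A, eq_rows=()):
    """Process the rows of A in turn, maintaining the live columns.
    eq_rows: indices of rows that are equivalences (see Section 4), for
    which only zero-product columns are kept at the end of the step."""
    A = np.asarray(A, dtype=int)
    r, p = A.shape
    cols = [c.copy() for c in np.eye(p, dtype=int).T]  # start with the identity
    for k in range(r):
        prods = [int(A[k] @ c) for c in cols]

        def full(c):
            # a tableau column: vector entries plus products of earlier rows
            return np.concatenate([c, A[:k] @ c])

        def abutting(i, j):
            # no other live column may vanish on every row where both do
            common = (full(cols[i]) == 0) & (full(cols[j]) == 0)
            return not any(np.all(full(c)[common] == 0)
                           for t, c in enumerate(cols) if t not in (i, j))

        pos = [i for i, pr in enumerate(prods) if pr > 0]
        neg = [i for i, pr in enumerate(prods) if pr < 0]
        zero = [i for i, pr in enumerate(prods) if pr == 0]
        # combine each abutting (positive, negative) pair, weighting each
        # column by the modulus of the other's product
        new = [abs(prods[j]) * cols[i] + abs(prods[i]) * cols[j]
               for i in pos for j in neg if abutting(i, j)]
        keep = zero if k in eq_rows else pos + zero
        cols = [cols[i] for i in keep] + new
    return cols

# Example 1's matrix (columns a, b, z, x); the five final vectors, sorted,
# are (1,2,0,1), (1,2,1,1), (1,2,1,2), (1,3,1,2), (2,3,1,2).
A1 = [[1, 0, -1, 0], [-1, 0, 0, 1], [1, 0, 1, -1],
      [-1, 1, -1, 0], [1, -1, 0, 1], [-1, 1, 1, -1]]
print(sorted(tuple(int(v) for v in c) for c in double_description(A1)))
```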


If we complete the six steps of the algorithm,

                 a   b   z   x │ 1  0  0  0  1  1  1  1  1  1  1  1  1  2
                               │ 0  1  0  0  0  0  0  0  1  2  2  2  3  3
                               │ 0  0  1  0  1  0  1  1  0  1  1  0  1  1
                               │ 0  0  0  1  0  1  1  2  1  1  2  1  2  2
(c,z) < (a,y)    1   0  −1   0 │ 1  0 −1  0  0  x  0  0  x  0  0  x  0  x
(a,y) < (c,x)   −1   0   0   1 │−1  0  ·  1 −1  0  0  x  0  0  x  0  x  0
(c,x) < (a,z)    1   0   1  −1 │ ·  0  · −1  ·  0  1  0  0  x  0  0  0  x
(a,z) < (b,y)   −1   1  −1   0 │ ·  1  ·  ·  · −1 −2 −2  0  0  0  x  x  0
(b,y) < (a,x)    1  −1   0   1 │ · −1  ·  ·  ·  ·  ·  ·  1  0  1  0  0  x
(a,x) < (b,z)   −1   1   1  −1 │ ·  ·  ·  ·  ·  ·  ·  · −1  1  0  0  1  0

(a dot marks a column already discarded), then the only two relevant columns which were not abutting were the ninth and the thirteenth; they have a common zero only in the seventh row, and each of the eleventh and twelfth columns also has a zero there. Actually, Motzkin uses the term "adjacent" rather than "abutting" (Motzkin et al., 1953). However, the adjacency refers to the vector space representation, and not the tableau. To avoid confusion I have introduced the new term. A necessary, but not sufficient, condition that two columns be abutting is that the number of common zeros is at least two less than the number of variables. This can be used to establish that some pairs do not abut, but cannot verify that any pairs do abut.

The solution of the vector inequality Ax ≥ 0 is given by the live columns at the end of the last step (here, the last five); each solution must be a nonnegative linear combination of these column vectors: i.e.,

x = c1 x1 + c2 x2 + c3 x3 + c4 x4 + c5 x5,

where each ci ≥ 0. For Ax ⊳ 0, we take each ci > 0. We now use Theorem 1 to translate this solution into a solution for our conjoint additivity problem. The function v we have generated must thus be of the form:

v(c) = 0 = v(y),
v(a) = c1 + c2 + c3 + c4 + 2c5       (the first component of x),
v(b) = 2c1 + 2c2 + 2c3 + 3c4 + 3c5   (the second),
v(z) = c1 + c2 + c4 + c5             (the third),
v(x) = c1 + 2c2 + c3 + 2c4 + 2c5     (the fourth),

where c1,..., c5 can be any positive real numbers.
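The claimed solution is easy to verify numerically. A sketch in Python (the array layout is mine): it checks that the five final columns solve Ax ≥ 0, that their sum z gives Az ⊳ 0, and that the resulting scale (all ci = 1) reproduces the ranked list of Example 1.

```python
import numpy as np

# Rows: the six basic inequalities of Example 1; columns ordered (a, b, z, x).
A = np.array([[ 1,  0, -1,  0],
              [-1,  0,  0,  1],
              [ 1,  0,  1, -1],
              [-1,  1, -1,  0],
              [ 1, -1,  0,  1],
              [-1,  1,  1, -1]])
finals = np.array([[1, 2, 1, 1], [1, 2, 1, 2], [1, 2, 0, 1],
                   [1, 3, 1, 2], [2, 3, 1, 2]])

assert (finals @ A.T >= 0).all()  # every final vector solves Ax >= 0
z = finals.sum(axis=0)            # all c_i = 1 gives z = (6, 12, 4, 8)
assert (A @ z > 0).all()          # Az is strictly positive: conjointly additive

# The scale v (with v(c) = v(y) = 0) orders the nine items as in the text.
v = dict(zip('abzx', z), c=0, y=0)
ranked = sorted(((s, t) for s in 'cab' for t in 'yzx'),
                key=lambda item: v[item[0]] + v[item[1]])
print(ranked)
# [('c','y'), ('c','z'), ('a','y'), ('c','x'), ('a','z'),
#  ('b','y'), ('a','x'), ('b','z'), ('b','x')]
```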



Our tableau is then:

                   b   c   y   z │ 1  0  0  0  1  1  1  1  1  1  1
                                 │ 0  1  0  0  0  1  2  0  0  1  2
                                 │ 0  0  1  0  1  0  1  0  1  0  1
                                 │ 0  0  0  1  0  0  0  1  2  1  2
(a,y) < (b,x)      1   0  −1   0 │ 1  0 −1  0  0  x  0  x  0  x  0
(c,x) < (b,y)      1  −1   1   0 │ 1 −1  ·  0  2  0  0  x  x  0  0
(b,y) < (a,z)     −1   0  −1   1 │−1  ·  ·  1 −2 −1 −2  0  0  0  0
(b,z) < (c,y)     −1   1   1  −1 │ ·  ·  ·  −1  ·  ·  · −2 −2 −1  0

As we can see, there are no positive products at the fourth step, so the order is not conjointly additive. This is in fact the standard example of an order which is not conjointly additive. Clearly the basic inequalities can be seen to form a set as described in Scott's necessary and sufficient condition in Section 1.

4. WEAK ORDERS

So far we have considered only those cases in which a definite preference for one item over the other is expressed for each pair of items, the resulting order, if the preferences are consistent, being a total order. In those cases where, for some pairs of items, no definite preference is discernible, it is customary to consider both items of such a pair as being of equal value, and the order, if again the preferences are consistent, is then a weak order. For instance, take S = {a, b, c} and T = {x, y, z}, and consider the order defined by the comparisons in Fig. 5 (where ≈ denotes equivalence). The order is a weak order, with ordered list (a, x) < (a, y) < (b, x) < (b, y) < (a, z) < (c, x) < (b, z) ≈ (c, y) < (c, z).

FIG. 5. Comparison of elements of S × T in Example 3.


Our graph is then given in Fig. 6. We can use either of the inequalities (c, x) < (c, y) or (c, x) < (b, z), and again either of (b, z) < (c, z) or (c, y) < (c, z). If we choose to use (c, x) < (c, y), then we see from the diagram that it is a type (i) basic inequality. We list our basic inequalities, and then our basic equivalences; we follow the same rules to determine which are basic.

FIGURE 6

In our example, then, the basic inequalities and equivalences are: (a, x) < (a, y); (a, y) < (b, x); (b, y) < (a, z); (a, z) < (c, x); and the equivalence (b, z) ≈ (c, y). Thus our tableau is:

                   b   c   y   z │ 1  0  0  0  1  1  1  0  1  1  1
                                 │ 0  1  0  0  0  0  0  1  1  2  2
                                 │ 0  0  1  0  1  0  1  0  0  1  0
                                 │ 0  0  0  1  0  1  2  1  1  2  1
(a,x) < (a,y)      0   0   1   0 │ 0  0  1  0  x  0  x  0  0  x  0
(a,y) < (b,x)      1   0  −1   0 │ 1  0 −1  0  0  x  0  0  x  0  x
(b,y) < (a,z)     −1   0  −1   1 │−1  0  ·  1 −2  0  0  x  0  0  0
(a,z) < (c,x)      0   1   0  −1 │ ·  1  · −1  · −1 −2  0  0  0  x
(b,z) ≈ (c,y)     −1   1   1  −1 │ ·  1  ·  ·  ·  ·  ·  0 −1  0  0

For the equivalence, the only difference in procedure is that we discard both positive and negative products at the end of that step, keeping only those columns having zero product with that row. Thus our solution, the final live vectors, are (0, 1, 0, 1), (1, 2, 1, 2), and (1, 2, 0, 1).


Thus, the symbolic solution of conjoint additivity is

z   (1,2,1)   (1,3,2)   (2,4,3)
y   (0,1,0)   (0,2,1)   (1,3,2)
x   (0,0,0)   (0,1,1)   (1,2,2)
        a         b         c
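The modification for equivalences is a one-line change to each step: combinations are formed as before, but only zero-product columns survive. A standalone sketch of one such step in Python (names mine; the abutting test is omitted for brevity, and the live columns shown are those I reconstruct for Example 3 just before the equivalence row):

```python
import numpy as np

def equivalence_step(cols, row):
    """One tableau step for a basic equivalence: combine (positive, negative)
    pairs as usual, then keep only columns with zero product in this row."""
    cols = [np.asarray(c) for c in cols]
    prods = [int(np.dot(row, c)) for c in cols]
    new = [abs(prods[j]) * cols[i] + abs(prods[i]) * cols[j]
           for i in range(len(cols)) for j in range(len(cols))
           if prods[i] > 0 and prods[j] < 0]
    return [c for c, pr in zip(cols, prods) if pr == 0] + new

# Live columns before the equivalence (b,z) ~ (c,y), components (b, c, y, z):
live = [(0, 1, 0, 0), (0, 1, 0, 1), (1, 1, 0, 1), (1, 2, 1, 2)]
final = equivalence_step(live, (-1, 1, 1, -1))
print([tuple(int(v) for v in c) for c in final])
# [(0, 1, 0, 1), (1, 2, 1, 2), (1, 2, 0, 1)]
```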

5. MULTIDIMENSIONAL CONJOINT ADDITIVITY

The basic algorithm may be used to solve conjoint additivity for more than two dimensions; in fact, very few adjustments are necessary. First, we should note that each component of the least item is the zero element for that dimension, and so there is no column for it in A. Second, the determination of basic inequalities becomes more complicated. We can rephrase the condition as follows: A segment of the graph represents a basic inequality if and only if each translation of the segment in any direction perpendicular to the minimal surface of the grid containing the segment is itself a segment of the graph. It is possibly simpler to interpret the condition in terms of the ordered list of items. If we call those inequalities corresponding to adjacent items in the ordered list (and hence to segments of the graph) the minimal inequalities, then our condition says: A minimal inequality is called basic if, for each component which has the same element on each side of the inequality, we can replace that element by any other element of that component set, and still have a minimal inequality. Thus, in Example 4 below, the inequality (a, u, x) < (b, u, x) is minimal but not basic, because (a, v, x) < (b, v, x) and (a, u, y) < (b, u, y) are not minimal (the fact that (a, v, y) < (b, v, y) is also minimal is not sufficient to make either basic). On the other hand, (b, u, x) < (a, v, x) and (b, u, y) < (a, v, y) are both minimal, and hence both basic; both give rise to the same row of A, so we need only take one for our list. Also, (a, u, y) < (b, v, x) is minimal, and hence basic, as it has no components with the same element on each side. It can easily be seen that the restriction of this condition to the two-dimensional case is the condition given in Section 2.

FIGURE 7

EXAMPLE 4. S = {a, b}, T = {x, y}, U = {u, v}, with graph as given in Fig. 7; then a, u, and x are our zero components; our ordered list is (a, u, x) < (b, u, x) < (a, v, x) < (a, u, y) < (b, v, x) < (b, u, y) < (a, v, y) < (b, v, y). Our independence condition is satisfied, and so we can reduce this list to three basic inequalities: (b, u, x) < (a, v, x); (a, v, x) < (a, u, y); (a, u, y) < (b, v, x). Hence our tableau is

                       b   v   y │ 1  0  0  1  0  1  1
                                 │ 0  1  0  1  1  1  1
                                 │ 0  0  1  0  1  1  2
(b,u,x) < (a,v,x)     −1   1   0 │−1  1  0  0  x  0  0
(a,v,x) < (a,u,y)      0  −1   1 │ · −1  1 −1  0  0  x
(a,u,y) < (b,v,x)      1   1  −1 │ ·  · −1  ·  0  1  0

Our column vector solutions are (0, 1, 1), (1, 1, 1), and (1, 1, 2), so our solution is

y   (1,1,2)   (1,2,3) │ (2,2,3)   (2,3,4)
x   (0,0,0)   (0,1,1) │ (1,1,1)   (1,2,2)
        a         b   │     a         b
              u                 v

REFERENCES

ADAMS, E., & FAGOT, R. A model of riskless choice. Behavioral Science, 1959, 4, 1-10.
KRANTZ, D. H., LUCE, R. D., SUPPES, P., & TVERSKY, A. Foundations of measurement, Vol. 1. New York: Academic Press, 1971.
LUCE, R. D. Two extensions of conjoint measurement. Journal of Mathematical Psychology, 1966, 3, 348-370.
LUCE, R. D., & TUKEY, J. Simultaneous conjoint measurement: a new type of fundamental measurement. Journal of Mathematical Psychology, 1964, 1, 1-27.
MOTZKIN, T. S. Beiträge zur Theorie der linearen Ungleichungen. Dissertation, Basel, 1936.
MOTZKIN, T. S., RAIFFA, H., THOMPSON, G. L., & THRALL, R. M. The double description method. Annals of Mathematics Studies, 1953, 28, 51-72.
SCOTT, D. Measurement structures and linear inequalities. Journal of Mathematical Psychology, 1964, 1, 233-247.
TVERSKY, A. Finite additive structures. Report MMPP 64-6, Department of Psychology, University of Michigan, 1964.

RECEIVED: October 7, 1975