ORGANIZATIONAL BEHAVIOR AND HUMAN DECISION PROCESSES
Vol. 67, No. 3, September, pp. 258–270, 1996 ARTICLE NO. 0078
Stopping Criteria in Sequential Choice

GAD SAAD
Concordia University

AND

J. EDWARD RUSSO
Cornell University
The essential decision in sequential choice is not which alternative to choose but when to stop acquiring additional information and commit to the leading alternative. Two studies investigate individuals' stopping strategies. The key difference between the two studies is whether the experimenter or subjects controlled the order in which information was acquired. We hypothesize that subjects' stopping criteria do not remain fixed, but are sensitive to acquisition costs and to the achieved progress during the choice. Across both studies, 21 of 26 subjects exhibited significant adaptation. We also found an effort-reducing heuristic, stopping after all attributes in a predetermined set of core attributes had been acquired. As expected, use of this Core Attributes heuristic was greater when subjects controlled the order in which information was acquired. Both adaptation and the Core Attributes heuristic are compatible with the adaptive view of decision making and with the cost-benefit framework more generally. © 1996 Academic Press, Inc.
Consumers seldom consider all relevant information prior to making a purchase. Instead, they use various strategies and heuristics to decide when to stop acquiring additional information and commit to one alternative. Our purpose is to investigate these stopping strategies.

This research was supported by the Social Sciences and Humanities Research Council of Canada (Grant 410-95-0451) to G.S. and by the Johnson Graduate School of Management, Cornell University. Portions of these data were presented at the Fourteenth Research Conference on Subjective Probability, Utility, and Decision Making, Aix-en-Provence, France, 1993, and the annual meeting of the Society for Judgment and Decision Making, Washington, D.C., 1993. We thank Darren Bicknell for his assistance in data analysis, and Ido Erev and an anonymous reviewer for their helpful comments on an earlier version of the paper. Address correspondence and reprint requests to Gad Saad, Concordia University, Marketing Department, 1455 de Maisonneuve Blvd. W., Montreal, QC, Canada H3G 1M8. E-mail:
[email protected].
Much of the literature on information search follows the economics tradition. Its guiding principle is to search as long as the perceived marginal benefits of acquiring additional information exceed the perceived marginal costs (e.g., Stigler, 1961). This normative framework requires strong assumptions, typically about price distributions and information acquisition costs, in order to derive optimal levels of such search behaviors as the number of vendors that should be visited or the number of product attributes that should be acquired (e.g., Manning & Morgan, 1982; Ratchford, 1982; Hagerty & Aaker, 1984; Hutchinson & Meyer, 1994). In contrast, the consumer behavior tradition favors description over prescription. It often reports and analyzes the factors that are observed to influence the amount, type, and processing of acquired information. The present work lies closer to the latter tradition. Our specific focus is the process that leads to the termination of information search.

In investigating consumers' stopping strategies, we use a variant of the sequential-sampling model developed by Wald (1947). That model posits that information is acquired until the accumulated evidence in support of the "working" hypothesis exceeds some preestablished criterion (also referred to as an absorption barrier or a stopping boundary, policy, or threshold). The model has been applied in signal detection (Swets, 1964), psychophysical judgment and discrimination (Link, 1975; Swensson & Thomas, 1974), choice under uncertainty (Petrusic & Jamieson, 1978; Busemeyer, 1982, 1985; Busemeyer & Townsend, 1993), multiattribute decision making (Diederich, 1995), probabilistic inference (Wallsten & Barton, 1982), and categorization (Van Wallendael & Guignard, 1992).

Within the context of deterministic multiattribute choice, two recent models have been proposed. Svenson's (1992) differentiation and consolidation theory addresses both predecisional and postdecisional processes (see also Festinger, 1964, p. 4).
FIG. 1. (a) The criterion-dependent choice model. (b) Concavity of the stopping thresholds.
In the predecisional stage, an alternative is tentatively chosen via a gradual differentiation process. The differentiation continues until it is sufficiently high to justify a choice. In the postdecisional phase, various consolidation operations are applied to protect the chosen alternative against possible threats. In the same vein, the criterion-dependent choice (CDC) model of Aschenbrenner, Albert, and Schmalhofer (1984) posits that decision makers acquire one attribute of information at a time and calculate the subjective difference between the two competing alternatives on that attribute (see also Diederich, 1995). The acquisition of new information continues until the cumulative attribute difference exceeds some threshold criterion k set by the decision maker (see Fig. 1a). Schmalhofer et al. (1986) obtained results supporting the CDC model (see Albert, Aschenbrenner, & Schmalhofer, 1989, for a more recent exposition of the CDC model). Bockenholt et al. (1991) discuss the compatibility of the CDC model with the traditional cost-benefit framework: the trade-off between the cost of reaching a decision and the quality of the final choice is incorporated into the determination of the threshold value k.

In the CDC model, the threshold is a constant that is independent of the progress achieved and the costs incurred during the search. Maintaining a rigid threshold could conceivably result in an extremely long and costly search for information. Instead, the following hypothesis is proposed: the stopping threshold converges with the number of attributes acquired. This is depicted in Fig. 1b. Such convergence of the threshold should be encouraged by (a) a high cost of acquiring additional information and (b) the achievement of relatively little cumulative discrimination between the two alternatives (so that it seems unlikely that the original constant threshold
will be crossed). We predict that the threshold will always be concave to some degree if decision makers are sensitive to search costs or progress.

Stopping thresholds with the same converging shape have been proposed in other areas of decision making. For instance, using the classical probabilistic stimulus-response framework, Rapoport and Burkheimer (1971) derived several optimal stopping boundaries. Depending on the assumed stimulus-response distributions and observation costs, the derived boundaries took several functional forms, including ones with a concave shape. Busemeyer and Rapoport (1988) provide a cogent discussion of other work that has yielded similar results. In a related context, Jacoby et al. (1994) found that the most common pattern of subjective uncertainty reduction, as subjects iteratively acquired items of information, corresponded to a decelerating power function. In other words, as additional information was acquired, subjective uncertainty decreased at a decreasing rate, consistent with the converging shape depicted in Fig. 1b. Finally, Meyer (1982) proposed a model in which alternatives are sequentially eliminated. In his model, an alternative is kept in the consideration set as long as the difference between its utility and the maximum utility of all other alternatives is smaller than some critical utility difference Vt. Meyer states that two factors are relevant to setting the optimal value of Vt: (a) the subjective importance of an individual's desire to reach a "correct" decision and (b) time. He claims that Vt will approach 0 as t approaches infinity, yielding the converging form. "The condition [that Vt will approach 0 as t approaches infinity] is introduced to ensure that search will be finite, and reflects the hypothesis that the 'cost' of delaying consumption of a good for additional time units will induce a tendency to 'satisfice' rather than 'optimize'" (Meyer, 1982, p. 101).

Both the CDC model and the proposed hypothesis are compatible with the cost-benefit framework, albeit for different reasons. On the one hand, the a priori choice of the constant threshold value k in the CDC model corresponds to a contingent view of decision making (Payne, 1982): at the start of the decision, the individual chooses the strategy (or, in this case, the value of the threshold) that will be used throughout the decision. On the other hand, our claim of threshold convergence corresponds to the adaptive view of decision making (Payne, Bettman, & Johnson, 1988), wherein individuals adapt their behavior in an "online" manner. Both the contingent processes that occur prior to the task and the adaptive ones that occur during it are assumed to be sensitive to cost-benefit considerations.
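To make the contrast concrete, the following minimal simulation runs a single sequential choice under the two stopping rules just discussed: the constant threshold of the CDC model and a threshold that decays with the number of attributes acquired, as hypothesized here. The evidence values, the particular decay form, and all parameter values are illustrative assumptions, not quantities estimated in the studies reported below.

```python
import math
import random

def stopping_point(attribute_diffs, k=10.0, decay_rate=None):
    """Return the number of attributes acquired before stopping.

    attribute_diffs: signed evidence contributed by each attribute
                     (positive favors alternative A, negative favors B).
    k:               initial stopping threshold on |cumulative evidence|.
    decay_rate:      None keeps the threshold constant (CDC model); otherwise
                     the threshold shrinks as k - exp(decay_rate * n), an
                     illustrative converging form.
    """
    cumulative = 0.0
    for n, diff in enumerate(attribute_diffs, start=1):
        cumulative += diff
        threshold = k if decay_rate is None else max(0.0, k - math.exp(decay_rate * n))
        if abs(cumulative) >= threshold:
            return n                      # stop and commit to the leading alternative
    return len(attribute_diffs)           # information exhausted: forced choice

random.seed(1)
diffs = [random.choice([-3, -2, -1, 1, 2, 3]) for _ in range(25)]  # one hypothetical trial
print("constant threshold  :", stopping_point(diffs, k=10.0))
print("converging threshold:", stopping_point(diffs, k=10.0, decay_rate=0.13))
```

In this sketch the converging rule never stops later than the constant rule started at the same level, which is the qualitative signature of adaptation that the two studies below test for.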
STUDY 1

Method
Task

Subjects made 25 choices between pairs of competing apartments to be rented for 1 year. The apartments were defined by 25 attributes. A unit of information consisted of a complete attribute, i.e., the values of both alternatives on that attribute. Thus, for a given pair of apartments, a subject could request anywhere from 1 to 25 pieces of attribute information prior to making a choice. Apartments were used because they were a familiar and consequential product with many relevant attributes. Of the 45 process-tracing studies reviewed in Ford et al. (1989), 9 used apartments as the product category.

Apparatus

A computerized interface was developed to track the sequential acquisition of information in a decision-making task. For related computerized technologies in process-tracing research, see Dahlstrand and Montgomery (1984), Brucks (1988), and Bettman, Johnson, and Payne (1990).

Subjects

Subjects were 15 undergraduate students (11 males). They received grade credits for participating in the study.

Stimuli

A test of the proposed hypothesis required that subjects' stopping points cover a wide range of attributes acquired, from very few (stopping early) to very many (stopping late in the process). Accordingly, the choices were constructed so that the expected stopping points would span the full search space of 25 attributes. This was achieved by manipulating the order in which attributes were shown and the attribute differences between a competing pair of alternatives. Attribute differences were subjectively classified into three types: small, medium, and large (denoted s, m, and l). Each of the 25 trials was defined by a specific configuration of attribute order and attribute differences. Attribute order was specified in terms of the rank order of importance as determined by each subject. Thus, the first five attributes of a choice trial might have had the following configuration: +16 l, +9 s, -7 m, -4 l, +1 l. The numbers represented the attribute importance ranks; the letters, the size of the attribute difference; and the ± sign indicated which of the two alternatives was superior. In the above configuration, the first attribute
to be shown would have been the subject's 16th most important attribute, with a large attribute difference favoring one alternative. The only decision that a subject could make was whether to acquire additional information, not which attribute to acquire next. The order of attributes was experimenter-controlled and programmed into the interface for each of the 25 choice pairs. To encourage a very rapid choice, subjects were typically shown their most important attributes first, with one alternative being superior on nearly all of those attributes. To facilitate long trials with many attributes acquired before the final choice, less important attributes were generally shown first, with small attribute differences and alternation between which of the two apartments was favored. Trials were constructed with the goal of covering the full range of possible stopping points as classified into five regions: 1–5 attributes (region 1), 6–10 attributes (region 2), 11–15 attributes (region 3), 16–20 attributes (region 4), and 21–25 attributes (region 5). Five trials were constructed for each of the five regions. The 25 choices were partitioned into five blocks, and each block contained one trial from each stopping region. For all subjects, the five trials from a particular region were randomly assigned, one to each of the five consecutive blocks, and within each block the order of the five trials was randomly determined.

Procedure

An experimental session began with a display of the 25 attributes and their respective ranges. Attribute ranks and weights were then elicited. First, subjects classified each attribute into one of five categories (very important, important, moderately important, slightly important, and unimportant). Then, the attributes were rank-ordered within each category. Finally, subjects reviewed the full ordering of the 25 attributes, switching any as needed, and assigned weights (from 5 to 100) to reflect the attributes' relative importance. See Jaccard, Brinberg, and Ackerman (1986) for a comparison of various weight elicitation methods.

Once the attribute weighting procedure was completed, subjects made 25 choices between pairs of apartments. For each choice, subjects acquired one piece (i.e., attribute) of information at a time, judged the change in their cumulative confidence as a result of this newly acquired information, and decided whether to seek additional information or to stop and choose. A cumulative confidence of x in favor of alternative A meant that, based on the information acquired so far, there was a (1 - x) probability that the preference would be reversed in light of all possible information.
TABLE 1
Means of Number of Attributes Acquired and Final Cumulative Confidence (Study 1)

Subject #   Number of trials   Stopping points   Cumulative confidence scores
1           21                 13.1              63.0
2           22                 10.7              85.4
3           13                 16.7              85.5
4           24                 8.7               64.2
5           19                 15.4              74.1
6           22                 13.0              71.1
7           15                 13.9              77.6
8           18                 13.4              74.7
9           17                 14.4              81.0
10          22                 14.0              84.6
11          21                 13.2              67.3
12          24                 6.5               60.6
13          19                 12.5              74.9
14          21                 12.1              79.2
15          21                 12.1              87.1
The lower boundary of this measure of cumulative confidence was 50, or a toss-up between the two alternatives, while the upper boundary was 100, which was equivalent to zero chance of a reversal of the current preference. After each attribute, subjects also had the option of reviewing all previously acquired information. They could request the values of any attribute or view a pictorial record of the cumulative confidence measure up to the current point in the decision. The duration of the study was approximately 1 h.

Results
All data from a subject's first choice were treated as practice and removed from all analyses. All trials that ended in a choice being made after the full set of 25 attributes had been acquired were also removed (59 trials over the 15 subjects). This was necessary because it could not be determined (given our procedure) whether a subject had made a genuine choice or simply had run out of additional information and picked whichever alternative was ahead at the time. Two other trials were removed because the subject mistakenly keyed in an incorrect value. This left 299 trials as our database (out of 375, i.e., 15 subjects × 25 trials per subject).

Convergence of the Stopping Thresholds

The average number of attributes acquired prior to making a choice and the average cumulative confidence score of the chosen alternatives are displayed in Table 1 for each of the 15 subjects. The median number of attributes
acquired across the 15 subjects was 13.1, while the median final cumulative confidence score was 74.9. We modeled the converging shape of the stopping threshold as an exponential decay, Y = a - e^(bX), where Y was the cumulative confidence when the choice was made and X was the number of attributes acquired prior to making the choice. A nonlinear regression analysis (DUD method; Ralston & Jennrich, 1978) was performed on each subject's data. If the estimated value of b was not significantly greater than 0, then the CDC hypothesis of a constant threshold would not be rejected. For 13 of the 15 subjects, the discrimination threshold exhibited a significant decay (b > 0, p < .05), providing general support for the concavity hypothesis. The median value of b across these 13 subjects was 0.13; the median a was 86.0.

The nonlinear estimation procedure required initial values for a and b. We chose a = 80 since the resulting y intercept (79) seemed reasonable. The initial value of b was set at 0.15 to yield an x intercept of 22.7, or 90% of the available information. Although the nonlinear estimation procedure was theoretically robust over different starting parameters, especially for a well-behaved continuous function like ours, we tested for robustness by using seven additional combinations of starting parameters: a = 80, b = 0.00; a = 80, b = 0.10; a = 80, b = 0.20; a = 80, b = 0.30; a = 80, b = 0.40; a = 60, b = 0.15; and a = 90, b = 0.15. Thus, there were 105 additional tests of b = 0 (7 parameter combinations × 15 subjects). Of these, 5 resulted in the convergence criterion not being met. Of the remaining 100, 95 yielded the same decay results as those originally obtained. All 5 cases that yielded a different result (all were reversals from a decaying to a non-decaying pattern) occurred at extreme starting values of b (four at b = 0.40 and one at b = 0.00, with a = 80 in all cases). We judged these results to conform to the theoretical claim of robustness, and we concluded that the original results were not dependent on the chosen values of the starting parameters.

As additional evidence for the convergence hypothesis, we considered the average cumulative confidence score of the 59 trials that resulted in a choice being made only after all 25 available attributes had been acquired. Recall that many of these choices were probably forced, in that subjects would have preferred additional information before committing to one apartment. The mean confidence for these trials was 60.8, which, as expected, was significantly lower than the mean (74.8) for the other 299 trials (t = 8.66; one-sided p = .00).
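The decay model can be estimated in outline with any standard nonlinear least-squares routine; the sketch below uses scipy.optimize.curve_fit as a stand-in for the DUD algorithm, with invented stopping-point and confidence data for a single hypothetical subject. Only the functional form Y = a - e^(bX) and the starting values (a = 80, b = 0.15) are taken from the text above.

```python
import numpy as np
from scipy.optimize import curve_fit

def threshold(x, a, b):
    """Hypothesized converging stopping threshold: Y = a - e^(bX)."""
    return a - np.exp(b * x)

# Hypothetical (stopping point, final confidence) pairs for one subject.
x = np.array([4, 6, 8, 10, 12, 14, 16, 18, 20, 22], dtype=float)
y = np.array([86, 85, 83, 82, 80, 78, 74, 70, 62, 55], dtype=float)

# Starting values mirror those reported in the text (a = 80, b = 0.15).
(a_hat, b_hat), cov = curve_fit(threshold, x, y, p0=(80.0, 0.15))
b_se = np.sqrt(cov[1, 1])

print(f"a = {a_hat:.1f}, b = {b_hat:.3f} (SE = {b_se:.3f})")
# A test of b > 0 against the constant-threshold (CDC) alternative of b = 0
# can be based on b_hat / b_se, as in the per-subject analyses described above.
```

A b estimate reliably greater than 0 indicates a decaying (converging) threshold; b = 0 recovers the constant threshold of the CDC model.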
The Core Attributes Heuristic

Two pilot studies had been performed using earlier versions of the software system, each employing six subjects.1 A postexperimental questionnaire asked one group to describe in writing how they decided to stop acquiring information and choose one of the apartments. Five of these six subjects claimed to have used a crude criterion that we labeled the Core Attributes (CA) heuristic. It stipulated that a subject would stop acquiring information and commit to the leading alternative once the last of his/her set of most important attributes had been acquired.

1 For completeness, we note that 9 out of these 12 pilot subjects exhibited an estimate of the decay parameter (b) significantly greater than 0.

The CA heuristic is related to the notion of determinant attributes (Myers & Alpert, 1968). These are attributes perceived to be highly influential in arriving at a choice. Two factors contribute to an attribute's determinance: its importance weight and the expected difference between the competing alternatives. All subjects who mentioned the CA heuristic as at least part of their stopping policy seemed to indicate that the inclusion of an attribute in the core set was strictly a function of its importance. This is not surprising given that our task environment offered no way for subjects to predict the size of an attribute difference. Much of the research dealing with determinant attributes has focused on developing techniques for identifying them from among a larger set of attributes (e.g., Alpert, 1971; Armacost & Hosseini, 1994). Little emphasis has been placed on investigating the process through which determinant attributes influence choice behavior. The CA heuristic offers one such mechanism, namely the termination of information acquisition once the full set of core attributes has been acquired.

A similar postexperimental questionnaire in the current study revealed that 13 of the 15 subjects included the CA heuristic in their explanation of how they decided when to stop acquiring additional information. Of the 13 subjects who mentioned the CA heuristic, only 2 claimed to have used it as the sole basis for stopping. The other 11 subjects claimed to have used the CA heuristic in conjunction with some other strategy. For example, one subject responded as follows: "For some trials, if my top 4 priorities were all in favor of one option, I would choose. As this did not occur very often, I moved down my list and made sure that the next categories which I deemed important were all in general agreement."

The Core Attributes Heuristic versus Utility Difference

Since the retrospective reports might not have corresponded closely to subjects' actual stopping behavior, we attempted to construct a test of the use of the CA
heuristic by each subject. We contrasted the stopping point predicted by the CA heuristic with the stopping point predicted by a numerically precise calculation of the cumulative weighted difference between the two alternatives. The latter corresponded roughly to the utility difference between the two apartments. Being unanticipated and post hoc, each of these calculations had to tolerate some approximation. Nonetheless, the two contrasting predictions could each be compared to the observed stopping points. Subjects for whom the CA prediction deviated less from the observed stopping points than did the cumulative (weighted) difference prediction could safely be said to have used the CA heuristic as a substantial part of their stopping policy.

The cumulative weighted prediction was derived as follows. First, small, medium, and large attribute differences were arbitrarily assigned 1, 2, and 3 points, respectively. Then, the "reciprocal" of an attribute's rank was used as the importance weight applied to the attribute difference; that is, the nth most important attribute received a weight of (26 - n). Finally, the cumulative weighted difference was computed as Σ (Ni × Di), where Ni was the weight of the ith attribute and Di was the signed magnitude of the corresponding attribute difference. For the example discussed previously (i.e., +16 l, +9 s, -7 m, -4 l, +1 l), the algorithm would yield the following score: [(26 - 16) × 3] + [(26 - 9) × 1] - [(26 - 7) × 2] - [(26 - 4) × 3] + [(26 - 1) × 3] = 18. In other words, for this particular trial, the cumulative weighted difference would have been 18 after the first five attributes had been acquired. The next step was to derive a stopping point from the cumulative weighted difference. This process was necessarily subjective, but could be informed by a review of the actual stopping points. We estimated the cumulative weighted difference required to yield stopping points in each of the five regions to be 225, 200, 175, 125, and 25, respectively.

To derive predicted stopping points for the CA heuristic, the key step was to identify the size of the set of core attributes for each subject. Only 9 of the 13 subjects who claimed in the postexperimental questionnaire to have used the CA heuristic also gave the exact size of their core set. The most frequent core size was 4 (4 subjects), followed by core sizes of 5 and 3 (2 subjects each) and of 2 (1 subject). For the remaining 4 subjects, a core size had to be estimated from their stopping points. For these subjects, we counted the number of times that choices were made corresponding to the use of a CA heuristic with a core set in the range 1 to 7. The use of the CA heuristic with a core size of m implied that a choice was made immediately after viewing the m most important attributes.
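As a minimal illustration of this scoring rule, the sketch below encodes the weights (26 - rank), the size points (s = 1, m = 2, l = 3), and the sign marking the favored apartment, and reproduces the worked example.

```python
# Size points for small, medium, and large attribute differences.
SIZE = {"s": 1, "m": 2, "l": 3}

def cumulative_weighted_difference(config):
    """Sum of (26 - rank) * size over the attributes seen, with the sign
    indicating which of the two apartments the attribute favors."""
    return sum(sign * (26 - rank) * SIZE[size] for rank, size, sign in config)

# The example configuration from the text: +16 l, +9 s, -7 m, -4 l, +1 l.
example = [(16, "l", +1), (9, "s", +1), (7, "m", -1), (4, "l", -1), (1, "l", +1)]
print(cumulative_weighted_difference(example))   # -> 18, as in the worked example
```

The sign of the total only records which apartment is currently ahead; its absolute size is what would be compared against a stopping guideline such as the region cutoffs mentioned above.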
TABLE 2
Average Deviations of the Stopping Points

Subject #   Number of trials   From the threshold model   From the CA heuristic
1           21                 2.8                        4.5
2           22                 3.4                        2.5a
3           13                 6.1                        5.6a
4           24                 7.6                        5.8a (5.5)
5           19                 4.3                        5.2
6           22                 2.4                        3.9 (3.8)
7           15                 2.0                        3.9
10          22                 2.4                        3.9
11          21                 4.4                        5.1 (6.7, 4.3)
12          24                 7.8                        5.6a,*
13          19                 2.7                        3.5
14          21                 4.3                        4.3
15          21                 2.9                        2.0a

Note. Subjects 8 and 9 are not included in this table; they did not mention the CA heuristic in the postexperimental questionnaire. The "additional" results for the three subjects that had multiple possible core sizes are reported in parentheses; note that for Subject 11, one of the eligible core sizes (core = 6) yielded a reversal from threshold to CA superiority, albeit a nonsignificant one.
a CA heuristic superior to the threshold model (i.e., smaller deviations).
* p < .05.
The reverse inference, however, from an observed stopping point to the exact core size was often ambiguous. For example, if a subject's second through fourth most important attributes had been shown and the subject was currently viewing his/her most important attribute, then it is unclear whether stopping at this point should be classified as core size = 1, 2, 3, or 4. Accordingly, the following rules were used to classify such ambiguous cases: (a) if an odd number of core sizes was possible, the median was used; (b) if an even number of core sizes was possible, the larger of the two nearest the median was used, e.g., core size = 3 in the above example. We stopped at core size = 7 only because it seemed that few subjects would have had a larger core set. To do so would have violated the reduction of effort that is the raison d'être of any heuristic.
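Rules (a) and (b) amount to taking the middle element of the sorted candidate core sizes, or the larger of the two middle elements when their number is even. The sketch below is one literal reading, applied to the example just given.

```python
def resolve_core_size(candidates):
    """Pick a single core size from the sizes compatible with an observed stop.

    Rule (a): with an odd number of candidates, take the median.
    Rule (b): with an even number, take the larger of the two middle values.
    """
    ordered = sorted(candidates)
    return ordered[len(ordered) // 2]   # middle element, or larger of the two middle

# A stop while viewing the most important attribute, after attributes ranked
# 2-4 had already been shown, is compatible with core sizes 1, 2, 3, or 4;
# the rule classifies it as core size 3, as in the text.
print(resolve_core_size([1, 2, 3, 4]))   # -> 3
```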
For each of the four subjects who did not explicitly state a core size, the value with the largest number of compatible choices was taken as that subject's inferred core size. One subject was assigned a core size of 5. The three other subjects had ties between two or more core sizes as their most "common" one: a tie between core = 1 and 2 (Subject 4); another tie between core = 4 and 5 (Subject 6); and a third tie between core = 2, 6, and 7 (Subject 11). Given that it was impossible to unequivocally identify a core size for the latter three subjects, and in order to ensure the robustness of the reported results, all of the analyses discussed below were performed using each of these subjects' possible core sizes. There were no statistically significant differences in the results, irrespective of which of a subject's possible core sizes was used. For the sake of clarity, the results corresponding to the largest of these will be reported.

We could now compare the actual stopping points with those predicted by the CA heuristic and the cumulative weighted difference model. The average absolute deviations of each subject's observed stopping points from those derived from the two competing guidelines are shown in Table 2. Of the 13 subjects, 5 had actual stopping points closer to those predicted by the CA heuristic. We took this to be evidence of substantial use of this guideline for stopping the information search process. To test the statistical significance of the superiority of the CA heuristic for these five subjects, paired t tests were performed. Only the data of Subject 12 were significant (p < .05). The data of Subjects 4 and 15 approached marginal significance (p < .14 and p < .11, respectively).

Discussion
The results for the CA heuristic must be interpreted in the particular context of our task environment, which was exceptionally hostile to this heuristic. First, individuals wishing to use the CA heuristic would typically acquire their attributes in decreasing order of importance. In our study, however, subjects had no control over the acquisition order. This might have dissuaded some subjects from using the heuristic, given that they could only wait passively for attributes in their core set to appear. Second, the 25 trials were designed to stretch stopping points over the entire range of number of attributes, from 1 to 25. This included some trials where a large cumulative weighted difference had been achieved before the last attribute in the core was reached and others where all core attributes had been acquired but the cumulative weighted difference was still quite small. In other words, spreading out the stopping points meant a contrived and unusual distribution of trials. In sum, both denying subjects control of the attribute to be acquired next and creating an unusual set of stopping points should have inhibited the use of the CA heuristic. Study 2 was designed to change this situation.

STUDY 2
In the first study, our initial goal was to test for the convergence of the stopping thresholds. Unexpectedly,
we discovered the CA heuristic as a stopping policy and devised a post hoc test to assess the extent of its use. The main purpose of Study 2 was a more rigorous determination of the prevalence of the CA heuristic. As a consequence, the experimental task was altered to enable subjects to choose the order in which they acquired the attributes. Thus, subjects could seek information according to its importance. This new task environment was both more realistic and more conducive to the application of the CA heuristic. In addition, Study 2 enabled a test of the convergence hypothesis in a second and, we believe, more realistic environment.

Method
Task

The experimental task differed from that of Study 1 in three respects. First, subjects now had control over the informational flow; that is, they could request information by attribute name. Second, because the subjects in Study 1 deemed 25 binary choices taxing, this number was reduced to 15 choices between pairs of competing apartments to be rented for one year. Third, subjects were requested to appear for three sessions. They made 15 binary choices in each of the first two sessions; these are the data that comprise Study 2. The third session was used for a different purpose irrelevant to the present work.

Apparatus

The computerized interface was revised to enable subjects to choose the order of attribute acquisitions. No other changes were made. For a complete description of the interface, the reader is referred to Saad (1996).

Subjects

Subjects were 12 undergraduate students (9 males). They were paid $17 for participating in the three sessions (but recall that only data from the first two were used). One subject was subsequently dropped from the experiment because she did not return for the second session.

Stimuli

In Study 1, the construction of a single pairwise choice required specifying both the respective attribute differences between the two competing alternatives and the order in which the attributes were presented. Because Study 2 gave subjects control over the order of attributes, a two-alternative stimulus was specified solely by the magnitude of the difference on each of the 25 apartment attributes.
The 15 choice pairs in Sessions 1 and 2 were constructed to achieve the following goals. The purpose of the first session was to determine whether a subject was a CA user and, if so, to identify the size of his/her core set as closely as possible. The second session investigated whether CA users adapted the application of this heuristic to "extreme" cases. The latter were defined as (a) trials in which a sufficiently large cumulative discrimination was achieved prior to the acquisition of the full core set (i.e., "early" trials in which the individual might make a choice before acquiring the full core set) and (b) trials in which the full core set of attributes yielded relatively little discrimination between the alternatives (i.e., "late" trials in which the subject might acquire additional attributes beyond the core set because the full core set had provided insufficient advantage for one alternative over the other).

In Study 1, 13 of the 15 subjects mentioned the CA heuristic in their retrospective verbal reports, and 11 of these 13 stated (or we inferred) a core size in the 3 to 5 range. As such, the choice trials for Study 2 were constructed assuming this range. Specifically, we manipulated the magnitude of the attribute differences to construct stimulus configurations designed for CA users with core sizes between 3 and 5. For example, one typical (core = 5) stimulus configuration was +s, -m, +l, +l, and +l. Thus, if the first and second pieces of information requested were a subject's second and fourth most important attributes, respectively, the corresponding attribute differences would be -m and +l. It was expected that the cumulative impact of these five attribute differences would provide sufficient discrimination between the two alternatives to enable stopping and choosing one; that is, neither an "early" nor a "late" stop would be likely to occur.

The 15 choices in Session 1 consisted of 1 practice trial, 4 trials each with core sizes 3, 4, and 5, and 2 "late" trials that served as decoys. The 12 experimental trials were presented in four blocks, with one trial from each of the three core sizes in each block. The order within each block was counterbalanced. The two "late" decoys occurred in positions 2 and 12. In Session 1, all subjects were presented with the same stimulus material. Session 2 followed a similar stimulus structure but with a greater presence of "early" and "late" trials, in keeping with our purpose of testing for adaptation of the CA heuristic to "extreme" situations. There was 1 practice trial, 4 "early" trials, 4 "late" trials, and 6 trials that fit a CA set (i.e., for which neither an "early" nor a "late" stop would be likely to occur). Two of the 6 regular trials were repeated from Session 1. Trials were again blocked, with each of the four blocks containing an "early," a "late," and a regular trial, with order counterbalanced
TABLE 3
Means of Number of Attributes Acquired and Final Cumulative Confidence (Study 2)

Subject #   Number of trials   Stopping points   Cumulative confidence scores
1           28                 6.1               74.4
2           28                 5.5               78.6
3           28                 7.8               80.6
4           28                 4.4               78.6
5           28                 6.0               68.1
6           28                 10.5              81.1
7           28                 9.4               71.2
8           28                 6.6               70.2
9           28                 5.7               69.5
10          27                 5.9               80.8
11          28                 6.9               90.1
within the blocks. The two regular repeated trials were placed in positions 5 and 15. Three different versions of the 12 experimental choices were created, corresponding to CA users with core sizes of 3, 4, or 5. Based on the data of Session 1, subjects were classified into one of these three core size categories. For 9 of the 11 subjects in Study 2, direct examination of the mean number of attributes acquired in Session 1 provided a clear estimate of the core size. Contrary to the expectations derived from the set sizes in Study 1, all of these 9 subjects had mean stopping points greater than 5. Consequently, they were presented with the stimulus set based on a core set size of 5. The data of Subjects 2 and 4 required a closer inspection. By analyzing several indices, including the subject's stated core size in the postexperimental protocols and the mean and standard deviation of the stopping points, these two subjects were assigned core sizes of 4 and 3, respectively.

Procedure

In each of the two sessions, the same procedure as in Study 1 was used, with the sole differences being subjects' choice of which attribute to acquire next and 15 instead of 25 binary choices. Postexperimental verbal reports were collected following the first session only.

Results
As in Study 1, data from a subject's first choice were removed from all analyses. In addition, all trials that ended in a choice being made after the full set of 25 attributes had been acquired were removed. This occurred only once, for Subject 10, in 308 trials (14 trials per session × 2 sessions × 11 subjects). In the analyses that follow, the data from Sessions 1 and 2 were combined. Table 3 displays the average number of attributes
that were acquired prior to making a choice and the average cumulative confidence scores of the chosen alternatives for each of the 11 subjects. The median number of attributes acquired across the 11 subjects was 6.1, while the median final cumulative confidence score was 78.6. In comparison to the medians obtained in Study 1, fewer pieces of information were acquired in Study 2 with a higher achieved cumulative confidence, a pattern in accordance with the convergence hypothesis. Note that the substantially fewer pieces of information acquired in Study 2 were compatible with a CA-based process in which subjects acquire their most important attributes at the onset of the process rather than having to wait for the desired attributes to appear, as was the case in Study 1.

Convergence of the Stopping Thresholds

The convergence hypothesis was tested using the same nonlinear regression as in Study 1, namely Y = a - e^(bX), where Y is the cumulative confidence when the choice is made and X is the total number of attributes acquired. For 8 of the 11 subjects, the discrimination threshold exhibited a significant convergence (b > 0, p < .05), replicating the support for the convergence hypothesis within this new and more realistic task environment.2 The median values of a and b across these 8 subjects were 84.0 and 0.26, respectively. Note that while the median a estimate was very close to the 86.0 of Study 1, the median b estimate was much larger than that of Study 1 (0.13). This reflected the expected increase in the convergence rate when the attribute acquisition order was determined by the subject.

Postexperimental Protocols

The postexperimental questionnaire administered after the first session revealed that 10 of the 11 subjects included the CA heuristic in their explanation of how they decided when to stop acquiring additional information. A core size, or at least a range, was explicitly stated on 9 occasions. Based on these verbal statements, subjects were classified into one of three categories: (a) PURE CA, if the stopping policy was explained solely in terms of the CA heuristic; (b) MIXED CA, if both the CA heuristic and some other stopping strategy were mentioned; and (c) NO CA, if there was no mention of the CA heuristic.
2 The nonlinear regression was performed on the data from Sessions 1 and 2 separately. In each case, 9 out of 11 subjects exhibited significant decays, confirming the validity of combining the data from the two sessions.
Examples of each were: (a) PURE CA, "After I got the location—distance to school, rent, distance to laundromat and sunlight information, I had pretty much decided on an apartment"; (b) MIXED CA, "Compulsory: Rent, Neighborhood Safety, Distance to School. Sometimes: Laundromat, Distance to Grocery Store. Almost Never: [the rest]" (this subject used what we termed a cascaded CA strategy); (c) NO CA, "A choice was made when: 1) after at least 3 comparisons, one apartment was favored by approximately 90% or more; 2) after the 'very important' categories had been examined, lesser categories were examined until one apartment appeared to be consistently better" (a more or less pure example of a constant threshold criterion for stopping).

Table 4 displays the CA classification and stated core size (where applicable) of each of the 11 subjects. Overall, 5 subjects were classified as PURE CA users and 5 as MIXED CA users, with only 1 NO CA user.
TABLE 5
"Window" Analyses from Sessions 1 and 2 Combined

Subject #   Window        "Window score" (% hits)   Classification   Confidence SD in window
1           5–7 (4)       93                        PURE             14.0
2           4–6 (4)       82                        PURE             14.5
3           4–6 (none)    43                        NO CA            NO CA
4           3–5 (4)       89                        MIXED            14.1
5           5–7 (5)       79                        MIXED            15.7
6           11–12 (11)    89                        PURE             14.8
7           7–9 (none)    61                        MIXED            10.5
8           5–7 (5)       82                        MIXED            9.1
9           4–6 (6)       86                        PURE             12.8
10          4–6 (6)       70                        PURE             14.3
11          2–4 (4)       39                        MIXED            3.3
Mean                      74

Note. Numbers in parentheses represent the subjects' stated core sizes, as per their protocols (see Footnote 5).
The CA Heuristic

To determine the accord between what subjects claimed they did and what they actually did, the observed stopping points were analyzed across the two sessions. For each subject's combined data (28 choices), we looked for the "window," i.e., the range of three consecutive stopping points that contained the greatest number of observed stops. If the subject used the CA heuristic to any substantial extent, this "CA window" should have captured the subject's actual core size. Recall that even PURE CA users were expected to adapt their stopping policy to "early" and "late" configurations (i.e., to stop short of or beyond their ideal core).
TABLE 4
Evidence of CA Heuristic Based on Postexperimental Protocols

Subject #   Classification                                               Stated core size
1           CA only (PURE)                                               C = 4
2           CA only (PURE)                                               C = 4
3           Threshold model (No CA)                                      N.A.
4           Cascaded CA with possibly compensatory rule as a modifier    C = 3 or 5
5           Cascaded CA with conjunctive                                 C = 2 then 5 then 7
6           CA only (PURE)                                               C = 10 or 11
7           CA modified by other                                         Not provided
8           CA modified by other                                         C = 4 or 5
9           CA only (PURE)                                               C = 6
10          CA only (PURE)                                               C = 6
11          Cascaded CA with other                                       C = 1 then 4 then 6
Similarly, the MIXED CA users, including subjects with a cascaded CA strategy, were not expected to stop at the same core size, but might have been bracketed within a narrow range of, say, 3 stopping points. Clearly, the more rigorously the CA heuristic was applied, the greater the number of stops that occurred in the "window." Table 5 exhibits the percentage of trials that resulted in stops within the "window" for each subject. The mean "window" score was 74 (i.e., 74% of all trials yielded stopping points within the "windows"), with the minimum and maximum scores being 39 and 93, respectively. As might be expected, the mean score of the five PURE CA users (84) was higher than that of the five MIXED CA users (70), while the NO CA subject exhibited the second lowest score (43). The difference between PURE and MIXED CA users, though in the predicted direction, was not statistically reliable (t = 1.40; one-sided p = .11).

A second test of the actual application of the CA heuristic to stopping was derived from these data. Consider the mean standard deviation of the cumulative confidence scores for those trials that resulted in stopping points within the "window." Large standard deviations of the cumulative confidence scores implied that the threshold model could not fully explain the stopping policy being used. Although some variation was expected (that was the point of a converging versus a constant threshold), use of the CA heuristic should have substantially increased this variance. Thus, we took larger standard deviations to indicate greater use of the CA heuristic. Such a pattern was observed in Table 5. The mean standard deviations of the cumulative confidences for PURE and MIXED CA users were, respectively, 14.08 and 10.54 (t = 1.61; one-tailed p < .10). This difference was in the predicted direction, but its statistical reliability was marginal.
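The "window" statistic can be computed with a simple scan. The sketch below is one way to locate the range of three consecutive stopping points containing the most stops and its hit rate; the 28 stopping points are invented for illustration.

```python
from collections import Counter

def ca_window(stopping_points, width=3):
    """Find the width-long range of consecutive stopping points with the most stops.

    Returns (window_start, window_end, hit_rate), where hit_rate is the
    proportion of trials whose stop fell inside the window.
    """
    counts = Counter(stopping_points)
    best_start, best_hits = 1, -1
    for start in range(1, max(stopping_points) + 1):
        hits = sum(counts[s] for s in range(start, start + width))
        if hits > best_hits:
            best_start, best_hits = start, hits
    return best_start, best_start + width - 1, best_hits / len(stopping_points)

# Hypothetical 28 stopping points for one subject (Sessions 1 and 2 combined).
stops = [5, 6, 5, 7, 6, 5, 6, 4, 6, 5, 7, 6, 5, 6,
         9, 10, 5, 6, 7, 3, 6, 5, 12, 6, 5, 7, 6, 5]
print(ca_window(stops))   # -> (5, 7, 0.82...): a 5-7 window holding 23 of 28 stops
```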
Of the nine subjects who explicitly mentioned a core size in their protocols, eight had an overlap between the realized "window" and the stated core size. The one time an overlap did not occur, it was by the smallest possible miss (Subject 1). This result demonstrated the accord between subjects' stated and realized behavior.

Adaptive Use of the CA Heuristic

To summarize our results regarding use of the CA heuristic, the verbal reports and stopping points both indicated a substantial presence of this heuristic. However, if the majority of subjects were using the CA heuristic as an integral part of their stopping policy, why did their data exhibit a cone-shaped threshold? Recall that the stimuli of Session 2 introduced "extreme" trials, namely choices where "early" or "late" stops would likely occur. If CA users adjusted their use of the heuristic as a function of the cumulative discrimination achieved, then the pattern of results would be fully explained (i.e., a strong presence of the CA heuristic AND support for the convergence hypothesis). To test for such adaptations, the mean cumulative confidence achieved was calculated for three sets of trials: "early," "window," and "late" trials. Note that for "early" and "window" trials, the final cumulative confidence achieved was the relevant measure; for "late" trials, however, the cumulative confidence achieved at the end of the "window" (and not the final cumulative confidence achieved) is the key measure. We predicted a monotonically decreasing relationship of the cumulative confidence scores across the three sets of trials. In other words, "early" trials should have yielded the largest cumulative confidence scores, while "late" trials should have yielded the smallest. Across the 11 subjects the means were: "early" trials, 82.9 (n = 10); "window" trials, 77.3 (n = 227); and "late" trials, 71.3 (n = 70). Interestingly, the mean final cumulative confidence achieved for "late" trials (i.e., once a choice was made) was 73.7 (n = 70), only slightly larger than that achieved at the end of the "window" for the same "late" trials (73.7 vs 71.3). In sum, the data supported adaptation of the CA heuristic to "extreme" trials.3
3 Note that "early" and "late" trials corresponded to 3 and 23% of all trials, respectively. Recall that the stimuli of Session 2 consisted of four "early," four "late," and six regular trials. Thus, while the observed number of "late" trials was roughly that expected, there were many fewer "early" trials than expected. It would appear that CA users perceived "early" stops as too risky (from a cost-benefit perspective) and as such were often willing to adapt "late" but were reticent to adapt "early."
FIG. 2. Three methods for predicting the threshold model’s stopping points for violating trials.
The adaptive and cascaded nature of the CA heuristic is reminiscent of a heuristic first identified nearly 30 years ago. The SemiOrder Lexicographic Rule (Tversky, 1969) postulates that an individual will choose the alternative that is superior on the most important attribute by an amount greater than some just-noticeable difference. Clearly, the latter decision rule is a special case of the cascaded core heuristic, with core size = 1.

CA Heuristic versus Threshold Model

Thus far, the findings lend support to both the CA heuristic and the threshold model. It appeared that most subjects used a combination of both stopping strategies. Specifically, most subjects began with the intention of using the CA heuristic but adapted in a manner consistent with the threshold model. We attempted to determine which of the two stopping strategies had the greater influence in predicting subjects' stopping points. To do this, we identified the stopping points predicted by each of the two strategies and determined which prediction better fit the observed stopping points. From the previously estimated threshold function, Y = a - e^(bX), where Y and X corresponded to the observed confidence scores and stopping points when choices were made, the (observed) stopping confidence level was used to predict the stopping point.4 This required solving the equation X = [ln(a - Y)]/b.

4 Given that stopping points should only assume integer values, the estimated stopping points were rounded off. On one trial, Subject 7 had a predicted stopping point of 0 (following rounding). Given that one must acquire at least one attribute prior to making a choice, the predicted stopping point was changed to one. On a few occasions, the rounded-off predicted stopping point corresponded to a Y value smaller than 50. Given that the realized Y values were confined to the closed interval 50 to 100, the next smallest X yielding a Y value equal to or greater than 50 was chosen.
Given that the ln function is strictly positive only for arguments greater than 1, this implied that the estimated a parameter had to be greater than the observed Y + 1. Trials where the latter condition did not hold were labeled violating trials. For violating trials, an alternate method was needed to estimate the threshold model's predictions of stopping points. We devised three separate procedures to address the estimation problem raised by the violating trials. The analysis pitting the threshold model against the CA heuristic was performed using all three predictions, providing a test of the robustness of the results. For each violating trial, the three procedures were: (1) identifying the first time that the observed cumulative confidence crossed the estimated threshold and using the x value of that point as the threshold model's prediction (see X1 in Fig. 2); (2) identifying the last instance that the observed cumulative confidence crossed the estimated threshold and using that x value as the threshold model's prediction (see X2 in Fig. 2); and (3) using the final x value of the violating trial as the threshold prediction, in other words, "dropping down" from the violating point onto the threshold (see X3 in Fig. 2). This last adjustment was clearly biased toward the threshold model in that, for each violating trial, the residual between the threshold prediction and the realized X was zero.

To obtain predicted stopping points from the Core Attributes heuristic, we used the stated core size in the postexperimental verbal reports.5 Both sets of predictions were compared to the observed stopping points by taking the absolute difference. Finally, to determine which absolute differences were smaller, and therefore which stopping strategy (the threshold model or the CA heuristic) was dominant, a matched t test was performed on the two sets of residuals. To reiterate, we computed the matched t test between the threshold and CA predictions three times, corresponding to the three different procedures used to estimate the threshold model's stopping-point predictions for violating trials.6

5 For Subjects 3 and 7, who did not state a core size, we used the mode in their CA "window" as the predicted core stopping point. Several subjects stated two or more core sizes in their protocols. For example, Subjects 5 and 11 had cascaded cores with an odd number of stated core sizes; in those instances, we chose the median core as a conservative estimate. Subject 4 stated two possible core sizes, and once again we chose the median between those two values as the estimate. Finally, two subjects (6 and 8) each stated two core sizes, which were consecutive integers. In each of the latter two cases, we simply chose the larger of the two as the predicted core stopping point.

6 Both Subjects 1 and 10, who exhibited non-decaying thresholds, nonetheless had positive b estimates. In other words, while they did not display significant decays, they nonetheless did have concave thresholds. On the other hand, Subject 5, who was the third of the subjects that did not have a decaying threshold, yielded a negative b estimate. In other words, this particular subject had a horizontal threshold, making it impossible to estimate a stopping point given a realized confidence value. Given the latter estimation problem, Subject 5 was dropped from this analysis.
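A minimal sketch of the threshold model's stopping-point prediction is given below. It combines the inversion X = ln(a - Y)/b with the rounding and clamping rules of Footnote 4 and, for violating trials, Procedure 1 (the first crossing of the estimated threshold). The parameter values and the confidence trace are invented, and the routine is one reading of the procedure rather than the authors' code.

```python
import math

def predict_stop_from_threshold(a, b, y_observed, trace):
    """Predict a stopping point from the fitted threshold Y = a - e^(bX).

    a, b:        fitted threshold parameters for one subject.
    y_observed:  cumulative confidence at the observed stop (50-100 scale).
    trace:       cumulative confidence after each attribute acquired, used as
                 the fallback for violating trials (Procedure 1).
    """
    if a > y_observed + 1:                      # ln(a - Y) defined and positive
        x = max(round(math.log(a - y_observed) / b), 1)   # at least one attribute
        # Keep the implied Y on the 50-100 confidence scale (Footnote 4).
        while a - math.exp(b * x) < 50 and x > 1:
            x -= 1
        return x
    # Violating trial, Procedure 1: first attribute at which the observed
    # cumulative confidence crosses the estimated threshold.
    for n, conf in enumerate(trace, start=1):
        if conf >= a - math.exp(b * n):
            return n
    return len(trace)                           # never crossed: use the final point

# Illustrative values only (not estimates from the studies).
trace = [55, 58, 63, 66, 72, 76, 81]
print(predict_stop_from_threshold(a=84.0, b=0.26, y_observed=76.0, trace=trace))  # -> 8
```

The CA prediction, by contrast, is simply the subject's stated or inferred core size, so the two sets of absolute residuals can be compared directly with a matched t test, as described above.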
FIG. 3. The estimated threshold for Subject 3 (sole NO CA subject in Study 2).
The same pattern of findings was observed across the three procedures, namely that the CA heuristic was overwhelmingly superior in predicting the realized stopping points. As such, and for the sake of clarity, only the matched t test results of the first procedure will be presented in detail. Six subjects exhibited significant CA superiority (p < .05). Of the remaining four subjects who did not display any significant superiority, three favored the CA heuristic. The only subject who displayed threshold superiority (albeit nonsignificantly) was the sole NO CA user (see Fig. 3 for a plot of this subject's data and estimated threshold). Of the six subjects who displayed significant CA superiority, four were PURE and two were MIXED CA users.7

7 Procedure 2 yielded five subjects with significant and one with marginal CA superiority; Procedure 3 resulted in three subjects having significant CA superiority and one subject with marginal threshold superiority. Thus, even in Procedure 3, which was biased toward the threshold model, the CA heuristic was superior in predicting the realized stopping points.

Comparison of Studies 1 and 2

The results of the two studies yielded a consistent and coherent picture. The majority of subjects (21 of 26 across the two studies) exhibited a concave threshold, providing strong support for the convergence hypothesis. Furthermore, 23 of 26 subjects mentioned the CA heuristic in their postexperimental verbal reports. The key difference in results between the two studies arose when the CA heuristic was pitted against the threshold model. Whereas in Study 1 only 1 subject displayed significant CA superiority, a substantially greater proportion of subjects (6 of 10) did so in Study 2. The latter difference was due to the task environment.
In Study 1, subjects could not request the specific information that they wished to acquire, making it difficult to apply the CA heuristic. In Study 2, however, control of the information flow was relinquished to the subjects, permitting implementation of the CA heuristic.

Discussion
Our original goal was to test for an adaptive (i.e., converging rather than constant) stopping criterion in a sequential search task. The bulk of the evidence clearly supports such adaptation. At the start of a decision, subjects might begin by establishing a constant threshold that they hope to reach prior to committing to a choice. However, in light of an accumulation of acquisition costs and/or poor progress (both achieved and projected) in discriminating between the competing alternatives, they adapt their behavior by reducing the threshold necessary to reach a decision. The concavity of the stopping thresholds accords with the adaptive view of decision making postulated by Payne, Bettman, and Johnson (1988, 1993). It also fits within a general cost-benefit framework in that the decision maker is making a trade-off between the potential benefits of a "correct" choice (had all the available information been acquired) and the costs of acquiring additional information.

Having demonstrated that adaptive thresholds provide a better account of stopping points than constant thresholds, we discovered the CA heuristic, which is an alternative to the use of any form of threshold. Subjects not only use costs (search) and benefits (progress in discrimination) to modify a constant threshold, but they also use a heuristic that simplifies the amount of calculation required to apply any threshold-based stopping policy. Interestingly, determining one's core set size can itself be viewed as a cost-benefit trade-off: the larger the core set, the greater the potential benefits, with a corresponding increase in acquisition and processing costs. A deeper understanding of stopping policies may require a cost-benefit analysis that is more inclusive in two ways. First, search costs must include calculation effort as well as pure acquisition costs. Second, the available stopping policies should include various calculational shortcuts as well as the more or less precise computations required by threshold models.

Several avenues for future research appear fruitful. One is to identify and subsequently investigate the situational and individual factors that are likely to affect behavior in a sequential decision. One such individual variable is expertise in a product category, which is likely to influence whether the CA heuristic is applied. Novice consumers should be less likely to use the CA heuristic because
The potential for future research appears fruitful. One possible avenue is to identify, and subsequently investigate, the situational and individual factors likely to affect behavior in a sequential decision. One such individual variable is expertise in a product category, which is likely to influence whether the CA heuristic is applied. Novice consumers should be less likely to use the CA heuristic because they are less able to identify the key attributes. Important situational variables might include time pressure and mood at the time of the decision. For example, whether an individual is dysphoric is likely to affect the extent to which the thresholds converge. Conway and Giannopoulos (1993) found that dysphoric subjects used fewer pieces of information when evaluating multiattribute objects. In the current context, one might argue that dysphoria would increase the convergence rate of the thresholds, resulting in less attribute information being acquired. The importance of a decision is yet another situational variable that might affect the shape of the discrimination thresholds. One might expect that the more consequential a decision, the lower the convergence rate of the thresholds.

In keeping with the cost/benefit framework, another area of future work is the extent to which individuals adapt their use of the CA heuristic in light of changes in the acquisition costs of core attributes. Do subjects reduce the size of their core sets when facing substantial acquisition costs, or do they change the attributes that comprise their core sets? Using an 8 × 12 informational display board, Gilliland, Schmitt, and Wood (1993) found that a systematic manipulation of costs and benefits yielded behavioral differences in the depth, variability, and latency of search. Hence, it might be fruitful to implement a monetary reward paradigm in the current context as a means of investigating whether corresponding behavioral differences arise.

REFERENCES

Albert, D., Aschenbrenner, K. M., & Schmalhofer, F. (1989). Cognitive choice processes and the attitude-behavior relation. In A. Upmeyer (Ed.), Attitudes and behavioral decisions (pp. 61–99). New York: Springer.
Alpert, M. I. (1971). Identification of determinant attributes: A comparison of methods. Journal of Marketing Research, 8, 184–191.
Armacost, R. L., & Hosseini, J. C. (1994). Identification of determinant attributes using the analytic hierarchy process. Journal of the Academy of Marketing Science, 22, 383–392.
Aschenbrenner, K. M., Albert, D., & Schmalhofer, F. (1984). Stochastic choice heuristics. Acta Psychologica, 56, 153–166.
Bettman, J. R., Johnson, E. J., & Payne, J. W. (1990). A componential analysis of cognitive effort in choice. Organizational Behavior and Human Decision Processes, 45, 111–139.
Bockenholt, U., Albert, D., Aschenbrenner, M., & Schmalhofer, F. (1991). The effects of attractiveness, dominance, and attribute differences on information acquisition in multiattribute binary choice. Organizational Behavior and Human Decision Processes, 49, 258–281.
Brucks, M. (1988). SearchMonitor: An approach for computer-controlled experiments involving consumer information search. Journal of Consumer Research, 15, 117–121.
Busemeyer, J. R. (1982). Choice behavior in a sequential decision making task. Organizational Behavior and Human Performance, 29, 175–207.
Busemeyer, J. R. (1985). Decision making under uncertainty: A comparison of simple scalability, fixed-sample, and sequential-sampling models. Journal of Experimental Psychology: Learning, Memory, and Cognition, 11, 538–564.
Busemeyer, J. R., & Rapoport, A. (1988). Psychological models of deferred decision making. Journal of Mathematical Psychology, 32, 91–134.
Busemeyer, J. R., & Townsend, J. T. (1993). Decision field theory: A dynamic-cognitive approach to decision making in an uncertain environment. Psychological Review, 100, 432–459.
Conway, M., & Giannopoulos, C. (1993). Dysphoria and decision making: Limited information use for evaluations of multiattribute targets. Journal of Personality and Social Psychology, 64, 613–623.
Dahlstrand, U., & Montgomery, H. (1984). Information search and evaluative processes in decision making: A computer-based process tracing study. Acta Psychologica, 56, 113–123.
Diederich, A. (1995). A dynamic model for multi-attribute decision problems. In J.-P. Caverni, M. Bar-Hillel, F. H. Barron, & H. Jungermann (Eds.), Contributions to decision making-I (pp. 175–191). Elsevier Science B.V.
Festinger, L. (1964). Conflict, decision and dissonance. Stanford, CA: Stanford University Press.
Ford, J. K., Schmitt, N., Schechtman, S. L., Hults, B. M., & Doherty, M. L. (1989). Process tracing methods: Contributions, problems, and neglected research questions. Organizational Behavior and Human Decision Processes, 43, 75–117.
Gilliland, S. W., Schmitt, N., & Wood, L. (1993). Cost-benefit determinants of decision process and accuracy. Organizational Behavior and Human Decision Processes, 56, 308–330.
Hagerty, M. R., & Aaker, D. A. (1984). A normative model of consumer information processing. Marketing Science, 3, 227–246.
Hutchinson, J. W., & Meyer, R. J. (1994). Dynamic decision making: Optimal policies and actual behavior in sequential choice problems. Marketing Letters, 5(4), 369–382.
Jaccard, J., Brinberg, D., & Ackerman, L. J. (1986). Assessing attribute importance: A comparison of six methods. Journal of Consumer Research, 12, 463–468.
Jacoby, J., Jaccard, J. J., Currim, I., Kuss, A., Ansari, A., & Troutman, T. (1994). Tracing the impact of item-by-item information accessing on uncertainty reduction. Journal of Consumer Research, 21, 291–303.
Link, S. W. (1975). The relative judgment theory of two choice response time. Journal of Mathematical Psychology, 12, 114–135.
Manning, R., & Morgan, P. B. (1982). Search and consumer theory. Review of Economic Studies, XLIX, 203–216.
Meyer, R. J. (1982). A descriptive model of consumer information search behavior. Marketing Science, 1, 93–121.
Myers, J. H., & Alpert, M. I. (1968). Determinant buying attitudes: Meaning and measurement. Journal of Marketing, 32, 13–20.
Payne, J. (1982). Contingent decision behavior: A review and discussion of issues. Psychological Bulletin, 92, 382–402.
Payne, J. W., Bettman, J. R., & Johnson, E. J. (1988). Adaptive strategy selection in decision making. Journal of Experimental Psychology: Learning, Memory, and Cognition, 14, 534–552.
Payne, J. W., Bettman, J. R., & Johnson, E. J. (1993). The adaptive decision maker. Cambridge: Cambridge Univ. Press.
Petrusic, W. M., & Jamieson, D. G. (1978). Relation between probability of preferential choice and time to choose changes with practice. Journal of Experimental Psychology: Human Perception and Performance, 4, 471–482.
Ralston, M. L., & Jennrich, R. I. (1978). DUD, a derivative-free algorithm for nonlinear least squares. Technometrics, 20, 7–14.
Rapoport, A., & Burkheimer, G. J. (1971). Models of deferred decision making. Journal of Mathematical Psychology, 8, 508–538.
Ratchford, B. T. (1982). Cost-benefit models for explaining consumer choice and information seeking behavior. Management Science, 28, 197–212.
Saad, G. (1996). SMAC: An interface for investigating sequential multiattribute choices. Behavior Research Methods, Instruments, & Computers, 28(2), 259–264.
Schmalhofer, F., Albert, D., Aschenbrenner, K. M., & Gertzen, H. (1986). Process traces of binary choices: Evidence for selective and adaptive decision heuristics. The Quarterly Journal of Experimental Psychology, 38A, 59–76.
Stiegler, G. J. (1961). The economics of information. Journal of Political Economy, 69, 213–225.
Svenson, O. (1992). Differentiation and consolidation theory of human decision making: A frame of reference for the study of pre- and post-decision processes. Acta Psychologica, 80, 143–168.
Swensson, R. G., & Thomas, R. E. (1974). Fixed and optional stopping models for two-choice discrimination times. Journal of Mathematical Psychology, 11, 213–236.
Swets, J. A. (Ed.). (1964). Signal detection and recognition by human observers: Contemporary readings. New York: Wiley.
Tversky, A. (1969). Intransitivity of preferences. Psychological Review, 76, 31–48.
Van Wallendael, L. R., & Guignard, Y. (1992). Diagnosticity, confidence, and the need for information. Journal of Behavioral Decision Making, 5, 25–37.
Wald, A. (1947). Sequential analysis. New York: Wiley.
Wallsten, T. S., & Barton, C. (1982). Processing probabilistic multidimensional information for decisions. Journal of Experimental Psychology: Learning, Memory, and Cognition, 8, 361–384.
Received May 18, 1995