Decision Support Systems 44 (2007) 79 – 92 www.elsevier.com/locate/dss
Marketing decision support system openness: A means of improving managers' understanding of marketing phenomena Nathalie T.M. Demoulin ⁎ IESEG-School of Management, Catholic University of Lille, 3, rue de la Digue, 59800 Lille, France Received 27 January 2006; received in revised form 30 October 2006; accepted 8 March 2007 Available online 15 March 2007
Abstract Previous research has shown that managers offered the opportunity to use MDSS perform better but are not more confident about their decisions. The performance increase seems to result from a reliance effect rather than from a better understanding of the decision problem. By conducting a laboratory experiment in a marketing environment with experienced and inexperienced subjects, we find that enhancing MDSS openness decreases the reliance effect but has no impact on decision-makers' evaluation of their decisions. © 2007 Elsevier B.V. All rights reserved. Keywords: Decision making; Decision Support Systems; Laboratory experiment; Marketing; Black-box systems
⁎ Tel.: +33 3 2054 58 92; fax: +33 3 20 54 47 86. E-mail address: [email protected].
0167-9236/$ - see front matter © 2007 Elsevier B.V. All rights reserved. doi:10.1016/j.dss.2007.03.002

1. Introduction

Great hopes have been placed in the development of Marketing Decision Support Systems (MDSS). They are specially designed to let marketing managers benefit from advances in the marketing theories that underlie models. Once marketing models are built, MDSS make them easy to operate. For example, Lilien and Rangaswamy [19] present a set of marketing models embedded in software. Yet, in 1970, Little stated that “marketing managers in companies are not so eager to use marketing models” [[20], B-466]. Many authors still agree with this assertion [3,25,29]. Several studies have examined the effectiveness of MDSS models. In a field experiment, Fudge and Lodish [13] observed that salespersons using CALLPLAN
outperform their counterparts who did not use it. In contrast, Chakravarti, Mitchell, and Staelin [8] showed that ADBUDG, Little's original decision calculus model, does not help users make better decisions and even leads to decisions worse than those based on intuition. However, McIntyre [21] shows that using CALLPLAN in an experimental setting improves decision-makers' performance, at least for problems involving constrained budget allocations in simple and stationary environments. In a marketing strategy game, Van Bruggen, Smidts, and Wierenga's [30] findings reveal that the availability and the quality of the MDSS improve decision-makers' performance with no negative effect on user confidence, whatever the level of time pressure. Van Bruggen, Smidts, and Wierenga [31] show that decision-makers using MDSS are better able to set the values of decision variables to increase performance. Yet, Barr and Sharda [5] propose two potential explanations of the performance improvement: a
80
N.T.M. Demoulin / Decision Support Systems 44 (2007) 79–92
reliance and a development effect. The former suggests that DSS usage leads decision-makers to defer the decision process and “let the computer do it”, whilst the latter refers to an increased understanding of the relationships between relevant variables (i.e., the decision model). The results of their experimental study indicate that the improvement in decision-makers' performance was due to the efficiency of the DSS rather than to a development effect. In most experimental research in marketing, authors use MDSS that enable users to run simulations. These MDSS are supplied to marketing managers with information about the tasks for which the tool should be used, but without any explanation of the supporting model. Consequently, decision-makers may see these systems as black-box MDSS (BMDSS). The question arises as to whether decision-makers are willing to use tools that do not lead them to better evaluate their own decision making. Similarly, Eisenstein and Lodish [11] mention that the use of a model that users do not understand might affect the likelihood of adoption and usage. Consequently, we propose to enhance the transparency of MDSS. We argue that decision-makers should not only be familiar with the interface of the MDSS but should also understand the hypotheses underlying the decision model, the variables and the relationships between them, and the data used to quantify these relationships. Firstly, increasing the transparency of MDSS may help managers achieve a better understanding of the market. Managers may gain this understanding not only through decision aid usage but also by being aware of the MDSS' underpinning model. Secondly, given that managers are not very inclined to use decision models and that using such models does not increase their confidence in their decisions, we investigate whether the system's openness increases decision-makers' confidence. Indeed, some studies look into MDSS characteristics.
Van Bruggen, Smidts, and Wierenga [30] studied the impact of MDSS quality on managers' performance. Yet, researchers in the DSS field [5] believe that specific DSS parameters may influence the understanding of decisions that managers gain from using systems. In this research, we look into the effect of MDSS transparency by conducting a laboratory experiment in which decision-makers make a forecast. Forecasts are very important for companies because they influence their budgets and future plans. Marketing managers make forecasting decisions not only to evaluate the expected impact of their actions (e.g., the effect of a promotion on sales) but also for media planning or to evaluate the attractiveness of markets. In the next
section, this paper outlines the relevant literature and the research hypotheses derived from it. Then, we explain the methodology used to test these hypotheses. Finally, we present the results, followed by a discussion.

2. Research hypotheses

Two conditions should be met for a MDSS to be perceived as open by decision-makers. Firstly, users should be familiar with the MDSS interface. Secondly, and most importantly, they should understand what Larréché and Montgomery [17] have called the “hidden part of the iceberg”. They constructed a framework incorporating 16 dimensions of models' properties that have a potential impact upon the propensity of marketing managers to use them. These dimensions fall into two categories. The first one concerns the more obvious components of a model, that is, its simplicity and the interface with the marketing manager. The second category – the “hidden part of the iceberg” – includes the expected value of the model, its structure, its validation history, adaptability, and robustness. Hence, we define Open MDSS (OMDSS) as systems supplied to decision-makers not only with explanations of their functionalities but also with insights into the way they derive the results they report, that is, the assumptions underlying the decision model (the variables and their supposed relationships) and the way these relationships have been tested and calibrated (i.e., the data used). In addition to examining the transparency of MDSS, we look into the interaction between the model openness and the decision-makers' experience. We define the latter as any experience that decision-makers have acquired through their professional activities. In the marketing and DSS literatures, experiments are often conducted with inexperienced subjects (e.g., [15,30,31]). Several studies [1,4,9,10] suggest that students are dubious surrogates for actual decision-makers. They differ in their judgment, attitudes and decisions.
Differences in decisions notably come from different usage of information [9]. Indeed, novices and experts differ in the way they search for and evaluate information, as well as in the way they make decisions. Experienced managers regard more pieces of information as being useful and make more conservative decisions [22]. Hughes and Gibson [16] found that, in MIS research, students cannot be considered as surrogates for managers regarding the decision process. Students and managers show significantly different behaviours in the decision-making process. Moreover,
past research shows that the managers' experience moderates the expected benefits of DSS. Indeed, Spence and Brucks [27] find that novices especially benefit from using decision aids. Consequently, the objective of this research is also to show that researchers using inexperienced decision-makers as subjects in experiments must be cautious: their results may not generalize to experienced decision-makers. We measure the effects of the system's openness and the decision-makers' experience on several dependent variables mainly related to the impact the system has on users, that is, the decision-maker's mental model quality, decision confidence and system usage. Let us now introduce the research hypotheses.
2.1. The MDSS openness

According to Fripp [12], DSS effectiveness may be envisaged from two points of view. The first and most obvious one, given that the MDSS has been adopted, is that it will hopefully result in some visible improvement of performance. The MDSS is effective if the decision made is accurate, that is, if the MDSS has achieved its operational purpose. The second facet of MDSS effectiveness is a better understanding of the marketing decision process by MDSS users. However, few studies evaluate whether MDSS usage leads managers to learn about market phenomena. Van Bruggen, Smidts, and Wierenga [31] show that decision-makers using MDSS are better at discerning the most critical decision variables and make better decisions based on those variables. Fripp [12] measures decision-makers' mental model accuracy after they had made decisions with a MDSS and finds it better for aided decision-makers than for unaided ones. He measured the mental model accuracy by asking subjects to estimate which variables affected sales; the score reflects the number of variables correctly identified. Yet, Barr and Sharda [5] show that decision-makers who usually use the DSS display poor achievement once they are deprived of the DSS.1 Their results reveal a reliance effect and suggest that DSS only provide computational support instead of enhancing users' understanding of the relationships between relevant variables (i.e., the decision model). We expect that this reliance effect will be stronger for BMDSS users than for OMDSS ones. Indeed, the OMDSS should supply decision-makers with a better understanding of the reasoning behind the marketing phenomenon. In order to assess this understanding, we measure the quality of the decision-makers' mental model. Similarly to Fripp [12], we define the mental model as the person's representation of the marketing phenomenon under study, i.e., the relationships existing between the relevant variables. The mental model quality assesses the extent to which subjects have considered all the relevant variables and their relationships. We evaluate the mental model quality by comparing the manager's mental model with the best representation he/she could have of the phenomenon. We thus hypothesize that:

H1. Decision-makers provided with an OMDSS have a better mental model quality than unaided decision-makers, whereas those supplied with a BMDSS do not show a better mental model quality.
1 In this study, the MDSS provided to decision-makers is relatively open because it is built with a MDSS generator, which allows decision-makers to examine the model.
Previous studies indicated that aided decision-makers are not more confident in their decisions than unaided ones [27,28]. Given that users do not understand how a BMDSS works out the proposed solution, they are unlikely to be confident in a decision based on BMDSS recommendations. Indeed, they would be unable to justify their decision to their colleagues, given that they rely on a MDSS that is not clear to them. Decision-makers supplied with an OMDSS know how the support tool proceeds to come up with the alternative it suggests. Accordingly, they may be better able to justify their own decision. Therefore, we expect that:

H2. OMDSS users are more confident in their forecasting decision than unaided decision-makers, whilst BMDSS users are not more confident than unaided decision-makers.

OMDSS users are provided with more knowledge about the relationships existing among the decision variables. Consequently, for the first usage occasion, we expect they will undertake fewer simulations than BMDSS users, who will make more simulations in order to test the system's reliability. Indeed, BMDSS users will need to run simulations to acquire insights into the relationships existing among decision variables, whereas OMDSS users might already understand these relationships before using the system.

H3. Decision-makers provided with a BMDSS make more simulations than those supplied with an OMDSS.
2.2. The decision-maker's experience

According to Spence and Brucks's [27] framework and findings, experts outperform novices. The former have the capacity to form a meaningful internal conceptual model of problems, whereas the latter evoke simplifying and often inappropriate heuristics. Indeed, experts have appropriate procedural knowledge to encode and interpret data [27]. They are better at selecting and evaluating information and make better decisions than novices. Moreover, novices provided with a decision aid make more accurate decisions, whereas experts are less affected by such aids. Therefore, we expect that:

H4. The difference in mental model quality between unaided or BMDSS users and OMDSS users will be larger for inexperienced decision-makers than for experienced ones.

Similarly to previous research [26,27], we expect that experienced decision-makers will show more confidence in their decisions than inexperienced ones. However, this difference will be smaller when decision-makers use an open system because it provides inexperienced users with a better understanding of the decision and thus more confidence. We thus foresee that:

H5. The difference in decision-makers' confidence between unaided or BMDSS users and OMDSS users will be greater for inexperienced decision-makers than for experienced ones.

As far as MDSS usage is concerned, Van Bruggen [28] shows that experienced decision-makers make more simulations than inexperienced ones, at least during the first two decision periods. Furthermore, Perkins and Rao [22] find that experienced decision-makers use more pieces of information than inexperienced ones. We predict that the number of simulations made with the MDSS will be greater for experienced than for inexperienced decision-makers. Indeed, experienced decision-makers are more likely to compare the MDSS outputs with their mental model by submitting inquiries to the system. Nevertheless, the system's openness will moderate this effect.
The information needs of experienced decision-makers provided with an OMDSS will be met before they use the system. They will already have insights about the links existing between decision variables before running simulations. Consequently, the expected experience effect on system usage will be
lessened when the OMDSS is used by experienced decision-makers. We hypothesize that:

H6. Experienced decision-makers use their system more intensely than inexperienced ones, except when the system provided is open.

3. Methodology

To test these hypotheses, we conducted a laboratory experiment. In the following sections, we describe the experimental environment as well as the experimental task. Then, we define the “satisfying” solution (i.e., the benchmark), we specify the experimental design and the origin of the subjects. Finally, we present the method used to operationalize the treatment conditions, the experimental procedure and the measurement of the dependent variables.

3.1. Experimental environment and task

In contrast to field testing and survey research, laboratory experiments offer several advantages regarding the measurement and control of independent, dependent, and extraneous variables. The experimental tool used in this research is CADDIE [33]2, an integrated case on retail store location and set-up. The task of the decision-maker in this case is to choose, from three sites, the location of a new food retail outlet, to determine the floor space of the store, and to set the average level of mark-up to add to wholesale prices. Before the experiment was run, CADDIE had been used as a computer-assisted learning tool by around 251 students and experienced managers. The experiment takes place in the first module of the computerized case, which concerns the estimation of the potential average sales per household in the trade area of the new food retail store. Subjects therefore have to make a forecasting decision in which the variables are not under their control3. At the beginning of the experiment, the first module presents the mission, the decision context and the objectives to the managers (users), who are entrusted with the extension of a retail network.
2 CADDIE is a computerized case study designed with the financial support of the FSRIU (Belgium National Fund For Research) at the CREER, FUCaM.
3 This task can be considered as similar to the one used in Hoch and Schkade's [15] experiment. Indeed, it is a forecasting exercise based on uncontrollable variables. In Chakravarti, Mitchell, and Staelin [10] and McIntyre [30], decision-makers also performed a forecasting exercise: they had to assess the expected results of their own decisions.

Indeed, managers have the opportunity to familiarize themselves with the
food retail trade in Belgium. Then, users estimate the household purchasing power in a defined area by using secondary data such as census data from government sources and data related to the Belgian retail sector. To process the data, decision-makers may use a MDSS named MADAM. This model analyzes households' average expenses on food, beverages, cleaning products and Health and Beauty Care (HBC) products. It helps decision-makers to assess the share of household income allocated to expenses in non-specialized food retail stores, whatever their size.

3.2. Benchmark definition

The benchmark definition has an impact on the mental model quality assessment. Decision-makers have to evaluate the expenses that households living in the neighbourhood of the three potential sites spend in supermarkets and hypermarkets (i.e., self-service shops of the non-specialized food retail trade4 whose size is superior or equal to 2500 m2). The estimation process unfolds in two steps:

Step 1. According to the ACNielsen definition of the non-specialized food retail trade, the product categories sold in these types of store are food, beverages, cleaning products and HBC products. Consequently, the first step of the forecasting decision process is to determine the proportion of household income spent on these product categories in non-specialized stores (i.e., selling at least 4 product categories related to food). On the basis of available data on Belgium's regions, we estimate the relationship between income levels and the proportion of income (PIR) spent in the non-specialized retail trade with the following function:

PIR = Min + (Max − Min) · F(MIR)    (1)

with

F(MIR) = exp(β0) · MIR^β1 / (1 + exp(β0) · MIR^β1)    (2)

where MIR is the mean income of households living in a particular region R; Min and Max are respectively the minimum and maximum income proportions spent in the non-specialized food retail trade; β0 and β1 are the regression coefficients. In order to estimate this function, we used data about the Belgian regions defined by ACNielsen. The explanatory power of the model is quite good given the high coefficient of determination (R2 = 0.9959). The relationship, shown in Fig. 1, reveals that households with a higher income spend a lower proportion of their income in non-specialized food retail trade stores. On the one hand, they save more and allocate a greater part of their income to purchasing durable goods than lower-income households. On the other hand, they generally prefer the quality and services offered by specialized stores. On the basis of the estimated function, we compute ENSTA, the mean Expenses in the Non-Specialized retail trade per household in the Trade Area (TA). At step 1, the forecasting decision process is:

s = 1: ENSTA = P̂ITA · MITA    (3)

where P̂ITA is the estimated proportion of income spent in the non-specialized retail trade for the trade area and MITA is the mean income of households in the trade area.

Step 2. In the second step of the forecasting decision process, subjects should take into account the size of the store. The non-specialized food retail trade includes stores of any size. However, we are mainly interested in expenditure made in super- and hypermarkets. More precisely, only the expenditure made in stores whose size is superior or equal to 2500 m2 should be included in the average

4 According to ACNielsen, to be considered as a non-specialized store, the turnover in food must be at least 40% of the total turnover. Stores cannot make more than 50% of their turnover in product categories such as fish or meat.

Fig. 1. The relationship between mean income and the proportion of income spent in the non-specialized retail trade.
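The inverted S-curve of Eqs. (1)–(2) can be sketched in a few lines of Python. The parameter values below are purely illustrative placeholders, not the coefficients estimated by the author (which are not reported in this section):

```python
import math

def pir(mean_income, b0=2.0, b1=-0.5, p_min=0.10, p_max=0.30):
    """Proportion of income spent in the non-specialized food retail
    trade, following Eqs. (1)-(2). b0, b1, p_min and p_max are
    illustrative stand-ins, not the paper's estimates."""
    # Logistic-type term F(MI_R) of Eq. (2): exp(b0)*MI^b1 / (1 + exp(b0)*MI^b1)
    f = math.exp(b0) * mean_income ** b1 / (1 + math.exp(b0) * mean_income ** b1)
    # Eq. (1): interpolate between the minimum and maximum proportions.
    return p_min + (p_max - p_min) * f
```

With a negative b1, the function reproduces the pattern of Fig. 1: households with a higher mean income spend a smaller proportion of their income in non-specialized stores.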
sales potential per household. To evaluate households' expenditure allocated to these types of stores, decision-makers must remove from the expenses estimated in step 1 the market share of smaller stores. The forecasting decision process is then:

s = 2: EHSTA = ENSTA · MSHS    (4)
where EHSTA is the mean Expenses in Hyper- and Supermarkets of households living in the trade area and MSHS is the Market Share of Hyper- and Supermarkets in the non-specialized food retail trade.

3.3. Experimental design

CADDIE has been used as an experimental setting for testing the hypotheses. The experimental design is a between-subjects design with two factors. The first factor is the decision-makers' experience, i.e., inexperienced versus experienced. The second experimental variable is the MDSS availability (i.e., without versus with a MDSS) and its type (i.e., OMDSS or BMDSS). Experienced and inexperienced subjects are thus assigned across three treatment conditions, that is, (1) no MDSS, (2) availability of a BMDSS and (3) availability of an OMDSS. The experimental design is a 2 × 3 factorial design (Table 1).

Table 1
Experimental design: availability and type of Marketing Decision Support Systems (MDSS)

Level of experience (EXPB)      No MDSS available    BMDSS available    OMDSS available
Without experience (No Expb)    No MDSS – No Expb    BMDSS – No Expb    OMDSS – No Expb
With experience (Expb)          No MDSS – Expb       BMDSS – Expb       OMDSS – Expb

3.4. Subjects

54 experienced and 54 inexperienced subjects were randomly assigned to the three MDSS treatment conditions. Experienced managers were enrolled on a complementary graduate program in management and were following a course either in marketing or in econometrics. They had on average 7.24 years of work experience. The inexperienced subjects were undergraduates in management and were taking a marketing research course. As far as the decision-maker is concerned, we controlled two dimensions: the cognitive style and the experience. We compared them across experimental groups. Alavi and Joachimsthaler [2], in a meta-analysis of 144 findings from 33 studies about DSS implementation, suggest that the cognitive style predicts DSS benefits. Given that the cognitive style seems to influence decision-makers' performance [6,32], we ensured that all treatment groups are similar regarding their cognitive style variables (measured by the Cognitive Style Analysis [23]). The cognitive style is seen as “an individual's preferred and habitual approach to organizing and representing information” [[24], p.8]. The two basic dimensions are: (a) “the Wholistic-Analytic Style dimension, which indicates whether an individual tends to organize information into wholes or parts and (b) the Verbal-Imagery Style dimension, which shows whether an individual is inclined to represent information during thinking verbally or in mental pictures” [[24], p.9]. In addition to the subjects' work experience (considered as a factor in the experiment), we measured their prior involvement in similar tasks, i.e., their task familiarity. We measured it by asking subjects whether or not they had already evaluated household expenses in a particular type of store. We tested whether mean differences appeared between treatment groups.
We did not find any significant differences in the length of the decision-makers' experience between the experienced subject groups (F = 0.85, p = 0.4317) or in the cognitive style dimensions, that is, verbal versus imagery (CSVI) (F = 0.22, p = 0.9532) and wholistic versus analytic (CSWA) (F = 1.87, p = 0.1086). Few subjects were familiar with the task (less than 10%) and they were uniformly distributed across treatment groups.

3.5. Experimental procedure

As noted above, the session started with a computerized test aimed at measuring subjects' cognitive style [23]. Next, subjects began to solve CADDIE. The first module enabled them to become familiar with the non-specialized retail trade in Belgium. They were then involved in the experimental task. Almost no time pressure was imposed on decision-makers.
Indeed, on the basis of several pre-tests made before the experiment, we observed that most decision-makers took on average four hours to make their decision. As a result, we decided to plan the experiment during four-hour sessions. At the end of the task, that is, after subjects had provided their estimation of the average sales potential per household in the trade area, we measured the decision-maker's confidence and the ease of use of the provided MDSS. Participants completed a final questionnaire in which they described their mental model; that is, unaided decision-makers were asked to explain which variables they had considered whilst making their forecasting decision and which relationships they had established among these variables in order to compute their estimation. To avoid leading the participants, we did not ask them directly about their reliance on the MDSS in making their decisions. Rather, we asked them to explain how they would proceed in estimating the sales potential without the help of the MDSS (whether the MDSS was open or not). In order to measure task familiarity, subjects were also asked whether they had already faced such a problem and, if so, to provide further details. Finally, subjects specified their educational background, their work experience, their age and gender. Participants were given incentives to be fully involved in the experiment. Given that the experimental sessions took place within the framework of a course, we encouraged commitment by linking performance to the final course grade: the experiment accounted for 30% of the students' final grade. The professors of the course introduced the experiment before the session in order to motivate students. Students' attitude and behavior during the experiment revealed that they participated seriously.

3.6. Treatments operationalization

Both the BMDSS and the OMDSS help decision-makers to assess the share of households' mean income spent in non-specialized food retail stores, whatever the store size. The forecasting model behind both MDSS is function (1). The forecasting model is robust: it provides plausible predicted values compared to the range of observed values occurring in the sample of predictor variables [18]. The model represents the relationship between income values and the percentage of income spent in non-specialized retail stores, which is an inverted S-curve. Decision-makers have to suggest income values, and the MDSS provides the income percentages and shows the S-curve on a graph. BMDSS and
OMDSS are presented as estimation tools named MADAM. Both are placed in the context of the forecasting decision. An example of use is also made available to users. Contrary to the BMDSS, the OMDSS includes some additional screens presenting the model's underlying assumptions as well as the data used to calibrate it. We checked whether both MDSS are perceived as equally easy to use. We did not find any statistically significant difference between treatment groups (p = 0.7125). The Appendix describes the BMDSS and the OMDSS.

3.7. Measurement of dependent variables

Let us define and explain how we measured the dependent variables.

3.7.1. Mental model quality (MMQ)

The mental model quality measures the decision-makers' understanding of the decision model. It is the representation decision-makers have of the marketing phenomenon, i.e., the relationships between the relevant variables. We thus measure their understanding of these relationships. We assess the rationality of the decision-makers' thinking by taking into account subjects' reasoning and the way they formalize it. The mental model quality includes several steps of the decision process. Each step represents a relationship between two variables. Unaided decision-makers have to make their forecasting decision process explicit, while aided decision-makers have to explain how they would proceed to produce an estimate without the help of a MDSS. We measure the mental model quality by adding up the number of self-reported decision-making steps correctly completed. Each step of the decision process takes into account the relationship between two variables. For each step accomplished, subjects receive one point, from which we subtract the relative difference due to errors in formalization:

MMQ = Σs δs − Σs FEs
where δs is a dummy variable indicating whether the subject considers step s (δs = 1) or not (δs = 0); FEs is the formalization error at step s. FEs is always smaller than 1. Within each step s, the forecasting decision process must be formalized numerically, i.e., the relevant variables must be considered and data must be chosen to specify them numerically. Decision-makers have to choose the right data to make
variables operational and have to establish the most accurate relationships. If subjects do not use the right data to compute the estimation at a particular step, they make a formalization error (FEs). The estimation resulting from the formalization of the forecasting decision process at step s is then different from the standard at step s. Let us consider the subscript f as indexing the Fs possible formalizations at a particular step. For each step considered by subjects, we compute the relative difference due to formalization mistake(s):

FEs = δs · Σf=1..Fs δs,f · |Ss,f − Ss*| / SS*

where Ss,f is the “sub-benchmark” resulting from the choice of formalization f, with f = 1,…, Fs; Ss* is the benchmark solution at step s; SS* is the benchmark solution at the last step S of the forecasting decision process; δs,f is a dummy variable indicating whether the subject selects formalization f at step s, which implies Σf δs,f = 1 for all s.
It should be noted that if the correct formalization for a particular step is chosen, then Ss,f equals Ss* and FEs is zero. Thus, the mental model quality measure has been conceived so that the subjects' score is an indicator of their reasoning. It reflects the number of steps correctly formalized, i.e., the extent to which subjects considered all variables and their relationships.
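As a minimal sketch of the scoring rule described above (the tallying of steps and errors is our reading of Section 3.7.1, not the author's code), the MMQ and FEs computations could look like:

```python
def formalization_error(sub_benchmark, step_benchmark, final_benchmark):
    # Relative difference |S_{s,f} - S*_s| / S*_S for the formalization
    # the subject actually selected at step s.
    return abs(sub_benchmark - step_benchmark) / final_benchmark

def mmq(steps):
    # steps: list of (delta, fe) pairs, one per decision step;
    # delta = 1 if the subject considered the step, 0 otherwise;
    # fe = relative formalization error (0 <= fe < 1, and 0 when the
    # step was skipped, since FE_s carries a delta_s factor).
    return sum(delta for delta, _ in steps) - sum(fe for _, fe in steps)
```

For example, a subject who considers two of three steps and makes one formalization error of 0.25 scores mmq([(1, 0.0), (1, 0.25), (0, 0.0)]) = 1.75.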
3.7.2. Decision confidence (DCON) Decision-makers' confidence in their decisions may be regarded as a self-evaluation of their own actions. As in other studies [26,30], we use Likert scales to measure decision confidence. The measurement tool we used is a French translation of the one developed by Van Bruggen et al. [30]. The confidence scale attained good reliability (i.e., the Cronbach alpha is equal to 0.76). 3.7.3. MDSS usage (MDSSU) The system usage can be defined as the number of times the decision-maker addresses a request to the MDSS. The experimental tool automatically measures MDSSU.
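The reliability coefficient reported for the confidence scale in Section 3.7.2 can be computed with the standard Cronbach alpha formula; the following is a generic implementation, not the author's code:

```python
import numpy as np

def cronbach_alpha(items):
    # items: (n_subjects, n_items) matrix of Likert responses.
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    sum_item_var = items.var(axis=0, ddof=1).sum()   # sum of item variances
    total_var = items.sum(axis=1).var(ddof=1)        # variance of scale totals
    return k / (k - 1) * (1 - sum_item_var / total_var)
```

Alpha approaches 1 when the items move together across subjects, which is what a value of 0.76 on the confidence scale indicates.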
4. Results

In order to test our hypotheses, we performed a set of Analyses of Covariance (ANCOVA), presented in Table 2, and ran planned comparisons. For each dependent variable, we analyzed the effect of the three MDSS levels (No MDSS/BMDSS/OMDSS) and the two levels of the decision-maker's experience (EXPB). The general model also includes the two dimensions of the cognitive style (CSWA and CSVI) and an error term (α). Given its influence on decision-makers' performance and DSS benefits [2,6,32], cognitive style is treated as a control variable: the covariance analysis statistically controls for its effect on the benefit of using the MDSS. The general model for each dependent variable (MMQ, MDSSU, DCON) is:

Dependent variable = β0 + β1 MDSS + β2 EXPB + β3 MDSS × EXPB + β4 CSWA + β5 CSVI + α
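This specification can be expressed with a standard formula-based ANCOVA. The sketch below is illustrative only: it uses simulated data and statsmodels, not the authors' data or software, and the variable names simply mirror the paper:

```python
# Hypothetical ANCOVA sketch: categorical factors MDSS and EXPB with their
# interaction, plus continuous cognitive-style covariates CSWA and CSVI.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

rng = np.random.default_rng(0)
n = 106  # simulated sample size
df = pd.DataFrame({
    "MMQ": rng.normal(size=n),                          # dependent variable
    "MDSS": rng.choice(["None", "BMDSS", "OMDSS"], n),  # treatment factor
    "EXPB": rng.choice(["Low", "High"], n),             # experience factor
    "CSWA": rng.normal(size=n),                         # covariates
    "CSVI": rng.normal(size=n),
})

# MMQ = b0 + b1*MDSS + b2*EXPB + b3*MDSS*EXPB + b4*CSWA + b5*CSVI + error
model = smf.ols("MMQ ~ C(MDSS) * C(EXPB) + CSWA + CSVI", data=df).fit()
table = anova_lm(model, typ=2)  # Type II sums of squares
print(table)
```

Running the same formula with DCON or MDSSU as the left-hand side reproduces the other two models.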
In the next sections, we successively present the results for each dependent variable.
Table 2
Results of ANCOVAs 1

Source           DF      SS       MS       F       Sig. of F

MMQ: Mental model quality
MDSS              2      4.50     2.25     3.01    0.0538
EXPB              1      0.52     0.52     0.69    0.4071
MDSS × EXPB       2      1.57     0.79     1.05    0.3537
CSVI              1      8.63     8.63    11.54    0.001
CSWA              1      0.10     0.10     0.13    0.7206
Error           100     74.80     0.75

DCON: Decision confidence
MDSS              2      4.69     2.34     4.76    0.0106
EXPB              1      9.74     9.74    19.8     <0.0001
MDSS × EXPB       2      3.78     1.89     3.84    0.0247
CSVI              1      0.55     0.55     1.12    0.2926
CSWA              1      0.48     0.48     0.98    0.3253
Error           100     49.21     0.49

MDSSU: Marketing decision support system usage
MDSS              1      0.44     0.44     0.02    0.8965
EXPB              1     70.56    70.56     2.74    0.1023
MDSS × EXPB       1     59.01    59.01     2.3     0.1345
CSVI              1     14.36    14.36     0.56    0.4575
CSWA              1      1.08     1.08     0.04    0.8382
Error            66   1696.88    25.71

1 All significance levels presented in this paper are two-tailed.
Fig. 2. The mental model quality (MMQ).
Fig. 3. The decision confidence (DCON).
4.1. Mental model quality (MMQ)

The mean MMQ values for the six treatment conditions are presented in Fig. 2. The MDSS main effect is marginally significant (F = 3.01, p = 0.0538). No difference in mental model quality shows up between unaided subjects and decision-makers who had the opportunity to use the BMDSS (F = 0.07, p = 0.788). However, OMDSS users have a better mental model quality than unaided subjects (F = 5.04, p = 0.027). Therefore, H1 is accepted. There is no significant interaction between MDSS and EXPB (F = 1.05, p = 0.3537); therefore, H4 cannot be accepted. The effect of the verbal-imager dimension of the cognitive style is significant (F = 11.54, p = 0.001): the greater the verbal inclination of decision-makers, the better their mental model quality.

4.2. Decision confidence (DCON)5

Table 2 and Fig. 3 indicate a significant effect for the levels of experience (F = 19.8, p < 0.0001) and the MDSS levels (F = 4.76, p = 0.0106). As expected, the MDSS effect is not equal at all levels of the experience factor (F = 3.84, p = 0.0247). Contrast results show that, among inexperienced decision-makers, those using either the BMDSS or the OMDSS are more confident in their forecast than unaided subjects (F = 13.87, p = 0.0003 and F = 8.16, p = 0.0052). However, among experienced decision-makers, neither BMDSS users nor OMDSS users show significantly different decision confidence compared to unaided subjects (F = 0.39, p = 0.5316 and F = 0.63, p = 0.4298). Thus, H2 and H5 are partially supported.

4.3. MDSS usage (MDSSU)

Fig. 4 shows that inexperienced decision-makers run fewer simulations than experienced ones when they use the BMDSS. No difference appears between experienced and inexperienced decision-makers using the OMDSS. Looking at Table 2, we observe that none of the variables has a significant effect. However, the analysis of simple effects reveals that the difference in BMDSS usage between experienced and inexperienced subjects is significant (F = 5.01, p = 0.0285) whereas, as expected, the experience effect is not significant at the OMDSS level (F = 0.01, p = 0.9223). H3 cannot be retained while H6 is accepted.
5. Discussion

Our purpose was to investigate to what extent improving the openness of MDSS increases managers' understanding of marketing phenomena (i.e., decreases the reliance effect) and enhances their evaluation of the decision. We also examined the moderating role played by the decision-makers' experience.
5 Decision-makers' confidence is measured on 5-point scales ranging from −2 to +2.
Fig. 4. The MDSS Usage (MDSSU).
Barr and Sharda [5] show that the use of such systems leads to a "reliance effect": decision-makers defer the decision process to "let the computer do it" instead of enhancing their understanding of the market. Our results reveal that decision-makers who had the opportunity to use the OMDSS have a higher mental model quality than unaided ones, whereas BMDSS users do not. Therefore, MDSS openness reduces decision-makers' reliance on the system.

Experienced and inexperienced decision-makers differ in the way they evaluate their decision. Overall, experienced decision-makers are more confident in their forecast than inexperienced ones. Van Bruggen [28] shows a somewhat similar experience effect on decision confidence, but in his experiment the difference between experienced and inexperienced subjects decreases after several decision periods.

Contrary to our expectation, MDSS openness does not improve decision-makers' confidence. Nevertheless, MDSS availability does not influence the confidence of experienced and inexperienced users in the same way. To the best of our knowledge, only two studies have investigated the effect of MDSS availability on decision-makers' confidence, and neither showed a significant increase in DSS users' confidence [5,28]. On the one hand, we find that inexperienced MDSS users are more confident in their decision than unaided subjects, whose confidence is rather weak. On the other hand, the decision confidence of experienced decision-makers does not differ depending on whether they use an MDSS or not. We should point out, however, that contrary to inexperienced decision-makers, experienced ones reach a relatively high level of confidence even when unaided. It would have been more judicious to measure decision confidence on a 7-point scale rather than a 5-point scale, as a 7-point scale would have offered more discriminative values.
Finally, experienced decision-makers use the BMDSS more intensively than inexperienced ones. This result is in accordance with previous research [22,28] showing that experienced people use DSS more often and need more pieces of information to make their decision. We also believe that they are much more critical towards the model outputs, and running simulations allows them to evaluate the validity of the model structure included in the BMDSS. However, when they are provided with the OMDSS, they already have insights into the decision model; consequently, they do not need to run as many simulations to assess its validity.
6. Conclusions

To conclude, we discuss the limitations of the study, its potential research contribution and the managerial implications.

6.1. Limitations of the study

The results of this study have some limitations. As far as internal validity is concerned, we controlled most of the extraneous variables related to the decision situation, i.e., the decision environment, the task and the subjects. Nonetheless, the most difficult extraneous variables to control are those related to the decision-makers themselves. Their background might have influenced the MDSS openness effect. By measuring familiarity with the task, we assessed whether decision-makers had some prior knowledge regarding the decision. Furthermore, our results reveal that the greater the verbal inclination of decision-makers, the better their mental model quality. This may suggest that individuals who are inclined to represent information verbally during thinking express their decision model with greater ease. A longitudinal experiment would therefore have been more adequate, not only to measure the learning resulting from MDSS usage, but also to assess the task familiarity effect.

The issue of external validity also needs to be addressed. The methodological choice of conducting a laboratory experiment may be a threat to the generalization of the results. Our results depend to some extent on the characteristics of the decision-making environment, the task, and the type of subjects. According to the managers who participated in the experiment, CADDIE does effectively reflect a real decision-making environment. Nevertheless, we must admit that our findings might have been different in other environments and tasks; it would be wise to conduct replications of this study in several other environments. Moreover, the experimental environment is a one-shot decision. The disadvantage of this is that subjects start the experiment without experience in the decision environment.
To deal with this drawback, in the first module we offered subjects the opportunity to become familiar with the retail trade in Belgium. Furthermore, we chose an experimental design focused on individual decision-making, despite the fact that strategic decisions are most often made within groups. Nevertheless, given that our main interest lies in investigating the decision-making process of individual decision-makers, an individual decision-making situation was a necessary condition to bring this research to a successful conclusion. The type
of subjects used may also limit the external validity. Our experienced subjects were not specifically working in the marketing field; marketing managers would perhaps have been more critical towards the underlying assumptions of the decision model.

6.2. Research contribution and managerial implications

Given that MDSS openness is a system characteristic that reduces the reliance effect, MDSS designers should be encouraged to make their systems more open. They should provide users with information about how the system derives the solution it proposes. Indeed, the decision model included in the MDSS must be clearly presented to users. MDSS designers must provide information about the decision model: its underlying assumptions, the factors influencing the decision variable(s), the form of the relationships between these factors and the decision variable(s), the data used to calibrate the identified relationships, as well as the limits of the model. Charts help system users to better understand the relationships between variables.

It is essential for users to be able to justify their decision. To do so, they must know the important factors influencing the phenomenon being studied in their decision process. Consequently, MDSS must not only provide computational support but also help decision-makers to understand the influence of decision factors. Black-box systems do not enable their users to master the decision process, which may discourage them from using such systems on a regular basis.

Our research is also an attempt to respond to some unanswered questions emerging from previous studies. In most of the studies that evaluate the benefits of simulation tools, decision-makers are provided with black-box systems. This is hardly a new observation: Sharda, Barr, and McDonnell [26] observed as much in an extensive review of studies on MDSS effectiveness conducted from 1970 to 1987.
We know from previous research that MDSS characteristics may determine the benefits derived from its use. For instance, decision-aid interaction [7] and MDSS quality [28] improve decision-makers' performance. Moreover, Barr and Sharda [[5], p. 144] state that "an examination of specific designs or parameters of DSS that promote development effects and minimize reliance effects appear to be warranted". They also mention that the study of individual decision-making may explain the latter effects. In addition, Van Bruggen, Smidts, and Wierenga [[30], p. 341] propose that "it would be interesting to investigate whether systems which also provide decision-makers with insight in the mechanisms through which their decisions work, do help decision-makers to become not only more effective but also more confident".

At a methodological level, we suggest that this research adds to existing knowledge in two areas: a better understanding of the moderating effect of the subjects' experience, and the development of an objective measure of decision-makers' mental model quality. Firstly, we have shown that results obtained from studies using inexperienced subjects as surrogates for experienced managers should be interpreted with caution: inexperienced and experienced decision-makers do not evaluate their decision or use the MDSS equivalently, and experienced decision-makers are more confident in their decision. Secondly, we have conceived an objective measure of decision-makers' understanding of the relationships between decision variables. Barr and Sharda [5] mentioned that the development of such a measure may enable the evaluation of reliance and development effects. This measure of mental model quality is used here in a decision which is broken down into decision-process steps. Nevertheless, it can also be adapted to decisions for which a strategy must be chosen by determining a particular value for several decision variables; absolute deviations should then be computed for every decision variable.

As far as further research is concerned, we propose to investigate the impact of MDSS openness in a longitudinal experiment in order to better assess the impact of task familiarity as well as the progress of the reliance effect over time. Moreover, only experienced decision-makers should be involved in such a study. Finally, it would be interesting to investigate whether the effect of MDSS characteristics changes depending on environmental factors such as data availability and time pressure.
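The proposed adaptation of the mental-model-quality measure to decisions with several decision variables can be sketched as follows (an illustrative Python fragment, not the authors' code; the function and variable names are hypothetical):

```python
# Adapting the measure to a strategy choice over several decision variables:
# instead of one relative error per process step, compute one relative
# absolute deviation per decision variable against its benchmark value.
def relative_absolute_deviations(chosen_values, benchmark_values):
    """One relative |deviation| per decision variable."""
    return [abs(c - b) / abs(b) for c, b in zip(chosen_values, benchmark_values)]

# Two decision variables, e.g. a price and an advertising budget:
devs = relative_absolute_deviations([90.0, 210.0], [100.0, 200.0])
print(devs)  # prints [0.1, 0.05]
```

The per-variable deviations could then be aggregated into a single quality score, for instance by averaging.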
Previous research shows that, on the one hand, time pressure affects decision-makers' evaluation of MDSS of varying quality [30] and, on the other hand, DSS users exposed to a high volume of data consider fewer decision alternatives and have less understanding of their impacts [14].

Acknowledgments

The author especially thanks Alain Bultez and Fabienne Guerra [33] (CREER, Catholic University of Mons, FUCaM, Belgium) for their contribution and cooperation during the experiment. The author gratefully thanks Berend Wierenga for his helpful comments on a previous draft of this article.
Appendix A

Box I. Description of the BMDSS and OMDSS

THE BMDSS

The following information is available to BMDSS users as well as to OMDSS users.

To make the evaluation of the sales potential in the trade area easier, the company LOCASIT allows you to use a Decision Support System named MADAM (Model of Analysis of the expenditure in large-scale retailers). This statistical tool estimates the proportion of its income that a household, whose annual income you specify, spends in large-scale retailers. Let us see to what extent this tool can be useful for you!

The sales potential in the trade area depends on the amount households spend in food retail stores such as hypermarkets and supermarkets. The latter are self-service stores whose size is greater than 400 m2. Households spend a part of their income to buy food, beverages, cleaning products and Beauty and Health Care (BHC) products. They buy these products in both specialized stores (i.e., butcher shops, bakeries, fishmongers and perfumeries) and non-specialized stores (such as hypermarkets and supermarkets). MADAM calculates the proportion of the income spent in non-specialized stores.

To use MADAM (presented on the screen below), you must input the household income. When you press the button "CALCULATE", MADAM outputs the proportion of the income spent in non-specialized food stores. The relationship between the annual average income and the proportion spent in the non-specialized retail trade is shown in the graph below.
Box II. Description of the OMDSS

THE OMDSS

The following table presents the information available only to OMDSS users. It is presented before OMDSS usage.

Households spend a part of their income on food, beverage, and health and beauty purchases. They buy in specialized and non-specialized stores (such as hypermarkets and supermarkets). The income proportion spent in the non-specialized food retail trade decreases with income. Indeed, households with high incomes: (1) save more and spend a higher part of their income to invest in dry and capital goods; (2) prefer the quality and service offered by specialized stores (i.e., butcher shops, bakeries, fishmongers and perfumeries). Low-income households prefer the low prices offered by non-specialized stores.
As illustrated in Fig. 1, the proportion of income decreases following an S-shaped curve. This function has been calibrated using data provided by the ACNielsen company and the National Institute of Statistics (NIS). ACNielsen divides the country into 5 areas. For each area, the company assesses the average expenses in non-specialized food retail stores. The NIS publishes the average income of households living in each province, from which the average income for each area defined by ACNielsen can be easily derived (a table with these figures is shown on the screen).
The proportion of income spent in large-scale retailers can then be easily computed for each area (the calculation appears on the screen). As presented in Fig. 2, the data show that the income proportion spent in non-specialized retail stores decreases with the level of income; the function is non-linear. Households living in the north have higher incomes and invest more in capital goods such as houses, gardens and furniture, whereas people living in the south have lower incomes and spend a higher proportion of their income on consumer goods. Moreover, in the north there is a higher density of specialized stores. The non-linear relationship has been estimated by the company LOCASIT on the basis of district data. The results of these estimations are presented in Fig. 3. MADAM uses this function to provide the proportion of income spent in the non-specialized food retail trade for any given level of income.
References

[1] M. Abdolmohammadi, A. Wright, An examination of the effects of experience and task complexity on audit judgments, The Accounting Review 62 (1) (1987) 1–13.
[2] M. Alavi, E.A. Joachimsthaler, Revisiting implementation research: a meta-analysis of the literature and suggestions for researchers, MIS Quarterly 16 (March 1992) 95–113.
[3] S. Albers, Impact of types of functional relationships, decisions and solutions on the applicability of marketing models, International Journal of Research in Marketing 17 (2–3) (2000) 167–176.
[4] B. Alpert, Non-businessmen as surrogates for businessmen in behavioral experiments, The Journal of Business 40 (2) (1967) 203–207.
[5] S.H. Barr, R. Sharda, Effectiveness of decision support systems: development or reliance effect, Decision Support Systems 21 (1997) 133–146.
[6] I. Benbasat, A.S. Dexter, Individual differences in the use of decision support aids, Journal of Accounting Research 20 (1) (1982) 1–11.
[7] W.L. Cats-Baril, G.P. Huber, Decision support systems for ill-structured problems: an empirical study, Decision Sciences 18 (1987) 350–372.
[8] D. Chakravarti, A. Mitchell, R. Staelin, Judgment based marketing decision models: an experimental investigation of the decision calculus approach, Management Science 25 (3) (1979) 251–265.
[9] J.C. Chang, J.L.Y. Ho, Judgment and decision making in project continuation: a study of students as surrogates for experienced managers, Abacus 40 (1) (2004) 94–116.
[10] R.M. Copeland, A.J. Francia, R.H. Strawser, Students as subjects in behavioral research, The Accounting Review 48 (2) (1973) 365–372.
[11] E.M. Eisenstein, L.M. Lodish, Marketing decision support and intelligent systems: precisely worthwhile or vaguely worthless? in: Weitz, Wensley (Eds.), Handbook of Marketing, Sage, London, 2002.
[12] J. Fripp, How effective are models? Omega 13 (1985) 19–28.
[13] W.K. Fudge, L.M. Lodish, Evaluation of the effectiveness of a salesman's planning system by field experimentation, Interfaces 8 (November 1977) 97–106.
[14] M.D. Goslar, G.I. Green, T.H. Hughes, Decision support systems: an empirical assessment for decision making, Decision Sciences 17 (1986) 79–91.
[15] S.J. Hoch, D.A. Schkade, A psychological approach to decision support systems, Management Science 42 (1) (1996) 51–64.
[16] C.T. Hughes, M.L. Gibson, Students as surrogates for managers in a decision-making environment: an experimental study, Journal of Management Information Systems 8 (2) (1991) 153–166.
[17] J.-C. Larréché, D.B. Montgomery, A framework for the comparison of marketing models: a Delphi study, Journal of Marketing Research 14 (November 1977) 487–498.
[18] P.S.H. Leeflang, D.R. Wittink, M. Wedel, P.A. Naert, Building Models for Marketing Decisions, International Series in Quantitative Marketing, Kluwer Academic Publishers, 2000.
[19] G.L. Lilien, A. Rangaswamy, Marketing Engineering: Computer-assisted Marketing Analysis and Planning, 2nd edition, Prentice Hall, 2002.
[20] J.D.C. Little, Models and managers: the concept of a decision calculus, Management Science 16 (8) (1970) B466–B485.
[21] S.H. McIntyre, An experimental study of the impact of judgment-based marketing models, Management Science 28 (1) (1982) 17–33.
[22] W.S. Perkins, R.C. Rao, The role of experience in information use and decision making by marketing managers, Journal of Marketing Research 27 (February 1990) 1–10.
[23] R.J. Riding, Cognitive Styles Analysis, Learning and Training Technology, Birmingham, 1991.
[24] R.J. Riding, S.S. Rayner, Cognitive Styles and Learning Strategies, David Fulton Publishers Ltd, London, 1998.
[25] J.H. Roberts, The intersection of modelling potential and practice, International Journal of Research in Marketing 17 (2000) 127–134.
[26] R. Sharda, S.H. Barr, J.C. McDonnell, Decision support system effectiveness: a review and an empirical test, Management Science 34 (2) (1988) 139–159.
[27] M.T. Spence, M. Brucks, The moderating effects of problem characteristics on experts' and novices' judgments, Journal of Marketing Research 34 (1997) 223–247.
[28] G.H. Van Bruggen, Performance effects of a marketing decision support system: a laboratory experiment, Proceedings of the 21st Conference of the European Marketing Academy, Aarhus, Denmark, May 26–29, 1992, pp. 1159–1178.
[29] G.H. Van Bruggen, B. Wierenga, Broadening the perspective on marketing decision models, International Journal of Research in Marketing 17 (2000) 159–168.
[30] G.H. Van Bruggen, A. Smidts, B. Wierenga, The impact of the quality of a marketing decision support system: an experimental study, International Journal of Research in Marketing 13 (4) (1996) 331–343.
[31] G.H. Van Bruggen, A. Smidts, B. Wierenga, Improving decision making by means of marketing decision support systems, Management Science 44 (5) (1998) 645–658.
[32] M.A. Vasarhelyi, Man-machine planning systems: a cognitive style examination of interactive decision making, Journal of Accounting Research (Spring 1977) 138–153.
[33] F. Guerra, A. Bultez, N. Demoulin, CADDIE: Cas d'Auto-apprentissage de Décisions en Distribution: Implantation d'enseigne, CREER, MODE, Catholic University of Mons (FUCaM), 1997.

Nathalie Demoulin is an Assistant Professor in Marketing at IESEG School of Management, Catholic University of Lille, France. Her research interests are marketing managers' decision-making processes and the impact of marketing decision support systems (MDSS) on managers.