Attribute non-attendance in discrete choice experiments: A case study in a developing country


Economic Analysis and Policy 47 (2015) 22–33

Contents lists available at ScienceDirect

Economic Analysis and Policy journal homepage: www.elsevier.com/locate/eap

Full length article

Thanh Cong Nguyen a,∗, Jackie Robinson b, Jennifer A. Whitty c, Shinji Kaneko d, Nguyen The Chinh e

a Faculty of Environment and Urban, National Economics University, Hanoi, Viet Nam
b School of Economics, University of Queensland, Brisbane 4072, Australia
c School of Pharmacy, The University of Queensland, Brisbane 4072, Australia
d Graduate School for International Development and Cooperation, Hiroshima University, Higashi-Hiroshima 739-8529, Japan
e Institute of Strategy and Policy on Natural Resources and Environment, Ministry of Natural Resources and Environment, Hanoi, Viet Nam

Article info

Article history: Received 5 January 2015; Received in revised form 23 June 2015; Accepted 26 June 2015; Available online 29 June 2015.

JEL classification: C83; L97; Q51

Keywords: Attribute non-attendance; Discrete choice experiments; Developing countries; Vietnam; Willingness-to-pay

Abstract

In a discrete choice experiment (DCE), some respondents may not attend to all presented attributes when evaluating and choosing their preferred options. Utilizing data from a DCE survey in Vietnam, this paper contributes to the literature on attribute non-attendance (ANA) with an investigation of ANA in a developing country context. Based on a review of relevant published ANA studies, we find that the extent of ANA reported by respondents in our Vietnam case study could potentially be more serious than in developed country studies. Our econometric analysis, based on a mixed logit model, shows that respondents who ignored attributes have different preferences from respondents who attended to them. An examination of ANA determinants using a multivariate probit model was undertaken to better understand the reasons for the differences in preferences between the two groups of respondents. Our results confirm that stated ANA can be an example of a simplifying strategy, and that respondents ignored attributes which were not relevant to their situation.

© 2015 Economic Society of Australia, Queensland. Published by Elsevier B.V. All rights reserved.

1. Introduction A discrete choice experiment (DCE) is a stated-preference (SP) technique that can be used to estimate the economic value of changes in non-market goods and services. In a DCE exercise, non-market goods and services are described to respondents by a number of attributes. A standard application of the DCE approach assumes that respondents consider all presented attributes in evaluating and choosing their preferred options. However, an increasing amount of research provides empirical evidence that when faced with a typical choice task in a DCE exercise, some respondents may actually make their choices by using only a subset of the attributes (Hensher et al., 2005; Campbell et al., 2008; Carlsson et al., 2010; Scarpa et al., 2010). This phenomenon is commonly referred to as attribute non-attendance (ANA) or attribute ignoring.



Corresponding author. Tel.: +84 4 36280280x5137 (Office); fax: +84 4 38698231; +84 944008982 (Mobile). E-mail addresses: [email protected] (T.C. Nguyen), [email protected] (J. Robinson), [email protected] (J.A. Whitty), [email protected] (S. Kaneko), [email protected] (T.C. Nguyen). http://dx.doi.org/10.1016/j.eap.2015.06.002 0313-5926/© 2015 Economic Society of Australia, Queensland. Published by Elsevier B.V. All rights reserved.


There are several reasons why respondents ignore attributes. The first may be the cognitive burden of making trade-offs among several attributes with different levels in a DCE exercise. Each attribute in itself may be quite difficult to understand. To deal with the cognitive burden, respondents may use simplifying strategies when making decisions, and ANA can be one such strategy (Carlsson et al., 2010). In addition, the design of a choice task can lead to attribute ignoring. The selection of attributes may result in lexicographic orderings (i.e. one attribute is much more important than the others) (Carlsson et al., 2010), or attributes may be immaterial to some respondents (e.g. elderly people may not use mobile phone text messaging). Hensher et al. (2012) suggest that an irrelevant range of attribute levels (e.g. infeasible improvement levels) can cause ANA. Other reasons for ignoring attributes may be protest-like (Carlsson et al., 2010; Alemu et al., 2013), e.g. respondents do not agree with the idea of paying for a public good.

Regardless of the reason behind attribute neglect, the majority of studies dealing with ANA suggest that if ANA is taken into account, model performance improves and potential biases in welfare estimates can be minimized (Hensher et al., 2005; Campbell et al., 2008; Carlsson et al., 2010; Alemu et al., 2013). Two main approaches to accounting for ANA in the data analysis have been commonly applied in the literature. The first approach, called stated ANA, uses follow-up questions asking respondents to state which attributes they attended to (or ignored) when deciding on their preferred options (Hensher et al., 2005; Campbell et al., 2008; Carlsson et al., 2010; Scarpa et al., 2010; Balcombe et al., 2011).
Respondents’ answers to the ANA questions are used to assign a weight to the attribute parameters, since ANA will affect the estimated attribute parameters. Typically, if a respondent i ignores an attribute j in a choice situation, the attribute parameter βij in the utility function will be restricted to zero (Hensher et al., 2005). There are grounds for questioning the restriction of zero parameters. A number of researchers have shown that respondents who indicate that they ignored a given attribute often show a non-zero sensitivity to that attribute (Campbell and Lorimer, 2009; Carlsson et al., 2010; Hess and Hensher, 2010). A possible interpretation of these results is that respondents who claim to have ignored a given attribute may simply have assigned a lower weight to the attribute (Hess and Hensher, 2013). Respondents’ self-stated ANA may still contain valuable information, but such data should not be used deterministically by the restriction of zero coefficients. Stated ANA responses could instead be used to determine the weighted parameters via interactions between the ignored attributes and dummy variables representing the stated ANA (Carlsson et al., 2010; Balcombe et al., 2011).

Without the need for self-reported indicators, the second approach to modeling ANA is to make use of latent class modeling techniques to infer ANA from the choice data (Scarpa et al., 2009; Hensher and Greene, 2010; Campbell et al., 2011; Hensher et al., 2012). This approach is termed inferred ANA. With this approach, different latent classes represent different combinations of attendance and non-attendance across attributes; when a given attribute is assumed to be ignored in a class, the ignored attribute’s parameter is constrained to zero. All possible combinations can be covered in a 2^K class specification, with K being the number of attributes. The estimated class probabilities show the share of respondents’ attendance and non-attendance across attributes.
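The 2^K class structure of the inferred-ANA approach can be sketched as follows. This is a minimal illustration of the enumeration, not the authors' estimation code; the attribute names follow the cyclone warning case study.

```python
from itertools import product

# Each latent class corresponds to one attendance/non-attendance pattern over
# the K attributes; parameters of non-attended attributes are restricted to
# zero, and class probabilities are inferred from the choice data.
attributes = ["accuracy", "update", "mobile_warning", "cost"]

patterns = list(product([True, False], repeat=len(attributes)))
print(len(patterns))  # 2^4 = 16 candidate latent classes

# Zero-restricted attributes for the pattern that attends only to cost:
attended = dict(zip(attributes, (False, False, False, True)))
restricted = [a for a, used in attended.items() if not used]
print(restricted)  # ['accuracy', 'update', 'mobile_warning']
```

With four attributes, the full specification has 16 classes, ranging from full attendance (all parameters free) to full non-attendance (all parameters zero).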
While the DCE approach was originally developed and mostly applied in developed countries, there has been a growing interest in applying this approach to address developing country issues. However, applying the DCE method in the developing country context faces some particular challenges, such as respondents’ lack of experience with surveys of public opinion and/or a lower level of literacy (Mangham et al., 2009; Bennett and Birol, 2010). To the best of the authors’ knowledge, this paper presents the first study to examine the issue of ANA reported by respondents in the context of a developing country.

In the next section, we introduce our case study in Vietnam. In Section 3, we present the findings on the incidence of stated ANA in our Vietnam study and a comparison of the rate of stated ANA between our study and a number of ANA studies undertaken in developed countries. We find that the extent of ANA stated by respondents in our Vietnam case study could potentially be more serious than in the developed country studies. To assess the effects of the stated ANA, our econometric analysis in Section 4 includes two groups of models, accounting and not accounting for the ANA. The results of the econometric models indicate that respondents who ignored the attributes have different preferences from respondents who attended to the attributes. Determinants of stated ANA in our study are analyzed in Section 5 to provide a better understanding of reasons for ANA. The findings will help DCE practitioners to reduce ANA in their DCE applications in developing countries.

2. Study design and implementation

Data used for this study of ANA came from a DCE exercise which aimed to elicit households’ preferences for improvements in cyclone warning services in Vietnam. Each improvement alternative was described by different improvement levels of three cyclone warning service attributes: accuracy of forecast information, number of updates per day and mobile phone short message warning.
A fourth (cost) attribute, defined as a one-off levy paid through the electricity bill, was also included. The willingness-to-pay (WTP) for an improvement in a single attribute of cyclone warning services can be estimated by the ratio of estimated coefficients of the attribute to the coefficient of the cost attribute. Table 1 presents the attribute levels applied in the DCE exercise. The attributes and their levels were identified following two focus group studies of public opinion about what attributes of a cyclone warning system they would be interested in, alongside a rigorous literature review and on-going discussions with Vietnamese meteorological experts. More detailed discussion of the attribute selection is provided in Nguyen et al. (2013).
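The WTP calculation described above can be sketched in a few lines. The coefficient values below are illustrative placeholders, not the paper's published estimates.

```python
# Marginal WTP for an attribute improvement is the negative ratio of the
# attribute coefficient to the cost coefficient (cost enters utility with a
# negative sign).
def marginal_wtp(beta_attr: float, beta_cost: float) -> float:
    """WTP = -beta_attr / beta_cost."""
    return -beta_attr / beta_cost

beta_update = 0.08    # utility per extra daily forecast update (hypothetical)
beta_cost = -0.027    # utility per unit of the one-off levy (hypothetical)

print(round(marginal_wtp(beta_update, beta_cost), 2))  # 2.96
```

Because the cost attribute is a one-off levy in thousand Vietnamese dong, the ratio is interpreted directly in those monetary units.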

Table 1
Attributes and levels for the discrete choice experiment in Vietnam. Source: Nguyen et al. (2013).

Attribute | Current level | Improvement levels
Accuracy of tropical cyclone forecast | Current condition | Level 1, Level 2, Level 3
Number of updates per day (times) | 8 | 8, 12, 16
Mobile phone short message warning | Not available | Not available, Available
A one-off payment in household electricity bill (Vietnamese dong) | 0 | 50, 150, 250

Table 2
Share of respondents ignoring a certain attribute.

Attribute | Number of respondents | Share(a) of respondents
Accuracy of forecast | 63 | 6.2%
Frequency of update | 90 | 8.9%
Mobile phone based warning | 368 | 36.3%
Cost | 127 | 12.5%

a Total number of respondents in the full sample is 1014.

Once attributes and levels were determined, an orthogonal design was used to construct twenty-four choice tasks. To reduce the cognitive burden on respondents, each respondent answered a randomly assigned block of six choice tasks. For each choice task, respondents were required to indicate their preference between two alternatives: a potential improvement program and the status quo (i.e., keeping all attributes at their current levels). The status quo option was the same across all choice tasks. An example of a choice task is presented in Fig. 1.

After the last choice task, four follow-up questions were used to investigate whether respondents had ignored any of the four specified attributes when making their choices. Respondents answered Yes/No for each attribute. This approach differs from asking about ignored attributes after every choice task, as applied in Puckett and Hensher (2009) and Scarpa et al. (2010). The follow-up questions in the present study collected less detailed information, but ANA questions after every choice task could affect respondents’ behavior over a series of choice questions (Carlsson et al., 2010; Hole et al., 2013). In particular, after the first choice task with ANA questions, respondents could think that they were expected to ignore attributes, or alternatively to pay more attention to all the attributes. Their behavior may then change and not reflect their true preferences in the following choice tasks. In addition, asking ANA questions after every choice task may create a large cognitive burden on respondents.

In 2011, face-to-face surveys were conducted at four sites representing both urban and rural coastal areas in the Northern and Central regions of Vietnam. These sites were recently affected by tropical cyclones in 2009 and 2010.
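The blocking described above can be sketched as follows. Task IDs are placeholders; the actual orthogonal design matrix is not reproduced here.

```python
import random

# Twenty-four choice tasks from the orthogonal design are split into four
# blocks of six; each respondent answers one randomly assigned block.
tasks = list(range(1, 25))                           # 24 choice tasks
blocks = [tasks[i:i + 6] for i in range(0, len(tasks), 6)]

def assign_block(rng: random.Random) -> list:
    """Randomly assign one block of six tasks to a respondent."""
    return rng.choice(blocks)

print(len(blocks))                           # 4
print(len(assign_block(random.Random(0))))   # 6
```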
Given recent memories of tropical cyclone effects and experience with the use of the cyclone early warning service, responses to the survey were expected to be reliable. The face-to-face survey mode was also selected to enhance the reliability of respondents’ answers.1 Interviewers were carefully trained to present and explain all questions, especially the DCE choice tasks, to respondents. It is expected that with the support from interviewers, respondents understood the questions asked in the questionnaire.2 The surveys were carried out using stratified sampling, with villages used as the strata. Maps provided by village leaders were used as the sampling frames. Households were the sample units, with a household member over 18 years old being the unit of inquiry. The survey was introduced to 1133 households, of which 1014 (89%) household representatives agreed to participate and completed the questionnaire. The total sample for analysis was 1014.

3. Incidence of attribute non-attendance

The responses to the ANA follow-up questions are summarized in Tables 2 and 3. As seen in Table 2, 63 (6.2%), 90 (8.9%), 127 (12.5%) and 368 (36.3%) respondents said they ignored the accuracy of forecast, frequency of update, cost and mobile phone based warning service, respectively. These results reveal that the mobile phone based warning was the most frequently ignored attribute. This result suggests that the attribute of mobile phone based warning was the least important

1 In a DCE in Australia, Windle and Rolfe (2011) found that, when compared with a paper-based survey using a drop-off/pick-up collection technique, their internet survey had a quicker collection time and lower survey costs; more importantly, the internet survey provided similar WTP estimates to the paper-based survey. However, face-to-face surveys are believed to be more appropriate in developing countries, where further explanation by interviewers, especially in rural areas, plays an important role in ensuring the reliability of survey results (Bennett and Birol, 2010).
2 According to the World Bank’s database, the literacy rate in Vietnam was 93% in 2011 (http://data.worldbank.org/indicator/SE.ADT.LITR.ZS). Although the literacy rate is relatively high, instructions from interviewers were necessary because respondents in Vietnam are not familiar with DCE surveys and have relatively lower education levels compared with respondents in developed countries. For example, only 4% of respondents in our sample have a university degree, while the sample of Carlsson et al. (2010) in Sweden had 32% of respondents with a university education.


Fig. 1. An example of a choice task. Source: Nguyen et al. (2013).

Table 3
Share of respondents using a given attribute processing strategy.

Attribute processing strategy | Number of respondents | Share of respondents
Attended to all attributes | 595 | 58.7%
Respondents ignoring 1 attribute | 296 | 29.2%
  Ignored only accuracy | 3 | 0.3%
  Ignored only update | 17 | 1.7%
  Ignored only mobile phone based warning | 248 | 24.5%
  Ignored only cost | 28 | 2.8%
Respondents ignoring 2 attributes | 62 | 6.1%
  Ignored accuracy & update | 1 | 0.1%
  Ignored accuracy & mobile phone based warning | 1 | 0.1%
  Ignored accuracy & cost | 1 | 0.1%
  Ignored update & mobile phone based warning | 13 | 1.3%
  Ignored update & cost | 0 | 0.0%
  Ignored mobile phone based warning & cost | 46 | 4.5%
Respondents ignoring 3 attributes | 16 | 1.6%
  Ignored update, mobile phone based warning & cost | 1 | 0.1%
  Ignored accuracy, mobile phone based warning & cost | 9 | 0.9%
  Ignored accuracy, update & cost | 2 | 0.2%
  Ignored accuracy, update & mobile phone based warning | 4 | 0.4%
Ignored all attributes | 45 | 4.4%
Total | 1014 | 100%

factor in influencing respondents’ decisions. More than one-tenth of all respondents stated that they ignored the cost attribute, indicating that for these respondents there were no trade-offs between cost and improvements in the cyclone warning attributes. The attributes of accuracy and frequency of update were each ignored by less than 10% of all respondents, suggesting that the majority of respondents believed that these two traditional attributes are important for an improved cyclone warning service.

In Table 3, most (58.7%) respondents stated that they considered all attributes in making their decisions. Three attributes were stated to be attended to by 29.2% of respondents, and two attributes by 6.1%. When reaching their decisions, 4.4% of respondents stated that they ignored all attributes, and 1.6% indicated that they used only one attribute; thus, a total of 6% (= 4.4% + 1.6%) of responses provided no information on willingness to make trade-offs among the attributes.

To examine the stated ANA issue in the developing country context, all relevant published DCE studies in which ANA was reported by respondents were reviewed. Besides the difference between developing and developed countries, other factors, e.g. choice task complexity, survey mode and respondents’ familiarity with the goods under consideration, may also be expected to affect stated ANA. Four ANA studies which have a level of task complexity, defined by the number of attributes,


Table 4
Share of respondents reportedly ignoring at least one attribute in previous studies and this study.

Study | Country of study | Sample size | Share of respondents | Number of alternatives | Number of attributes (excluding cost) | Number of attributes’ levels | Number of choice tasks | Survey mode | Goods or services under consideration
In developed countries (average)(a) | | | 40.0% | | | | | |
Balcombe et al. (2011) | Australia | 1106 | 27.0% | 3 | 3 | 2 | 8 | Internet survey | Beef
Campbell et al. (2008) | Ireland | 564 | 36.0% | 3 | 4 | 3 | 6 | Face-to-face | Rural landscape
Carlsson et al. (2010) | Sweden | 955 | 53.2% | 3 | 3–4 | 3–4 | 6 | Mail survey | Environmental quality
Kragt (2013) | Australia | 712 | 45.6% | 3 | 3 | 4 | 5 | Drop off/pick up | Coastal catchment management
This study | Vietnam | 1014 | 41.3% | 2 | 3 | 2–4 | 6 | Face-to-face | Cyclone warning service

a The average share of respondents was weighted by sample size using the formula: weighted average = Σ_j (size_j / Σ_j size_j) × share_j, where size_j is the sample size of study j and share_j is the share of respondents reportedly ignoring at least one attribute in study j.

attribute levels and choice tasks, similar to our DCE survey, are summarized in Table 4. Comparison of the share of respondents reportedly ignoring at least one attribute shows that the share of 41.3% in our study is a little higher than the weighted-average share of 40.0% across the four studies undertaken in developed countries.

The following review focuses on the effect of survey mode and respondents’ familiarity with the goods and services on stated ANA. Balcombe et al. (2011) used an internet survey to investigate preferences for beef produced from animals bred using new stem cell technologies in Australia. The share of respondents who ignored at least one attribute in the Balcombe et al. (2011) study is 27%. The equivalent share is 36% in the Campbell et al. (2008) study, which applied face-to-face interviews to examine preferences for rural landscape in Ireland. Carlsson et al. (2010) investigated preferences for improvements in environmental quality in Sweden using a mail survey. Kragt (2013) applied the drop off/pick up survey mode to explore preferences for coastal catchment management in Australia. The shares of respondents who ignored at least one attribute in Carlsson et al. (2010) and Kragt (2013) are 53% and 46%, respectively. The Balcombe et al. (2011) and Campbell et al. (2008) studies have relatively lower shares of respondents reportedly ignoring at least one attribute. A possible reason is the advantage of internet surveys and face-to-face interviews in providing additional information to help respondents understand the attributes. In internet surveys, respondents can ‘‘click on links to access more information about attributes’’ (Balcombe et al., 2011, p. 454); in face-to-face interviews, respondents can ask interviewers to clarify the attributes. The lowest rate of ANA is in the Balcombe et al. (2011) study. A possible explanation is that the good under consideration, beef, is likely to be familiar to respondents.
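The sample-size-weighted average in the Table 4 footnote can be reproduced directly from the figures reported there:

```python
# Weighted average of ANA shares across the four developed-country studies,
# weighting each study's share by its sample size (figures from Table 4).
studies = {
    "Balcombe et al. (2011)": (1106, 27.0),
    "Campbell et al. (2008)": (564, 36.0),
    "Carlsson et al. (2010)": (955, 53.2),
    "Kragt (2013)": (712, 45.6),
}

total_n = sum(n for n, _ in studies.values())
weighted_avg = sum(n * share for n, share in studies.values()) / total_n
print(round(weighted_avg, 1))  # 40.0
```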
Taking all of the above factors into account, the Campbell et al. (2008) study, which has a similar level of task complexity and also used face-to-face interviews, is the most suitable comparison with our study. As seen in Table 4, the share (41.3%) of respondents ignoring at least one attribute in our study is larger than the equivalent share (36%) in Campbell et al. (2008). The comparison between the two studies suggests that ANA presents at least an equal, but potentially more serious, problem for the validity of DCEs in developing countries than in developed countries.

4. Econometric analysis accounting for the stated attribute non-attendance

The stated ANA can be accommodated in a number of ways. In our study, three models are estimated and compared to search for the model specification that best fits the data:

Model 1 (Full attribute attendance): all respondents are assumed to have attended to all the attributes.

Model 2 (Restriction of zero parameter): the parameters of attributes ignored by a respondent are restricted to zero. If a respondent i states that he/she ignored an attribute j in a choice situation, the attribute parameter βij is constrained to zero.

Model 3 (ANA accounted via interaction): an attribute parameter is conditioned on the self-stated ANA, as though the reported ANA were a characteristic of the respondent who stated he/she ignored the attribute (Balcombe et al., 2011): βij = α0 + α1 zij + uij, where zij = 1 if respondent i reportedly ignored attribute j and 0 otherwise.3 α0 and α1 are expected to have opposite signs. If α1 is significant, it indicates that there is a significant difference in the preferences
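The interaction coding behind an interaction-based ANA specification can be sketched as building a paired column for each attribute. This is an illustrative sketch with hypothetical variable names, not the authors' estimation code, and it omits the ASC-based coding used for Ignored Accuracy level 1 later in the paper.

```python
# Each attribute column x is paired with an interaction column x * z, where
# z = 1 if the respondent stated that they ignored the attribute. The
# coefficient on x is then alpha_0 (attenders), and alpha_0 + alpha_1 applies
# to self-reported ignorers.
def design_row(x: dict, ignored: dict) -> dict:
    row = {}
    for attr, value in x.items():
        row[attr] = value
        row[f"ignored_{attr}"] = value if ignored.get(attr) else 0
    return row

x = {"update": 12, "cost": 150}        # attribute levels in one alternative
z = {"update": False, "cost": True}    # stated ANA for this respondent

print(design_row(x, z))
# {'update': 12, 'ignored_update': 0, 'cost': 150, 'ignored_cost': 150}
```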

3 This way of accommodating for ANA may lead to endogeneity issues which could in turn result in biased parameter estimates. However, there is little research on how taking into account the endogeneity issue may improve the model performance and parameter estimates. Alemu et al. (2013) find no


Table 5
Results of Akaike information criterion (AIC), Schwarz’s Bayesian information criterion (BIC), and consistent Akaike information criterion (CAIC) for Models 1–3.

Model | Number of parameters (P) | Log likelihood at convergence (LL) | AIC(a) | BIC(b) | CAIC(c)
Model 1 | 21 | −2846.507 | 5735.014 | 2919.184 | 5859.369
Model 2 | 21 | −2821.955 | 5685.910 | 2894.632 | 5810.265
Model 3 | 27 | −2675.582 | 5405.163 | 2769.024 | 5565.048

a AIC = −2(LL − P).
b BIC = −LL + 0.5P ln(N).
c CAIC = −2LL + P[ln(N) + 1]; N = 1014.
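The three criteria as defined in the table notes can be checked against the Model 1 figures:

```python
import math

# Information criteria as defined in the notes to Table 5; note that this BIC
# variant equals the conventional BIC divided by two.
def aic(ll: float, p: int) -> float:
    return -2 * (ll - p)

def bic(ll: float, p: int, n: int) -> float:
    return -ll + 0.5 * p * math.log(n)

def caic(ll: float, p: int, n: int) -> float:
    return -2 * ll + p * (math.log(n) + 1)

ll, p, n = -2846.507, 21, 1014   # Model 1
print(round(aic(ll, p), 3))      # 5735.014
print(round(bic(ll, p, n), 3))   # 2919.184
print(round(caic(ll, p, n), 3))  # 5859.369
```

All three criteria penalize the number of parameters, so the lower values for Model 3 in Table 5 indicate a better fit even after its six extra interaction parameters are penalized.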

between the two groups of respondents who ignored and attended to the attribute. This approach to accommodating ANA is appropriate because a respondent may not have fully ignored an attribute even after stating that he/she did so.

Models 1–3 are estimated as mixed logit (ML) models (Train, 2009) using the data from the full sample of 1014 respondents. The coefficients on the attribute variables, except for the cost attribute, are specified as random parameters with a normal distribution, and free correlation among the random parameters is allowed. The model that best represents the data is selected using the information criteria: the Akaike information criterion (AIC), Schwarz’s Bayesian information criterion (BIC) and the consistent Akaike information criterion (CAIC). Inspection of Table 5 reveals that Models 2 and 3 fit the data better than Model 1, which does not account for ANA. This finding is in line with the ANA literature showing that accommodating ANA improves model performance. With our data, Model 3, which accommodates ANA via interactions, is statistically better than Model 2. Model 3, therefore, is selected for analyzing respondents’ preferences for the attributes in our investigation of the stated ANA. To explore the potential effects of the stated ANA, welfare estimates from Model 3 are compared with the equivalent values estimated by Model 1.

The results of Models 1 and 3 are presented in Table 6. In these models, the alternative specific constant (ASC) is equal to 1 for improvement alternatives. Accuracy levels 2 and 3, which are dummy variables, are assessed relative to accuracy level 1; after accounting for the ASC, the implied coefficient of accuracy level 1 is 0. The frequency of update and cost attributes are treated as continuous variables, while the mobile phone short message warning attribute is modeled as an effects-coded variable.
As a rule of thumb, well-fitted discrete choice models should have a pseudo-R2 greater than 0.2 (Hoyos, 2010). Model fit is acceptable for all estimated models, with pseudo-R2 of 0.33–0.37. All mean coefficients of the attribute variables are statistically significant at the 1% level and have the expected signs. The positive coefficients of the warning attribute variables (i.e. accuracy, frequency of update, and mobile phone based warning service) indicate that respondents were more likely to opt for an improvement program offering more accurate and more frequently updated cyclone information, and the mobile phone short message warning service. The significant coefficients on the accuracy levels increase with the level, indicating that the higher level of accuracy improvement is preferred to the lower level, as expected. The negative sign of the coefficient on cost is consistent with our prior expectation that as the cost of supporting an improvement program increases, the demand for that program decreases.

Unobserved heterogeneity in preferences for a given attribute is represented by the estimated standard deviations, which are all significant in Table 6. The standard deviations are relatively large compared with the mean coefficients, indicating that there is a probability that some respondents have a reverse (negative) preference for the attributes, i.e. they do not prefer an improvement in a given attribute. This seems consistent with the fact that the ML model was estimated with data from the full sample, including all 151 zero-bid respondents who consistently voted ‘‘No’’ to improvements in any attribute. In the Model 3 results, the mean coefficient of an attribute variable, which is α0 when zij = 0, represents the preference of respondents who said they attended to the attribute.
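The share of negative preferences implied by a normally distributed random coefficient can be computed from the mean and standard deviation alone; the values below are the Model 1 estimates for the mobile phone short message warning attribute reported in Table 6.

```python
import math

# With beta ~ N(mean, sd^2), the share of respondents with a negative taste
# for the attribute is Phi(-mean / sd), the standard normal CDF evaluated
# at -mean/sd.
def prob_negative(mean: float, sd: float) -> float:
    z = -mean / sd
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

share = prob_negative(0.572, 1.177)  # mobile phone warning, Model 1
print(round(share, 2))  # roughly 0.31
```

A large standard deviation relative to the mean thus translates into a sizeable implied minority with a negative taste, consistent with the zero-bid respondents noted above.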
The interaction terms (α1) in Model 3 are Ignored Accuracy level 1, Ignored Accuracy level 2, Ignored Accuracy level 3, Ignored Frequency of update, Ignored Mobile phone based warning and Ignored Cost. The sum (α0 + α1), which applies when zij = 1, represents the preference of respondents who stated they ignored the attribute. The significance of the interaction terms, except for the Ignored Accuracy level 2 variable, indicates that there are differences in preferences between the two groups of respondents who stated they ignored and attended to a given attribute. All the interaction terms (α1) have the expected opposite signs to the mean attribute coefficients (α0). As discussed in the Appendix, there is confounding between the ASC and the accuracy dummy variables in the econometric analysis for this DCE study. To estimate Models 1 and 3, the accuracy level 1 variable is excluded, so that the preference for accuracy level 1 is captured by the ASC. Ignored Accuracy level 1, therefore, was measured by the interaction between the

improvements in model fit and very limited impact on parameter estimates when the endogeneity issue is accounted for. Another reason not to deal with endogeneity in the current study is that we wanted to ensure comparability of our results with previous studies examining stated ANA, the majority of which have not addressed the endogeneity issue. The main aims of our study are to provide an investigation into stated ANA in a developing country and to gain behavioral insights into why respondents state that they have ignored an attribute in a DCE exercise. Examining the performance of models taking endogeneity into account, such as the latent variable scaling approach developed by Hess and Hensher (2013), could be an interesting topic for future research.

Table 6
Results of mixed logit models 1 and 3.

Variable | Model 1 mean | Model 1 std. dev. | Model 3 mean | Model 3 std. dev.
ASC | 1.149*** (0.167) | 2.283*** (0.257) | 1.357*** (0.166) | 2.241*** (0.249)
Accuracy level 2 | 0.556*** (0.133) | 1.063*** (0.145) | 0.578*** (0.132) | 1.038*** (0.147)
Accuracy level 3 | 1.674*** (0.155) | 1.678*** (0.348) | 1.753*** (0.158) | 1.732*** (0.199)
Frequency of update | 0.080*** (0.017) | 0.072*** (0.020) | 0.097*** (0.016) | 0.058** (0.023)
Mobile phone short message warning | 0.572*** (0.069) | 1.177*** (0.097) | 0.906*** (0.078) | 1.126*** (0.125)
Cost | −0.027*** (0.001) | | −0.028*** (0.001) |
Ignored Accuracy level 1 | | | −3.587*** (0.746) |
Ignored Accuracy level 2 | | | −1.442 (1.051) |
Ignored Accuracy level 3 | | | −3.454** (1.420) |
Ignored Frequency of update | | | −0.313*** (0.067) |
Ignored Mobile phone based warning | | | −0.979*** (0.107) |
Ignored Cost | | | 0.007*** (0.001) |
Number of respondents | 1014 | | 1014 |
Log-likelihood | −2846.507 | | −2675.582 |
Pseudo-R2 | 0.325 | | 0.365 |

Standard errors are in parentheses. ASC: alternative specific constant, equal to 1 for improvement alternatives.
** Denotes 5% significance level. *** Denotes 1% significance level.

ASC and the ignored accuracy dummy variable (zi = 1 if the accuracy attribute is reportedly ignored by respondent i and 0 otherwise). In addition to the preference for accuracy level 1, the ASC represents respondents’ utility of moving away from the status quo option. The negative sign of Ignored Accuracy level 1 suggests that respondents who stated they ignored the accuracy attribute are less likely to choose an alternative specifically offering accuracy level 1 and also less likely to support an improvement alternative in general. Among the 63 respondents who reported ignoring the accuracy attribute in our DCE survey, 58 (92% of this group) consistently voted ‘‘No’’ to all improvement alternatives in the series of six choice tasks. Based on an examination of public opinion on the attributes of cyclone warning services in Vietnam, Nguyen et al. (2013) suggest that the accuracy of forecast information appears to be the most important attribute of an improved cyclone warning service. Hence, it is reasonable that respondents who ignored the accuracy attribute are less likely to support improvements in cyclone warning services compared with respondents attending to the accuracy attribute. Ignored Accuracy level 2 and Ignored Accuracy level 3 were measured by the interactions between the ignored accuracy dummy variable and the Accuracy level 2 and Accuracy level 3 variables, respectively. In the econometric models, the preferences for accuracy levels 2 and 3 are assessed relative to the preference for accuracy level 1. The negative signs of Ignored Accuracy level 2 and Ignored Accuracy level 3 imply that respondents who ignored the accuracy attribute are less likely to choose accuracy levels 2 and 3 over accuracy level 1, while respondents who attended to the accuracy attribute are more likely to opt for accuracy levels 2 and 3, which are better improvements.
With regard to the other attributes, Ignored Frequency of update, Ignored Mobile phone based warning and Ignored Cost were measured by interacting the relevant ignored dummy variable with the corresponding attribute variables. The negative signs of Ignored Frequency of update and Ignored Mobile phone based warning suggest that respondents who ignored the updating frequency and mobile phone based warning attributes are less likely to choose a better alternative (i.e. an option with a higher frequency of update or one including the mobile phone short message warning service). The positive sign of Ignored Cost indicates that respondents who ignored the cost attribute are more likely to select an alternative with a higher cost. It is reasonable to suggest that, compared with respondents who attended to the attributes, respondents who ignored attributes when making their decisions made comparatively irrational choices. If ANA were not taken into account, the irrational choices of respondents who ignored the attributes would present a potential threat to the validity of DCE results.

Table 7 reports WTP estimates from Models 1 and 3. For Model 1, which does not apply any treatment for ANA, the WTP was estimated for the whole sample. To be comparable with Model 1, the WTP estimated by Model 3 is the unconditional WTP measured for the whole sample. To estimate the unconditional WTP, we use the information in Table 2, which reports the share of respondents who ignored each attribute. The unconditional mean coefficient of an attribute is calculated

T.C. Nguyen et al. / Economic Analysis and Policy 47 (2015) 22–33

29

Table 7
Willingness-to-pay estimated by Model 1 and Model 3 (1000 Vietnamese dong).

                                        Model 1              Model 3              Resampling test (a)
Unconditional mean WTP for a change in the attributes
  Accuracy level 1                      41.140*** (5.688)    40.654*** (5.767)    p = 0.476
  Accuracy level 2                      61.043*** (5.005)    58.185*** (5.253)    p = 0.336
  Accuracy level 3                      101.070*** (5.973)   95.823*** (6.418)    p = 0.282
  Frequency of update                   2.890*** (0.618)     2.484*** (0.624)     p = 0.330
  Mobile phone short message warning    41.011*** (4.422)    39.492*** (4.599)    p = 0.401
Total WTP
  Medium improvement (b)                113.614*** (4.874)   107.614*** (4.856)   p = 0.176
  Maximal improvement (c)               165.201*** (5.337)   155.188*** (5.952)   p = 0.102

Standard errors are in parentheses, and are based on the Krinsky–Robb simulation using 1000 draws. *** Denotes 1% significance level.
(a) Resampling tests (Poe et al., 1994) were used to test the null hypothesis that the WTP estimates by Model 1 are not different from the equivalent values estimated by Model 3.
(b) The medium improvement program includes accuracy improvement level 2, an update frequency of 12 times and a mobile phone short message warning.
(c) The maximal improvement program includes accuracy improvement level 3, an update frequency of 16 times and a mobile phone short message warning.

by the following formula: (share of respondents who attended to the attribute) × α0 + (share of respondents who ignored the attribute) × (α0 + α1), where α0 is the attribute coefficient and α1 is the coefficient of the corresponding interaction term. The unconditional WTP for each attribute is obtained by dividing the unconditional mean coefficient of the attribute by the unconditional mean coefficient of the cost attribute.

Based on resampling tests (Poe et al., 1994), the comparison of the WTP values estimated by Model 3 with the equivalent values estimated by Model 1 did not find any significant differences. This means that in our study the WTP estimates derived from the model accounting for ANA are statistically similar to those estimated by the model with the standard assumption of full attendance. This finding is in sharp contrast to the findings of Hensher et al. (2005), Campbell et al. (2008) and Campbell and Lorimer (2009), but it is similar to the findings of Carlsson et al. (2010) and Hole et al. (2013).

5. Determinants of the stated attribute non-attendance

The results of the interaction terms in Model 3 indicate that respondents who stated they ignored one or more attributes have different preferences from respondents who said they attended to the attributes. To gain a better understanding of the reasons for these differences, this study examines the characteristics of respondents who stated they ignored a given attribute. The dependent variables in this examination have two possible outcomes (1 if a respondent ignored a given attribute, 0 otherwise). There are four attributes in our study, and a respondent may ignore one or more attributes when making choices. This implies that there are correlations between the four dependent variables representing the four attributes. Following Carlsson et al. (2010), multivariate probit models were applied to take these correlations into account. A constant is included in each multivariate probit model to represent unobserved factors (i.e. the part not represented by the covariates) affecting the likelihood that a respondent belongs to each ANA group.

In the multivariate probit models, respondents' certainty about their WTP, household income, responses to the follow-up question "Choice questions are hard?", age and risk aversion were modeled as continuous variables, while central region residency, male, education, mobile phone use and having children less than 10 years old were treated as dummy variables. The multivariate probit models were estimated with 500 draws using the GHK (Geweke–Hajivassiliou–Keane) simulator provided in the NLOGIT 5.0 package. Table 8 reports the results of the models.

Inspection of Table 8 reveals that the variables of certainty about willingness-to-pay, household income and central region residency have a negative effect on the likelihood of ignoring all four attributes, although household income is not statistically significant in the model for the 'ignored cost' group and central region residency is not significant in the results for the 'ignored accuracy' and 'ignored cost' groups. Respondents' certainty about their WTP was measured using a 10-point rating scale, with 1 labeled "very uncertain" and 10 labeled "very certain". The certainty score shows how sure respondents were that they would actually pay the amount they had stated. Respondents with a higher certainty score, higher household income, or residence in the Central region, which is more frequently hit by cyclones, are less likely to ignore the attributes when making choices. A reason may be that these respondents are more willing to pay for improvements in cyclone warning services, and their attendance to all the attributes may reflect their support.
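The GHK simulator mentioned above approximates the joint probability of a vector of correlated binary outcomes by sequentially drawing truncated normal errors. The following is an illustrative sketch of the idea only (not the NLOGIT implementation; the function name and test values are hypothetical):

```python
import numpy as np
from scipy.stats import norm

def ghk_probit_prob(mu, sigma, d, n_draws=500, seed=0):
    """GHK simulation of P(y = d) for a multivariate probit.

    mu    : index x'beta for each of the M equations
    sigma : M x M error covariance (correlation) matrix
    d     : observed 0/1 outcomes (e.g. ignored/attended flags)
    """
    rng = np.random.default_rng(seed)
    M = len(mu)
    # Sign-flip trick: multiplying equation m by s_m = 2*d_m - 1 turns
    # every observed outcome into the event nu_m < s_m * mu_m, where
    # nu has covariance outer(s, s) * sigma.
    s = 2 * np.asarray(d) - 1
    mu_t = s * np.asarray(mu)
    sig_t = np.outer(s, s) * np.asarray(sigma, dtype=float)
    L = np.linalg.cholesky(sig_t)

    prob = np.ones(n_draws)
    e = np.zeros((n_draws, M))
    for m in range(M):
        # Upper truncation point implied by the earlier draws
        b = (mu_t[m] - e[:, :m] @ L[m, :m]) / L[m, m]
        p_m = norm.cdf(b)
        prob *= p_m
        # Draw eta_m from N(0,1) truncated above at b
        u = rng.uniform(size=n_draws)
        e[:, m] = norm.ppf(u * p_m)
    return prob.mean()
```

The product of the sequential truncation probabilities, averaged over the draws, estimates the joint probability; the sample likelihood is the product of these probabilities over respondents.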
It has been shown in experiments that respondents who were willing to pay for goods and services in a SP setting with a hypothetical payment, and who had a higher stated certainty, were more likely to actually pay for the goods and services in a real payment setting (Ready

30

T.C. Nguyen et al. / Economic Analysis and Policy 47 (2015) 22–33

Table 8
Multivariate probit models by groups of respondents ignoring each attribute.

Variables                            Ignored accuracy    Ignored frequency   Ignored mobile phone   Ignored cost
                                     of forecast         of update           based warning

Constant                             −0.847 (0.607)      −0.502 (0.503)      −0.510 (0.337)         −0.545 (0.394)
Certainty about willingness-to-pay   −0.079*** (0.030)   −0.070** (0.029)    −0.040* (0.023)        −0.097*** (0.026)
Monthly household income (a)         −0.094** (0.039)    −0.076*** (0.027)   −0.058*** (0.013)      −0.029 (0.022)
 (million Vietnamese dong)
Central region residency             −0.236 (0.170)      −0.293** (0.134)    −0.339*** (0.087)      −0.145 (0.112)
Choice questions are hard            0.265*** (0.096)    0.193** (0.078)     0.175*** (0.055)       0.249*** (0.069)
Age (years)                          0.002 (0.008)       −0.001 (0.007)      0.018*** (0.004)       0.001 (0.005)
Male                                 0.109 (0.240)       −0.001 (0.181)      0.079 (0.116)          0.198 (0.172)
Education (above high school)        0.065 (0.204)       0.272* (0.154)      0.054 (0.096)          0.064 (0.125)
Using mobile phone                   −0.029 (0.231)      0.011 (0.177)       −0.333*** (0.112)      −0.145 (0.143)
Having children < 10 years old       0.023 (0.169)       −0.039 (0.133)      0.274*** (0.089)       0.124 (0.118)
Risk aversion                        −0.052 (0.054)      −0.020 (0.043)      0.043 (0.029)          0.074** (0.036)
Number of respondents                63                  90                  368                    127

Correlation matrix
Ignored Accuracy of forecast         1.000
Ignored Frequency of update          0.949               1.000
Ignored Mobile phone based warning   0.579               0.529               1.000
Ignored Cost                         0.788               0.658               0.537                  1.000

Standard errors are in parentheses. To avoid excluding respondents from the multivariate probit regression, the mean income value was used to fill in missing income information. * Denotes 10% significance level. ** Denotes 5% significance level. *** Denotes 1% significance level.
(a) In the full sample, 23 respondents did not provide information about income.

et al., 2010). Findings in Nguyen et al. (2013) also indicate that respondents with higher household income and those living in the Central region of Vietnam were more likely to choose options with improvements in cyclone warning services over the status quo option.

The present study included a follow-up question asking whether respondents found the choice questions difficult, with the responses Yes (coded 2), Maybe (coded 1) and No (coded 0). The 'choice questions are hard' variable is significant and positive in all four models: if the choice questions are hard to answer, respondents are more likely to ignore attributes in order to simplify the choice tasks. This finding is consistent with ANA being a simplifying strategy applied by respondents, possibly due to a lack of understanding of the attribute (Carlsson et al., 2010).

Some variables are only significant for the groups of respondents ignoring specific attributes. The variables age, using mobile phone, and having children less than 10 years old are significant in the results for the group of respondents ignoring the mobile phone based warning. Some of these results are immediately intuitive: older people are more likely to ignore the mobile phone services, possibly because they are not comfortable with the technology, and people regularly using a mobile phone are less likely to ignore the mobile phone based warnings. Concerning the positive sign of the having children less than 10 years old variable, a reason may be that when a cyclone is approaching, evacuation of young children would be a high-priority option. People with young children, therefore, may not rely on mobile phone based warnings to update cyclone information when making the decision about evacuation.

The education variable is significant and positive in the group of respondents ignoring frequency of update. This implies that people with higher education are more likely to ignore the updating frequency attribute. A possible explanation is that respondents with higher education may have jobs that involve less exposure to cyclone risk than riskier occupations (e.g. fishing). Therefore, they would not need to rely on updated information from a cyclone warning service and might be satisfied with the current number of 8 updates per day.

To assess respondents' attitude toward cyclone risk, a risk aversion score was measured using factor analysis of respondents' answers to four statements reported in Nguyen and Robinson (2015). The four statements relate specifically to respondents' attitude toward cyclone risk and their behavior in response to cyclones; higher scores indicate greater risk aversion. The risk aversion variable is significant in the model for the group of respondents ignoring cost. The positive sign indicates that the more risk averse the respondent, the more likely it is that he/she ignored the cost attribute. This may reflect a lower willingness to trade off money against cyclone related safety.
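For reference, the resampling ('complete combinatorial') test of Poe et al. (1994) used for the WTP comparisons in Table 7 can be sketched as follows. The function names and coefficient values are hypothetical; the Krinsky–Robb step simply draws coefficient pairs from their estimated asymptotic normal distribution (the sign convention assumes a negative cost coefficient):

```python
import numpy as np

def krinsky_robb_wtp(beta_attr, beta_cost, vcov_2x2, n_draws=1000, seed=1):
    """Simulate the WTP distribution -beta_attr/beta_cost by drawing
    (attribute, cost) coefficient pairs from a normal distribution
    with the estimated means and covariance."""
    rng = np.random.default_rng(seed)
    draws = rng.multivariate_normal([beta_attr, beta_cost], vcov_2x2, n_draws)
    return -draws[:, 0] / draws[:, 1]

def poe_test(wtp_a, wtp_b):
    """One-sided p-value from the complete combinatorial approach of
    Poe et al. (1994): the share of all cross-sample differences <= 0."""
    diffs = wtp_a[:, None] - wtp_b[None, :]
    return (diffs <= 0).mean()

# Illustrative comparison of two simulated WTP distributions
wtp_m1 = krinsky_robb_wtp(2.0, -0.05, [[0.01, 0.0], [0.0, 1e-5]], seed=1)
wtp_m3 = krinsky_robb_wtp(1.9, -0.05, [[0.01, 0.0], [0.0, 1e-5]], seed=2)
print(f"Poe p-value: {poe_test(wtp_m1, wtp_m3):.3f}")
```

A p-value far from both 0 and 1, as for all comparisons in Table 7, indicates no significant difference between the two WTP distributions.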


6. Conclusions

In recent years, there has been growing recognition that respondents do not attend to all the presented attributes when making their choices in a DCE exercise. To the best of the authors' knowledge, this paper presents the first investigation of the issue of stated ANA in a developing country context. Controlling for choice task complexity, four DCE studies conducted in developed countries were selected and the rate of ANA in these studies was compared with the corresponding rate in our DCE study in Vietnam, a developing country. The share of respondents who stated that they ignored at least one attribute in our DCE exercise is higher than the equivalent share in the Campbell et al. (2008) study, which has a similar survey design and implementation, and also higher than the average share of the four ANA studies undertaken in developed countries. Future research using the same questionnaire for split samples in developing and developed countries is needed to test this finding. Nevertheless, the preliminary investigation of the extent of the stated ANA in our study suggests that ANA could be a challenge for the application of the DCE approach in developing countries.

In line with the previous literature, this study shows that accommodating ANA resulted in a considerable improvement in model performance. Compared with the model assuming full attendance, the models accounting for the stated ANA were a statistically better fit for the data. The results of the model accounting for the stated ANA via interaction terms, which was the best-fitting model, indicate that respondents who stated they ignored one or more attributes could have made relatively irrational choices compared with respondents who said they attended to the attributes. However, the examination of WTP estimates cannot provide conclusive support for the assumption that the stated ANA affects value estimates.
Our examination of determinants of stated ANA in the Vietnam case study found a relationship between the stated ANA and respondents' WTP. Respondents with higher levels of certainty about their willingness-to-pay, higher household income, or residence in the Central region, which is more frequently hit by cyclones, are less likely to ignore the attributes when making choices. This suggests that the attendance of these respondents to the attributes may reflect their higher level of WTP for improvements in cyclone warning services. This finding highlights the need for a better approach to dealing with the endogeneity problem, as discussed in Hess and Hensher (2013).

The investigation of ANA determinants also indicates that ANA could be a simplifying strategy of respondents who found the choice questions difficult. In addition, the results confirm that respondents ignored attributes which were not relevant to their situation. We found that older people are more likely to ignore the mobile phone services; households with children less than 10 years old, who need to be evacuated as a first priority, did not seem to be interested in using mobile phone based warnings to update cyclone information; and people with higher education, who might be satisfied with the current number of 8 updates per day, are more likely to ignore the frequency of update attribute. It is also interesting that the more risk averse the respondent, the more likely it is that he/she ignored the cost attribute, since such a respondent might not be willing to trade off money against cyclone related safety.

To reduce ANA and its potential threat to DCE validity in the developing country context, the results of this study confirm that DCE practitioners should keep choice task design at a minimum level of complexity and pay close attention to the selection of relevant attributes.
Acknowledgments

This research is part of the project ''Estimating the benefits of an improved tropical cyclone warning service in Vietnam: An application of choice modelling'', carried out with the aid of a grant from the Economy and Environment Program for Southeast Asia (EEPSEA) (Grant No. 106612-99906060-008). The authors are grateful to Vic Adamowicz, University of Alberta, and Pham Khanh Nam, University of Economics Ho Chi Minh City, for their valuable comments and suggestions on the research proposal and analysis. The authors would like to thank Le Thanh Hai and Truong Tuyen for their assistance in conducting the surveys.

Appendix. The confounding between the alternative specific constant and the accuracy dummy variables

In this DCE study, there is confounding between the qualitative parameters of the accuracy attribute and an alternative specific constant (ASC) in the utility function (Nguyen et al., 2015). This confounding is caused by the decision not to include the status quo level of the accuracy attribute in the improvement options. A main reason for this decision is that excluding the status quo level from the improved options was expected to make the choice options more realistic. In 2010, the Government of Vietnam endorsed the Strategy for Development of the Hydro-meteorological Service, which is operational until 2020. Under the Strategy, one improvement program, ratified on 25/06/2010 by the Prime Minister, was to modernize forecasting technologies and monitoring systems for hydro-meteorological services in 2010–2012. The investment in forecasting technologies and monitoring systems is expected to create improvements in forecast accuracy. The Government's strategy and the improvement program were briefly introduced to respondents at the beginning of the survey to emphasize the survey's credibility.
If no improvement in accuracy were included in the improvement alternatives, respondents would likely regard those alternatives as unrealistic, particularly because the Government's improvement program commenced one year before the survey. In developing a DCE exercise, it is very important to construct the choice task with as much realism as possible (Alpízar et al., 2001; Bennett and Adamowicz, 2001). Unrealistic choice options could result in zero-bid protest votes against all proposed programs (Boxall et al., 2012). In addition,


some respondents who think the options are unrealistic, and who do not believe that the survey is credible, may still vote "yes" for various reasons (e.g. to please the interviewers), or may answer randomly, not only to the choice questions but also to the follow-up questions. In that case, the responses may introduce unidentified biases into the results. However, it is acknowledged that the design of the accuracy attribute in this research causes problems in the econometric analysis of this DCE survey. If the accuracy attribute were treated as a quantitative variable, there would be no confounding problem. However, a body of literature on the valuation of meteorological services indicates that accuracy and forecast value have a non-linear relationship (Mjelde et al., 1993; Letson et al., 2007; Millner, 2008). The accuracy attribute should therefore be measured as a qualitative variable to capture the non-linear relationship between WTP values and the accuracy of cyclone forecasting. The accuracy attribute has four levels: the status quo level and three improvement levels 1, 2 and 3. These four levels require three accuracy dummy parameters: accuracy level 1, accuracy level 2 and accuracy level 3. Since the accuracy parameters are confounded with an ASC, the dummy parameter representing the preference associated with accuracy level 1 is excluded in order to estimate the econometric models in this study. In the results of the econometric models, the ASC is reported with the implicit assumption that the mean coefficient of accuracy level 1 is 0. With the exclusion of the accuracy level 1 parameter, the ASC captures the preference for accuracy level 1, so that the ASC represents both respondents' utility of moving away from the current situation and their preference for accuracy improvement level 1. The approach to modeling the accuracy attribute used in this study is similar to the approach used in the Boxall et al.
(2012) study, in which a constant is likewise confounded with the least improved program (program A).

References

Alemu, Mohammed Hussen, Mørkbak, Morten Raun, Olsen, Søren Bøye, Jensen, Carsten Lynge, 2013. Attending to the reasons for attribute non-attendance in choice experiments. Environ. Resour. Econ. 54 (3), 333–359.
Alpízar, Francisco, Carlsson, Fredrik, Martinsson, Peter, 2001. Using choice experiments for non-market valuation. Econ. Issues 8 (1), 83–110.
Balcombe, Kelvin, Burton, Michael, Rigby, Dan, 2011. Skew and attribute non-attendance within the Bayesian mixed logit model. J. Environ. Econ. Manag. 62 (3), 446–461.
Bennett, Jeff, Adamowicz, Vic, 2001. Some fundamentals of environmental choice modelling. In: Bennett, J., Blamey, R.K. (Eds.), The Choice Modelling Approach to Environmental Valuation. E. Elgar Pub, Cheltenham, UK, Northampton, MA, USA, pp. 37–69.
Bennett, Jeff, Birol, Ekin, 2010. Choice Experiments in Developing Countries: Implementation, Challenges and Policy Implications. Edward Elgar, Cheltenham, UK, Northampton, MA.
Boxall, Peter, Adamowicz, W.L., Olar, M., West, G.E., Cantin, G., 2012. Analysis of the economic benefits associated with the recovery of threatened marine mammal species in the Canadian St. Lawrence Estuary. Mar. Policy 36 (1), 189–197.
Campbell, Danny, Hensher, David A., Scarpa, Riccardo, 2011. Non-attendance to attributes in environmental choice analysis: a latent class specification. J. Environ. Plan. Manag. 54 (8), 1061–1076.
Campbell, Danny, Hutchinson, George, Scarpa, Riccardo, 2008. Incorporating discontinuous preferences into the analysis of discrete choice experiments. Environ. Resour. Econ. 41 (3), 401–417.
Campbell, Danny, Lorimer, Victoria S., 2009. Accommodating attribute processing strategies in stated choice analysis: do respondents do what they say they do? In: European Association of Environmental and Resource Economists Annual Conference, Amsterdam, June 2009.
Carlsson, Fredrik, Kataria, Mitesh, Lampi, Elina, 2010. Dealing with ignored attributes in choice experiments on valuation of Sweden's environmental quality objectives. Environ. Resour. Econ. 47 (1), 65–89.
Hensher, David, Greene, William, 2010. Non-attendance and dual processing of common-metric attributes in choice analysis: a latent class specification. Empir. Econom. 39 (2), 413–426.
Hensher, David A., Rose, John, Greene, William H., 2005. The implications on willingness to pay of respondents ignoring specific attributes. Transportation 32 (3), 203–222.
Hensher, David, Rose, John, Greene, William, 2012. Inferring attribute non-attendance from stated choice data: implications for willingness to pay estimates and a warning for stated choice experiment design. Transportation 39 (2), 235–245.
Hess, Stephane, Hensher, David A., 2010. Using conditioning on observed choices to retrieve individual-specific attribute processing strategies. Transp. Res. B 44 (6), 781–790.
Hess, Stephane, Hensher, David A., 2013. Making use of respondent reported processing information to understand attribute importance: a latent variable scaling approach. Transportation 40 (2), 397–412.
Hole, Arne Risa, Kolstad, Julie Riise, Gyrd-Hansen, Dorte, 2013. Inferred vs. stated attribute non-attendance in choice experiments: A study of doctors' prescription behaviour. J. Econ. Behav. Organ. 96, 21–31.
Hoyos, David, 2010. The state of the art of environmental valuation with discrete choice experiments. Ecol. Econ. 69 (8), 1595–1603.
Kragt, Marit E., 2013. Stated and inferred attribute attendance models: A comparison with environmental choice experiments. J. Agric. Econ. 64 (3), 719–736.
Letson, David, Sutter, Daniel S., Lazo, Jeffrey K., 2007. Economic value of hurricane forecasts: An overview and research needs. Nat. Hazards Rev. 8 (3), 78–86.
Mangham, Lindsay J., Hanson, Kara, McPake, Barbara, 2009. How to do (or not to do)... Designing a discrete choice experiment for application in a low-income country. Health Policy Plan. 24 (2), 151–158.
Millner, Antony, 2008. Getting the most out of ensemble forecasts: A valuation model based on user–forecast interactions. J. Appl. Meteorol. Climatol. 47 (10), 2561–2571.
Mjelde, James W., Peel, Derrell S., Sonka, Steven T., Lamb, Peter J., 1993. Characteristics of climate forecast quality: implications for economic value to midwestern corn producers. J. Clim. 6 (11), 2175–2187.
Nguyen, Thanh Cong, Robinson, Jackie, 2015. Analysing motives behind willingness to pay for improving early warning services for tropical cyclones in Vietnam. Meteorol. Appl. 22 (2), 187–197.
Nguyen, Thanh Cong, Robinson, Jackie, Kaneko, Shinji, Komatsu, Satoru, 2013. Estimating the value of economic benefits associated with adaptation to climate change in a developing country: A case study of improvements in tropical cyclone warning services. Ecol. Econ. 86, 117–128.
Nguyen, Thanh Cong, Robinson, Jackie, Kaneko, Shinji, Nguyen, The Chinh, 2015. Examining ordering effects in discrete choice experiments: A case study in Vietnam. Econ. Anal. Policy 45, 39–57.
Poe, Gregory L., Severance-Lossin, Eric K., Welsh, Michael P., 1994. Measuring the difference (X − Y) of simulated distributions: A convolutions approach. Am. J. Agric. Econ. 76 (4), 904–915.
Puckett, Sean M., Hensher, David A., 2009. Revealing the extent of process heterogeneity in choice analysis: An empirical assessment. Transp. Res. A 43 (2), 117–126.
Ready, Richard C., Champ, Patricia A., Lawton, Jennifer L., 2010. Using respondent uncertainty to mitigate hypothetical bias in a stated choice experiment. Land Econ. 86 (2), 363–381.


Scarpa, Riccardo, Gilbride, Timothy J., Campbell, Danny, Hensher, David A., 2009. Modelling attribute non-attendance in choice experiments for rural landscape valuation. Eur. Rev. Agric. Econ. 36 (2), 151–174.
Scarpa, Riccardo, Thiene, Mara, Hensher, David A., 2010. Monitoring choice task attribute attendance in nonmarket valuation of multiple park management services: Does it matter? Land Econ. 86 (4), 817–839.
Train, Kenneth E., 2009. Discrete Choice Methods with Simulation. Cambridge University Press, New York.
Windle, Jill, Rolfe, John, 2011. Comparing responses from Internet and paper-based collection methods in more complex stated preference environmental valuation surveys. Econ. Anal. Policy 41 (1), 83–97.