Agricultural Systems 103 (2010) 345–350
Probabilities for decision analysis in agriculture and rural resource economics: The need for a paradigm change

J. Brian Hardaker a, Gudbrand Lien b,c,*

a School of Business, Economics and Public Policy, University of New England, Armidale, NSW 2351, Australia
b Norwegian Agricultural Economics Research Institute, PO Box 8024 Dep., NO-0030 Oslo, Norway
c Lillehammer University College, PO Box 952, NO-2604 Lillehammer, Norway
Article info

Article history: Received 9 March 2009; Received in revised form 11 January 2010; Accepted 19 January 2010; Available online 18 April 2010

Keywords: Decision analysis; Risk and uncertainty; Subjective probabilities
Abstract

The notion that we can rationalize risky choice in terms of expected utility appears to be widely if not universally accepted in the agricultural and resource economics profession. While there have been many attempts to assess the risk preferences of farmers, there are few studies of their beliefs about uncertain events encoded as probabilities. We may attribute this neglect to scepticism in the profession about the concept of subjective probability. The general unwillingness to embrace this theory and its associated methods has all too often caused researchers to focus on problems for which frequency data are available, rather than on more important problems for which data are generally sparse or lacking. In response, we provide a brief reminder of the merits of the subjectivist approach and extract some priorities for future research should there be a change of heart among at least some of the profession.

© 2010 Published by Elsevier Ltd.
1. Introduction

There is a strong case for the need to improve the quality of risky decisions made by farmers and other rural land managers. Similarly, there is a case that the handling of risk in policy-making in the agricultural and resource sectors leaves scope for improvement. There is also a need to gain a better understanding of how these decision-makers actually make risky choices. Perhaps these choices are most starkly illustrated by the risks to the rural sector (and the planet) from climate change. Why was there, and why is there still, so much disagreement about the need to limit greenhouse gas emissions? In addition, what do we know about how land and other natural resource managers will respond to climate change, with or without economic incentives such as carbon trading?

Most decisions entail a degree of uncertainty about the consequences, and if the possible differences in consequences are important, we define such decisions as risky. Decision analysis is widely used in the analysis of such risky decisions. There are two assessment tasks in decision analysis: namely, assessing beliefs about the chances of occurrence of the uncertain outcomes (probabilities); and assessing relative preferences for the outcomes (utilities). These components of the analysis are then integrated to reach a decision.
* Corresponding author. Address: Lillehammer University College, PO Box 952, NO-2604 Lillehammer, Norway. Tel.: +47 9248 8335; fax: +47 2236 7299. E-mail address: [email protected] (G. Lien).
© 2010 Published by Elsevier Ltd. doi:10.1016/j.agsy.2010.01.001
There has been much discussion in the agricultural and resource economics (ARE) literature about the methods and problems in assessing the preferences of farmers and others for risky outcomes, but relative neglect of discussion of the assessment of probabilities. Yet arguably, probability assessment is often the more important analytical component (Hardaker et al., 2004, pp. 113–118). This is the focus of this paper. We argue that the general unwillingness of ARE professionals to embrace the theory of subjective probability has too often caused researchers to focus on problems for which frequency data are available, rather than on more important problems for which data are generally sparse or lacking. Changing the way ARE professionals think about probability will require a significant paradigm shift.

The main aim of this paper, therefore, is to argue the case for a shift in the way probabilities are regarded and used for decision analysis in ARE. We aim to do this by contrasting what has happened in the past, based on the prevailing view of probabilities as relative frequencies, with a possible future in which the subjectivist view would prevail. We first summarize what we see as the 'state of play' in decision analysis in ARE. We then turn to the future to outline what we hope might be the way forward. In this context, we outline the two main competing schools of thought about the nature of probabilities. We point out some of the unfortunate consequences of the predilection for frequency-based probabilities among the ARE profession and summarize the case for the subjectivist view. We conclude by suggesting new priorities for future research and development based on better, more thoughtful ways of deriving
probabilities for decision analysis. Inevitably, our treatment is highly opinionated, but we trust that the paper will stimulate a deeper consideration and discussion of these important issues.

2. Where are we now?

To provide a comprehensive description of the status of decision analysis in ARE would require an extensive literature review too voluminous to report here. We have not attempted such a review. Instead, we briefly outline where we judge matters stand today, leaving it to others to accept or reject our views.

2.1. Theory

After much early debate and toying with a range of models of risky choice, it appears that expected utility theory has been widely if not universally adopted in ARE as the best basis for decision analysis. It is true, however, that there remains considerable and justifiable mistrust about attempts to elicit from most decision-makers utility functions that can be confidently regarded as properly representing their attitudes to risk. Partly for this reason, and to make the results applicable to decision-makers with differing risk attitudes, many research studies use stochastic efficiency criteria to partition choice options into risk-efficient and dominated sets for a plausible range of risk attitudes. Most efficiency criteria, though not all, are derived from or consistent with the expected utility hypothesis.

Reliance on the expected utility hypothesis to model risky choice is cast into some doubt by the considerable evidence that it often fails to explain how people act when faced with particular risky choices. Although given different terms in different contexts, there is substantial evidence of 'loss aversion', meaning that people often appear to place much more weight on avoiding losses than they assign to equivalent gains (e.g. Rabin and Thaler, 2001). Such evidence challenges not only the expected utility hypothesis but also contingent valuation methods, as well as the indifference curves that underpin demand theory. Curiously, a number of econometric studies of the risky choices made by farmers have yielded estimates of risk aversion coefficients much lower than implied if loss aversion is widespread (e.g. Antle, 1987; Oude Lansink, 1999; Lence, 2000; Lien, 2002). To our knowledge, these apparent contradictions remain unresolved. We suspect that loss aversion is significant when decision-makers face small decisions, but that farmers do not generally exhibit the same extreme aversion to possible losses when making more strategic choices. Indeed, farming in much of the world is so risky that universally loss-averse people would surely not take it up.

While human behavior is so diverse and inconsistent that no modelling approach is likely to predict all outcomes accurately, it would appear that expected utility theory has been widely though not universally thought to be a 'good enough' basis for the study of risk-taking behavior by farmers and other decision-makers (e.g. Just and Pope, 1979; Newbery and Stiglitz, 1981; Pope and Just, 1991; Mahul, 2001).
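To make the stochastic efficiency idea mentioned above a little more concrete, the following is a minimal sketch, not drawn from any of the cited studies, of comparing two risky options by expected utility over a plausible range of risk attitudes. It assumes a negative-exponential (CARA) utility function, and the payoffs, probabilities and coefficient values are invented for illustration.

```python
# Minimal sketch: compare two risky options by expected utility over a
# range of absolute risk-aversion coefficients, using a CARA utility
# U(x) = 1 - exp(-r*x). All numbers are illustrative only.
import numpy as np

# Discrete outcome distributions (net returns and subjective probabilities)
outcomes_a = np.array([-20.0, 40.0, 90.0])   # option A payoffs
probs_a    = np.array([0.2, 0.5, 0.3])
outcomes_b = np.array([10.0, 30.0, 55.0])    # option B payoffs
probs_b    = np.array([0.2, 0.5, 0.3])

def expected_cara_utility(payoffs, probs, r):
    """Expected negative-exponential (CARA) utility."""
    return float(np.sum(probs * (1.0 - np.exp(-r * payoffs))))

# Sweep a plausible range of absolute risk-aversion coefficients
for r in [0.001, 0.01, 0.05, 0.1]:
    eu_a = expected_cara_utility(outcomes_a, probs_a, r)
    eu_b = expected_cara_utility(outcomes_b, probs_b, r)
    better = "A" if eu_a > eu_b else "B"
    print(f"r = {r:<5}  EU(A) = {eu_a:7.4f}  EU(B) = {eu_b:7.4f}  -> prefer {better}")
```

An option that is never preferred anywhere in the range of risk aversion considered would fall into the dominated set; in practice such comparisons would be made over full empirical distributions rather than the three-point distributions used here.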
Judging from the literature, there is less widespread acceptance, at least among the ARE profession and allied sciences, of the essentially subjective nature of the probabilities used in decision analysis. Yet the foundation of decision analysis is the subjective expected utility hypothesis, so called because it embodies subjective or personal probabilities (Savage, 1954). The widespread discomfort with the notion of subjective probabilities means that we find published studies based on historical frequencies that are often of dubious relevance to the modelled risky phenomena. Those who can only entertain probabilities based on historical frequencies appear not to consider the possibility of the non-stationarity of risky
phenomena, despite the fact that all kinds of change in the world imply that many (perhaps most) are non-stationary processes. For example, the study by McCarl et al. (2008) of the impact of climate change on crop yield distributions shows that stationarity does not hold. Likewise, the recent global financial crisis illustrates all too starkly that risk assessments based on frequencies from the recent past can be seriously flawed.

Contrasting with the strong interest over the years in the preferences (attitudes to risk) of farmers and other decision-makers, there have been few studies of the arguably more important aspect of their beliefs about risk, encoded as probabilities. Included among these few are the review by Norris and Kramer (1990) of subjective probability elicitation methods and a number of studies of farmers' adoption of innovations using Bayesian learning models (e.g. Lindner and Gibbs, 1990; Marra et al., 2003; Roberts et al., 2006). However, we are aware of very few studies that have sought to develop and test methods of eliciting 'good' probabilities from agricultural decision-makers or experts. Certainly, such studies in ARE are rare indeed, and most of the general work along these lines was done many years ago (e.g. Winkler, 1972). We also know of few studies of how farmers and others actually form probability assessments about the risks they face. However, several surveys have sought to elicit from farmers and others the perceived risks they face and the main strategies used to deal with these risks (e.g. Patrick and Musser, 1997; Meuwissen et al., 2001; Flaten et al., 2005; Lien et al., 2006; Patrick et al., 2007; Størdal et al., 2007; Greiner et al., 2009). Unfortunately, it would appear that there is often a poor connection between the risks identified as important and the risk management strategies nominated by respondents as essential. We have found too little discussion of why this should be so.

2.2. Methods

There have been relatively few major innovations in the methods of decision analysis over recent decades, with most of those used today developed by the 1960s or 1970s. Since then there have been major advances in computerization. The advent of personal computers and specialized decision analysis software has dramatically expanded the scope for routine decision analyses for research or decision support. For example, computer-based decision tree analysis, stochastic simulation and mathematical programming applications are now many times more powerful and user-friendly.

In the analysis of farm production responses accounting for risk, the Just and Pope (1979) production function allows the statistical determination of the influence of inputs on both the mean and variance of output. This pioneering work has also been extended to account for skewness/downside risk aversion (e.g. De Falco and Chavas, 2006), the relationship between output variance and technical inefficiency (e.g. Kumbhakar, 2002), and analyses of optimal hedge ratios under price and output uncertainty (e.g. Alghalith, 2006). However, the parametric estimation of production models with risk is often driven by the choice of functional form. Accordingly, recent studies avoid the assumption of a parametric function through non-parametric estimation of econometric risk models (e.g. Kumbhakar and Tsionas, 2010).
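For readers less familiar with it, the Just and Pope specification referred to above is commonly written in a form like the following. The notation here is ours, added for exposition, and uses one common normalization of the error term.

```latex
\[
  y = f(\mathbf{x};\boldsymbol{\beta}) + h(\mathbf{x};\boldsymbol{\gamma})^{1/2}\,\varepsilon ,
  \qquad \mathrm{E}(\varepsilon) = 0 , \quad \mathrm{Var}(\varepsilon) = 1 ,
\]
\[
  \text{so that} \qquad
  \mathrm{E}(y \mid \mathbf{x}) = f(\mathbf{x};\boldsymbol{\beta}) ,
  \qquad \mathrm{Var}(y \mid \mathbf{x}) = h(\mathbf{x};\boldsymbol{\gamma}) .
\]
```

Mean output and the variance of output are thus allowed to respond to inputs through separate functions, so an input can be classified as risk-increasing or risk-decreasing according to the sign of its marginal effect on h, independently of its effect on mean output through f.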
Chambers and Quiggin's (2000) publication of a volume on the state-contingent approach appeared to provide both a theoretical advance and the promise of a new and better set of methods for decision analysis. However, adoption of this approach appears to have been slower than expected, perhaps because of difficulties in implementation, notably data limitations for econometric applications. Nevertheless, it is still too early to judge whether its early promise will be fulfilled. Among the few empirical studies we know of, we particularly refer to O'Donnell and Griffiths (2006) and Chavas (2008).
Application of Bayesian statistical methods, either founded on, or consistent with, subjective probability theory, has become more common. Such studies entail use of both pre-sample information (e.g. information from economic theory or subjective information) in the form of prior probabilities and sample information (information contained in the data) summarized in the likelihood function (Koop, 2003). These two types of information are combined using Bayes' Theorem – an uncontentious, logical way to update prior probabilities in light of evidence. There has been an encouraging recent rapid growth of applications of Bayesian econometrics that we can largely attribute to the computing revolution. Simulation-based Monte Carlo techniques are used in the estimation of Bayesian models, and these permit estimation of complex likelihood functions that were previously intractable. According to Zellner (2007), further use of the Bayesian approach in econometrics will promote more rapid progress in ARE and economic science in general.

3. Looking to the future

Our aim here is not to foretell the future but rather to suggest where priorities for future work in decision analysis and support in ARE should lie. Only time will tell whether these matters will attract the attention we suggest they deserve.

3.1. Getting the context right

Of course, ARE risk research should focus on problems where risk is likely to have important effects (Just, 2003). Hence, it is sensible to set a farmer's risk issues in the context of the overall asset position of the business; on this basis, only risks that threaten the asset base need be taken seriously (Hardaker, 2006). Nonetheless, research to date has often focused on small risks, such as short-run production decisions, chosen mainly because they are the risks for which relatively abundant and more or less relevant data exist.

In the future, we suggest that longer term and more substantive risks should receive increased focus. Examples include the risks of major disease and pest outbreaks, institutional risks (such as reductions in farming subsidies and the collapse of markets from political intervention) and risks associated with major farming investments (including the purchase of a farm or additional land and the construction of expensive buildings). They also include the risk of natural catastrophes (such as prolonged drought, unusually severe winter conditions, floods and fires), and risks to the health and well-being of the farming family. These all potentially lead to the risk of financial failure. Regrettably, these longer term, more substantive risks are nearly all those for which there are few or no relevant data, simply because they are rare or novel events. Importantly, if we are to address such risks more systematically, there is a need to adopt a more professional approach to subjective probability assessment (see below). With some significant exceptions, the ARE profession and its associated disciplines appear to be lagging in the business of risk assessment. We believe it is time for a change.

3.2. Two different views about probabilities

As discussed, it is our contention that probabilities are a neglected issue in decision analysis in ARE. We therefore urge that priority be given to this aspect in future work.
It appears clear that one of the main reasons for the relative neglect of probabilities is that most agricultural and resource economists, along with many physical and biological scientists working in related fields, are taught that probabilities are objective measures of relative frequencies. Yet this is just one of the two
main philosophic views about the definition and meaning of probability. The frequentist view draws on a definition of probability as the limit of a relative frequency ratio. (Knight (1921) referred to such probabilities as objective probabilities, a term we prefer not to use as it implies a degree of precision that is seldom fully justified.) Of course, 'the limit' is approached only as the number of observations approaches infinity, so operationally, believers of this view effectively measure probabilities as the actual frequency ratio in a finite sample. By contrast, the subjectivist view relies on a definition of probability as the degree of belief in an uncertain proposition. Clearly, this definition recognizes that different people can have different degrees of belief in the same proposition, so the adjective 'personal' is sometimes applied.

Although we are advocates of the second of these views, this is not the place to argue the case for the subjectivist view at length. Instead, we merely offer the following observations.

1. Some distinguished thinkers have supported the notion of probabilities as subjective. The theory is based on plausible axioms and sound logic and there are well-tried methods of implementation (e.g. Ramsey, 1931; de Finetti, 1964, 1974; Savage, 1954, 1971; Staël von Holstein, 1974).
2. Deriving probabilities from relevant, reliable and reasonably abundant relative frequency data is entirely consistent with the subjectivist view, so that subjective probabilities include probabilities based on relative frequencies as a subset, provided the assessor believes the data to be reliable and relevant.
3. By assuming that the frequencies observed in historical data will apply in the future, frequentists are actually making a subjective judgement about probability, although usually they neither recognize nor admit this fact.
4. Rejection of subjective probabilities implies that no systematic analysis is possible to support the many important risky choices faced by decision-makers for which frequency data are sparse or absent.
5. The need for, and appropriateness of, subjectivity in decision analysis has found wide acceptance (e.g. Raiffa, 1968; Anderson et al., 1977; Morgan and Henrion, 1990; Wright and Ayton, 1994; Clemen, 1996). Arrow (1951) wrote that 'the uncertainty of the consequences . . . is basically that existing in the mind of the chooser'.

It is worth emphasizing that we are not advocating 'picking numbers out of the air'. Probabilities used for important risky choices must be well-considered and consistent with available evidence. However, from our subjectivist viewpoint we observe some unfortunate consequences of the dominance in the ARE profession of the narrower relative frequency school of thought.

First, and as argued above, many of the analyses of risky decision problems (or at least, those accepted for publication by editors and reviewers potentially antagonistic to the use of subjective probabilities) are relatively trivial exercises addressing problems for which frequency data are available. Yet many of the largest risks in ARE deal with change – technical, economic, social and environmental (not the least, climate change). By the nature of change, historical data alone cannot reflect future risks. Therefore, with widespread professional distrust of subjective methods, there is a corresponding neglect of more important issues. For example, there are few published studies of catastrophic risks, those characterized by low probabilities of severe consequences.
We believe this bias in research towards easy but all too often trivial risky problems is unfortunate.

A related problem arising from the widespread unwillingness in our profession to countenance subjectively derived probabilities is the tendency to use data that are, at best, of dubious reliability and relevance. For example, data of an inappropriate spatial or
temporal relevance may be used with no attempt to correct for the implicit bias. Surprisingly, aggregated data are often used as the basis for farm-level risk analysis, leading to an obvious underestimation of the risk the farmer faces (Just, 2003). Worse still, few researchers see the need to discuss the question of the relevance of the frequencies in the data they have used to estimate the probabilities for the future risks being investigated (and it seems that most editors and reviewers also fail to see the need to demand such discussion).

Of course, it is not sufficient to criticize the use of only relative frequencies if we do not have a coherent solution on how to proceed when such data are absent, sparse or not wholly relevant. Moreover, we must admit that there is ample evidence that humans are generally bad at assessing probabilities. In a recently published volume, Gardner (2008) has provided a readable yet informative guide to the many sources of bias to which we are all vulnerable in risk assessment generally, and in probability assessment more specifically. This particular contribution draws on the large body of psychological experimentation that has revealed these biases, notably Slovic (2000) and Gilovich et al. (2002). At least we now know what the problems are. Yet there has been rather little work to overcome these sources of bias.

Gardner (2008) makes a useful distinction between risk assessments based on what he calls 'gut' compared with 'head', i.e. intuitive compared with thoughtful and considered judgements. He correctly notes that most of us use 'gut' most of the time for everyday risky choices such as crossing the road or deciding whether to board a plane. In all likelihood, most farmers also make most of their decisions intuitively. Moreover, most of the time intuition works well enough. But now and then some more major risky issue comes along for which, given the many forms of bias to which we are all vulnerable, it may be best to consider more deeply and systematically what is at stake. How can we then make 'good' probability judgements in such a situation?

Systematic thinking about uncertainty is especially important for 'public' choices, such as in policy-making when many people may be affected by the decisions reached (e.g. Pidgeon and Gregory, 2004; Hardaker et al., 2009). Here, we require 'public' probabilities that will be broadly accepted. Hence, the need to rationalize the assessment is even more apparent.

3.3. Improving probability assessment

In a previous paper, we sought to set out some suggested principles of best practice in probability elicitation (Hardaker and Lien, 2005). At this stage, we certainly do not have all the answers. However, we strongly believe that analysts need to reflect more deeply on how they go about the assessment task. Such a considered approach contrasts with the all too often ill-considered use of whatever frequency data come to hand. The probability assessment task can be divided into a number of not entirely discrete steps, namely:
Structuring the problem.
Information gathering.
Analysis.
Judgement.
Communication.
Feedback and review.
The first step of structuring the problem involves getting to grips with the issues, typically identifying the choice options, the sources of risk and the possible outcomes. This step may not be easy. Sometimes there may be much indeterminacy or ambiguity, perhaps so much that nothing can be done but wait until more clarity is achieved. This may also require further investigation,
perhaps even scientific advances (Wynne, 1992). In fact, it may not be possible to proceed to the following steps until a reasonable understanding of the issues has been gained. Note, however, with some similarity to the Precautionary Principle (United Nations, 1992), we suggest that the absence of a firm basis for probability assessment is not a good reason for inaction if the costs of delay are sufficiently high.

Information gathering involves assembling relevant quantitative and qualitative information about the risky prospect(s) under consideration. There may be some historical relative frequency data that are relevant, if not for the particular phenomenon under consideration then for some related phenomenon. For example, for a crop newly introduced in a given area, there will be no long historical sequence of yield records, but there may be historical weather records that can be helpful in assessing the distribution of future yields. When moving from the general to the particular, for example, to assess the risk of weather damage at a given location, it is important to start with reliable or well-considered prior probabilities that are revised in the analytical phase to make them relevant to the given case. This can help avoid or reduce the common form of bias known as the 'neglect of priors' (Slovic, 2000).

When relevant frequency data are lacking, the analyst is well advised to seek expert advice. Careful and logical thinking by expert assessors would appear to offer the best approach in such situations. Their efforts may also be supported with the use of influence diagrams or knowledge maps (e.g. Clemen, 1996; Hardaker et al., 2004). Some training in probability assessment using almanac-type questions and proper scoring rules may help to reduce the common bias of overconfidence (e.g. Savage, 1971; Matheson and Winkler, 1976; van Lenthe, 1993). Consulting a few experts is nearly always better than consulting just one, and information sharing among the experts is sensible. Research generally suggests that the best results are obtained by forming a panel of experts who may consult with each other, but who should then be asked to make their probability judgements individually. These can then be combined into an overall assessment. While several ways of combining probability assessments have been proposed and tested, the current consensus is that a simple average is generally as good as the more complex procedures (Clemen and Winkler, 1987, 1999).

To avoid the bias that can arise from the known dysfunction of group risk assessments, it may be prudent to keep any interaction among group members anonymous using some form of Delphi technique (e.g. Linstone and Turoff, 2002). Just how serious group dysfunction can be was amply illustrated by the fiasco resulting from the meeting of US security experts considering the risk that Saddam Hussein had weapons of mass destruction. A US Select Senate Committee on Intelligence (2004) attributed the failure of the assessment to a well-known source of bias they called 'group think' (Janis and Mann, 1977). While dysfunction in group risk assessments may arise with regard to both probabilities and preferences for consequences (if indeed the two assessments are considered separately), there is still a need to try to minimize the problems when probabilities are considered.
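As a small numerical illustration of the combination of expert judgements just described, the sketch below takes the elicited distributions of a hypothetical three-member panel over a discrete set of outcomes and forms the equal-weight (simple average) combination; the experts, outcomes and probabilities are invented.

```python
# Minimal sketch: combine experts' subjective probability distributions
# for a discrete uncertain quantity by simple averaging (an equal-weight
# linear opinion pool). All numbers are hypothetical.
import numpy as np

outcomes = ["low", "average", "high"]   # e.g. classes of next season's yield

# Each row holds one expert's elicited probabilities over the outcomes
expert_probs = np.array([
    [0.30, 0.50, 0.20],
    [0.20, 0.55, 0.25],
    [0.40, 0.45, 0.15],
])

# Each expert's probabilities should sum to one
assert np.allclose(expert_probs.sum(axis=1), 1.0)

# Equal-weight combination: the simple average across experts
combined = expert_probs.mean(axis=0)

for outcome, p in zip(outcomes, combined):
    print(f"P({outcome}) = {p:.3f}")
```

Weighted and multiplicative pooling rules are also possible, but as noted above the simple average has generally proved hard to beat in practice.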
The analysis step is concerned with improving the understanding of the uncertain phenomena of interest. Here, trends may be investigated. Analysis generally needs to focus on causality. In forecasting economic phenomena, such as prices, it is common to use some form of econometric analysis, the application of which, of course, depends on some understanding or assumptions about the processes affecting the uncertain variables of interest. Of course, we would prefer to see such econometric work performed using a Bayesian approach. Influence diagrams can be useful in conceptualizing and quantifying causality (e.g. Clemen, 1996).
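To give the Bayesian preference expressed above a concrete, if deliberately simple, form, the sketch below updates a Normal prior for next season's mean price with a small sample of observed prices, assuming a known observation variance (a conjugate Normal–Normal model). The prior, data and variances are purely illustrative, and the example is far simpler than the simulation-based econometric applications cited in Section 2.2.

```python
# Minimal sketch of Bayesian updating: a Normal prior for a mean price is
# combined with a small sample of observed prices, assuming a known
# observation variance (conjugate Normal-Normal model). Numbers are illustrative.
import numpy as np

prior_mean, prior_var = 200.0, 400.0              # prior belief about the mean price
obs_var = 900.0                                   # assumed known variance of observations
prices = np.array([185.0, 230.0, 210.0, 195.0])   # sample information

n = len(prices)
sample_mean = prices.mean()

# Standard conjugate update: precision-weighted average of prior and data
post_precision = 1.0 / prior_var + n / obs_var
post_var = 1.0 / post_precision
post_mean = post_var * (prior_mean / prior_var + n * sample_mean / obs_var)

print(f"prior:     mean = {prior_mean:.1f}, sd = {prior_var ** 0.5:.1f}")
print(f"data:      mean = {sample_mean:.1f}, n = {n}")
print(f"posterior: mean = {post_mean:.1f}, sd = {post_var ** 0.5:.1f}")
```

The same prior-plus-likelihood logic carries over to the more elaborate Bayesian econometric models mentioned earlier, where the posterior cannot be written in closed form and must be simulated.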
In the analytical phase, it is also important to keep in mind the many types of bias that can influence probability assessment, in order to avoid or minimize such bias.

Somewhere between analysis and judgement arises the question of how best to handle sparse or biased frequency data. Hardaker et al. (2004), drawing on earlier work by Schlaifer (1969) and others, offer some ideas about how to deal with sparse data. For example, it is often reasonable to smooth out irregularities in sparse data by fitting a distribution. However, before any smoothing, all supplementary information that can make the process more trustworthy should be considered. The fitting of a distribution function to the sparse data can be done using a simple subjective method such as hand smoothing, or a range of more 'automated' methods. These include non-parametric methods, such as spline and kernel methods (e.g. Richardson et al., 2006), and some time-series statistical techniques (e.g. Musshoff and Hirschauer, 2007). Yet there have been few tests of the relative reliability of these methods. Future developments and innovations within this field would be welcome.
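By way of illustration of the smoothing options just listed, the following sketch fits both a parametric (normal) distribution and a Gaussian kernel density to a short, invented series of yields and compares the implied probability of a poor year with the raw relative frequency; it is not a test of the relative reliability of these methods, and the data and threshold are hypothetical.

```python
# Minimal sketch: smooth a sparse sample of yields with (i) a fitted normal
# distribution and (ii) a Gaussian kernel density estimate, then compare the
# implied probability of a 'poor' year with the raw relative frequency.
# Data and threshold are hypothetical.
import numpy as np
from scipy import stats

yields = np.array([4.1, 5.3, 4.8, 6.0, 3.6, 5.1, 4.4, 5.6])  # t/ha, sparse sample
threshold = 4.0                                               # 'poor year' cut-off

# Raw relative frequency in the sparse data
freq = np.mean(yields < threshold)

# Parametric smoothing: maximum-likelihood normal fit
mu, sigma = stats.norm.fit(yields)
p_normal = stats.norm.cdf(threshold, loc=mu, scale=sigma)

# Non-parametric smoothing: Gaussian kernel density estimate
kde = stats.gaussian_kde(yields)
p_kde = kde.integrate_box_1d(-np.inf, threshold)

print(f"raw frequency       P(yield < {threshold}) = {freq:.2f}")
print(f"normal fit          P(yield < {threshold}) = {p_normal:.2f}")
print(f"kernel density fit  P(yield < {threshold}) = {p_kde:.2f}")
```

In a real application the choice between parametric and non-parametric smoothing, and the treatment of any supplementary information, would itself be a matter of judgement.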
Dealing with bias in data first requires that careful attention be given to the question of whether bias is indeed likely to be present. That is, do the relative frequencies embodied in the data reliably reflect the chances of occurrence of the uncertainty that lies in the future? A verdict that bias exists leads directly to the much trickier question of what to do about it. There is a considerable literature on sampling bias, but much less has been written about systematic bias, particularly the sorts of systematic bias likely to be common in ARE, such as temporal or spatial differences between the phenomena represented in the data and the risky prospect to be faced. While there is some work bearing on these matters (e.g. Smith and Mandac, 1995; Hansen and Jones, 2000; Just and Pope, 2003; Larrick, 2004; Woodard and Garcia, 2008), more is needed to guide future research.

Increasingly, stochastic simulation methods are used to model uncertain future events and consequences. For example, there are presently over 20 major different Atmosphere–Ocean General Circulation Models used for modelling climate change (Murray, 2007). Unsurprisingly, there are differences among their predictions. For instance, while each may model the historical variability and change in global climate reasonably well, we suspect that few, if any, include estimates of the probability that the underlying scientific assumptions are not wholly valid when forecasting the future. Hence, there is ongoing scepticism among some observers about forecasts of planetary catastrophe based on these models and, more justifiably, scepticism about forecasts of what is going to happen to the climate in particular locations (Australian Greenhouse Office, 2005). In the future, there may be a need for a greater emphasis on model validation (completeness, accuracy, and forecasting ability) prior to the use of results from simulation models in decision analysis.

Ultimately, of course, from the subjectivist perspective, probability assessment always comes down to judgements, regardless of whether relative frequency data provide the basis for these judgements. Minimally, we argue that analysts need to acknowledge this reality with a sentence or two explaining the reasons for their judgements, including whether they were implicit or explicit. More generally, we believe that probability judgements can only be justified by analysts explaining what they did and why they made these assessments. Such transparency certainly imposes an additional burden on analysts; surely, this is no bad thing in that they are thereby obliged to think more carefully about what they do.

It follows also that communication with the decision-maker(s) or other interested parties is a further important step in probability assessment. As Gardner (2008) has pointed out, carefully derived probability assessments will often be quite different from the affective and less well-considered judgements that most people
make. Hence, communication is important in persuading and allowing the users of the assessments to review and ideally revise their prior beliefs. Moreover, as Plough and Krimsky (1987) and Tetlock (2005) have pointed out, significant tensions have emerged between experts and the public around issues of risk. For example, stoked by emotive reporting in the media, the public increasingly distrusts expert assurances about such things as food safety (notably GMOs), the risks from nuclear power plants, and a wide range of other health and environmental hazards. Public outcry reaches the ears of politicians who may respond with policies and programmes that may help calm public anxiety, but may fall short of being the best way of using resources. This is, of course, not to imply that the experts are always right, but it does point to the need for much better communication about risk in general and more specifically about probability assessment for risky phenomena. Consequently, risk communication is an issue that we believe needs much more attention in ARE than hitherto.

The final important step of probability assessment is feedback and review. Especially for repeated assessments, for example, in weather or price forecasting over time, it is possible to 'calibrate' assessments against the frequencies of outcomes. Nevertheless, even for non-repeated assessments, assessors can learn from their mistakes in order to improve their performance.
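For the repeated-assessment case just described, the sketch below computes a Brier score and a crude calibration table from a series of probability forecasts and observed binary outcomes; the forecasts and outcomes are invented, and the Brier score is only one of several proper scoring rules that could be used for feedback.

```python
# Minimal sketch: check repeated probability assessments against observed
# outcomes using the Brier score and a crude calibration table.
# Forecast probabilities and binary outcomes (1 = event occurred) are invented.
import numpy as np

forecasts = np.array([0.1, 0.2, 0.2, 0.4, 0.5, 0.6, 0.7, 0.8, 0.8, 0.9])
outcomes  = np.array([0,   0,   1,   0,   1,   1,   0,   1,   1,   1  ])

# Brier score: mean squared difference between forecast and outcome (lower is better)
brier = np.mean((forecasts - outcomes) ** 2)
print(f"Brier score = {brier:.3f}")

# Crude calibration table: within each forecast band, compare the average
# stated probability with the observed relative frequency of the event
bands = [(0.0, 0.3), (0.3, 0.7), (0.7, 1.0)]
for lo, hi in bands:
    in_band = (forecasts >= lo) & (forecasts < hi)
    if in_band.any():
        print(f"forecasts in [{lo:.1f}, {hi:.1f}): "
              f"mean forecast = {forecasts[in_band].mean():.2f}, "
              f"observed frequency = {outcomes[in_band].mean():.2f}")
```

Well-calibrated assessors will have mean forecasts close to the observed frequencies in each band; large gaps point to over- or under-confidence that feedback and review can help correct.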
In summary, we know that humans are poor at assessing probabilities and we know quite a lot about what not to do. We know a few things to do to make better assessments, but there is a need for more work on calibrating probabilities obtained in different ways to make the process more reliable. We have found very few studies in ARE that have drawn on what is already known about probability assessment, and none that have explored better ways of doing the task. We would like to see these deficiencies corrected in future. Similarly, it appears to us that there is an asymmetry between the extensive studies of the goals of farmers, including many attempts to measure their attitudes to risk, and the very little work on their beliefs as encoded in probability judgements. Surely, this is an area with many opportunities for useful research.

4. Concluding comment

Decision analysis has been around for some time now, and we have argued that some elements of the approach have become part of 'normal science' for many ARE workers. Yet there remains considerable scope for further development if the method is to fulfil its promise. We can see the main scope for improvement lying in the area of probability assessment. However, we fear that this opportunity to expand the application of the method to more relevant and important problems will not be seized while so many practitioners are ignorant or fearful of the theory of subjective probability and the methods of its application. These constraints mean that the power of that theory and the existing literature on how to exploit that power are widely overlooked. It is our hope that this paper will provoke discussion on these issues, leading to improvement in the way risks in ARE are addressed, assessed and managed.

References

Alghalith, M., 2006. Hedging under price and output uncertainty: revisited. Applied Financial Economics Letters 2, 243–245.
Anderson, J.R., Dillon, J.L., Hardaker, J.B., 1977. Agricultural Decision Analysis. Iowa State University Press, Ames.
Antle, J., 1987. Econometric estimation of producers' risk attitudes. American Journal of Agricultural Economics 69, 509–522.
Arrow, K.J., 1951. Alternative approaches to the theory of choice in risk-taking situations. Econometrica 19, 404–437.
Australian Greenhouse Office, 2005. How Reliable are Climate Change Models? Department of the Environment and Heritage.
Chambers, R.G., Quiggin, J., 2000. Uncertainty, Production, Choice, and Agency: The State-Contingent Approach. Cambridge University Press, New York.
Chavas, J.-P., 2008. A cost approach to economic analysis under state-contingent production uncertainty. American Journal of Agricultural Economics 90, 435–446.
Clemen, R.T., 1996. Making Hard Decisions: An Introduction to Decision Analysis, second ed. Duxbury Press, Belmont, California.
Clemen, R.T., Winkler, R.L., 1987. Calibration and combining precipitation probability forecasts. In: Viertl, R. (Ed.), Probability and Bayesian Statistics. Plenum, New York, pp. 97–110.
Clemen, R.T., Winkler, R.L., 1999. Combining probability distributions from experts in risk analysis. Risk Analysis 19, 187–203 (see also the Letter to the Editor by S. Kaplan, 2000. Risk Analysis 20, 155–156).
De Falco, S., Chavas, J.-P., 2006. Crop genetic diversity, farm productivity and management of environmental risk in rainfed agriculture. European Review of Agricultural Economics 33, 289–314.
de Finetti, B., 1964. Foresight: its logical laws, its subjective sources. In: Kyburg, H.E., Smokler, H.E. (Eds.), Studies in Subjective Probability. Wiley, New York, pp. 93–158.
de Finetti, B., 1974. Theory of Probability. Wiley, New York.
Flaten, O., Lien, G., Koesling, M., Valle, P.S., Ebbesvik, M., 2005. Comparing risk perceptions and risk management in organic and conventional dairy farming: empirical results from Norway. Livestock Production Science 95, 11–25.
Gardner, D., 2008. Risk: The Science and Politics of Fear. Scribe, Melbourne.
Gilovich, T., Griffin, D., Kahneman, D. (Eds.), 2002. Heuristics and Biases: The Psychology of Intuitive Judgment. Cambridge University Press, Cambridge.
Greiner, R., Miller, O., Patterson, L., 2009. Motivations, risk perceptions and adoption of conservation practices by farmers. Agricultural Systems 99, 86–104.
Hansen, J.W., Jones, J.W., 2000. Scaling-up crop models for climate variability applications. Agricultural Systems 65, 43–72.
Hardaker, J.B., 2006. Farm risk management: past, present and prospects. Farm Management 12, 593–612.
Hardaker, J.B., Lien, G., 2005. Towards some principles of good practice for decision analysis in agriculture. Norwegian Agricultural Economics Research Institute, WP 2005:1.
Hardaker, J.B., Huirne, R.B.M., Anderson, J.R., Lien, G., 2004. Coping with Risk in Agriculture, second ed. CAB International, Wallingford.
Hardaker, J.B., Fleming, E., Lien, G., 2009. How should governments make risky policy decisions? Australian Journal of Public Administration 68, 256–271.
Janis, I.L., Mann, L., 1977. Decision Making: A Psychological Analysis of Conflict, Choice and Commitment. Free Press, New York.
Just, R.E., 2003. Risk research in agricultural economics: opportunities and challenges for the next twenty-five years. Agricultural Systems 75, 123–159.
Just, R.E., Pope, R.D., 1979. Production function estimation and related risk considerations. American Journal of Agricultural Economics 61, 276–284.
Just, R.E., Pope, R.D., 2003. Agricultural risk analysis: adequacy of models, data and issues. American Journal of Agricultural Economics 85, 1249–1256.
Knight, F.H., 1921. Risk, Uncertainty and Profit. Houghton Mifflin, Boston, Mass.
Koop, G., 2003. Bayesian Econometrics. Wiley, Chichester, UK.
Kumbhakar, S.C., 2002. Specification and estimation of production risk, risk preferences and technical efficiency. American Journal of Agricultural Economics 84, 8–22.
Kumbhakar, S.C., Tsionas, E.G., 2010. Estimation of production risk and risk preference function: a nonparametric approach. Annals of Operations Research 176, 369–378.
Larrick, R.P., 2004. Debiasing. In: Koehler, D.J., Harvey, N. (Eds.), Blackwell Handbook of Judgement and Decision Making. Blackwell Publishing, Oxford.
Lence, S.H., 2000. Using consumption and asset return data to estimate farmers' time preferences and risk attitudes. American Journal of Agricultural Economics 82, 934–947.
Lien, G., 2002. Non-parametric estimation of decision makers' risk aversion. Agricultural Economics 27, 75–83.
Lien, G., Flaten, O., Jervell, A.M., Ebbesvik, M., Koesling, M., Valle, P.S., 2006. Management and risk characteristics of part-time and full-time farmers in Norway. Review of Agricultural Economics 28, 111–131.
Lindner, R., Gibbs, M., 1990. A test of Bayesian learning from farmer trials of new wheat varieties. Australian Journal of Agricultural Economics 34, 21–38.
Linstone, H.A., Turoff, M. (Eds.), 2002. The Delphi Method: Techniques and Applications. New Jersey Institute of Technology, Newark, New Jersey.
Mahul, O., 2001. Optimal insurance against climatic experience. American Journal of Agricultural Economics 83, 593–604.
Marra, M., Pannell, D.J., Abadi Ghadim, A., 2003. The economics of risk, uncertainty and learning in the adoption of new agricultural technologies: where are we on the learning curve? Agricultural Systems 75, 215–234.
Matheson, J.E., Winkler, R.L., 1976. Scoring rules for continuous probability distributions. Management Science 22, 1087–1096.
McCarl, B.A., Villavicencio, X., Wu, X., 2008. Climate change and future analysis: is stationarity dying? American Journal of Agricultural Economics 90, 1241–1247.
Meuwissen, M.P.M., Huirne, R.B.M., Hardaker, J.B., 2001. Risk and risk management: an empirical analysis of Dutch livestock farmers. Livestock Production Science 69, 43–53.
Morgan, M.G., Henrion, M., 1990. Uncertainty: A Guide to Dealing with Uncertainty in Quantitative Risk and Policy Analysis. Cambridge University Press, New York.
Murray, B. (Ed.), 2007. Evaluating Climate Models (accessed 29.10.08).
Musshoff, O., Hirschauer, N., 2007. What benefits are to be derived from improved farm program planning approaches? – the role of time series models and stochastic optimization. Agricultural Systems 95, 11–27.
Newbery, D.G.M., Stiglitz, J.E., 1981. The Theory of Commodity Price Stabilization: A Study of the Economics of Risk. Clarendon Press, Oxford.
Norris, P.E., Kramer, R.A., 1990. The elicitation of subjective probabilities with applications in agricultural economics. Review of Marketing and Agricultural Economics 58, 127–147.
O'Donnell, C.J., Griffiths, W.E., 2006. Estimating state-contingent production frontiers. American Journal of Agricultural Economics 88, 249–266.
Oude Lansink, A., 1999. Area allocation under price uncertainty on Dutch arable farms. Journal of Agricultural Economics 50, 93–105.
Patrick, G.F., Musser, W.N., 1997. Sources of and responses to risk: factor analyses of large-scale US cornbelt farmers. In: Huirne, R.B.M., Hardaker, J.B., Dijkhuizen, A.A. (Eds.), Risk Management Strategies in Agriculture, vol. 7. Wageningen Agricultural University, Wageningen.
Patrick, G.F., Peiter, A.J., Knight, T.O., Coble, K.H., Baquet, A.E., 2007. Hog producers' risk management attitudes and desire for additional risk management education. Journal of Agricultural and Applied Economics 39, 671–688.
Pidgeon, N., Gregory, R., 2004. Judgement, decision making and public policy. In: Koehler, D.J., Harvey, N. (Eds.), Blackwell Handbook of Judgement and Decision Making. Blackwell Publishing, Oxford.
Plough, A., Krimsky, S., 1987. The emergence of risk communication studies: social and political context. Science, Technology, and Human Values 12, 4–10.
Pope, R.D., Just, R.E., 1991. On testing the structure of risk preferences in agricultural supply analysis. American Journal of Agricultural Economics 73, 743–748.
Rabin, M., Thaler, R.H., 2001. Anomalies: risk aversion. Journal of Economic Perspectives 15, 219–232.
Raiffa, H., 1968. Decision Analysis. Addison-Wesley, Reading, Mass.
Ramsey, F.P., 1931. Truth and probability. In: Braithwaite, R.B. (Ed.), The Foundations of Mathematics and other Logical Essays. The Humanities Press, New York.
Richardson, J.W., Lien, G., Hardaker, J.B., 2006. Simulating multivariate distributions with sparse data: a kernel density smoothing procedure. Poster paper contributed to the 26th International Conference of Agricultural Economists, Gold Coast, Australia, August 12–18, 2006.
Roberts, R.K., English, B.C., Gao, Q., Larson, J.A., 2006. Simultaneous adoption of herbicide-resistant and conservation-tillage cotton technologies. Journal of Agricultural and Applied Economics 38, 629–643.
Savage, L.J., 1954. Foundations of Statistics. Wiley, New York.
Savage, L.J., 1971. Elicitation of personal probabilities and expectations. Journal of the American Statistical Association 66, 783–801.
Schlaifer, R., 1969. Analysis of Decisions under Uncertainty. McGraw-Hill, New York.
Slovic, P., 2000. The Perception of Risk. Earthscan, London.
Smith, J., Mandac, A.M., 1995. Subjective vs objective distributions of production risk. American Journal of Agricultural Economics 77, 152–161.
Staël von Holstein, C.-A.S. (Ed.), 1974. The Concept of Probability in Psychological Experiments. Reidel, Dordrecht.
Størdal, S., Lien, G., Hardaker, J.B., 2007. Perceived risk sources and strategies to cope with risk among forest owners with and without off-property work in eastern Norway. Scandinavian Journal of Forest Research 22, 443–453.
Tetlock, P.E., 2005. Expert Political Judgment. Princeton University Press, Princeton, NJ.
United Nations, 1992. United Nations Conference on Environment and Development, Rio, 1992 (the 'Rio Declaration').
US Select Senate Committee on Intelligence, 2004. Report on the US Intelligence Community's Prewar Intelligence Assessments on Iraq (accessed 29.10.08).
van Lenthe, J., 1993. ELI: The Use of Proper Scoring Rules for Eliciting Subjective Probability Distributions. DSWO Press, Leiden University, Leiden.
Winkler, R.L., 1972. An Introduction to Bayesian Inference. Holt, Rinehart and Winston, New York.
Woodard, J.D., Garcia, P., 2008. Basis risk and weather hedging effectiveness. Agricultural Finance Review 68, 99–117.
Wright, G., Ayton, P. (Eds.), 1994. Subjective Probability. Wiley, Chichester, UK.
Wynne, B., 1992. Uncertainty and environmental learning: reconceiving science and policy in the preventive paradigm. Global Environmental Change 2, 111–127.
Zellner, A., 2007. Philosophy and objectives of econometrics. Journal of Econometrics 136, 331–339.