Ecological Economics 60 (2007) 726–742

www.elsevier.com/locate/ecolecon

METHODS

Developing and applying a framework to evaluate participatory research for sustainability

K.L. Blackstock a,⁎, G.J. Kelly b, B.L. Horsey b

a Socio-Economics Research Programme, Macaulay Institute, Craigiebuckler, Aberdeen, AB15 8QH, Scotland, United Kingdom
b Resource Futures Program, CSIRO Sustainable Ecosystems, Gungahlin Homestead, GPO Box 284, Canberra City, ACT 2601, Australia

ARTICLE INFO

Article history:
Received 25 August 2005
Received in revised form 2 May 2006
Accepted 2 May 2006
Available online 17 August 2006

Keywords:
Evaluation
Participatory research
Sustainability science
Methodology

ABSTRACT

The normative implications of participatory research imply ongoing social learning that ought to lead to personal and institutional transformation. Sustainability science also requires reflexive scientific practice in order to enable the co-generation of solutions that take account of uncertainty and multiple forms of knowledge. However, there is little published peer-reviewed material on how to assess to what degree the claimed benefits of participatory research are achieved in practice, particularly with regard to participatory research for sustainability. This paper outlines how linking the rationales for participatory research and for sustainability science to the principles of evaluation can deliver a conceptually coherent evaluation framework for assessment. The approach for evaluating participatory research in this context consists of framing the evaluation, i.e. setting boundaries on the subject within its social, political, environmental and institutional context, and selecting appropriate criteria, methods and data sources. The application of the framework, using a summative evaluation of participatory research for sustainability in north-east Australia, illustrates its strengths and weaknesses, concluding with a consideration of its applicability to further participatory sustainability science.

© 2006 Elsevier B.V. All rights reserved.

1. Introduction

Sustainability principles now firmly underpin rural development policy in Australia (Dore and Woodhill, 1999; RIRDC, 2003), echoing moves elsewhere (see Kasemir et al., 2003 for example). Sustainability requires an integrated and holistic systems approach, whereby bio-physical processes have to be considered in the context of their social–economic drivers and responses. Furthermore, sustainability generally requires institutional and personal transformation in understanding and practice; thus it relies on enhancing social capital and the collective capacity to respond positively to sustainability challenges. In response to this need for a change in practice, practitioners of 'sustainability science' (Kates et al., 2001) advocate participatory and collaborative approaches to environmental decision making. Sustainability science can be understood as the integration and application of knowledge about natural and social systems, taking account of long term, uncertain and non-linear relationships. Sustainability science is embedded within broader social processes of understanding and applying sustainability; thus it contributes to socio-political decision making processes through information provision (especially analyses of risks and consequences) derived from emergent interdisciplinary inquiry (Kasemir et al., 2003).

⁎ Corresponding author. Tel.: +44 1224 498200; fax: +44 1224 498205. E-mail address: [email protected] (K.L. Blackstock).

0921-8009/$ - see front matter © 2006 Elsevier B.V. All rights reserved. doi:10.1016/j.ecolecon.2006.05.014


Table 1 – Reasons for active involvement by stakeholders and the public

Normative: encouraging social and individual learning enriches both society and individual citizens.
Substantive: encouraging multiple perspectives improves understanding of the issues, and therefore the selection of appropriate solutions.
Instrumental: encouraging collaborative relationships assists with implementation and with defusing conflict.

The rationale for sustainability science reinforces the arguments for citizen and stakeholder social learning processes that should deliver increased understanding of complex systems, more durable and equitable solutions and increased capacity for active citizenship (Fischer, 2000; SLIM, 2004a). These arguments echo the benefits claimed for discursive or deliberative democracy (Dryzek, 2000; O'Neill, 2001; Stirling, 2004), which have been summarised in Table 1. However, there is a lack of published peer-reviewed literature evaluating whether the benefits of participatory approaches for sustainability science are achieved in practice. In order to redress this lack of research, a framework to evaluate participatory research in the context of regional sustainability was developed and applied. This paper outlines the development of that framework, including the basis for the selection of evaluation methods and the outcomes of applying the framework, to illustrate to what extent the ideals of participatory sustainability science were realised by a project in north-east Australia. The paper is focussed on the challenge of adopting a framework to evaluate the processes of participatory research and the extent to which these literatures (participatory research and evaluation) might contribute to sustainability science. The use of a single case study signals that the paper does not seek to prove a relationship between the process of research and its outcome, but to explore, in depth, the issues that arise from designing and piloting the framework and to reflect on how these issues further illuminate the synergies between participatory research, qualitative evaluation and sustainability science.¹

Fig. 1 – A framework to guide the design of evaluation of participatory research.


Table 2 – Guidance on choosing different levels of public involvement

Inform when: factual information is needed but the decision is effectively made.
Consult when: the purpose is to listen and get information (when decisions are being shaped and information could improve them).
Co-decide when: two-way information is needed because individuals and groups have an interest in and/or are affected by outcomes, and there is still an opportunity to influence the final outcome.
Delegate when: stakeholders have the capacity, opportunity and influence to shape the policy that affects them.
Support when: institutions want to enable, and have agreement to, the implementation of solutions by stakeholders; stakeholders have the capacity and have agreed to take up the challenge of developing solutions.

Source: OECD, 2004: 11.

In this way, it builds on existing work evaluating science for sustainability (e.g. Becker, 2004; Dovers, 2004) but places the participatory research literature, with its emphasis on co-construction of knowledge, in a central position.

The paper commences with a literature review in two areas: participatory research (PR) and evaluation. PR describes participants collaborating to problem solve and produce new knowledge in an ongoing learning and reflective process, whereas evaluation is a broad topic, ranging from large, quantitative approaches to global programs to more in-depth qualitative approaches to bounded projects. This review revealed an apparent dearth of literature explicitly focusing on the process of evaluating PR.² Further, with a few exceptions (see Bellamy et al., 2001; Grant and Curtis, 2004), examples of evaluation of PR³ in the area of regional collaborative sustainability projects are virtually non-existent. The majority of the paper focuses on the methodological development of the framework — see Fig. 1, indicating the various stages, and Table 4 for the results of the application — with the outcomes of the evaluation for the case study region reported in Kelly, Blackstock and Horsey (forthcoming). The paper concludes by discussing how the framework could be used for further participatory sustainability science evaluations in order to support the transition to sustainability (O'Riordan, 2001).

1 Note that the project did contribute to a wider programme review (see Kelly et al., 2005) but the purpose of the paper is to focus on the depth made possible by a single case approach, rather than to generalise across a number of projects.
2 Our project is similar to the AIRP-SD project, as both recognise that it is important to evaluate SD-orientated research by examining the extent to which research outcomes are influenced by the research processes and research context. That project links science for sustainability to evaluation approaches but differs from our work in that it did not focus on the participatory research literature (see www.airp-sd.net/_docs/executive%2020_final_.pdf).
3 This topic, the evaluation of participatory research, does not encompass participatory evaluation, which is also applicable to any program, project and activity (this is discussed further below).

2. Participatory research for sustainability science

The notion of PR builds on the debate around the meaning of participation (see Wandersman, 1981; Coenen et al., 1998 for a review; Richards et al., 2004; Steelman and Ascher, 1997), whereby public or stakeholder involvement covers the spectrum from a one-way flow of information from the decision-maker to others, to full community or stakeholder control of the process (see Arnstein, 1969; OECD, 2004; Table 2). Participation has been defined as processes in which individuals take part in decision-making that affects them (Wandersman, 1981), often with the assumption that participation is an active choice to get involved in order to shape the future (Wilcox, 2000; Rowe et al., 2004). Thus, whilst recognising that a range of definitions regarding participation, and therefore participatory 'positions', exist, we take participatory research to refer to processes on the two highest rungs of the ladder (delegation and support). The assumption that these higher levels of participation are a 'good thing' (see Beierle and Konisky, 2001) often arises from a combination of the substantive, instrumental and normative possibilities arising from active involvement by the public and stakeholders (see Table 1; Pellizzoni, 2001; Stirling, 2004). The aim of the evaluation was to assess to what extent these ideals for PR are realised in practice and, as this paper describes, how an appropriate evaluation approach was developed to enable this judgement.

Participatory research has been comprehensively reviewed by a number of different authors (e.g. Kemmis and McTaggart, 2000; Johnson et al., 2004; Pain, 2004; Kelly and Measham, 2005) and there is general agreement that research can be defined as either (a) the generation of new ideas, theories, methods, or techniques; or (b) the review, verification, adaptation or refining of existing ideas, theories, methods, or techniques through empirical studies. PR implies collaboration to problem solve, highlighting the centrality of co-production of new knowledge through sharing perspectives and experiences. PR promotes social learning — the interplay between individual and situational factors in generating human understanding and action (Maarleveld and Dangbégnon, 1999). Social learning encompasses more than an individual's learning in a social situation: it should be a transformative process whereby changes (in the particular situation and wider social conditions) come about through people coming to understand their own and others' interests, values, experiences, beliefs and feelings, and through this understanding, acting for the collective good (Webler, 1995) — in short, the normative rationale for PR. Thus PR requires a self-reflective community of inquiry⁴ (Wallerstein, 1999).

The "keystone of PR is that it involves those conventionally 'researched' in some or all stages of research" (Pain, 2004: 652), differing from traditional 'expert research' in the degree to which it is controlled by the researcher. Returning to the metaphor of the ladder above, Biggs' (1989) four modes of participation — contractual, consultative, collaborative, collegiate — provide a useful summary of this continuum to consider the degree of power sharing at various stages of the research.

4 Pini (2004) emphasises the importance of reflexivity (theoretical, analytical, critical) to the process of individual and social learning.


A further delineation relevant for the current study is whether the participatory research is 'research driven' (its main aim is to advance research objectives) or 'development driven' (focused on empowerment objectives) (Martin and Sherington, 1997). Whereas in 'expert research' the research is designed and carried out by the researcher/s alone, PR involves a (relatively) egalitarian partnership between expert researchers and other participants (Greenwood et al., 1993; Wallerstein, 1999; Pain, 2004). This requires sharing multiple understandings to co-generate knowledge, developing reciprocal understanding that invokes all forms of rationality, not just objective scientific approaches (after De Marchi and Ravetz, 2001; Davies and Burgess, 2004). Okali et al. (1994) illustrate that the creative interface of local and formal knowledge can empower 'non-experts' through enhanced respect for local knowledge and their enhanced ability to use the co-generated knowledge. Therefore, PR should empower the participants, not just provide information for the research team (Greenwood et al., 1993; Okali et al., 1994; MacNaughten and Jacobs, 1997; Wallerstein, 1999).

The emerging area of sustainability science combines research from environmental sciences and economic, social and developmental studies (Kasemir et al., 2003). Whilst sustainability remains an elusive and contested concept (O'Riordan, 2001), it can be characterised as an approach that attempts to consider social, environmental and economic factors when planning and managing the use of resources. The focus on linkages between the 'triple bottom line' aspects implies a systems perspective on the socio-ecosystem (Gunderson and Holling, 2002; Walker et al., 2004; Mayumi and Giampietro, 2006). The emphasis on both current and future generations highlights the importance of adaptive management and co-responsibility (for both 'experts' and other participants) for resource stewardship (Becker, 2004; Bellamy and McDonald, 2005; Mayumi and Giampietro, 2006). In short, we define participatory sustainability science as the co-generation of knowledge about socio-ecological systems, drawing on multiple understandings in an ongoing collective dialogue in order to transform practice, where academics and stakeholders are all co-researchers.

Sustainability science advocates PR for substantive reasons, in light of the uncertainty due to imperfect scientific knowledge and the indeterminacy of complex processes, which can be partially addressed by embracing a plurality of voices, knowledge forms and values (after Van den Hove, 2000; Smith, 2001; Pellizzoni, 2003; Muller, 2003). Therefore, public and stakeholder participation is sought for the generation of knowledge as much as for deliberating over possible solutions (see Bellamy, 2004; see also Strager and Rosenberger, 2006; Mayumi and Giampietro, 2006; Reed et al., 2005). The pressing need to implement sustainability science principles (see Dore and Woodhill, 1999 for example) illustrates the instrumental rationale for PR. As Bellamy (2004) highlights with regard to the emerging regional partnership approaches, legitimate and inclusive participation, collaborative or consensual decision-making and enhanced geographic and cross-government cooperation are vital for stable, durable and equitable implementation of sustainability policies (see also Cash et al., 2003 on salience, legitimacy and credibility for sustainable development). The collective and individual learning implied in the discussion above indicates the normative reasons for developing and supporting ongoing collaborative relationships, which in turn allow the benefits of the emergent knowledge (created through the substantive aspects of PR) to be acted on. Thus, whilst each rationale has independent merit, well-planned processes can generate synergies that amplify all three benefits.

There are a number of studies that consider participation as an aspect of evaluating science for sustainability (see, for example, Cash et al., 2003; Reed et al., 2005; Becker, 2004; Weaver, no date). However, these studies do not focus specifically on the extent to which the degree of participation (rungs on the ladder), the process of participation or the individuals doing the participating affect the outcomes. Equally, there are an increasing number of PR projects investigating sustainability, but there appears to be no corresponding systematic evaluation of their impact, nor much sustained reflection on the lessons learnt from these case studies. Until such literature exists, the assumptions regarding the substantive, instrumental and normative benefits of PR, outlined above, remain unexplored. This lack of published reflection exists despite the plethora of literature on evaluation, which highlights the importance of formalised reflection for accountability, effective management, knowledge building and organisational learning (Chelimsky, 1997; Hoverman, 2005). Evaluating PR provides an opportunity to unpack the assumed relationships between participatory research for sustainability science and the transition to sustainability, in turn providing recommendations for future practice (see Kelly et al., in press).

3. Evaluation of participatory research

This paper focuses on the evaluation of PR rather than the practice of participatory evaluation,⁵ although questions about who should design and implement evaluation processes will be discussed later. Whilst there are few references to the evaluation of an explicitly labelled PR project, literature on the evaluation of partnership, coalition and community based research projects (see, for example, Rowe and Frewer, 2000; Brinkerhoff, 2002; Schulz et al., 2003; Kenyon, 2005) proved a useful guide to the development of an evaluation framework. Work on the evaluation of sustainability (e.g. Becker, 2004; Bossel, 1997; Hinterberger et al., 2003) also highlights the importance of having a coherent conceptual framework, and codifies many of the issues highlighted in the above literature, albeit from a different perspective. A comprehensive literature review of community and collaborative resource management (see Blackstock, 2005b) highlighted the important aspects that needed to be explored within the framework. Four important distinctions — specifically bounding the topic, timing, purpose and focus — need to be considered when designing an evaluation process.

5 Evaluation can be conducted as a participatory process where those involved in the research select the evaluation criteria and decide on the process to ensure it is accountable to those using the research and those involved in it (see Patton, 1997; Wallerstein, 1999).


Table 3 – Evaluative criteria for participatory research, in alphabetical order, from the literature

Access to resources: provision of support to allow participants to engage and meet expectations for their roles. (Asthana et al., 2002; Brinkerhoff, 2002; Laverack, 2001; O'Meara et al., 2004; Richards et al., 2004; Rowe and Frewer, 2000; Webler et al., 2001)

Accountability: whether the representative's core constituencies are satisfied, including expectations. (Asthana et al., 2002; Brinkerhoff, 2002; Richards et al., 2004; Scott, 1998)

Capacity building: developing relationships and skills to enable participants to take part in future processes or projects. (Grant and Curtis, 2004; Brinkerhoff, 2002; O'Meara et al., 2004)

Capacity to influence: the participant's ability to influence the process (being heard, competencies in technical and process techniques, influence on others). (Abelson et al., 2003; Brinkerhoff, 2002; Grant and Curtis, 2004; O'Meara et al., 2004; Richards et al., 2004; Rowe and Frewer, 2000; Schulz et al., 2003; Wallerstein, 1999; Webler et al., 2001)

Capacity to participate: the individual's ability to value different points of view and willingness to learn, as well as their competence. (Abelson et al., 2003; Brinkerhoff, 2002; Grant and Curtis, 2004; Kenyon, 2005; O'Meara et al., 2004; Richards et al., 2004; Schulz et al., 2003; Wallerstein, 1999; Webler et al., 2001)

Champion/leadership: both internal leadership and champions, and the role of the critical outsider. (Asthana et al., 2002; Brinkerhoff, 2002; Laverack, 2001; O'Meara et al., 2004)

Conflict resolution: the degree of conflict between participants and the way in which this was resolved during the process. (Asthana et al., 2002; Brinkerhoff, 2002; Grant and Curtis, 2004; MacNeil, 2002; Richards et al., 2004; Schulz et al., 2003)

Context: the political, social, cultural, historical and environmental context in which the process/project occurs. (Asthana et al., 2002; Botcheva et al., 2002; MacNeil, 2002; Richards et al., 2004; Schulz et al., 2003; Thurston et al., 2005)

Cost effectiveness: the improvements created through the process in relation to the costs accrued. (Asthana et al., 2002; Brinkerhoff, 2002; Davies et al., 2004; Kenyon, 2005; Rowe and Frewer, 2000)

Develop a shared vision and goals: the creation of an agreed and clearly defined vision, objectives and goals for the process/project. (Asthana et al., 2002; Brinkerhoff, 2002; Grant and Curtis, 2004; Gibbon et al., 2002; Laverack, 2001; O'Meara et al., 2004; Richards et al., 2004; Schulz et al., 2003; Wallerstein, 1999)

Emergent knowledge: the influence of local knowledge on the outcome of the research. (Asthana et al., 2002; Grant and Curtis, 2004)

Legitimacy: whether the outcomes and process are accepted as authoritative and valid. (Beierle and Konisky, 2001; Bellamy et al., 2001; Fischer, 2000; Kenyon, 2005; Rowe and Frewer, 2000; Scott, 1998)

Opportunity to influence: the participant's opportunity to influence (enough time; involved early enough; access to policy makers and leaders; organisational structure). (Brinkerhoff, 2002; Grant and Curtis, 2004; Laverack, 2001; O'Meara et al., 2004; Richards et al., 2004; Rowe and Frewer, 2000; Schulz et al., 2003; Wallerstein, 1999; Webler et al., 2001)

Ownership of outcomes: whether there is an enduring and widely supported outcome. (Beierle and Konisky, 2001; Brinkerhoff, 2002; Grant and Curtis, 2004; Richards et al., 2004; Scott, 1998; Webler et al., 2001)

Quality of decision making: the establishment and maintenance of agreed standards of decision making. (Beierle and Konisky, 2001; Brinkerhoff, 2002; Gibbon et al., 2002; Grant and Curtis, 2004; MacNeil, 2002; Richards et al., 2004; Rowe and Frewer, 2000; Schulz et al., 2003; Thurston et al., 2005)

Quality of information: the adequacy, quality and quantity of information provided. (Asthana et al., 2002; Beierle and Konisky, 2001; Grant and Curtis, 2004; Scott, 1998)

Recognised impacts: whether participants perceive that changes occur as a result of the participatory process. (Kenyon, 2005; Davies et al., 2004; Richards et al., 2004)

Relationships: issues of social capital through new and existing social networks developed during the process/project, e.g. trust, reciprocity and collaboration. (Asthana et al., 2002; Brinkerhoff, 2002; Davies and Burgess, 2004; Gibbon et al., 2002; MacNeil, 2002; O'Meara et al., 2004; Richards et al., 2004; Schulz et al., 2003; Thurston et al., 2005)

Representation: the spread of representation from affected interests, including how legitimate the representation is seen to be and the diversity of views, not just representatives. (Abelson et al., 2003; Beierle and Konisky, 2001; Brinkerhoff, 2002; Grant and Curtis, 2004; Laverack, 2001; O'Meara et al., 2004; Richards et al., 2004; Rowe and Frewer, 2000; Schulz et al., 2003; Scott, 1998; Thurston et al., 2005; Wallerstein, 1999)

Social justice: the distributive dimension of the costs and benefits associated with the outcomes. (Asthana et al., 2002; Brinkerhoff, 2002; Grant and Curtis, 2004; Patton, 1997; Richards et al., 2004; Scott, 1998; Wallerstein, 1999)

Social learning: the way that collaboration has changed individual values and behaviour, in turn influencing collective culture and norms. (Asthana et al., 2002; Brinkerhoff, 2002)

Transparency: both internal, whereby participants understand how decisions are made, and external, whereby observers can audit the process. (Beierle and Konisky, 2001; Bloomfield et al., 2001; Davies et al., 2004; Fischer, 2000; Kenyon, 2005; Rowe and Frewer, 2000; Thurston et al., 2005; Webler et al., 2001)

The review also highlighted the importance of selecting the appropriate criteria, illustrated in Table 3; appropriate data collection methods, so that the purpose and focus of the evaluation can be realised; and sharing evaluation results with the participants and sponsors (see Fig. 1). This framework has many similarities with the AIRP-SD evaluation framework for research on sustainable development innovations (Hinterberger et al., 2003), which emphasises the connection between research outcomes, research design and process, and research context.


3.1. Bounding, timing, purpose and focus

Evaluation must begin by clearly delineating the objective of the participatory research and of the evaluation itself (Martin, 2001; Holloway, 2001; Becker, 2004). This can be difficult, as the nature of participation varies between projects and also within projects, the intensity and inclusivity of involvement ebbing and flowing through time (Thurston et al., 2005). The nature of PR means that it is likely to have multiple objectives, given different weights by different participants. If the goal is capacity building or empowerment, then it is more difficult for an evaluation process to capture something that is designed to be organic and to result in multiple outcomes (Wallerstein, 1999; Rowe et al., 2004). The AIRP-SD project emphasises the need to connect the project objectives to the drivers for the project, its initiation and its management context (Osorio-Peters, 2003). There is also the problem of drift between the project's stated objectives and the actual practices that occur during implementation (Bellamy et al., 2001). Sustainability science projects often require considerable boundary management in order to capture the complexity of socio-ecological systems whilst maintaining a manageable project (Cash et al., 2003; Reed et al., 2005). Therefore, it is often difficult to define a clear evaluation topic for participatory sustainability science.

The second distinction refers to the timing of the evaluation. Ex ante evaluation considers the policy, project or program prior to its implementation. Process evaluation, on the other hand, focuses on its operation (Holloway, 2001; Martin, 2001) and, as Patton (1987) suggests, examines how the outcome is produced rather than the outcome itself, in order to improve the processes and build on strengths. When it is a reflective review to ensure that the project is 'going in the right direction' and to allow a change of direction and/or to end the project early, it can be called formative evaluation (Martin, 2001; Patton, 1987: 18). Ex post evaluation determines the effects of the policy, project or program and is undertaken to demonstrate outcomes and to improve the design of future processes. It is called summative evaluation when it reflects on the aspects that went well, the aspects that went less well, and the things to do differently next time, with an emphasis on learning and knowledge accumulation (Patton, 1997; Martin, 2001; Holloway, 2001).

Formative and summative evaluations also indicate the purpose and focus of the evaluation. According to Easterby-Smith (1998: 159, quoted in Holloway, 2001: 16), evaluation has four purposes: proving (illustrating efficiency or value); controlling (monitoring quality control); improving (reaching objectives); and learning (transforming the individual participant). As discussed above, PR should facilitate social learning, empowerment and transformation, and therefore its evaluation should be designed to identify learning and improving. This highlights the recent trend towards seeing evaluation itself as an iterative process that is an important component of organisational and personal learning (Patton, 1998; Holloway, 2001; Osorio-Peters, 2003).

The focus of the evaluation can be strategic, to investigate whether the process achieves the intended results in line with the project or organisation's overall objectives, or operational, to monitor the timing, costs and quality of the planned activities (Holloway, 2001; Martin, 2001). The focus also has implications for the evaluation process. For example, Asthana et al.'s (2002) framework for partnership evaluation was focussed on the tangible outcomes of a working partnership, rather than the process outcomes regarding relationships, ownership and empowerment. Likewise, Okali et al. (1994: 119) found that the few examples of evaluation of participatory farmer research had focussed on the 'standard technical indicators of success' rather than the participants' experiences. The lack of attention to the experiences and relationships involved in the co-generation and dissemination of knowledge is particularly problematic, as these are the processes that formative or summative evaluation of PR must analyse in order to understand how processes might contribute to specific outcomes. For example, the AIRP-SD conclusions draw attention to the importance of leadership, communication and decision making processes for sustainable development outcomes (Osorio-Peters, 2003; see also Cash et al., 2003). The focus on relationships, experiences and perceptions has implications for criteria, as we discuss below.

3.2. Evaluation criteria

Patton (1997) argues that all evaluation processes need clear criteria, selected with reference to the type of evaluation employed and the objectives for which the evaluation is being carried out. Bellamy et al. (2001) note that the multi-objective nature of many participatory processes creates problems in terms of the breadth and selection of criteria to be included in the analysis. Often there are no acceptable, valid and reliable quantitative measures for variables of interest to PR (Patton, 1987), or for evaluating research on sustainability more generally (Osorio-Peters, 2003); thus choosing the criteria, and the data by which to judge, can be the most contentious aspect of evaluation (Holloway, 2001). This is particularly true for the contextual and nuanced process of PR, making the selection and operationalisation of criteria a fundamental challenge in developing PR evaluation frameworks.

Table 3 provides a summary of possible criteria for evaluating PR drawn from the literature review. In many instances the same criteria could be used to measure both process and outcome, depending on the purpose, focus and especially the timing of the evaluation. For example, capacity building and empowerment are applicable to both process and outcome — in a formative evaluation they could be process criteria, whereas in an ex post evaluation they would be outcome criteria. Process and outcomes are interrelated, as ineffective participatory processes suppress the desired outcomes or even lead to undesirable outcomes, such as increased distrust, conflict and/or apathy among citizens and scientists (Fischer, 2000). As the AIRP-SD project reminds us, both process and outcomes are related to the research context (Hinterberger et al., 2003).

Recognising the contested nature of choosing and using criteria helps to emphasise the importance of inter-subjectivity for evaluating participatory research (and qualitative research in general, see Pini, 2004 for example). However, Webler et al. (2001) draw attention to how different participants' criteria for a 'good' process may vary. This means that analysis must pay attention not only to the criteria selected, but also to the ways in which different participants construct these criteria as being important (see also Rowe et al., 2004). This illustrates the subjective nature of evaluating PR, whereby many issues connected with the process of learning, transformation and empowerment cannot be objectively measured but have to be captured via participants' or experts' perceptions of impact (Kenyon, 2005). The multiple values and voices involved have implications for the choice of methods and for the data sources used in evaluation processes.

3.3. Methods and data collection

The choice of evaluation methods depends on the objective and focus of the research project. The evaluation of PR requires a naturalistic, inductive approach. Firstly, most evaluation is a form of naturalistic inquiry, as evaluators do not attempt to manipulate processes but seek to observe processes as close to their normal state as possible in order to capture what happened (Patton, 1987). Thus, the methods have to be context-sensitive (Hinterberger et al., 2003). Secondly, evaluation of PR relies on understanding the processes of co-generating knowledge, understanding the communication, translation and mediation processes taking place, and transforming individuals, organisations and institutions (Cash et al., 2003). Therefore, the quantitative methods found in the literature, such as cost–benefit evaluation and resource allocation evaluation (e.g. Alston et al., 1995), are inappropriate for assessing the relationships and processes in question (Okali et al., 1994) and are often at odds with the objectives within community-initiated action research (Martin and Sherington, 1997).

Qualitative methods allow the study of a case in depth and detail, capturing the richness of people's perceptions and experiences in their own terms and developing an analytical understanding through the aggregation of these individual accounts. A qualitative approach implies a focus on explaining the variation in perceptions and experiences, describing how participants perceive and link cause and effect rather than proving cause and effect⁶ (after Patton, 1987). As Pini (2002, 2004) illustrates, qualitative approaches involve making visible those invisible, taken-for-granted processes (particularly those regarding perception and judgement), creating a new politics of knowledge.

The selection of methods depends not only upon what you want to measure but also on the focus, purpose and timing of the evaluation. The subjects for evaluation are generally too complex to be captured by one variable, measure or method (O'Meara et al., 2004). Multiple methods are particularly effective for shedding light on different aspects of empirical reality (Patton, 1987), whereas single method data collection (and single data sources) can oversimplify and distort information, and often fails to identify underlying dynamic processes (Wicker, 1989). Multiple methods and data sources increase the probability of an in-depth understanding, thus enhancing the validity of the evaluation (after Denzin, 1989).

6 There are a number of challenges regarding cause and effect including: the spatial and temporal separation of action and outcome (Brinkerhoff, 2002); unanticipated interactions with other processes that are occurring at the same time (Ekboir, 2003; Thurston et al., 2005); the multi-dimensionality of impacts and the multiple levels at which these can be evaluated (Bellamy et al., 2001; Rowe et al., 2004); and the dynamic and complex context in which real world (participatory) research takes place (Patton, 1997).

The literature drawn upon in Table 3 uses a combination of observation that generates field notes, electronic and print document analysis using 'checklists', open-ended interviews and closed-ended survey questions. Analysis of archived formal recorded data (including the media) allows comparison over time, provides the ability to track how decisions were made, offers an alternative view of the past and supports an understanding of contextual conditions. Interviews are a rich source of data that can highlight the perceived links between cause and effect, although they are time consuming and complex to analyse. The combination of recorded data (field notes, documents) and reported data (interview and survey responses), gathered at multiple points throughout the process, is especially useful to capture the evolution of the process (see Rowe et al., 2004). In particular, triangulation helps avoid over-reliance on recall in reported data, particularly when doing summative evaluation.

When evaluating participatory processes, it is important to gather data from a variety of stakeholders to capture the diversity of views about objectives, criteria and outcomes (Bellamy et al., 2001; Martin, 2001; Rowe et al., 2004). Given the importance of participant perception, learning and interaction in evaluating PR, the evaluation must be informed by the voices of the participants themselves. This does not necessarily require participatory evaluation, whereby participants design and implement the evaluation, but it does require some form of qualitative inquiry that captures these views. Thus, as with any rigorous social research process, evaluation must pay careful attention to the sample selection, and to the validity and reliability of the data collected. Equally, analysis must take account of the inter-subjectivity within the data collection process — reflecting on the researcher's own identity, how the participants interpret the researcher's identity and how the researcher interprets the participants' identities (Pini, 2004).
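Purely as an illustration of this triangulation logic, and not as part of the original framework, the sketch below (in Python, with entirely hypothetical evidence items and source labels) groups evidence by criterion and flags criteria that rest on a single source type, where over-reliance on recall would be a risk.

```python
from collections import defaultdict

# Hypothetical evidence register: each item records which evaluation
# criterion it speaks to and which kind of data it came from.
evidence = [
    {"criterion": "transparency", "source": "meeting minutes"},
    {"criterion": "transparency", "source": "interview"},
    {"criterion": "social learning", "source": "interview"},
    {"criterion": "context", "source": "media analysis"},
    {"criterion": "context", "source": "field notes"},
    {"criterion": "recognised impacts", "source": "interview"},
]

def triangulation_check(items):
    """Group evidence by criterion and flag criteria resting on a
    single source type, where recall bias cannot be cross-checked."""
    sources = defaultdict(set)
    for item in items:
        sources[item["criterion"]].add(item["source"])
    return {crit: ("triangulated" if len(srcs) > 1 else "single-source")
            for crit, srcs in sources.items()}

for criterion, status in triangulation_check(evidence).items():
    print(f"{criterion}: {status}")
```

In this toy run, 'social learning' and 'recognised impacts' would be flagged as single-source, signalling where recorded data should be sought to corroborate interview accounts.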

3.4. Summary of developing the framework

As summarised in Fig. 1, deciding on the purpose and focus of the evaluation, taking into account its timing, is a critical first step in planning any evaluation, and has flow-on implications for the choice of the measurement criteria, methods and data sources. The appropriate criteria have to be selected in order to measure the process, outcomes and the context. These criteria have to be operationalised using appropriate methods, and the very act of measuring (when and by whom) has to be thought through. The analysis of the data collected against the criteria provides insights into both the process and the outcomes of the PR. Finally, the evaluation results have to be shared and reflected upon if evaluation is to achieve its purpose (proving, controlling, improving and/or learning) outlined above. The process is not linear but an evolving cycle of negotiated learning.
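The cycle in Fig. 1 is conceptual rather than computational, but, as a minimal sketch under our own assumptions (all field names and values below are hypothetical, written in Python), the framing decisions can be read as a checklist whose later entries depend on the earlier ones: the criteria, methods and data sources hang off the purpose, focus and timing.

```python
from dataclasses import dataclass, field

@dataclass
class EvaluationDesign:
    """Checklist mirroring the stages of Fig. 1: frame the evaluation,
    then derive criteria, methods and data sources from that framing."""
    purpose: str            # proving, controlling, improving or learning
    focus: str              # strategic or operational
    timing: str             # ex ante, formative (process) or ex post (summative)
    criteria: list = field(default_factory=list)   # drawn from Table 3
    methods: dict = field(default_factory=dict)    # criterion -> data sources

# Hypothetical instantiation for a summative evaluation of the kind
# described in this paper.
design = EvaluationDesign(
    purpose="learning and improving",
    focus="strategic",
    timing="ex post (summative)",
    criteria=["representation", "social learning", "transparency"],
    methods={
        "representation": ["interviews", "meeting minutes"],
        "social learning": ["interviews"],
        "transparency": ["meeting minutes", "interviews", "field notes"],
    },
)

# Every criterion should be operationalised by at least one data source.
assert all(c in design.methods for c in design.criteria)
```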

4. Applying the framework

This section describes the application of the framework outlined in Fig. 1 to a post-project evaluation of a regional sustainability project, Douglas Shire Sustainable Futures (DSSF).


Table 4 – Criteria, data sources and resultsᵃ of the evaluation applied to the DSSF participatory research

Process criteria

Champion/leadership: internal leadership and champions and the role of the critical outsider.
Method and data source: interviews; field notes.
Result: Patchy. The important distinction between visionary leader and coordinator/facilitator was not well implemented; some argued personal agendas tainted coordination roles; strong support for paid coordination and professional facilitation to support the process.

Communication: the quality and flow of information to participants.
Method and data source: interviews; analysis of project objectives and meeting minutes.
Result: Problematic. CWG unaware of progress post-2002; all participants unsure of the outcomes of DSSF; the quality of information affected both the ability to participate effectively in deliberation and to learn from the process.

Conflict resolution: degree of conflict between participants; resolution during the process; could include quality of decision making.
Method and data source: interviews; field notes.
Result: Inadequate. Most enjoyed learning from one another, but some felt that there were internal asymmetries of power within the groups, and therefore in decision making; all made reference to conflict and power inequalities in the wider context, e.g. local politics and an impotent position in multi-level governance.

Influence on the process: participant's opportunity and capacity to influence (could include resource issues).
Method and data source: interviews.
Result: Adequate. All felt they were given some opportunity to access and influence the process, but there was evidence of different capacities to influence discussion due to different competences and/or some voices being more dominant than others; ambiguity about influence on outcomes.

Representation: spread of representation; perceived legitimacy; diversity of views.
Method and data source: interviews; analysis of project objectives and meeting minutes.
Result: Mixed. CWG was open access but with missing voices (business, tourism, indigenous) and preaching to the converted (self-selection of representatives); JVP targeted formal representatives of partner stakeholder groups; some felt DSSF lost its social/community focus when the groups merged.

Context: the political, social, cultural, historical and environmental context in which the process/project occurs.
Method and data source: interviews; media analysis; meeting minutes; archival documents; field notes.
Result: Inappropriate to rank. Impact of local political conditions (a polarised council, typical of growing coastal regions); impact of rural restructuring on the cane industry and the inability of local actors to ensure key infrastructure and development decisions were made in their favour.

Outcome criteria

Accountability: whether the participant's core constituencies are satisfied; also the perceived legitimacy of the process.
Method and data source: analysis of meeting minutes and reports to stakeholders; interviews.
Result: Problematic. CWG participants could not update their constituents due to uncertainty regarding implementation; some JVP constituencies struggled with shifting objectives for the partnership; in both cases poor communication affected the ability to judge whether needs were met or the process was considered valid.

Capacity building: developing relationships and skills so that participants can take part in future processes.
Method and data source: interviews.
Result: Satisfactory. Most perceived an improved individual capacity to engage in future processes (largest increases in JVP participants); many offered constructive solutions and some expressed commitment to revitalising the process, but there was less evidence to illustrate joint ownership of implementation.

Emergent knowledge: the influence of local knowledge on the outcome of the research.
Method and data source: analysis of conference papers, proposals, reports and meeting minutes; interviews.
Result: Inconclusive. Participants felt their input was valued, and examples such as farmer involvement in water quality monitoring illustrate where co-generation of knowledge occurred, but difficulties in judging impacts created problems evaluating this criterion.

Recognised impacts: perceptions of changes due to the participatory research.
Method and data source: analysis of archival documents and reports; interviews.
Result: Problematic. No monitoring of implementation or of attitudes to sustainability, and poor communication of achievements; little individual responsibility for implementation demonstrated (therefore no empowerment or transformation); contested notions of the Shire's sustainability.

Social learning: change in individual values and behaviour due to collaboration, in turn influencing collective culture and norms.
Method and data source: interviews.
Result: Satisfactory. All participants highlighted the exchange of ideas and knowledge, with 75% indicating quite a lot to extensive learning; the learning process was seen to improve relations between different groups and sectors, but there was limited evidence of this learning leading to transformation of actual practices.

Transparency: both internal, whereby participants understand how decisions are made, and external, whereby observers can audit the process.
Method and data source: analysis of meeting minutes and reports to stakeholders; interviews; field notes.
Result: Inadequate. Most highlighted a lack of clarity regarding how and why decisions were made, such as the inconsistent implementation of the CWG's draft strategy and the shift in JVP objectives from agriculture to land use; gaps in the document trail prevent external observers tracing decision making rationales.

a Judgements based on authors' analysis — space does not allow full elaboration of the analysis leading to these results; for fuller discussion see Kelly et al., in press.

The DSSF was funded by the local government, CSIRO,⁷ industry partners and state government agencies, until June 2005. The context for the research was the Douglas Shire⁸: a local government area in rural north Queensland, Australia, which encompasses two contiguous World Heritage listed areas (the Wet Tropics and the Great Barrier Reef Marine Park). Similar to many other coastal locations throughout Australia, the Douglas Shire is facing rapid change, largely due to development pressure from tourism industry expansion and migration-fuelled population growth (Blackstock, 2005a; Kelly and Haslam-McKenzie, submitted for publication; Sherlock, 2002). The viability of the sugar industry is being challenged due to sustained economic pressure and scrutiny about its environmental impact (Walker et al., 2004). These drivers of change, together with expected climate change, have caused the Douglas Shire to focus on issues of sustainability (in its broadest sense), and to invest in research to assist the region to manage social, environmental and economic change into the future. A fuller discussion of the case study context and its results can be found in Kelly et al. (in press), as this paper concentrates on the experience of applying the methodology presented in Fig. 1.

4.1. Bounding, focus, purpose and timing

The first step in the evaluation process was to 'bound' the focus by clearly defining the project under review. The DSSF project combined two different participatory approaches throughout its five-year history. Firstly, there was an inclusive process involving a 45-strong community working group (CWG) that met regularly to develop a sustainability strategy for the Douglas Shire,⁹ using desktop research reports provided by the coordinator.

7 Commonwealth Scientific and Industrial Research Organisation, one of Australia's largest research and development organisations.
8 Within Australia, a Shire refers to an area, and its population, using local government boundaries, whereas a Shire Council refers to the elected members and their officials who govern this local area and population.

Secondly, the joint venture (industry/government/research organisation) partnership (JVP) was convened to secure a sustainable future for the agricultural industry within the region. This partnership used a combination of formal meetings, field days, monitoring and infrastructure projects, and feasibility studies to achieve its goals. Using Biggs' (1989) modes of participation, the development of the sustainability strategy, which relied on a voluntary community working group, could best be described as collegiate, where the researchers and residents worked together as colleagues, with community members within the group controlling the process and content. The JVP fits a collaborative mode of participation (researchers and local people working together, with the project's design, initiation and management being controlled by the researchers) as it originally involved collaboration between invited partners from the agricultural sector. The two processes were merged into the DSSF project late in 2001. Despite the meetings being open to the broader community, the greatest input continued to be provided by the original JVP participants, with occasional input from the environmental sector. The merging of these two quite distinct processes under the banner of DSSF, the subsequent evolution of the DSSF objectives over the five years, and the DSSF's broad objectives relating to building capacity and defusing conflict, which were often interwoven with other ongoing projects, illustrate the difficulty in definitively bounding the 'project' to be evaluated.

The objectives of the DSSF evaluation were to: review the process of developing the DSSF projects, particularly the emphasis on community and stakeholder partners; review the perceived outcomes of being involved in the DSSF; and develop a framework to evaluate the outcomes of projects with community and stakeholder partners.

9 The draft Sustainability Strategy was completed in December 2001.


The evaluation purpose was to reflect on the successes and failures of the participatory research processes in order to inform future research work; therefore the central purpose of this evaluation was learning and improving (Easterby-Smith, 1998, cited in Holloway, 2001; Martin, 2001). Furthermore, PR requires us to think about who was learning, and how lessons might differ depending on the perspectives and needs of the different participants. While individual learning was one of our criteria, the overall purpose of the evaluation was to improve the design of future research collaborations. This, coupled with the strategic focus to determine whether the project had met its intended objectives through the use of participatory approaches, extended the notion of learning to that of organisational learning.¹⁰ The context, and history of previous collaborations, will have shaped the initiation, design and implementation of the DSSF project itself, and the evaluation allowed these influences on the process and potential outcomes to become more visible. As the project was drawing to a close at the time of the evaluation, it is best described as an ex post summative evaluation — it sought to take stock of what had gone well, what had not, and to reflect on what could be done differently next time (Martin, 2001; Patton, 1997).

4.2. Appropriate criteria, methods and data collection

The clarification of purpose, focus and timing drove the choice, firstly, of the measurement criteria and, together with the selected criteria, influenced the choice of methods and data sources (see Table 4). For the reasons outlined above (see footnote 6; see also Rowe et al., 2004), the team focused on developing criteria that would provide an understanding of the perceived project outcomes, including impacts on individuals, groups and the broader region as described by participants, as well as capturing the implications of the PR process and the key aspects of the project context.

The DSSF evaluation used multiple data collection methods, mainly document analysis and face-to-face semi-structured interviews. The document review included an analysis of both primary and secondary documentation (project proposals, reports, minutes of meetings, conference presentations, Council minutes, local media) (see Horsey, 2005). The document analysis provided a useful means of tracking the project management, decision making and communication within the groups and also between the groups and wider constituencies (e.g. the local government councillors and workers; the local community; the scientific and policy making community). For example, the silences and gaps illustrated power relations underlying resource deployment and the prioritising of formal, technical and positivist knowledge (see Kelly et al., in press). The media analysis helped to highlight what aspects of sustainability were in vogue and helped place many of the participants' comments in context by highlighting recent political, social and environmental events. These recorded data provided a useful triangulation point for the reported data, as some participants were recalling events from three or four years ago.

The reported data came from semi-structured interviews (see Appendix A) with a purposeful sample of participants and partners. The interviews covered both our process and outcome criteria but were open-ended enough to enable the data to be analysed for emerging and repeated themes not necessarily related to these pre-defined criteria. The interactive nature of the interviews meant that the researchers were better able to consider which criteria were important to the participants, meeting Webler et al.'s (2001) and Grant and Curtis' (2004) recommendation that evaluation should pay attention to the goals of the participants as well as the evaluators. This allowed the framework to respond to the different values and attitudes held by the sample participants, rather than mechanically applying a set of criteria without reference to the research context or the heterogeneity of the individuals involved. The interviews were face to face (except for one telephone interview) and lasted between 30 and 90 min.

The credibility of the final evaluation report in part depends upon ensuring that evaluation data are gathered from multiple stakeholders in order to capture the diversity of perceptions. The sample selection was particularly important for the DSSF project because such a broad spectrum of participation had occurred. Interviewees were chosen to reflect the four categories of participants — broad community input through the CWG, industry partners involved in the JVP component, scientists undertaking academic research within DSSF, and local and state government representatives. Whilst the sample was not exhaustive due to time and resource constraints, purposeful sampling techniques were used to seek maximum variety in the demographic and associational characteristics of individuals (n = 29), meaning common themes are less likely to be merely an artefact of shared circumstances. Additionally, saturation in terms of themes arising from answers was achieved after about 15 interviews, despite the variety of participants targeted, providing confidence in the robustness of the interview data despite the small sample size. Participants in the DSSF who were unable to take part came from all four categories, and comments from interviewees suggest that these 'missing' people would have been unlikely to express markedly different views on the process.

10 These reflections mirror the discussion by Rowe et al. (2004), who conclude that effective participation requires communication and learning, which are in turn linked to issues of influence and transformation.
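The saturation judgement reported above was made qualitatively; purely as a hypothetical illustration of how such a judgement could be made explicit, the following Python sketch tracks how many previously unseen themes each successive interview contributes, treating saturation as a run of interviews that add nothing new (the theme labels are invented for the example).

```python
# Hypothetical coded themes per interview, in the order conducted.
coded_interviews = [
    {"communication", "local politics"},
    {"communication", "coordination"},
    {"monitoring", "local politics"},
    {"coordination"},            # nothing new from here on
    {"communication"},
    {"local politics"},
    {"monitoring", "coordination"},
]

def saturation_point(interviews, run_length=3):
    """Return the 1-based index of the interview after which
    `run_length` consecutive interviews yielded no new themes,
    or None if saturation was never reached."""
    seen, no_new = set(), 0
    for i, themes in enumerate(interviews, start=1):
        new = themes - seen      # themes not seen in any earlier interview
        seen |= themes
        no_new = 0 if new else no_new + 1
        if no_new == run_length:
            return i - run_length
    return None

print(saturation_point(coded_interviews))  # -> 3: no new themes after interview 3
```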

4.3. Evaluation staff

Selection of the evaluation team is also fundamental to the credibility of the final evaluation report. Evaluation can be undertaken by external evaluators who have not been involved in the PR process, in order to ensure that important issues are not hidden or ignored due to local politics and dominant voices (Martin, 2001; Holloway, 2001). However, these same authors argue that it is often difficult to 'make sense of the data' without 'insider' understanding. There is a need to find a balance between key informants and those who have not been involved in the PR process, so the evaluation design, data collection and analysis can draw on both formal and local knowledge. Thus, our team consisted of both researchers uninvolved in the project and a researcher who had worked on the project prior to her current employment. Furthermore, the evaluators need to be an interdisciplinary team to ensure that data collection and analysis can engage with the spectrum of issues likely to be involved in participatory sustainability science; our team consisted of an environmental scientist, a psychologist and a sociologist.

These comments draw attention to the reflexive aspects of evaluation, whereby the researchers must recognise the influence of their multiple, unstable and messy identities on the process (Pini, 2002). The researchers' field notes, comprising reflections on how the researchers' own values,¹¹ prejudices, experience and local knowledge might frame each stage of the evaluation process, were a further source of data. The field notes captured discussions about selecting criteria; designing the document review and the interview instrument; and details about the interview dynamic that may have influenced the data collected. Pini (2004) illustrates that reflexivity ought to consist not only of self-knowledge by the researchers, but also of attention to how the participants see the researchers and how the researchers see the participants; field notes allowed us to do that to some degree. These reflections allow a more transparent consideration of the problematic and contradictory power relationships between the evaluator(s) and the participants (Wallerstein, 1999). They were particularly important given the insider:outsider composition of our evaluation team. However, such reflections are central to any evaluation of participatory sustainability science for two reasons. Firstly, contested and complex sustainability knowledge practices require active communication, translation and mediation (Cash et al., 2003) — highlighting the degree of interpretation required by those evaluating. Secondly, evaluation itself requires theoretical, critical and analytical deconstruction of the intersection of context, design and implementation practices, and of how these may influence the evaluation outcomes (Funtowicz et al., 2003).

4.4. Analysis and feedback

The purpose of the analysis was to consider how the framework illuminated the relationships between the process of PR, embedded in the context of the Douglas Shire, and the project objectives. Our analysis12 consisted of comparing and contrasting interview transcripts, cross-referenced to the different groups, to draw out the repeated themes, together with a content analysis of the document review and field notes. An overview of the results is presented in Table 4, although space does not allow a nuanced discussion of these findings (these can be found in Kelly et al., in press). The findings, highlighting difficulties with representation, transparency and accountability, power relationships, and poor communication of outcomes (and therefore a diminished ability to judge what had been achieved), resonate with other results in the literature (such as Grant and Curtis, 2004; Bellamy and McDonald, 2005; Funtowicz et al., 2003; Cash et al., 2003). Whilst the framework (see Fig. 1) might suggest deductive analysis, i.e., analysing the document review and the interview transcripts for evidence of the criteria being met, the

11 Indeed, the ‘local’ researcher struggled with personal sensations of guilt, and of frustration, at what she saw as a lack of progress on the social and indigenous aspects of the project (see also Pini, 2002).
12 Analysis was carried out by the three researchers independently and then reviewed as a team to ensure reliable interpretation (see Rowe et al., 2004).

choice of methods (document review and interviews) allowed iterative, inductive analysis, whereby themes emerging from the data could be considered. The chosen criteria were robust, but some aspects emerged as more or less important to participants. For example, key themes emerging from the data were the importance of communication; the need for effective monitoring and implementation; local governance; and the importance of the coordinator's role in organising learning and transformation. Furthermore, many participants emphasised the impact of contextual conditions and discussed how the DSSF PR relationships were influenced by factors external to the project. For example, JVP partners tended to focus on global and national economic drivers on their industries, whilst the community members, particularly those feeling disconnected from later stages of DSSF, stressed local politics (Kelly et al., in press). Whilst these are captured in the criteria (capacity and opportunity to influence, conflict, context), their relative importance only emerged in the field. In this case, these criteria required deeper consideration than some of the literature (see Table 3) might suggest (see also Rowe et al., 2004; Grant and Curtis, 2004). This experience reinforces earlier comments regarding multiple perspectives on evaluation criteria and illustrates how the evaluation is also a process of co-producing knowledge.

The DSSF evaluation forms part of a larger review of similar research projects on regional sustainability conducted across Australia (see Kelly et al., 2005) that is contributing to significant organisational learning within CSIRO. The DSSF evaluation process was part of an exit strategy for the research organisation and proved useful in helping to bring closure to the project. The results were presented to the participants and to the sponsors of the evaluation within CSIRO, along with a summary of the important practical lessons for the future design and implementation of participatory sustainability science research, e.g., the need to separate the leadership, coordination and facilitation roles; to invest in ongoing communication; and to record decision-making processes clearly. As others (e.g. Funtowicz et al., 2003) argue, communication of evaluation outcomes is essential for ongoing learning for sustainability (and was a problem in our case study, see Kelly et al., in press). Indeed, the evaluation became part of the PR process as the participants constructed different interpretations of what had happened and why, often engaging in thoughtful and emotionally honest reflection on personal and collective lessons learnt from the process. However, because the review was done as the project was finishing, many participants argued that it was too early to say what the long term impacts of the process would be, although many were keen to see ongoing monitoring and evaluation of any changes. This resonates with the literature on evaluating natural resource and sustainability policies: for example, Weaver (no date) recommends ex-post evaluation, as this is the only kind likely to have data available for analysing outcomes, although Cash et al. (2003) suggest there can be a lag of a decade or more between intervention and changed social and institutional outcomes.
Equally, at the time of writing, it is not clear how the project managers and key stakeholders (particularly the partners) will use the results in the next cycle of participatory research for sustainability (see Kelly et al., in press).
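Footnote 12 notes that the three researchers coded independently and then reconciled their interpretations as a team. As a purely hypothetical complement to that qualitative reconciliation (the DSSF team did not compute agreement statistics), the following Python sketch calculates Cohen's kappa, a standard chance-corrected measure of agreement between two coders' segment-level theme assignments; the codes shown are invented for illustration.

    from collections import Counter

    def cohens_kappa(coder_a, coder_b):
        """Agreement between two coders, corrected for chance."""
        n = len(coder_a)
        observed = sum(a == b for a, b in zip(coder_a, coder_b)) / n
        freq_a, freq_b = Counter(coder_a), Counter(coder_b)
        expected = sum(freq_a[c] * freq_b.get(c, 0) for c in freq_a) / n ** 2
        return (observed - expected) / (1 - expected)

    # Invented segment-level codes from two analysts.
    a = ["communication", "trust", "context", "communication", "trust"]
    b = ["communication", "trust", "context", "trust", "trust"]
    print(round(cohens_kappa(a, b), 2))   # -> 0.69 (substantial agreement)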


5. Adapting the framework for future research

The paper has illustrated how the framework can be used for summative evaluation, but the framework is equally applicable to formative evaluation. We believe the rationale for the choice of criteria, methods and data sources would remain consistent, in that the framework emphasises ongoing reflection on the relationship between purpose, context and methods. Formative evaluation is likely to face the same difficulty in responding to participants' desire for tangible evidence of positive impact, which, as discussed above, is hard to provide given the lack of direct causal relationships within sustainability science. The framework could also be used for an ex ante evaluation, although some criteria may require adaptation. Indeed, the different timings of evaluation are pertinent to evaluation design, but should not obscure the fact that evaluation at any stage is only a snapshot of an ongoing set of relationships and practices (Cash et al., 2003; Reed et al., 2005). Ideally, evaluation is an iterative and longitudinal process whereby project design, development, implementation and ongoing processes are regularly audited, providing the data for a summative evaluation when the project is completed. Our flexible framework supports this iterative approach, in that it requires the evaluators to think carefully about how to delineate the process under evaluation and about the implications that the issues of boundaries, purpose, focus and timing have for evaluation design.

Equally, the framework could be implemented in a participatory manner. Although the DSSF evaluation was informed by the verbal and written views of the participants involved in the process, they did not take part in designing or implementing the framework. Involving the participants and sponsors of the research more closely in the decisions made at each stage in Fig. 1 would enhance the learning aspect of the evaluation process (see the discussion on typologies of participatory research earlier). Whether evaluation is participatory or conventional (undertaken and controlled by researchers), practising reflexivity through field notes and group de-briefings emerged as an important component of the framework and should be used to maximise the learning aspects of evaluation.

Finally, the DSSF evaluation could have been supplemented by additional methods. Firstly, a survey of the wider communities (both of place and of interest) to establish awareness of sustainability and of the specific project (and any perceived outcomes arising from the project) could have provided another triangulation point for assessing the perceived links between process, context and outcome, particularly with regard to the impact criterion. Secondly, if evaluation of PR requires attention to group processes, particularly in terms of whether participants were able to participate and whether their voices were heard in group discussion, then it could be interesting to discuss the evaluation findings in focus groups. This would both provide an opportunity to observe group dynamics and allow richer insights into the ‘way forward’ through collective deliberation of the evaluation results.13

13 These options were not explored for the DSSF project as they were not seen as particularly cost-effective for this evaluation, given its timing, focus and purpose.

6. Concluding discussion

This paper has illustrated the development and application of a framework to evaluate a case study of participatory sustainability science research and indicates how the evaluation provided some pertinent lessons for future processes. Whilst this paper focuses on the detail of the development of the framework and its implementation in one case study, the theoretical foundation for the methodology suggests the framework could be transferred to other settings to support further evaluations of participatory sustainability science.

The framework built on three distinct literatures regarding PR, sustainability science and the evaluation of partnership processes. This encouraged the evaluation process to stay true to the principles of participatory research rather than using traditional quantitative evaluation techniques that do not capture the subjective experiences of participants. For example, within the ‘bounding, focus, purpose and timing’ stage, careful attention was paid to the different modes of participation taking place, and the methods were chosen to allow multiple voices to inform the evaluation. Thus, the relationships between framing the evaluation (bounding the topic, timing, purpose and focus); the criteria by which to analyse data; and the methods and data sources used to evaluate the project were considered within one coherent conceptual framework.

The strengths of this framework can be summarised as conceptual coherence, flexibility and functionality (the qualities that Becker (2004) and Weaver (no date) identify for evaluating sustainability projects). We believe it could be used for ex ante, formative or summative evaluation, and in a participatory manner if required, as the framework is designed to provoke ongoing reflection about design choices (fit for purpose, fit for context) at each stage. The application illustrated how the criteria were used, including reflecting on how different choices could have been made. The case study discussion highlighted the importance of conducting transparent and reflexive evaluation practices to maximise the benefits of undertaking any form of evaluation. The weaknesses of the framework relate to its utility, as it requires some familiarity with the principles and underlying theory in order to make informed choices at these stages, although we believe that a recipe-book approach is inappropriate in the context of PR evaluation.

Earlier, we defined participatory sustainability science as the co-generation of knowledge about socio-ecological systems, drawing on multiple understandings in an ongoing collective dialogue in order to transform practice. The evaluation illustrates that learning from the exchange of multiple perspectives was achieved, but that there were many obstacles to translating this learning into new practice. Many of these related to process, such as poor conflict resolution and insufficient facilitation. Others related to the outcomes, such as the perceptions of impacts, accountability and transparency. Such obstacles are likely to impede any ongoing collective transition to sustainability. The relative importance of the socio-political context serves to reinforce the observation that participatory sustainability science can support and contribute to, but not replace or over-ride, governance mechanisms at the local, regional, national and global levels. Despite the


significant efforts made by all participants (sponsors, scientists and locals), the ambivalent evaluation results highlight the challenge of embarking on a participatory sustainability science project, albeit a challenge with important personal, social and organisational rewards.

The principles of sustainability science demand ongoing collective and reflexive learning in order to inform the implementation of sustainability principles (see Mayumi and Giampietro, 2006). The literature surveyed makes a convincing case for respecting the co-generation of knowledge, drawing on multiple values and perspectives, leading to social learning and an enhanced capacity to enable transformative personal and institutional change. However, evaluation is essential to learn how, and under what conditions, these ideals can best be achieved. As very few published accounts consider how to deliver such an evaluation, this paper has made a small contribution by promoting the need for effective evaluation and disseminating the lessons we learnt. Nonetheless, this remains an important area for future research: we feel more evaluation is essential if participatory sustainability science is truly going to contribute to the transition to sustainability.

Acknowledgements

The team from CSIRO and the Macaulay Institute sincerely thank all of those who gave their time to be interviewed as part of the DSSF review. The research was supported by a CSIRO SEI Emerging Science Area Network Grant and by the Macaulay Institute project ‘Rural Economic Development’ (RO203909), funded by the Scottish Executive Environment and Rural Affairs Department.

Appendix A. Interview questions and directions to researchers

Evaluating Participatory Research Project: Douglas Shire Sustainable Futures

Interviewee:    Date:
Interviewer:    Time:
Type of recording:    □ Recorder switched on

Before starting the interview

(1) Make interviewee aware that the session is being recorded and get their consent via the interview consent form.
(2) Remind interviewee that:
□ data will be confidential;
□ analysis will not attribute statements to individuals;
□ they are free to speak ‘off the record’;
□ they can end the interview at any time.
(3) Give interviewee the handout with all the questions and briefly outline the structure of the interview. Check on their time allocation so we can prioritise questions if they have to leave.

Thank you for agreeing to take part in our research. As we explained when we contacted you, our research aims to:

• To review the process of developing the Douglas Shire Sustainable Futures, particularly the emphasis on community and other stakeholder involvement
• To review the outcomes of the Douglas Shire Sustainable Futures projects
• To develop a framework to evaluate the outcomes of projects with community and stakeholder involvement in order to help with designing future projects

Remind them of the background paper sent out; provide one for reference if asked for.

During the interview today, we will ask you for your personal opinion about the Douglas Shire Sustainable Futures process. The interview should take about 45 min, but you can speak for as long or as little time as you like.

First series of questions: clarifying the nature of your involvement

1. We are interested in what your involvement in the project has been. Have you read the project summary we sent you? Are you comfortable about what we are referring to when we refer to projects and processes that have been a part of the ‘Douglas Shire Sustainable Futures’? Which part of the Douglas Shire Sustainable Futures process have you been involved in?
2. When did you first become involved in this/these parts of the DSSF?
Probes: If not at the beginning, clarify exactly what stage. Get month and date. What was their first involvement — attending a meeting? Contacting project officer?
3. Why did you become involved in this/these parts of the DSSF?
Probes: Did you volunteer or were you asked? Was it part of your paid role or a voluntary role? Did you understand the purpose of the DSSF project and how these objectives were set?
4. What was the nature of your involvement, the level of your participation?
Probes: How much time do you think you contributed to this project in total (to date)? Were you happy with this level of involvement? What encouraged you to be involved at this level?
5. Are you still involved in the project? If not, why not?
Probes: What and when was your last action with relation to the project? Why did you stop? What factors could have resulted in you staying actively involved? If still active, what do you foresee in terms of your involvement in the future? Why do you think this?
6. Do you feel that your contribution to the DSSF project is/was valued?
Probes: Did you feel that your personal opinions have been reflected throughout the project and in its outcomes? Why do they feel this way? Valued or not — by who? Check we know what they mean when they talk about their contribution. Do you think your level of involvement had an effect on the way the project developed?

Second series of questions: perceived impact

We are interested in what you think has been the impact of the project — on yourself personally/professionally, on the groups or partners you worked with, and finally on the Shire more broadly.

7. Do you feel you personally learned much from being involved in the DSSF project? Using Card 1 please indicate a


number that best reflects your learning. Please explain the reasons for your answer.
Probes: Why did you learn/not learn during the process? What could have been done differently for you to have learned more during your involvement?
8. Did/do you find your involvement in the DSSF project has changed the way you see the world? Using Card 2, please indicate a number that best reflects any changes. Please explain your answers.
Probes: How did it contribute to any personal transformation? What could have been done differently to have made your experience more transformational?
9. Did/do you find that your involvement in the DSSF project has changed your actions (personal or professional) in your everyday life? Using Card 2, please indicate a number that best reflects any changes. Please explain the reasons for your answer.
Probes: How did it contribute to a transformation? What could have been done differently to have made your experience more transformational?
10. Did/do you find your involvement in the DSSF project has improved your ability to take part in other collaborative projects or civic actions? Using Card 2, please indicate a number that best reflects any changes. Please explain the reasons for your answer.
Probes: How did it contribute to a transformation? What could have been done differently to have made your experience more transformational?

Impact on group involved

11. In relation to the partners/group that you were involved with for this project, do you feel that participation in the DSSF process resulted in learning within the group involved? Please explain the reasons for your answer.
Probes: Why did the group learn/not learn during the process? What could have been done differently to have improved group learning?
12. Do you think that involvement in the DSSF project has changed the way other members in the group involved see the world? Use Card 2 to indicate a number to reflect this. Please explain the reasons for your answer.
Probes: How did the project contribute to a transformation? What could have been done differently to have made the group experience more transformational?
13. Did/do you think that involvement in the DSSF project has changed the actions that others in the group take in relation to their everyday life, personally or professionally? Use Card 2 to indicate a number. Please explain the reasons for your answer.
Probes: How did it contribute to the group's changes? What could have been done differently to have made the group's experience more transformational?
14. Did/do you find your involvement in the DSSF project has improved other group members' ability to take part in other projects or civic actions? Use Card 2 to indicate a number. Please explain the reasons for your answer.
Probes: How did it contribute to changes?

Shire and general impacts

15. How well do you think the outcomes of the project have been implemented? Use Card 3 to give a score of how well this has been done.
Probes: Why have you given that score? What were the barriers or the facilitating aspects to implementation?


16. Do you think the project is contributing towards a more sustainable shire?
Probes: Why do you think that?
17. Do you think the project has led to any particular further actions or projects in the Shire that will contribute towards the vision of a Sustainable Future?
Probes: What projects? Who runs them? What is the relationship?
18. Do you feel that the outcomes from the project are being monitored and there is/will be opportunity to contribute to a review of priorities and actions?
Probes: When? How? By whom? Will they contribute to this review?
19. Has the DSSF empowered particular groups in the Shire?
Probes: Who are these groups? Why do you think this? Has the DSSF changed any group's agenda and actions?
20. Do you think there is a relationship between the process of developing the project and its success or otherwise in making Douglas Shire more sustainable?
Probes: Why do you think this? If there is a relationship, try to get them to identify the elements contributing to it. If there is not, find out what the main enabler of, or constraint on, achieving rural sustainable development is. Have the objectives/vision statement for DSSF to hand if they can't remember what the objectives are! How might the strategy have been different if the CWG and joint venture partnership representatives had not been involved? Would it have been better or worse in terms of delivering a more sustainable shire?

Third series of questions: benefits and challenges of the process

21. What were the best aspects of the process of developing (and implementing outcomes of) the project?
22. What were the worst aspects of the process of developing (and implementing) the project?
Probes: Was there enough time allowed? Were process norms established for how the meetings would run and decisions would be taken? How were conflicts handled? Were different values/perspectives accommodated or ignored? Were all voices equal or did some seem to have more clout than others? Did they come to reasoned, informed and public-spirited decisions?
23. Would you have liked to change anything about the project?
Probes: What would you have liked to see changed? Was anything missing? Was anything extraneous? What did you think about the language of the research? How scientific/rigorous was the analysis?
24. What would you advise anyone trying to develop a similar partnership/participatory approach to sustainability elsewhere?
25. Has the DSSF helped to develop new relationships between different people or organisations, or transformed old relationships?
Probes: In what ways? Has this new or transformed relationship helped or hindered the process of developing/implementing the project?
26. Do you feel that the DSSF project is an example of an open process that local people can influence and become part of?
Probes: Why do you think this? Do you think that the people involved in developing the project feel they ‘own’ the project and its implementation now? Was local knowledge respected?
27. What would you look at in order to judge the success of the overall DSSF, or the project you were involved in, in terms of its impact:


• personally?
• on the group involved?
• on the Douglas Shire?

Biographical details

This section gets a few details about yourself that allow us to put your comments in context.
Where do you live?
Do you work? □ Yes □ No If so, doing what?
How long have you lived in the Shire?
Where did you live before moving to the Shire?
What age group do you belong to? (Use Card 4 to indicate)
Gender: M F

Other comments

Do you have any further questions about our research? Any other comments? Is there anyone else you think we should be interviewing with regard to this project?

Warm down: Thank you for your time and your input to the project. We will send you a short summary of the views collected from our participants in the Douglas Shire during July 2005. We will be using this compiled data, along with data collected from reviews of other projects elsewhere in Australia, to make recommendations about future participatory research processes. Remind them that they can contact us with additional thoughts on the subjects raised in the interview at any time.

Note total time of interview (min): …………………………….
□ Switch off recorder
Note your observations: e.g. which questions the interviewee had problems with and why; did they feel relaxed or tense; was there any time pressure or interruptions.

REFERENCE LIST14

14 ⁎ For copies of unpublished papers, please contact the authors.

Abelson, J., Forest, P., Eyles, J., Smith, P., Martin, E., Gauvin, F., 2003. Deliberations about deliberative methods: issues in the design and evaluation of public participation processes. Social Science & Medicine 57, 239–251.
Alston, J.M., Norton, G.W., Pardey, P.G., 1995. Science Under Scarcity: Principles and Practice for Agricultural Research Evaluation and Priority Setting. Cornell University Press, Ithaca.
Arnstein, S.R., 1969. A ladder of citizen participation. Journal of the American Institute of Planners 35 (4), 216–224.
Asthana, S., Richardson, S., Halliday, J., 2002. Partnership working in public policy provision: a framework for evaluation. Social Policy and Administration 36, 780–795.
Becker, J., 2004. Making sustainable development evaluations work. Sustainable Development 12, 200–211.
Beierle, T., Konisky, D., 2001. What are we gaining from stakeholder involvement? Observations from environmental planning in the Great Lakes. Environment and Planning C: Government and Policy 19, 515–527.
Bellamy, J.A., 2004. Moving beyond “talk the talk”: lessons from community-based research partnerships for coastal zone management. Paper presented at: Coastal Zone Asia Pacific 2004 Conference (CZAP 2004) ‘Improving the Quality of Life in Coastal Areas’, 5–9 September 2004.
Bellamy, J.A., McDonald, G.T., 2005. Through multi-scaled lenses: a systems approach to evaluating natural resource management. In: Regional Natural Resource Management Planning: the Challenges of Evaluation as Seen Through Different Lenses. CIRM Social Dimensions of Natural Resource Management Working Group Symposium, 15th October, 2004, pp. 11–18.
Bellamy, J.A., Walker, D.H., McDonald, G.T., Syme, G.J., 2001. A systems approach to the evaluation of natural resource management initiatives. Journal of Environmental Management 63, 407–423.
Biggs, S., 1989. Resource-Poor Farmer Participation in Research: a Synthesis of Experiences From Nine National Agricultural Research Systems. OFCOR Comparative Study Paper, vol. 3. International Service for National Agricultural Research, The Hague.
Blackstock, K.L., 2005a. A critical look at community based tourism. Community Development Journal 40, 39–49.
Blackstock, K.L., 2005b. Evaluating the impact of participatory research: literature review. Unpublished discussion document⁎.
Bloomfield, D., Collins, K., Fry, C., Munton, R., 2001. Deliberation and inclusion: vehicles for increasing trust in UK public governance. Environment and Planning C: Government and Policy 19, 501–513.
Bossel, H., 1997. Finding a comprehensive set of indicators of sustainable development by application of orientation theory. In: Moldan, B., Billharz, S. (Eds.), Sustainability Indicators: Report of the Project on Indicators of Sustainable Development. John Wiley and Sons, Chichester, pp. 101–109.
Botcheva, L., Roller White, C., Huffman, L.C., 2002. Learning culture and outcomes measurement practices in community agencies. American Journal of Evaluation 23, 421–434.
Brinkerhoff, J.M., 2002. Assessing and improving partnership relationships and outcomes: a proposed framework. Evaluation and Program Planning 25, 215–231.
Cash, D.W., Clark, W.C., Alcock, F., Dickson, N.M., Eckley, N., Guston, D.H., Jager, J., Mitchell, R.B., 2003. Knowledge systems for sustainable development. PNAS 100 (14), 8086–8091.
Chelimsky, E., 1997. The coming transformation in evaluation. In: Chelimsky, E., Shadish, W.R. (Eds.), Evaluation for the 21st Century: a Handbook. Sage Publications, California.
Coenen, F.J.H., Huitema, D., O'Toole, L.J., 1998. Participation and environment. In: Coenen, F.J.H., Huitema, D., O'Toole, L.J. (Eds.), Participation and the Quality of Environmental Decision Making. Kluwer Academic Press, Netherlands, pp. 1–22.
Davies, G., Burgess, J., 2004. Challenging the ‘view from nowhere’: citizen reflections on specialist expertise in a deliberative process. Health and Place 10, 349–361.
Davies, B.D., Blackstock, K.L., Brown, K.M., Shannon, P., 2004. Challenges in creating local agri-environmental cooperation action amongst farmers and other stakeholders: final report. Scottish Executive Environment and Rural Affairs Department Flexible Fund: MLU/927/03⁎.
De Marchi, B., Ravetz, J., 2001. Participatory Approaches to Environmental Policy. EVE Policy Research Brief, vol. 10. Cambridge Research for the Environment, Cambridge.
Denzin, N., 1989. The Research Act: A Theoretical Introduction to Sociological Methods, 3rd edition. Prentice Hall, Englewood Cliffs, NJ.
Dore, J., Woodhill, J., 1999. Sustainable Rural Development: Final Report. Greening Australia, Canberra.
Dovers, S., 2004. Environment and Sustainability Policy: Creation, Implementation and Evaluation. The Federation Press, Canberra.
Dryzek, J., 2000. Deliberative Democracy and Beyond: Liberals, Critics, Contestations. Oxford University Press, Oxford.


Ekboir, J., 2003. Why impact analysis should not be used for research evaluation and what the alternatives are. Agricultural Systems 78, 166–184.
Fischer, F., 2000. Citizens, Experts and the Environment: the Politics of Local Knowledge. Duke University Press, Durham, NC.
Funtowicz, S., Guimaraes-Pereira, A., Lonza-Ricci, L., Wolf, O., 2003. Recommendations for Sustainability-Oriented European Research Programs, Deliverable 6, AIRP-SD Project, EC-STRATA Program. http://www.seri.at/airp-sd/start/_docs/Executive%20Summary%20D6.pdf [accessed 2.2.06].
Gibbon, M., Labonte, R., Laverack, G., 2002. Evaluating community capacity. Health and Social Care in the Community 10, 485–491.
Grant, A., Curtis, A., 2004. Refining evaluation criteria for public participation using stakeholder perspectives of process and outcomes. Rural Society 14, 142–162.
Greenwood, D.J., Whyte, W.F., Harkavy, I., 1993. Participatory action research as a process and as a goal. Human Relations 46, 175–192.
Gunderson, L.H., Holling, C.S. (Eds.), 2002. Panarchy: Understanding Transformations in Human and Natural Systems. Island Press, Washington, DC. 507 pp.
Hinterberger, F., Bosch, G., Giljum, S., 2003. AIRP-SD Final Report: Executive Summary. http://www.seri.at/airp-sd/start/_docs/AIRP-SD%20Final%20Report_Executive%20summary.pdf [accessed 2.2.06].
Holloway, J., 2001. Understanding Evaluation: Part 18 of Managing the Project. Open University Press, Milton Keynes.
Horsey, B., 2005. An Evaluation of the Douglas Shire Sustainable Futures Projects: Document Review. CSIRO, Canberra⁎.
Hoverman, S., 2005. The value of evaluation through the local implementation lens. In: Bellamy, J. (Ed.), Regional Natural Resource Management Planning: the Challenges of Evaluation as Seen Through Different Lenses: Papers From an Occasional Symposium held 15th October 2004. CSIRO Sustainable Ecosystems, Brisbane, pp. 35–42.
Johnson, N., Lilja, N., Ashby, J., Garcia, J., 2004. The practice of participatory research and gender analysis in natural resource management. Natural Resources Forum 28, 189–200.
Kasemir, B., Jaeger, C., Jaeger, J., 2003. Citizen participation in sustainability assessments. In: Kasemir, B., Jaeger, J., Jaeger, C., Gardner, M. (Eds.), Public Participation in Sustainability Science. Cambridge University Press, Cambridge.
Kates, R.W., Clark, W.C., Corell, R., Hall, J.M., Jaeger, C.C., Lowe, I., McCarthy, J.J., Schellnhuber, H.J., Bolin, B., Dickson, N.M., Faucheux, S., Gallopin, G.C., Grübler, A., Huntley, A., Jäger, J., Jodha, N.S., Kasperson, R.E., Mabogunje, A., Matson, P., Mooney, H., Moore III, B., O'Riordan, T., 2001. Sustainability science. Science 292 (5517), 641–642.
Kelly, G.J., Haslam-McKenzie, F., submitted for publication. Housing affordability in a sea change community. Housing Studies.
Kelly, G.J., Measham, T., 2005. Participation in Participatory Research. Unpublished working document. CSIRO, Canberra⁎.
Kelly, G.J., Measham, T., Horsey, B., Leitch, A., Smith, T., 2005. Reflections on the Stream: a Review of the Community Capacity to Manage Systems Research Stream. CSIRO Sustainable Ecosystems, Canberra⁎.
Kelly, G.J., Blackstock, K., Horsey, B., in press. Limits to learning for developing a sustainable region: lessons from northeast Queensland. Society and Natural Resources.
Kemmis, S., McTaggart, R., 2000. Participatory action research. In: Denzin, N.K., Lincoln, Y. (Eds.), Handbook of Qualitative Research, 2nd ed. Sage Publications, Thousand Oaks.
Kenyon, W., 2005. A proposed framework for evaluating trust in participatory processes. Paper given at the European Society for Ecological Economics Conference, Lisbon, Portugal, June 2005.


Laverack, G., 2001. An identification and interpretation of the organisational aspects of community empowerment. Community Development Journal 36, 134–145.
Maarleveld, M., Dangbégnon, C., 1999. Managing natural resources: a social learning perspective. Agriculture and Human Values 16, 267–280.
Macnaghten, P., Jacobs, M., 1997. Public identification with sustainable development — investigating cultural barriers to participation. Global Environmental Change: Human and Policy Dimensions 7, 5–24.
MacNeil, C., 2002. Evaluator as steward of citizen deliberation. American Journal of Evaluation 23, 45–54.
Martin, V., 2001. Completing and Evaluating the Project: Part 16 of Managing the Project. Open University Press, Milton Keynes.
Martin, A., Sherington, J., 1997. Participatory research methods — implementation, effectiveness and institutional context. Agricultural Systems 55 (2), 195–216.
Mayumi, K., Giampietro, M., 2006. The epistemological challenge of self-modifying systems: governance and sustainability in the post-normal science era. Ecological Economics 57 (3), 382–399.
Muller, A., 2003. A flower in full blossom? Ecological Economics at the cross-roads between normal and post-normal science. Ecological Economics 45, 19–27.
OECD, 2004. Stakeholder Involvement Techniques: Short Guide and Annotated Bibliography. A report from the Nuclear Energy Agency Forum for Stakeholder Confidence. http://www.nea.fr/html/rwm/reports/2004/nea5418-stakeholder.pdf [accessed 3rd February 2006].
Okali, C., Sumberg, J., Farrington, J., 1994. Farmer Participatory Research. Intermediate Technology Publications, London.
O'Meara, P., Chesters, J., Han, G., 2004. Outside — looking in: evaluating a community capacity building project. Rural Society 14 (2), 126–141.
O'Neill, J., 2001. Representing people, representing nature, representing the world. Environment and Planning C: Government and Policy 19, 483–500.
O'Riordan, T., 2001. Globalism, Localism and Identity: Fresh Perspectives on the Transition to Sustainability. Earthscan Publications Ltd, London.
Osorio-Peters, S., 2003. AIRP-SD Work Package Six: Executive Summary. http://www.seri.at/airp-sd/start/_docs/Executive%20Summary%20_final_.pdf [accessed 2.2.06].
Pain, R., 2004. Social geography: participatory research. Progress in Human Geography 28, 652–663.
Patton, M.Q., 1987. How to Use Qualitative Methods in Evaluation. Sage, London.
Patton, M.Q., 1997. Toward distinguishing empowerment evaluation and placing it in a larger context. Evaluation Practice 18, 147–163.
Patton, M.Q., 1998. Utilization-Focused Evaluation, 3rd edn. Sage, Thousand Oaks, CA.
Pellizzoni, L., 2001. The myth of the best argument: power, deliberation and reason. British Journal of Sociology 52, 59–86.
Pellizzoni, L., 2003. Uncertainty and participatory democracy. Environmental Values 12, 195–224.
Pini, B., 2002. Focus groups, feminist research and farm women: opportunities for empowerment in rural social research. Journal of Rural Studies 18, 339–351.
Pini, B., 2004. On being a nice country girl and an academic feminist: using reflexivity in rural social research. Journal of Rural Studies 20, 169–179.
Reed, M., Fraser, D.G., Morse, S., Dougill, A.J., 2005. Integrating methods for developing sustainability indicators to facilitate learning and action. Ecology and Society 10 (1), r3. [online] URL: http://www.ecologyandsociety.org/vol10/iss1/resp3/.
Richards, C., Sherlock, K., Carter, C., 2004. Practical Approaches to Participation. SERP Policy Brief, vol. 1. Macaulay Institute, Aberdeen⁎.


Rowe, G., Frewer, L., 2000. Public participation methods: a framework for evaluation. Science, Technology & Human Values 25, 3–29.
Rowe, G., Marsh, R., Frewer, L.J., 2004. Evaluation of a deliberative conference. Science, Technology & Human Values 29, 88–121.
Rural Industries Research and Development Corporation, 2003. RIRDC Corporate Plan 2003–2008. http://www.rirdc.gov.au/corporateplan/2003.
Schulz, A.J., Israel, B.A., Lantz, P., 2003. Instrument for evaluating dimensions of group dynamics within community-based participatory research partnerships. Evaluation and Program Planning 26, 249–262.
Scott, A.J., 1998. The contribution of forums to rural sustainable development: a preliminary evaluation. Journal of Environmental Management 54, 291–303.
Sherlock, K., 2002. Community matters: reflections from the field. Sociological Research Online 7 (2).
SLIM, 2004a. Developing Conducive and Enabling Institutions for Concerted Action: SLIM Policy Briefing No. 3, May 2004. http://slim.open.ac.uk [accessed 18th October 2004].
Smith, G., 2001. Taking deliberation seriously: institutional design and green politics. Environmental Politics 10, 72–93.
Steelman, T., Ascher, W., 1997. Public involvement methods in natural resource policy making: advantages, disadvantages and tradeoffs. Policy Sciences 30 (2), 71–90.
Stirling, A., 2004. Opening up or closing down: analysis, participation and power in the social appraisal of technology. In: Leach, M., Scoones, I., Wynne, B. (Eds.), Science, Citizenship and Globalisation. Zed, London.
Strager, M.P., Rosenberger, R.S., 2006. Incorporating stakeholder preferences for land conservation: weights and measures in spatial MCA. Ecological Economics 58 (1), 79–92.

Thurston, W., MacKean, G., Vollman, A., Casebeer, A., Weber, M., Maloff, B., Bader, J., 2005. Public participation in regional health policy: a theoretical framework. Health Policy 73 (3), 237–252.
Van den Hove, S., 2000. Participatory approaches to environmental policy making: the European Commission Climate Policy Process as a case study. Ecological Economics 33, 457–472.
Walker, D.H., Vela, K.J., Kotzman, M., 2004. Regional Planning and the Sugar Industry. CSIRO Sustainable Ecosystems, Canberra.
Wallerstein, N., 1999. Power between the evaluator and the community: research relationships within New Mexico's healthier communities. Social Science & Medicine 49, 39–53.
Wandersman, A., 1981. A framework of participation in community organisations. Journal of Applied Behavioural Science 17, 27–58.
Weaver, P., no date. A methodological framework for evaluating sustainability science. Paper for Deliverable 3. http://www.seri.at/airp-sd/start/_docs/AIRP-SD_Del3_ExecutiveSummary.pdf [accessed 2.2.06].
Webler, T., 1995. Right discourse and citizen participation: an evaluative yardstick. In: Renn, O., Webler, T., Wiedemann, P. (Eds.), Fairness and Competence in Citizen Participation: Evaluating Models for Environmental Discourse. Kluwer, pp. 35–86.
Webler, T., Tuler, S., Krueger, R., 2001. What is a good public participation process? Environmental Management 27, 435–450.
Wicker, A.W., 1989. Substantive theorizing. American Journal of Community Psychology 17, 531–547.
Wilcox, D., 2000. The Guide to Effective Participation. http://www.partnerships.org.uk/guide/index.htm [accessed 10th November 2003].