Aggregating sustainability indicators: Beyond the weighted sum


Journal of Environmental Management 111 (2012) 24–33


Aggregating sustainability indicators: Beyond the weighted sum

Hazel V. Rowley a,*, Gregory M. Peters b, Sven Lundie a,c, Stephen J. Moore d

a UNSW Water Research Centre, The University of New South Wales, UNSW Sydney, NSW 2052, Australia
b Department of Chemical and Biological Engineering, Chalmers University of Technology, SE-412 96 Göteborg, Sweden
c PE International, Hauptstraße 111–113, 70771 Leinfelden-Echterdingen, Germany
d School of Civil and Environmental Engineering, The University of New South Wales, UNSW Sydney, NSW 2052, Australia

Article history: Received 8 October 2010; received in revised form 2 May 2012; accepted 7 May 2012; available online xxx.

Keywords: Aggregation; Decision making; Life cycle assessment (LCA); Multi-criteria decision analysis (MCDA); Sustainability; Weighting

Abstract

Sustainability analysts and environmental decision makers often overcome the difficulty of interpreting comprehensive environmental profiles by aggregating the results using multi-criteria decision analysis (MCDA) methods. However, the wide variety of methodological approaches to weighting and aggregation introduces subjectivity and often uncertainty. It is important to select an approach that is consistent with the decision maker's information needs, but scant practical guidance is available to environmental managers on how to do this. In this paper, we aim to clarify the theoretical implications of an analyst's choice of MCDA method. By systematically examining the methodological decisions that must be made by the analyst at each stage of the assessment process, we aim to improve analysts' understanding of the relationship between MCDA theory and practice, and enable them to apply methods that are consistent with a decision maker's needs in any given problem context.

© 2012 Elsevier Ltd. All rights reserved.

1. Introduction

1.1. Background

Quantitative environmental systems analysis tools such as environmental life cycle assessment (LCA) can be used to evaluate decision alternatives (e.g. products, sites, projects) on the basis of various environmental indicators. However, a remaining methodological challenge for environmental managers is how to construct a comprehensive judgement of 'environmental performance' from the many indicators assessed (Bengtsson, 2000; Curran, 2008; Hertwich and Hammitt, 2001). This challenge can be approached using multi-criteria decision analysis (MCDA) methods, conceptually introduced in the LCA framework and standards as 'weighting'. Although the ISO standards (ISO, 2006a,b) provide for this optional weighting process, they give no guidance on the aggregation method to be used. Although the explicit aggregation of LCA results is not always done, when it is, it typically employs a simple weighted sum (e.g. Goedkoop and Spriensma, 2001; Hermann et al.,

* Corresponding author. Tel.: +61 2 9385 5017; fax: +61 2 9313 8624. E-mail addresses: [email protected] (H.V. Rowley), [email protected] (G.M. Peters), [email protected] (S. Lundie), [email protected] (S.J. Moore).
0301-4797/$ – see front matter © 2012 Elsevier Ltd. All rights reserved. http://dx.doi.org/10.1016/j.jenvman.2012.05.004

2007; Howard et al., 1999). The weighted sum is also frequently described and used in the context of environmental and sustainability decision-making frameworks (e.g. Gold Coast Water, 2005; Lundie et al., 2006; NSW DPWS, 2003), particularly in the water and building industries.

The MCDA literature describes many methodological approaches to weighting and aggregation. Some authors (e.g. Cloquell-Ballester et al., 2007) have pointed to this variety as a reason why MCDA is not widely used in environmental decision making – it is not a 'one-size-fits-all' tool. However, by incorporating one method – the weighted sum – into our assessments, the sustainability assessment community has already entered the realm of MCDA. We must recognise that the very choice of an aggregation method introduces subjectivity into our analyses, and do our best to select the most appropriate method in any given application.

Virtually every environmental systems analysis is undertaken to support a decision process of some kind. Accordingly, the traditional role of sustainability analysts is re-cast here so that it also incorporates the role of decision analyst. Clients are re-cast as decision makers or their representatives, and the analyst's role "should be to interpret the information needs of the decision maker, and help him or her make methodological choices that are consistent with these needs and relevant from his or her point of view" (Bengtsson, 2000, p 47). This paper responds to calls from


various authors (e.g. Curran, 2008) to provide the environmental systems analysis community with clear, practical guidance on how to do this.
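As a concrete reference point for the discussion that follows, the weighted sum can be sketched in a few lines of code. This is a minimal illustration only: the criteria, weights and evaluations below are invented, and the evaluations are assumed to be normalised, maximising scores.

```python
# Minimal sketch of the weighted sum (all names and numbers invented).
# Evaluations are assumed normalised to [0, 1] and maximising (higher = better).
weights = {"climate": 0.5, "water": 0.3, "land": 0.2}

evaluations = {
    "option_1": {"climate": 0.9, "water": 0.4, "land": 0.6},
    "option_2": {"climate": 0.5, "water": 0.9, "land": 0.8},
}

def weighted_sum(evals, weights):
    """Aggregate one alternative's criterion evaluations into a single score."""
    return sum(weights[c] * evals[c] for c in weights)

scores = {alt: weighted_sum(e, weights) for alt, e in evaluations.items()}
# option_1: 0.5*0.9 + 0.3*0.4 + 0.2*0.6 = 0.69
# option_2: 0.5*0.5 + 0.3*0.9 + 0.2*0.8 = 0.68
print(max(scores, key=scores.get))  # option_1
```

Everything the rest of this paper examines (how the weights are obtained, whether such trade-offs are legitimate, and whether a single score is even the right output) is hidden inside these few lines.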


1.2. Definitions and notation

Throughout this paper, reference will be made to various decision contexts in which decision alternatives (or simply alternatives) are assessed on the basis of their performance against various environmental decision criteria (or simply criteria). The basic data of the MCDA problem in quantitative sustainability assessment are thus defined as:

- A set of n criteria C = {c1, …, cn} (e.g. energy use; aggregated toxic emissions);
- A set of m alternatives X = {a1, …, am} (also sometimes called 'technical options' in LCA1); and
- An m-by-n evaluation matrix E (also called a 'performance table') containing the evaluation eki of each alternative ak on each criterion ci. Without loss of generality, one can assume that the maximum evaluation on each criterion represents the best outcome.2

The process by which the evaluation matrix is holistically interpreted to enable quantitative comparison of the alternatives is discussed in the literature as weighting, valuation or aggregation. These terms have different meanings to different audiences, and none is perfect. Here, the term aggregation is used to describe that holistic interpretation process (which may or may not involve producing a single overall score for each decision alternative; see Section 3.7), and the term weighting is used to represent the assignment of either importance coefficients or substitution rates to the criteria (see Section 3.10). Consistent with Bengtsson (2000), the term decision maker or client refers to any individual or group who uses sustainability assessment results. The term analyst refers to the specialist or specialist team performing the sustainability and decision analyses.

2. Aims

The aim of this paper is to clarify the theoretical implications of the choices an analyst makes in selecting and applying an MCDA method. By systematically examining the methodological decisions that must be made by the analyst at each stage of the assessment process, the aim is to improve sustainability analysts' understanding of the relationship between MCDA theory and practice. Our ultimate aim is to provide a guide for sustainability analysts on how to ensure that their use of MCDA methods is consistent with the values and information needs of their clients in any given problem context.

3. Examination of methodological choices

For this paper to be comprehensive, it has been necessary to include some sections related only to subsets of the available MCDA methods and applications. For clarity, readers should refer throughout to Fig. 1, which presents a schematic representation of a proposed ideal workflow to be followed by sustainability analysts in their dual role as decision analysts.

3.1. Defining the decision maker

1 The term 'technical options' is used by authors including Palme et al. (2005) and Harsch et al. (1996).
2 Although in environmental systems analysis we generally seek to minimise environmental impacts, the indicators can be converted into maximising criteria as described later in this paper.
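The basic data defined in Section 1.2, together with footnote 2's conversion of minimising indicators into maximising criteria, might be set up as follows. This is a sketch only; the alternative names and indicator values are invented.

```python
# Hypothetical MCDA data: m = 3 alternatives (set X) evaluated on
# n = 2 criteria (set C). All names and numbers are invented.
criteria = ["energy_use_MJ", "toxic_emissions_kg"]        # C = {c1, c2}
alternatives = ["product_A", "product_B", "product_C"]    # X = {a1, a2, a3}

# Raw indicator results: lower is better for both indicators here.
raw = {
    "product_A": [120.0, 0.8],
    "product_B": [150.0, 0.5],
    "product_C": [100.0, 1.1],
}

# Negate each minimising indicator so that, per footnote 2, the maximum
# evaluation on each criterion represents the best outcome. E[k][i] plays
# the role of e_ki in the m-by-n evaluation matrix (performance table).
E = {alt: [-v for v in values] for alt, values in raw.items()}

best_on_energy = max(alternatives, key=lambda alt: E[alt][0])
print(best_on_energy)  # product_C uses the least energy
```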

Virtually every sustainability assessment is undertaken to support a decision process of some kind. Sometimes the link to the decision process is strong (e.g. sustainable product design); sometimes it is further removed (e.g. government-driven corporate carbon footprinting). One of the first things that must be defined in any such project, therefore, is who the decision maker is (Fig. 1). It may be an individual; a committee with its members drawn from inside and/or outside the client; or another entity – for instance, 'society' (perhaps further defined locally, regionally or globally, or by non-geographic considerations), or an as-yet-unknown consumer. Although the definition of the decision maker may sometimes seem obvious or even trivial, it will affect future methodological decisions (e.g. if ongoing interaction with the decision maker is not possible, the 'interactive' aggregation methods may not be applicable). In particular, it is important to distinguish who will make decisions about the sustainability assessment project from who will make decisions about the alternatives. When this paper refers to the 'decision maker', it is always referring to the latter. For example, the analyst may make the methodological decisions (e.g. how to elicit weights) for an MCDA on an infrastructure upgrade project where the ultimate decision maker is a panel of stakeholders. Or the analyst may guide the methodological decisions of a panel of industry stakeholders (the client) for an eco-labelling project in which the final decision maker is 'the consumer'.

3.1.1. Holistically representing the values of a decision-making committee

If the decision maker is a committee,3 then the analyst must decide how to combine the individual members' value-based information into a holistic representation of the committee's values. An arithmetic mean is often used, although it cannot consider the spread or distribution of values, and by averaging per criterion it considers each individual's weights on one criterion in isolation from his or her other weights (Koffler et al., 2008). Depending on the aggregation method, this may not be desirable. If the decision maker is a committee, the analyst must also decide how to account for the committee's composition. For example, an internal corporate committee consisting of ten members from the team responsible for meeting emissions targets and two members from the team responsible for reducing resource use may naturally assign emissions criteria more importance than resource-use criteria. If this does not reflect the client's intention, the analyst may wish to account for it by either (a) altering the composition of the committee to better reflect the client's corporate values, or (b) mathematically weighting the input of the two groups to reflect, say, the proportion of employees they represent, or their overall importance in the context of the client's corporate values. In so doing, the analyst has assumed that individuals' values are primarily influenced by one attribute, in this case membership of a particular business group. This may not be valid. It is important to recognise that many variables potentially play a role, and that such an assumption is therefore wholly subjective unless it is backed up by a comprehensive multivariate statistical analysis demonstrating that the values are influenced by group membership. An alternative, which is arguably no less arbitrary, is to accept

3 We note here the possibility of both formal and informal committees. For example, in the case where the decision maker is ‘society’, there is no formal notion of a committee but the implications are the same.

[Fig. 1 appears here: a workflow flowchart leading the analyst through identifying the decision maker and, for groups, deciding how to adjust for composition and holistically represent values (3.1); defining the decision process objective(s) (3.2); defining the decision alternatives (3.3); establishing the evaluation criteria and selecting natural or constructed attributes (3.4); choosing among veto-threshold, hierarchical, interactive and weighting-based approaches (3.6, 3.7, 3.10); considering whether and how to normalise the performance table (3.8); and collecting weights as importance coefficients or substitution rates before implementing the chosen aggregation scheme (3.11).]

Fig. 1. Workflow schematic for sustainability analysts using MCDA. Numerical references to the text are given where applicable.

the committee’s composition at face value and give equal consideration to each member’s contribution. In any case, the analyst should clearly document his or her method and justification.


3.2. Defining the decision process objective(s)

Environmental decision makers are often charged with choosing one alternative (e.g. a technology, material, product, or management strategy) from the set of alternatives. However, other decision process objectives may also be encountered. Among the various classifications that have been described (Guitouni and Martel, 1998), Roy's seems to best cover the decision situations typically encountered by sustainability analysts. He identifies (1996, 2005):

1. The choice problematic, in which the objective is to choose the most suitable alternative from a set of feasible alternatives;
2. The sorting problematic, which assigns each alternative to a predefined category based on its performance against certain benchmarks;
3. The ranking problematic, which places all or some of the alternatives into partially or completely ordered equivalence classes; and
4. The description problematic, which determines the overall performance of all or some alternatives in relation to the criteria set and preferences, without making any recommendations.

Having a clearly defined decision process objective will help the analyst to make sensible methodological decisions throughout the analysis. For example, a hierarchical approach (see Section 3.6 below) may be appropriate for a choice problematic but not a description problematic. The decision process objective should not be confused with the project objective, where the latter expresses the desired outcome of the project in which the decision process takes place, e.g. 'develop the site with minimal environmental burdens'.

3.3. Defining the decision alternatives

The definition of the decision alternatives may be quite simple, or it may be deceptively non-trivial. As Miettinen and Hämäläinen (1997) identify, the subjects of a quantitative sustainability analysis may not directly correspond with the real decision alternatives – for example, the decision alternatives may be political strategies (e.g. tax structures) that affect the market share of the products analysed. The decision alternatives may also take into account secondary market effects such as the 'rebound effect', in which changes in functionality affect consumer behaviour, altering the net environmental outcome (Baumann and Tillman, 2005; Weidema, 2003). In the context of LCA, in particular, the definition of functional units is critical to understanding the decision alternatives.

3.4. Selecting evaluation criteria

3.4.1. Properties of the criteria set

Life cycle analysts and others routinely select criteria for their analyses when defining the goal and scope, while in some cases the evaluation criteria are predefined by the client. However, in our roles as decision analysts we must ensure that the criteria satisfy the technical requirements of the MCDA process. As noted by Munda (2005), this also applies when broader societal input is incorporated into the MCDA: although the evaluation criteria may be inspired by public participation, it is the analyst's responsibility to ensure that the criteria are technically appropriate. The unavoidable influence of the analyst's own subjectivity should be minimised by clearly communicating all assumptions and methods (Munda, 2005). We identify five desirable properties of the criteria set, based on Bouyssou (1990):

- Exhaustive – contains every important dimension capable of discriminating between the alternatives and considered, by the decision maker, a sufficient basis to inform the decision;
- Minimal – small enough in number for the analyst to interpret the inter-criteria relationships for the purpose of aggregation, and containing no unnecessary or redundant criteria;
- Monotonic – enables consistency between partial and global preferences;
- Cumulative – all else being equal, it is just as legitimate to compare alternatives on a subset of the criteria as on a single criterion; and
- Independent – the criteria are not functionally related.

It is not always possible to obtain a criteria set satisfying all of these properties simultaneously (Bouyssou, 1990). In particular, it may be difficult to establish a suitably exhaustive set of independent criteria. The implications of this should be borne in mind when selecting an aggregation method, as some methods are more susceptible to interference than others (e.g. the weighted sum is sensitive to the presence of dependent criteria in the form of 'double-counting'). Life cycle impact categories in LCA are often divided into midpoint and endpoint indicators, depending on where they are located within the cause-effect chain (Guinée et al., 2001). For example, global warming potential is a midpoint indicator because it considers emissions of greenhouse gases, not the resulting changes in atmospheric concentration, temperature, sea levels, or other effects. Either type may be used as a decision criterion, although special care would be needed to avoid double-counting if both types are used together in a single decision problem.

3.4.2. Types of attributes that may be used to evaluate performance against criteria

When selecting the evaluation criteria, the analyst must consider the nature of the attributes on which the alternatives are evaluated. These may be natural (easily measured) attributes; constructed attributes, which coalesce several natural attributes; or proxy attributes, which are considered representative of other attributes. Many attributes modelled by LCA, for example, are constructed attributes: global warming potential combines the impact potentials of many different emission sources (Hertwich and Hammitt, 2001). However, the impact categories modelled by the LCA may not relate one-to-one with the evaluation criteria if they do not fulfil the properties identified above. For example, the EI-99 method aggregates various natural and constructed attributes into just three criteria (human health, ecological health and resource scarcity). As originally argued by Keeney (1992, p 104, cited in Hertwich and Hammitt, 2001), "[the] careful development of a constructed attribute, with the clarification of the value judgements that are essential to that attribute, may promote thinking and describe the consequences in a decision situation much better than the 'subjective' choice to use a readily available natural attribute." It is up to the analyst to judge, and document, the degree of attribute 'construction' that is appropriate for each decision context.

3.4.3. Relationship between criteria and evaluations

Some authors (Gaudreault et al., 2009) suggest deleting indicators from the criteria set if their evaluations do not 'significantly'


discriminate between the alternatives, where 'significant' discrimination is defined by some threshold value. This is an extreme extension of an idea proposed by Miettinen and Hämäläinen (1997): that the relative discriminating power of criteria should be taken into account during weighting. We do not support this, preferring instead the use of a quasi- or pseudo-criterion model (explained in Section 3.5), which achieves the same outcome while improving future applicability of the model to new alternatives.

After following the processes outlined so far, Fig. 1 indicates that the analyst should next evaluate the applicability of various aggregation approaches to the specific decision context. To do so requires understanding some key concepts, which are presented in the following two sections.

3.5. Comparing alternatives on the basis of each criterion

There are three main models for comparing alternatives on the basis of a criterion ci (Bouyssou, 1990). The true-criterion model holds that, for all ak, al ∈ A:

    ak ≻i al ⟺ eki > eli
    ak ∼i al ⟺ eki = eli        (1)

where ≻i (resp. ∼i) "is a binary relation that reads 'is strictly preferred to (resp. indifferent to) considering the consequences taken into account in [the evaluation with respect to criterion ci]'" (Bouyssou, 1990). The principal disadvantage of this model is that it cannot account for any uncertainty or arbitrariness in the performance measurements: any difference, no matter how small, forces a strict preference relationship between the alternatives. For example, alternative a which emits 99.9 kg CO2-equivalent (CO2-e) is strictly preferred over alternative b which emits 100 kg CO2-e. To overcome this, the quasi-criterion model extends the indifference relationship to pairs of alternatives that differ by a margin smaller than some indifference threshold, q. The quasi-criterion model holds that, for all ak, al ∈ A:

    ak ≻i al ⟺ eki − eli > qi
    ak ∼i al ⟺ |eki − eli| ≤ qi        (2)

In the above example, if the indifference threshold is set to 5 kg CO2-e, then both alternatives a and b are strictly preferred over alternative c which emits 120 kg CO2-e, but neither of alternatives a and b is preferred over the other – those two alternatives are considered to perform at 'about the same' level in terms of CO2-e emissions.

A third model, the pseudo-criterion model, allows even greater flexibility by using a parameter t, the preference threshold, to define a kind of 'buffer zone' between strict preference and indifference. Pairs of alternatives that differ by a margin larger than q but no larger than t are characterised by a third binary relation, ≿i, known as weak preference. The pseudo-criterion model holds that, for all ak, al ∈ A:

    ak ≻i al ⟺ eki − eli > ti
    ak ≿i al ⟺ qi < eki − eli ≤ ti
    ak ∼i al ⟺ |eki − eli| ≤ qi        (3)
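The three models can be collapsed into one comparison function, since the true- and quasi-criterion models are special cases of the pseudo-criterion model. The sketch below replays the CO2-e example from the text (q = 5, t = 10 kg CO2-e); emissions are negated so that the criterion is maximising.

```python
# One-criterion comparison under the true-, quasi- and pseudo-criterion models.
def compare(e_k, e_l, q=0.0, t=0.0):
    """Relation of alternative a_k to a_l on one maximising criterion.

    q = t = 0 gives the true-criterion model; t = q > 0 the quasi-criterion
    model; t > q > 0 the pseudo-criterion model.
    """
    d = e_k - e_l
    if d > t:
        return "strictly preferred"
    if d > q:
        return "weakly preferred"
    if abs(d) <= q:
        return "indifferent"
    return "not preferred"  # a_l is weakly or strictly preferred instead

emissions_kg = {"a": 99.9, "b": 100.0, "c": 120.0, "d": 109.0}
e = {k: -v for k, v in emissions_kg.items()}  # negate: maximising criterion

print(compare(e["a"], e["b"], q=5, t=10))  # indifferent (0.1 kg apart)
print(compare(e["a"], e["c"], q=5, t=10))  # strictly preferred (20.1 kg apart)
print(compare(e["b"], e["d"], q=5, t=10))  # weakly preferred (9 kg apart)
```

Calling `compare(e["a"], e["b"])` with the default thresholds reproduces the true-criterion model's behaviour: the 0.1 kg difference forces a strict preference.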

Further to the above example, if the preference threshold is set to 10 kg, then both alternatives a and b are weakly preferred over alternative d which emits 109 kg CO2-e, while still being strictly preferred over alternative c and indifferent to each other.

A fourth binary relation, Ri, denoting incomparability, has also been described (Roy, 1991). We may judge that ak Ri al if we lack the information required to compare them, for example if the emissions of an alternative are unknown; or if we are not able to do so, for example due to our personal ethical frameworks or cognitive limits (e.g. where disparate analytical frameworks confound the comparison). The analyst must decide how to handle these incomparability relations within the framework of the aggregation method in use.

It should be obvious that the pseudo-criterion model simplifies to the quasi-criterion model (by setting t equal to q) and to the true-criterion model (by setting both t and q to zero). Although "in many situations, every reasonable non-null value for t and q, leads to a model of preferences that seems more convincing than the … true-criterion [model]" (Bouyssou, 1990), the latter is used in many practical situations. Bouyssou (1990) speculates that this may have "to do with the traditional 'culture' of Operational Research", but since it can be difficult to assign values to t and q, the true-criterion model may also simply be the most attractive from a data-collection viewpoint. Currently, the true-criterion model – inherent in the weighted sum – seems to be the default methodological choice for many practitioners, often apparently without any consideration of these alternative models for expressing preference. This is somewhat surprising, given that the other models can account for more subtle distinctions of preference and also for the analytical uncertainty usually associated with LCA and other sustainability assessment results.

3.6. Establishing the relative importance of criteria

The method of establishing the relative importance of criteria is primarily determined by the aggregation method used. However, we present it here because it might equally be said that the choice of aggregation method should be influenced by the desired method of establishing the relative importance of criteria. Benoit and Rousseaux (2003) identify three ways in which the analyst may establish that criteria are not equally important: a veto threshold, a hierarchical structure, or weighting. Each of these is operationalised in one or more aggregation methods. In a veto-threshold model, a minimum performance benchmark is established for each criterion; if an alternative does not meet this threshold with respect to every criterion, it is omitted from the set of feasible alternatives. In a hierarchical structure, the criteria are arranged in order of importance and alternatives are successively assessed against each one.4 In weighting, each criterion is assigned a numerical value wi representing either its importance or its trade-off strength under the decision maker's values.5 As will be discussed (Sections 3.10, 3.11), these two types of weights have

4 This can be illustrated using a simple, non-environmental example: consider Susan, who is buying a dining table. Size is her most important criterion, followed by cost, then material, and so on. First, all tables that are too small or too large are eliminated. Second, all tables exceeding her budget are eliminated. Third, all tables made of undesirable materials are eliminated. This process continues until a table is chosen. If the least important criterion is colour and she prefers red, her final decision might be between two tables that are both acceptable under all other criteria, where one is red and one is not. The hierarchical method ensures that non-red tables are not eliminated too early in the search, potentially causing less desirable results later on (if, for example, all red tables are too large).
5 In the case of Susan's dining table, weighting enables us to express that she may be willing, for example, to pay more for a red table, even though colour is less important to her than price. Under the purely hierarchical method described above, this would not be possible. Each method suits different circumstances – for example, she cannot trade off between colour and size for a red table that is too large for her dining room. Similarly, in environmental assessment, we may not be able to trade off work-days and construction noise if the noise of the quick construction method exceeds regulatory limits.
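Footnote 4's dining-table example suggests a compact sketch of the veto-threshold and hierarchical approaches. The tables, thresholds and criterion order below are invented; a real hierarchical (lexicographic) method would be driven by elicited criterion rankings rather than hard-coded filters.

```python
# Veto threshold and hierarchical screening, with invented dining-table data.
tables = {
    "oak_red":   {"size_ok": True,  "cost": 400, "material": "oak",  "colour": "red"},
    "oak_white": {"size_ok": True,  "cost": 350, "material": "oak",  "colour": "white"},
    "pine_red":  {"size_ok": False, "cost": 200, "material": "pine", "colour": "red"},
}

# Veto threshold: eliminate any table failing a minimum requirement.
feasible = {name: t for name, t in tables.items()
            if t["size_ok"] and t["cost"] <= 450}

def hierarchical(candidates, preferences):
    """Apply preference filters in decreasing order of importance.

    A filter only takes effect if some candidate satisfies it, so a less
    important criterion cannot eliminate every remaining candidate.
    """
    for prefer in preferences:
        kept = {n: t for n, t in candidates.items() if prefer(t)}
        if kept:
            candidates = kept
        if len(candidates) == 1:
            break
    return candidates

chosen = hierarchical(feasible, [
    lambda t: t["material"] == "oak",   # more important criterion first
    lambda t: t["colour"] == "red",     # least important criterion last
])
print(sorted(chosen))  # ['oak_red']
```

Note that neither mechanism allows a trade-off: no price reduction can rescue the table that fails the size veto, exactly as footnote 5 describes.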


[Fig. 2 appears here: a classification tree.]

- All MCDA approaches
  - MCDA approaches using an MCAP
    - "Type 1" MCAP approaches based on a synthesising criterion (includes MAUT, SMART, TOPSIS, MACBETH, AHP)
    - "Type 2" MCAP approaches based on a synthesising preference relational system (includes 'outranking methods', e.g. ELECTRE, MELCHIOR, trichotomic segmentation, PROMETHEE)
    - Other MCAP approaches
  - Interactive approaches
  - Elementary approaches (includes methods using the hierarchical structure and veto threshold approaches)

Fig. 2. Classification of MCDA approaches.

fundamentally different meanings, and which type is required depends on the aggregation method used.6 Some analysts prefer not to specify the relative importance of criteria, believing that this process introduces subjectivity. However, avoiding weighting requires the decision maker to apply an implicit, non-transparent valuation, such as assigning each criterion equal importance. The determination of an equal value set is as subjective as any other, and has the further disadvantages of being both arbitrary and non-transparent, since it is often not even acknowledged (Finnveden, 1999); we therefore cannot recommend it here.

3.7. Enabling formation of a 'comprehensive judgement'

The core problem of MCDA is in forming a comprehensive judgement of one or more alternatives which takes into account the alternatives' performance on each of the n individual criteria (Roy, 2005). The field of MCDA is almost entirely devoted to formulating and applying methods by which this can be achieved. Although much attention has been paid in the LCA literature to reviewing and developing methods of weighting (or valuing) criteria, the choice of aggregation algorithm in which to apply those weights – or whether an aggregation algorithm is even appropriate at all – is neglected by even the most comprehensive reviews (e.g. Finnveden, 1999; Reap et al., 2008).7 This neglect is striking, given that the choice of aggregation algorithm arguably has more fundamental implications than the choice of weight-elicitation procedure, all the more so since the very meaning of the weights depends on the aggregation method used. We first make the distinction (Fig. 2), as also made by Roy (1996, 2005), between two main families of MCDA methods: those based

6 Although 'weights' are almost overwhelmingly linked to the concept of 'trade-off' throughout the LCA literature (e.g. in Reap et al., 2008; Rahimi and Weidner, 2004; Gaudreault et al., 2009), this description is only technically accurate in the context of a compensatory MCAP (see Section 3.10).
7 Indeed, the overwhelming majority of LCA authors, in particular, seem to regard the weighted sum as the only available aggregation method, apparently unaware that the "pervasive use of [such methods] can lead to disappointing and/or unwanted results [and they] should in fact be restricted to rather specific situations that are seldom met in practice" (Bouyssou et al., 2006, pp 5–6). One exception is a paper by Ciroth et al. (2003), which introduces an approach that combines several aggregation methods.

on mathematically explicit multi-criteria aggregation procedures (MCAP) – which involve input from the decision maker at the outset, but are subsequently implemented by the decision analyst – and the so-called interactive methods, a class of methods that demand ongoing interaction with the decision maker in an iterative process of evaluating and refining hypothetical alternatives, and assume he or she will always express preferences consistent with some (unquantified) utility function (Vincke, 1992). There may be situations in which the latter is appropriate and explicit 'aggregation' can be avoided altogether (e.g. Hatton MacDonald et al., 2005), although we have not seen these methods applied in an environmental systems analysis context. Also noted here is the existence of elementary methods, such as the lexicographic method (which uses a hierarchical structure) and the conjunctive method (which uses the veto-threshold approach), which belong to neither of these families (Guitouni and Martel, 1998).

Within the MCAP methods, there are two main operational approaches (Figueira et al., 2005; Roy, 1996, 2005) which, for clarity, will be referred to as Type 1 and Type 2:

- Type 1: Approaches based on a synthesising criterion, which enable the well-defined positioning of any alternative on an appropriate predefined scale. These include multi-attribute utility theory (MAUT, usually an additive model), SMART, TOPSIS, MACBETH, and the analytical hierarchy/network process (AHP/ANP) approaches.
- Type 2: Approaches based on a synthesising preference relational system, which do not work on alternatives in isolation but instead compare alternatives with each other to establish preference relations between them.8 Most of these methods can be labelled 'outranking methods', e.g. ELECTRE, PROMETHEE and trichotomic segmentation9, although other non-classical approaches also take this approach.
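To make the Type 1 / Type 2 contrast concrete, the fragment below computes a simple concordance index of the kind used in outranking methods: for each ordered pair of alternatives, the total weight of the criteria on which the first performs at least as well as the second. This is a sketch of one ingredient only (a full method such as ELECTRE also involves discordance and thresholds), and all weights and evaluations are invented.

```python
# A minimal, outranking-flavoured pairwise comparison (invented data).
weights = {"climate": 0.5, "water": 0.3, "land": 0.2}
evaluations = {
    "x": {"climate": 0.9, "water": 0.4, "land": 0.6},
    "y": {"climate": 0.5, "water": 0.9, "land": 0.8},
}

def concordance(a, b, evals, weights):
    """Total weight of criteria on which a performs at least as well as b."""
    return sum(w for c, w in weights.items() if evals[a][c] >= evals[b][c])

c_xy = concordance("x", "y", evaluations, weights)  # climate only: 0.5
c_yx = concordance("y", "x", evaluations, weights)  # water + land: 0.5
print(c_xy, c_yx)

# With a concordance cut-off of, say, 0.7, neither alternative outranks the
# other: unlike a Type 1 score, the result can be incomparability.
outranks = {pair: c >= 0.7
            for pair, c in [(("x", "y"), c_xy), (("y", "x"), c_yx)]}
print(outranks)  # both False
```

Note the contrast with the weighted sum sketched in Section 1: rather than forcing every pair of alternatives onto a single scale, the pairwise relations are the output.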

8 Note that although these methods are commonly referred to as ‘pair-wise’ aggregation methods, they do not imply a pair-wise weight elicitation procedure.
9 Other outranking approaches include (Martel and Matarazzo, 2005): those that are more-or-less based on the original ELECTRE formulation (e.g. QUALIFLEX, REGIME, ORESTE, ARGUS, EVAMIX, TACTIC, MELCHIOR); those that take a Pair-wise Criterion Comparison Approach (PCCA) (e.g. MAPPAC, PRAGMA, IDRA, PACMAN); and those for stochastic data (e.g. Martel and Zaras’ method).



Type 1 approaches are only compatible with a true-criterion model: there is no room for subtler preference models or for incomparability between alternatives. Guitouni and Martel (1998) present a comprehensive list of aggregation methods along with brief explanations and further references. We also recommend Figueira et al. (2005) as a comprehensive initial reference on the theoretical and operational details of many methods.

3.7.1. Compensability

In terms of the methods’ underlying logic, one characteristic stands out as fundamentally differing between methods: compensability. In the compensatory methods, “the possibility [exists] of offsetting a disadvantage on some criteria by a sufficiently large advantage on another criterion” (Munda, 2005). For example, Michael might buy shoes in a different colour than his ideal if this is offset by a favourable price. In the non-compensatory methods, no such trade-offs occur. For example, nothing could persuade Michael to buy shoes that do not fit his feet. In general, the Type 1 MCAP approaches (e.g. the weighted sum) are compensatory, while the Type 2 MCAP approaches (e.g. ELECTRE) are non-compensatory. The hierarchical structure and veto threshold approaches are also non-compensatory.10

For decision contexts involving sustainability, our choice of algorithm requires us to take a position on the definition of ‘sustainability’ itself as either weak or strong (see Costanza et al., 1997; Hawken et al., 1999). From a weak sustainability perspective, different forms of capital (e.g. financial, human and ecological capital) are substitutable. For example, the loss of a rainforest ecosystem (ecological capital) may be offset by the financial capital gained from the development erected in its place.
From a strong sustainability perspective, this is not the case: “certain sorts of natural capital are deemed critical and not readily substitutable by man-made capital” (Munda, 2005).11 A compensatory aggregation method only makes sense from the weak sustainability perspective described above, since compensation validates substitution (Munda, 2005). To implement strong sustainability, a non-compensatory aggregation method must be used. The latter appears to more accurately represent the intentions of many sustainability decision-makers, although it is the analyst’s role to determine this perspective in each decision context.

From a practical perspective, non-compensatory methods that utilise indifference and/or preference thresholds, and allow an incomparability relation (Section 3.5), are also better able to handle the data uncertainty (imperfect knowledge) typically associated with sustainability assessment results (Roy, 2005). Non-compensatory logic can also be applied to ordinal scores, which are not well-handled by the compensatory methods (Section 3.10).

3.8. Normalisation and scaling

‘Normalisation’, in LCA methodology, typically refers to a comparison between the environmental burdens in the performance table and a set of benchmarks (e.g. global or national burdens) in order to assess their environmental significance. This operation may be problematic when combining LCA results with other sustainability aspects for which normalisation data do not

10 Further guidance on the compensability of specific methods is given by Benoit and Rousseaux (2003), Guitouni and Martel (1998), Lemaire et al. (2005), and Rowley and Peters (2009). This characteristic can generally also be identified either by examining the documentation or observing the logic of implementation for each method. 11 Furthermore, different forms of ecological capital are also often considered non-substitutable.

exist or make sense. In the MCDA sense, ‘normalisation’ (sometimes called ‘scaling’) can refer to any operation that converts diverse-unit cardinal scores into dimensionless indicators (usually ranging between 0 and 1) with a common direction (i.e. a score of 1 is more desirable than a score of 0).12 This may or may not be a linear scaling, depending on the nature of the criteria (Dee et al., 1973). LCA-type normalisation is usually done during life cycle impact assessment, while MCDA-type normalisation is an operational prerequisite to the Type 1 MCAP approaches. There is no need for normalisation of either kind when using the Type 2 MCAP approaches or other non-compensatory methods.

Available procedures for MCDA-type normalisation include, for each column i of the evaluation matrix E (Lundie et al., 2008; Pomerol and Barba-Romero, 2000):

1. ‘Zero-max’: The scores on criterion c_i are expressed as a proportion of that criterion’s best performer: ê_k = e_k / max(e_k).
2. ‘Min-max’: The scores are scaled linearly over the range of criterion c_i: ê_k = (e_k − min(e_k)) / (max(e_k) − min(e_k)).
3. The scores on criterion c_i are expressed as a proportion of the sum of all scores on that criterion: ê_k = e_k / Σ e_k.
4. The scores on criterion c_i are expressed as a fraction of the square root of the sum of squares (also the kth component of the unit vector) for that criterion: ê_k = e_k / (Σ e_k²)^(1/2).

Each of these methods returns scaled evaluations ê_k between 0 and 1 (or −1 and 1 where the evaluation is neither strictly positive nor strictly negative, e.g. where there are avoided products in LCA) for all alternatives, with all except Procedure 2 conserving proportionality. Lundie et al. (2008) also describe a generalisation of Procedure 2, known as ‘Ranges’, in which the lower and upper bounds are set according to some prior knowledge about the range of possible values, rather than being set to zero and the maximum current score.
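The four procedures can be sketched in a few lines of Python. This is an illustrative sketch of our own (the function names and evaluation values are hypothetical, not drawn from the paper):

```python
# Sketch of the four MCDA-type scaling procedures described above, applied to
# one criterion column. Assumes strictly positive scores with 'higher is better'.

def zero_max(scores):
    """Procedure 1: express each score as a proportion of the best performer."""
    best = max(scores)
    return [e / best for e in scores]

def min_max(scores):
    """Procedure 2: scale linearly over the range of the criterion."""
    lo, hi = min(scores), max(scores)
    return [(e - lo) / (hi - lo) for e in scores]

def sum_based(scores):
    """Procedure 3: express each score as a proportion of the column sum."""
    total = sum(scores)
    return [e / total for e in scores]

def vector(scores):
    """Procedure 4: divide by the root of the sum of squares (unit vector)."""
    norm = sum(e ** 2 for e in scores) ** 0.5
    return [e / norm for e in scores]

column = [20.0, 40.0, 50.0]          # hypothetical evaluations on one criterion
scaled = {
    "zero-max": zero_max(column),
    "min-max": min_max(column),
    "sum": sum_based(column),
    "vector": vector(column),
}
# All except min-max conserve proportionality: under zero-max, sum and vector
# scaling, the alternative scoring 40 stays exactly twice the one scoring 20.
```

Printing `scaled` makes the proportionality point visible: min-max maps the worst performer to 0 regardless of how close it is to the others, which is the behaviour criticised below.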
The loss of proportionality under Procedure 2 means that this procedure only seems reasonable if the decision maker knows the range of performances in advance and accounts for this in the weighting. This can be imagined in a hypothetical example in which one alternative performs fractionally worse (say 1% worse) on a given criterion than all the other alternatives, which perform equally well. In a min–max scaling, our ‘poor performer’ will score 0 and the other alternatives will all score 1. If that criterion is weighted highly, then the ‘poor performer’ may be severely penalised. It is difficult to imagine that this penalisation accurately represents the decision maker’s intentions.

In addition, Procedure 2 is highly sensitive to the set of technical options included in an analysis. For example, adding an alternative with a considerably lower evaluation than the next-lowest evaluation on a given criterion will tend to ‘bunch’ the remaining scaled evaluations on that criterion closer together, thus reducing that criterion’s discriminating power. Analysts may be familiar with the experience of a client demanding the inclusion of a ‘straw man’ (an infeasible alternative that distorts the decision-making process or decision-makers’ perspective) in the set of alternatives. Procedure 2 is more susceptible to such diversions than the other procedures.

The mathematics of LCA-type normalisation are identical to those of ranges scaling, using zero as the minimum and the normalisation data (e.g. annual per capita environmental burdens as used by Howard et al. (1999)) as the maximum.

For criteria where a low score is desirable, the analyst must also decide how to reverse the order of the scaled scores. Pomerol &

12 Reap et al. (2008) and Gaudreault et al. (2009) also distinguish between these two meanings of ‘normalisation’.


Barba-Romero (2000, p 77) recommend taking the inverse of the scores (i.e. maximising 1/e_ki) rather than taking the negative (i.e. maximising −e_ki), since the former operation preserves proportionality where the latter does not. The reciprocals can then be scaled in the same way as the other criteria. If all criteria are of this nature, it may be simpler to prefer low scores, rather than reversing them all and preferring high scores.

The diversity of available scaling procedures introduces uncertainty, and the analyst should bear this in mind by choosing carefully the procedure that best represents the decision maker’s point of view, by conducting sensitivity testing around the chosen procedure, and/or by negating the complication altogether by using an aggregation method other than a Type 1 approach.

3.9. Dependent or independent weighting scheme

The generally held view in the sustainability assessment community is that the weighting set should be independent of the decision alternatives. The only support we have seen for dependent weights is from Miettinen and Hämäläinen (1997), who argue that the weights of criteria with small ranges should be negligible so that the choice between two alternatives cannot be determined by a criterion on which their scores are very similar.13 This outcome may be desirable, but the indifference relation (Section 3.5 above) can be used to achieve the same outcome, without the laborious (and in some cases impossible) task of deriving a separate weighting set for each decision problem (Rowley and Peters, 2009).

3.10. Characterising the type of weights and scores required

For any given aggregation method, it is important to derive weights that are meaningful under that method. It has been established previously in the decision-making literature that, from a mathematical perspective, weights used with compensatory aggregation methods represent substitution rates (i.e.
they describe the capacity for tradeoffs between the criteria), while weights used in non-compensatory aggregation methods represent importance coefficients (i.e. they describe the relative importance of criteria) (Roy, 2005). The distinction between these two types of weights is critically important as it influences how each should be both derived and applied, and this distinction is explained further in the Supplementary Material. However, as Munda (2005, p 961) notes, in many cases this is poorly understood and “a theoretical inconsistency exists between the way weights are actually used and what their real theoretical meaning is … [in] environmental impact assessment studies … it is a common practice to aggregate environmental impact indicators by means of a linear rule and to attach weights to them according to the relative importance idea.” Such practice does not reflect the mathematical meaning of importance coefficients, and the results of such an aggregation are therefore not theoretically sound.

As indicated in Fig. 1, the key driver for selecting an aggregation method may in fact be the availability of certain types of scores, and the capacity of the decision maker to provide a certain type of weight (lower LHS of Fig. 1), rather than the often-followed path of selecting an aggregation method first. Both the scores and the weights may be described as either ordinal if only their ranking is meaningful (e.g. an attribute rated on a scale of 1–10), or “cardinal

13 Note the similarity between this proposal and that of Gaudreault et al. (2009), who suggest deleting such a criterion altogether.


if their exact numerical value … plays a role” (e.g. emissions measured in kg) (Pomerol and Barba-Romero, 2000).14 Substitution rates must be cardinal, and must be used with cardinal scores (Pomerol and Barba-Romero, 2000). Importance coefficients are generally also cardinal15 (Pomerol and Barba-Romero, 2000), but they can be used with either ordinal or cardinal scores (Munda, 2005). So, for example, if only ordinal scores are available (e.g. scores of 1 = ‘low’, 2 = ‘medium’, 3 = ‘high’ amenity), it may be more appropriate to use a non-compensatory aggregation method rather than attempting to convert these ordinal scores into cardinal ones,16 which would introduce uncertainty and may divert the representation away from the decision maker’s original viewpoint. The correct interpretation of the role of the weights (and scores) within the chosen aggregation method, by all parties involved, is crucial to ensuring that the weights are meaningful and that their application is consistent with the decision maker’s intentions.
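To make the ordinal case concrete, a simple concordance index of the kind used in outranking methods can be sketched as follows. This is a toy example of our own (the criteria, weights and scores are hypothetical), showing that importance coefficients only ever count which criteria support a comparison, so the ordinal codes are never added or multiplied:

```python
# Importance coefficients used with purely ordinal scores via a concordance
# index, in the spirit of outranking methods such as ELECTRE. Only the
# ordering of the ordinal scores matters, never their magnitudes.

weights = {"amenity": 0.5, "noise": 0.3, "cost": 0.2}   # importance coefficients
# Ordinal scores: 1 = 'low', 2 = 'medium', 3 = 'high' (higher is better here).
scores = {
    "A": {"amenity": 3, "noise": 2, "cost": 1},
    "B": {"amenity": 1, "noise": 3, "cost": 3},
}

def concordance(a, b):
    """Share of total importance supporting 'a is at least as good as b'."""
    total = sum(weights.values())
    support = sum(w for c, w in weights.items() if scores[a][c] >= scores[b][c])
    return support / total

c_ab = concordance("A", "B")   # only amenity supports A over B -> 0.5
c_ba = concordance("B", "A")   # noise and cost support B over A -> 0.5
# By contrast, a weighted sum of the ordinal codes (e.g. 0.5*3 + 0.3*2 + 0.2*1
# = 2.3 for A) would treat the codes as cardinal, which they are not.
```

Here neither alternative clearly outranks the other, which is exactly the kind of incomparability a Type 2 method can express and a weighted sum cannot.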

3.11. Eliciting the weighting parameters

Various authors (e.g. Finnveden, 1999; Lindeijer, 1996; Hofstetter, 1996) have classified methods of weighting in relation to LCA, in particular, with the major classes being:

1. Proxy approaches, which rely on one or a few indicators to represent total impact (e.g. carbon footprint in some applications; MIPS), effectively assigning a weight of 1 to that indicator (or group of indicators) and 0 to all others;
2. Monetisation methods (including methods based on willingness-to-pay, individuals’ revealed or expressed preferences, society’s willingness-to-pay, and other methods), in which environmental impacts are translated into common monetary units using discount rates as appropriate;
3. ‘Distance-to-target’ methods, which relate the scores to externally derived targets (e.g. political targets, scientifically estimated ‘limits’, thresholds); and
4. Panel methods, in which people are asked to assign weights. Panel methods can be further distinguished by method (e.g. questionnaires, interviews, group discussions), panel composition (e.g. experts, laypeople, stakeholders), procedure (e.g. single-round, Delphi), and outcome (e.g. consensus, statistical analysis of results). This may include weights elicited indirectly (e.g. by ranking options).

Other methods have also been proposed which do not directly involve the decision maker but instead extract a weighting scheme inherent to the dataset itself. These ‘data-driven’ methods include a data envelopment analysis-based alternative cross-evaluation method (Doyle, 1995), the entropy method (Pomerol and Barba-Romero, 2000) and the fuzzy integral method (Rowley et al., n.d.). Although this may seem to contradict one aim of MCDA, which is to incorporate decision-maker values, such methods may be the only viable option if other methods are unsuitable or impractical.
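Of the data-driven methods mentioned above, the entropy method is straightforward to sketch. The following is our own illustration, following a common textbook formulation (not any specific source cited here), in which criteria that discriminate more strongly between alternatives receive larger weights:

```python
import math

# Entropy-based data-driven weighting (common formulation): the weight of a
# criterion grows with the dispersion of its scores across alternatives.

def entropy_weights(matrix):
    """matrix[i][j]: strictly positive score of alternative i on criterion j."""
    m, n = len(matrix), len(matrix[0])
    diversification = []
    for j in range(n):
        col = [row[j] for row in matrix]
        total = sum(col)
        p = [x / total for x in col]
        # Shannon entropy, normalised so a uniform column gives entropy 1.
        entropy = -sum(pi * math.log(pi) for pi in p if pi > 0) / math.log(m)
        diversification.append(1.0 - entropy)
    s = sum(diversification)
    return [d / s for d in diversification]

# Criterion 0 scores identically for every alternative, so it carries no
# information and its weight collapses to (numerically) zero; all the weight
# flows to the discriminating criterion 1.
w = entropy_weights([[5.0, 1.0],
                     [5.0, 4.0],
                     [5.0, 9.0]])
```

The example also illustrates the caveat in the text: the resulting weights reflect the structure of the dataset, not anyone's values, so a non-discriminating criterion is effectively deleted even if the decision maker considers it important.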

14 For example, in the cardinal case, it is generally accepted that a 5 kg emission represents five times the environmental burden of 1 kg, while in the ordinal case a ‘visual amenity’ score of 5 may not be five times as good as a score of 1. Similarly, for ordinal weights, a criterion ranked 5th in importance does not generally have one-fifth the importance of the criterion ranked 1st.
15 Except for “some rare methods where it matters little whether the weights are ordinal or cardinal”, such as the lexicographical method (Pomerol and Barba-Romero, 2000, p 86).
16 Such a conversion would require that the ranking scale is re-formulated and alternatives are re-scored such that, for example, a ‘visual amenity’ score of 5 is five times as good as a ‘visual amenity’ score of 1.



We encourage analysts to be aware of the diversity of approaches and when they are best applied, in order to make sensible choices in their own decision contexts. Due to the popularity of panel methods (Koffler et al., 2008), some methods that are popular for eliciting parameters from a panel are now presented (Bouyssou et al., 2006):

- Direct rating, in which the analyst asks the panel members to assign numerical values to the weights;
- Simos’ cards method, in which the criteria are each written on a card and the panel members must each place them in order of importance, with blank cards being inserted between them to represent larger differences;
- Ranking the criteria, to give ordinal rather than quantitative weights;
- Pair-wise comparison, as popularised by the analytic hierarchy process (AHP) method;
- The classical multi-attribute value theory (MAVT) method presented by Bouyssou et al. (2006), which may be adapted for other procedures.

Each elicitation method has its own advantages and drawbacks, which will affect its suitability to any given decision context. A key criticism of analysts using the compensatory methods, in which weights represent substitution rates, is that “most [analysts] are wrongly and ingenuously inclined to [ask the decision maker to] assign weights without reference to the units chosen for fixing the values of [the scores] as though they could be independent” (Pomerol and Barba-Romero, 2000, p 86). For example, the decision maker may express that global warming and water use are equally important, but what does this mean? Is 1 kg of CO2-e emissions as important as 1 L of water use, or should that be 1 kL of water use, or some other amount? Examining the mathematical application of the weights, it is clear that interaction between the units of measurement of the scores and weights is critically important, and the weights must therefore be derived with reference to the performance table.
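The mechanics of Simos' cards method are simple enough to sketch: positions in the ordered deck, counting blank cards, become raw weights, which are then normalised. This is a simplified sketch of our own with hypothetical criteria; the revised ('SRF') variant additionally elicits a ratio between the most and least important criteria, which is omitted here.

```python
# Simplified Simos' cards procedure. Cards are ordered from least to most
# important; None represents a blank card inserted by the panel member to
# signal a larger gap in importance between adjacent criteria.

cards = ["cost", None, "noise", "amenity"]   # hypothetical panel response

def simos_weights(cards):
    position, raw = 1, {}
    for card in cards:
        if card is None:
            position += 1            # a blank card widens the gap
        else:
            raw[card] = position     # raw weight = position in the deck
            position += 1
    total = sum(raw.values())
    return {c: p / total for c, p in raw.items()}

w = simos_weights(cards)
# cost -> position 1, noise -> 3 (the blank widened the gap), amenity -> 4;
# normalised: {'cost': 0.125, 'noise': 0.375, 'amenity': 0.5}
```

Note that the resulting numbers are only meaningful as importance coefficients or substitution rates once the analyst has clarified, with the panel, which of the two roles the weights are meant to play.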
In LCA methodology, this is partially overcome by normalisation. Nevertheless, when weighting normalised LCA scores, decision makers must understand that their weights reflect the importance of the issue at the normalised scale (e.g. they are weighing up ‘annual greenhouse gas emissions per capita’ against ‘annual water use per capita’). If different scales are used (e.g. global warming is normalised on a global scale and water use on a national scale), then this also needs to be taken into account (i.e. in this case, they are weighing up ‘worldwide annual greenhouse gas emissions per capita’ against ‘national water use per capita’). Furthermore, as mentioned in Section 3.8, for indicators not derived by LCA, such a normalisation process rarely makes sense. In the non-compensatory methods, where weights represent importance coefficients, this problem does not occur.

Depending on the elicitation method, there may be further methodological aspects to consider. For example, participants in a direct rating exercise may find it easier if their weights do not have to add to a strict total (the analyst can linearly scale the weights without changing their meaning (Pomerol and Barba-Romero, 2000, p 78)). Each participant might also be given a pencil and eraser, rather than a pen, to encourage them to iteratively refine their individual responses (Howard et al., 2010). In any of the methods, it may also be important to randomise the order of criteria being presented, and to avoid researcher bias or ‘framing effects’ (wherein the answer to a question is influenced by the way in which it is asked).
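The linear-scaling point made in parentheses above can be shown in two lines of Python (the ratings are hypothetical): rescaling to any total preserves the ratios between the weights, which is all that carries their meaning.

```python
# Direct-rating weights need not sum to a strict total: a linear rescaling
# leaves their ratios, and hence their meaning, unchanged.

rated = {"global warming": 8.0, "water use": 4.0, "cost": 2.0}  # raw panel ratings

def rescale(weights, total=1.0):
    """Linearly scale a weight set so it sums to the requested total."""
    s = sum(weights.values())
    return {c: total * v / s for c, v in weights.items()}

w = rescale(rated)   # now sums to 1.0, with the 8:4:2 ratios preserved
```

A participant can therefore hand over any convenient numbers (points out of 100, marks out of 10) and the analyst can normalise afterwards without distorting the elicited preferences.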

4. Conclusions & recommendations

When formulating recommendations on the sustainability performance of decision alternatives, it is important for analysts to recognise that applying MCDA introduces subjectivity not only explicitly, through the incorporation of subjective values, but also implicitly, through the analyst’s methodological choices at each stage of the process. The weighted sum is commonly used to form a comprehensive judgement on the sustainability performance of decision alternatives. However, this is only one of the many MCDA methods available. Each of the methods operationalises different assumptions, which affects their suitability for different problem contexts.

This paper has presented a novel perspective on how environmental and sustainability analysts can approach the challenge of choosing a suitable MCDA method, supported by the workflow illustrated in Fig. 1. To do so, it has systematically examined the major methodological choices required to select and apply an MCDA method, and has clarified the theoretical implications of those choices.

First, analysts were encouraged to carefully define the decision maker, decision process objectives, and decision alternatives. Clearly defining these aspects helps ensure a decision approach meets the decision maker’s needs. Desirable properties of a set of evaluation criteria were then identified, such as the need to select non-interacting criteria.

Second, this paper highlighted the variety of approaches to forming an overall judgement on decision alternatives, and introduced some concepts to help analysts gauge the feasibility and desirability of each approach. It was shown that the weighted sum operationalises some quite restrictive assumptions and may not always be an appropriate methodological choice.

Third, the methodological requirements of the main approaches were further explored and some potential pitfalls were identified.
In particular, this paper highlighted the importance of clearly identifying the role of weights. Inconsistency between the meaning and application of weights can affect the results in an unexpected way, leading to perverse decision outcomes. This paper is not an ‘instruction manual’ for MCDA; the breadth and depth of the literature in that field and even on single approaches precludes such an outcome. However, by examining these fundamental methodological issues, an initial guide has now been provided that can improve environmental managers’ understanding of the relationship between MCDA theory and practice. The framework presented here can thus help enable environmental managers and analysts to provide methodological advice that is consistent with a decision maker’s needs in any given problem context.

Appendix A. Supplementary material

Supplementary data related to this article can be found online at http://dx.doi.org/10.1016/j.jenvman.2012.05.004.

References

Baumann, H., Tillman, A.-M., 2005. The Hitch Hiker’s Guide to LCA. Studentlitteratur, Lund.
Bengtsson, M., 2000. Weighting in practice: implications for the use of life-cycle assessment in decision making. Journal of Industrial Ecology 4, 47–60.
Benoit, V., Rousseaux, P., 2003. Aid for aggregating the impacts in life cycle assessment. International Journal of Life Cycle Assessment 8, 74–82.
Bouyssou, D., 1990. Building criteria: a prerequisite for MCDA. In: Bana e Costa, C.A. (Ed.), Readings in Multiple Criteria Decision Aid. Springer-Verlag, Berlin, pp. 58–80.

Bouyssou, D., Marchant, T., Pirlot, M., Tsoukiàs, A., Vincke, P., 2006. Evaluation and Decision Models with Multiple Criteria: Stepping Stones for the Analyst. Springer, New York.
Ciroth, A., Fleischer, G., Gerner, K., Kunst, H., 2003. A new approach for a modular valuation of LCAs. International Journal of Life Cycle Assessment 8, 273–282.
Cloquell-Ballester, V.-A., Monterde-Díaz, R., Cloquell-Ballester, V.-A., Santamarina-Siurana, M.-C., 2007. Systematic comparative and sensitivity analyses of additive and outranking techniques for supporting impact significance assessments. Environmental Impact Assessment Review 27, 62–83.
Costanza, R., d’Arge, R., de Groot, R., Farber, S., Grasso, M., Hannon, B., Limburg, K., Naeem, S., O’Neill, R.V., Paruelo, J., Raskin, R.G., Sutton, P., van den Belt, M., 1997. The value of the world’s ecosystem services and natural capital. Nature 387, 253–260.
Curran, M.A., 2008. Development of Life Cycle Assessment Methodology: A Focus on Co-product Allocation. Erasmus University, Rotterdam.
Dee, N., Baker, J., Drobny, N., Duke, K., Whitman, I., Fahringer, D., 1973. An environmental evaluation system for water resource planning. Water Resources Research 9, 523–535.
Doyle, J.R., 1995. Multiattribute choice for the lazy decision maker: let the alternatives decide! Organizational Behavior and Human Decision Processes 62, 87–100.
Figueira, J., Greco, S., Ehrgott, M., 2005. Multiple Criteria Decision Analysis: State of the Art Surveys. Springer, New York.
Finnveden, G., 1999. A Critical Review of Operational Valuation/Weighting Methods for Life Cycle Assessment. Swedish Environmental Protection Agency, Stockholm.
Gaudreault, C., Samson, R., Stuart, P., 2009. Implications of choices and interpretation in LCA for multi-criteria process design: de-inked pulp capacity and cogeneration at a paper mill case study. Journal of Cleaner Production 17, 1535–1546.
Goedkoop, M., Spriensma, R., 2001. The Eco-indicator 99: a Damage Oriented Method for Life Cycle Impact Assessment – Methodology Report, third ed. PRé: Product Ecology Consultants, Amersfoort, The Netherlands, p. 132.
Gold Coast Water, 2005. Gold Coast Waterfuture: Water Supply Strategies – Development and Assessment (Revision 4 Draft July 2005). Gold Coast Water, Surfers Paradise, Australia.
Guinée, J.B., Gorrée, M., Heijungs, R., Huppes, G., Kleijn, R., de Koning, A., van Oers, L., Wegener Sleeswijk, A., Suh, S., Udo de Haes, H.A., de Bruijn, H., van Duin, R., Huijbregts, M.A.J., 2001. Life Cycle Assessment: An Operational Guide to the ISO Standards. Ministry of Housing, Spatial Planning and the Environment (VROM) and Centre of Environmental Science, Leiden University, The Netherlands.
Guitouni, A., Martel, J.-M., 1998. Tentative guidelines to help choosing an appropriate MCDA method. European Journal of Operational Research 109, 501–521.
Harsch, M., Schuckert, M., Eyerer, P., Saur, K., 1996. Life cycle assessment. Advanced Materials & Processes (USA) 149, 43–46.
Hatton MacDonald, D., Barnes, M., Bennett, J., Morrison, M., Young, M.D., 2005. Using a choice modelling approach for customer service standards in urban water. Journal of the American Water Resources Association 41, 1–10.
Hawken, P., Lovins, A.B., Lovins, L.H., 1999. Natural Capitalism: The Next Industrial Revolution. Earthscan, London.
Hermann, B.G., Kroeze, C., Jawjit, W., 2007. Assessing environmental performance by combining life cycle assessment, multi-criteria analysis and environmental performance indicators. Journal of Cleaner Production 15, 1787–1796.
Hertwich, E.G., Hammitt, J.K., 2001. A decision-analytic framework for impact assessment: Part 1: LCA and decision analysis. International Journal of Life Cycle Assessment 6, 5–12.
Hofstetter, P., 1996. Towards a structured aggregation procedure. In: Braunschweig, A., Förster, R., Hofstetter, P., Müller-Wenk, R. (Eds.), Developments in LCA Valuation. IWÖ-Diskussionsbeitrag Nr. 32. IWÖ-HSG, St. Gallen, Switzerland, pp. 122–211.
Howard, N., Bengtsson, J., Kneppers, B., 2010. Weighting of Environmental Impacts in Australia. Edge Environment, Sydney.


Howard, N., Edwards, S., Anderson, J., 1999. BRE Methodology for Environmental Profiles of Construction Materials, Components and Buildings. BRE, Watford, UK.
ISO, 2006a. ISO 14040:2006(E) Environmental Management – Life Cycle Assessment – Principles and Framework, second ed. International Organisation for Standardisation, Geneva.
ISO, 2006b. ISO 14044:2006(E) Environmental Management – Life Cycle Assessment – Requirements and Guidelines. International Organisation for Standardisation, Geneva.
Koffler, C., Schebek, L., Krinke, S., 2008. Applying voting rules to panel-based decision making in LCA. International Journal of Life Cycle Assessment 13, 456–467.
Lemaire, S., Chevalier, J., Guarracino, G., Chevalier, J.-L., 2005. Decision-aid tool to choose building products introducing their environmental and health characteristics. In: The 2005 World Sustainable Building Conference, Tokyo.
Lindeijer, E.W., 1996. Normalisation and valuation. In: Udo de Haes, H.A. (Ed.), Towards a Methodology for Life-Cycle Impact Assessment. SETAC-Europe, Brussels, Belgium, pp. 75–93.
Lundie, S., Peters, G.M., Ashbolt, N., Lai, E., Livingston, D., 2006. A sustainability framework for the Australian water industry. Water 33, 83–88.
Lundie, S., Ashbolt, N., Livingston, D., Lai, E., Kärrman, E., Blaikie, J., Anderson, J., 2008. Sustainability Framework – Part A: Methodology for Evaluating the Overall Sustainability of Urban Water Systems. Water Services Association of Australia, Melbourne and Sydney.
Martel, J.-M., Matarazzo, B., 2005. Other outranking approaches. In: Figueira, J., Greco, S., Ehrgott, M. (Eds.), Multiple Criteria Decision Analysis: State of the Art Surveys. Springer, New York, pp. 197–262.
Miettinen, P., Hämäläinen, R.P., 1997. How to benefit from decision analysis in environmental life cycle assessment (LCA). European Journal of Operational Research 102, 279–294.
Munda, G., 2005. Multiple criteria decision analysis and sustainable development. In: Figueira, J., Greco, S., Ehrgott, M. (Eds.), Multiple Criteria Decision Analysis: State of the Art Surveys. Springer, New York, pp. 953–986.
NSW DPWS, 2003. Eurobodalla Integrated Water Cycle Management Strategy. NSW Department of Public Works and Services & Eurobodalla Shire Council.
Palme, U., Lundin, M., Tillman, A.-M., Molander, S., 2005. Sustainable development indicators for wastewater systems – researchers and indicator users in a co-operative case study. Resources, Conservation and Recycling 43, 293–311.
Pomerol, J.-C., Barba-Romero, S., 2000. Multicriterion Decision in Management: Principles and Practice. Kluwer Academic Publishers, London.
Rahimi, M., Weidner, M., 2004. Decision analysis utilizing data from multiple life-cycle impact assessment methods: part I: a theoretical basis. Journal of Industrial Ecology 8, 93–118.
Reap, J., Roman, F., Duncan, S., Bras, B., 2008. A survey of unresolved problems in life cycle assessment. Part 2: impact assessment and interpretation. International Journal of Life Cycle Assessment 13, 374–388.
Rowley, H.V., Lenzen, M., Peters, G.M., Foran, B., Lundie, S., n.d. A practical approach for estimating weights of interacting criteria from profile sets. European Journal of Operational Research, under review.
Rowley, H.V., Peters, G.M., 2009. Multi-criteria methods for the aggregation of life cycle impacts. In: 6th Australian Conference on Life Cycle Assessment, Melbourne.
Roy, B., 1991. The outranking approach and the foundations of ELECTRE methods. Theory and Decision 31, 49–73.
Roy, B., 1996. Multicriteria Methodology for Decision Aiding. Kluwer Academic Publishers, London.
Roy, B., 2005. Paradigms and challenges. In: Figueira, J., Greco, S., Ehrgott, M. (Eds.), Multiple Criteria Decision Analysis: State of the Art Surveys. Springer, New York, pp. 3–24.
Vincke, P., 1992. Multicriteria Decision-Aid. John Wiley & Sons, Chichester.
Weidema, B.P., 2003. Market Information in Life Cycle Assessment. Danish Environmental Protection Agency, Copenhagen.