Available online at www.sciencedirect.com

ScienceDirect

Procedia Computer Science 162 (2019) 786–794

www.elsevier.com/locate/procedia
7th International Conference on Information Technology and Quantitative Management (ITQM 2019)
Collaborative Value Modelling in corporate contexts with MACBETH
Carlos A. Bana e Costa a,*, Ana C.L. Vieira a, Mónica Nóbrega a, António Quintino b, Mónica D. Oliveira a, João Bana e Costa c

a CEG-IST, Centre for Management Studies of Instituto Superior Técnico, Universidade de Lisboa, Av. Rovisco Pais, 1049-001 Lisboa, Portugal
b GALP ENERGIA, Rua Tomás da Fonseca, Torres de Lisboa, Torre GALP - 5º Andar, 1600-209 Lisboa, Portugal
c Decision Eyes, Rua Mouzinho da Silveira, nº 27, 7ºD, 1250-166 Santo António - Lisboa, Portugal
Abstract

In complex organizational choices with consequences spread all over the organization, good management practice recommends the auscultation of the points of view of people from different operational areas, to inform a final decision-making body. In large corporations such an internal participatory process may easily involve a large number of people; however, time and scheduling constraints are not compatible with gathering them together in presential meetings to reach an alignment around key decision objectives. One of these cases was the selection of a new Enterprise Management System (EMS) for GALP ENERGIA (a Portuguese energy company). The aim of this paper is to report on how a socio-technical process was designed and took place under the "Collaborative Value Modelling" framework, combining technical elements of the MACBETH multicriteria evaluation method with two social settings: firstly, a Web-Delphi to elicit individual qualitative judgements from a large number of collaborators from the different departments of GALP ENERGIA; followed by a Decision Conference where a small steering committee used the knowledge collected in the Delphi to inform the construction of a quantitative model to compare the EMS alternatives proposed by information technology companies. To detail our rationale, we focus on the qualitative MACBETH swing-weighting questioning mode, implemented in the Welphi information technology platform in which GALP collaborators provided individual weighting judgements, and then on the decision conferencing for final weighting of criteria by the steering committee. The main contribution of this paper is in detailing how, through the "Collaborative Value Modelling" framework, it is possible to promote "dialogue" among a large group of participants in a corporate setting, therefore promoting alignment and increasing model acceptance within the company.

© 2020 The Authors. Published by Elsevier B.V.
This is an open access article under the CC BY-NC-ND license (http://creativecommons.org/licenses/by-nc-nd/4.0/)
Peer-review under responsibility of the scientific committee of the 7th International Conference on Information Technology and Quantitative Management (ITQM 2019)

Keywords: Collaborative Value Modelling; MACBETH; Multicriteria decision conferencing; Web-Delphi; Welphi
* Corresponding author. Tel.: +351-917888995; fax: +351-218417979.
E-mail address: [email protected].
1877-0509 © 2020 The Authors. Published by Elsevier B.V.
10.1016/j.procs.2019.12.051
1. Introduction

In a recent article in ITPro, Thorpe [1] states that "Enterprise Management Systems (EMS), sometimes just known as Enterprise Systems (ES), are enterprise-scale application software packages which address the different software needs of large organisations (…). They allow IT teams to support and manage large, complex and sometimes geographically-dispersed IT infrastructure and applications." Selecting an EMS is a good example of a complex corporate management evaluation context that requires some kind of "participative decision-making" (PDM) [2] for careful analysis of a multiplicity of factors with future users across the organization; because, depending on their needs, users tend to privilege different (system) performance dimensions, "dialogue" is needed to create alignment ("dialogue" in the sense of Bohm [3]; see also [4] and [5]). To face this complexity, inherent to corporate decision making, as in the choice of an adequate EMS configuration, an informal PDM format is not enough. This paper highlights the advantage of supporting the selection with a formal socio-technical process, in which a multicriteria decision analysis (MCDA) [6] is developed in a social setting allowing the participation of a large number of users and experts.

It is not common to find in the literature descriptions of socio-technical processes developed for building multicriteria evaluation models. In the 2011 book Multiple Criteria Decision Making: From Early History to the 21st Century [7], the emphasis is exclusively technical (concepts and mathematics). Yet, several years before, the crucial importance of the social component in decision support had already been emphasized in the literature, for instance in the 1987 book Decision Synthesis: The Principles and Practice of Decision Analysis [8].
A recent book focused on the socio-technical strategy is the Handbook of Decision Analysis [9], in which significant emphasis is given to the decision conferencing strategy for face-to-face model building with small groups.

The design of a socio-technical process following the "Collaborative Value Modelling" framework [10] and applied to the corporate context is addressed in Section 2. Section 3 presents a real-world application, developed in 2017, to assist the part of the evaluation process of EMS proposals at GALP ENERGIA ("GALP", the short company designation, will be used hereafter) devoted to weighting evaluation criteria. Section 4 highlights some final remarks and conclusions.

2. Socio-technical construction of a quantitative value model based on qualitative value judgements

2.1. The framework: Collaborative Value Modelling

Isaacs ([4], p. 25) defines dialogue as "a discipline of collective thinking and inquiry, a process for transforming the quality of conversation and, in particular, the thinking that lies beneath it". In his seminal work "On Dialogue" [3], Bohm states that, in a group of twenty to forty people, "you begin to get what may be called a 'microculture'. You have enough people coming in from different subcultures so that they are a sort of microcosm of the whole culture. And then the question of culture - the collectively shared meaning - begins to come in. That is crucial, because the collectively shared meaning is very powerful." This is precisely what corporate management should look for. Unfortunately, large-group face-to-face dialogues are difficult to arrange in business environments where time for meetings is a scarce resource; additionally, logistical constraints on gathering everybody in the same place often arise, namely within a geographically dispersed organization. As an alternative, one can take advantage of today's online communication vehicles to establish non-face-to-face large-group interaction.
Collaborative Value Modelling makes use of the Delphi method to construct collective knowledge among a large number of participants on a web basis [10]. The idea is not to reach a consensus (traditionally associated with Delphi in the literature [11]); rather, Delphi is here used to develop organizational learning by exploring convergence and divergence among individual points of view [10]. The acquired knowledge will then inform, at a subsequent collaborative stage, a face-to-face dialogue within a smaller decision-making body. The objective is now to achieve a shared and widely informed multicriteria evaluation model. Yet, attention must be paid to the
limitation of dialogue development within a small group, as raised by Bohm [3]: "If five or six people get together, they can usually 'adjust' to each other so that they don't say the things that upset each other - they get a 'cozy adjustment'. (…) And if there is a confrontation between two or more people in such a small group, it seems very hard to stop it; it gets stuck." Tackling this issue is the role of multicriteria decision conferencing (DC) with small groups [12, 13]. The facilitator should start by focusing the decision-making group on critically analyzing and jointly "digesting" [14] the knowledge that resulted from the previous enlarged Delphi process.

Following Vieira et al. [10], after an initial stage of process design, "Collaborative Value Modelling" develops knowledge acquisition in two distinct stages, carefully planned to adjust well to the context at hand: first a non-face-to-face Web-Delphi process and then a face-to-face multicriteria decision conferencing one, both involving knowledge elicitation, knowledge analysis and knowledge verification, respectively from a large consultation group and a small deliberative group (Figure 1). The final objective is to construct a model to measure the value of alternative courses of action, based on shared judgmental knowledge.
Fig. 1. Overview of the Collaborative Value Modelling framework (source: Vieira et al. [10]).
2.2. The strategy: divide-to-conquer with multicriteria value measurement

Already half a century ago, Raiffa [15] stated that: "The spirit of decision analysis is divide and conquer: decompose a complex problem into simpler problems, get your thinking straight in these simpler problems, paste these analyses together with a logical glue, and come out with a program for action for the complex problem. Experts are not asked complicated, fuzzy questions, but crystal clear, unambiguous, elemental hypothetical questions" [15].

A usual MCDA implementation of this strategy is by options' multicriteria value measurement, founded in Multiattribute Value Theory [16-18]: (i) structure the complex problem in view of defining the (n) multiple criteria through which the evaluation of the options will be decomposed into simpler parts; (ii) assess the partial (single-criterion) value of each option on each criterion, one at a time; (iii) weight the criteria to make these assessments commensurate; (iv) "glue" them together to get an overall value score for each option, most often by a simple weighted sum, i.e. multiplying the partial value score on each criterion by the respective weight and summing these products across all the criteria; and (v) test, adjust and validate the model. Knowledge elicitation is required to complete steps (i), (ii) and (iii), either in this sequence or with (iii) done before (ii). In (i) it is crucial that the criteria are independent, a methodological issue addressed below. In (ii) preference judgements are elicited, on each evaluation criterion, either about the partial value of the options directly or upon a set of levels of performance (this set can be quantitative or qualitative and is usually called an "attribute" [19] or a "descriptor of performance" [20]). The latter path leads to the construction of a single-criterion value function: it permits measuring an option's partial value indirectly, by associating with the option one of the performance levels of the descriptor and then using the value function to transform that performance into a numerical value score.

Let vj(x) be the partial value score of option x calculated by the (single-criterion) value function vj constructed for criterion j in step (ii); let kj > 0 be the weight assigned to j (j = 1, …, n) in step (iii), with k1 + … + kn = 1. Then, in step (iv), a simple additive value model can be constructed that permits calculating for each option x an overall value score V(x) = k1 v1(x) + … + kn vn(x); V(x) is an indirect measure of the overall value of x taking all criteria together into consideration. This additive measure is theoretically meaningful if, and only if, "difference independence" [21] holds; that is, the difference in partial value (preference) between any two performance levels (equivalently, the added value of an increase in performance from one level to the other) does not depend on the levels on the remaining criteria. We agree with Dyer ([22], p. 284) that "difference consistency is so intuitively appealing that it could simply be assumed to hold in most practical applications".

2.3. The technique for judgements elicitation: MACBETH

An adequate technical basis should be established beforehand for the elicitation of participants' judgements, both for scoring and weighting. There exist numerical and non-numerical "technically equivalent" elicitation procedures, both for scoring options and weighting criteria. Well-known examples of the former are numerical direct rating and swing weighting [18], which require responders to "produce direct numerical representations of their preferences".
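The weighted-sum aggregation of steps (iii)-(iv) can be sketched in a few lines of code; the criteria names, weights and partial scores below are illustrative only, not those of the GALP model:

```python
# Additive value model: V(x) = k1*v1(x) + ... + kn*vn(x), with the weights kj summing to 1.
# Criteria, weights and partial value scores are hypothetical examples.

def overall_value(partial_scores, weights):
    """Weighted sum of single-criterion value scores (weights must be normalized)."""
    assert abs(sum(weights.values()) - 1.0) < 1e-9, "weights must sum to 1"
    return sum(weights[j] * partial_scores[j] for j in weights)

weights = {"Integration": 0.45, "Security": 0.40, "Mobility": 0.15}    # hypothetical kj
option_x = {"Integration": 100.0, "Security": 60.0, "Mobility": 80.0}  # hypothetical vj(x)

print(overall_value(option_x, weights))  # 0.45*100 + 0.40*60 + 0.15*80 = 81.0
```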
On the other hand, MACBETH (Measuring Attractiveness by a Categorical Based Evaluation Technique) [23] asks for non-numerical paired comparisons of "difference in attractiveness" (preference, or value): "Is there no difference (indifference), or is the difference very weak, weak, moderate, strong, very strong, or extreme?" [24]. Fasolo and Bana e Costa [25] have shown, experimentally, that the former and the latter elicitation techniques are not "psychologically equivalent", because they are perceived differently by responders depending on their level of "numeracy", i.e. their "ability to use appropriate numerical principles" [26], and fluency too. They prudently advise that "choice of technique could be made at the point of facilitation depending on the assessed numeracy and fluency of one's clients. Information concerning preference for expressing value judgements numerically or non-numerically can be gathered from the analyst's past experience with the client or can emerge during the first interaction with a client." ([25], p. 89), but this strategy is often not practicable. However, our own past experience of process consultation has shown that, in general, evaluators easily adhere to the MACBETH non-numerical questioning procedure in a significant number of real-world evaluation settings.

Using the MACBETH elicitation procedure - first individually in the Web-Delphi stage and afterwards collectively in the DC stage - responders were invited to make paired comparison judgements by choosing one of the seven MACBETH qualitative categories (no, or very weak, or weak, or moderate, or strong, or very strong, or extreme) of difference in attractiveness (or desirability, or value). It is important to note that in MACBETH the dialogue protocol is established in terms of difference judgements, fitting the principles of value difference measurement [27].

3. Application to the selection of an EMS

3.1. The context

GALP is an integrated energy operator with a presence across the whole oil and gas value chain as well as, increasingly, in renewable energy. Its operations are deployed over diverse geographical locations, in which South America and Africa play a prominent role, namely in exploration and production. GALP wanted to acquire an EMS solution to create an Integration Platform that effectively supports the Oil Value Chain and that will streamline processes, improve resource efficiency through automation and error reduction, and ease access to information to support better decision making and a better time-to-market [28]. In 2017, five information
technology companies received an Invitation to Tender and a Tender Programme defining how proposals of EMS solutions (and the required services for their implementation and integration) should be presented by the contenders. To evaluate the alternative proposals, a simple additive value-function model was constructed beforehand, in a process led by a project steering committee of seven members: two directors, two external consultants, and three IT collaborators (one of whom acted as the project manager). Three sets of evaluation criteria were included in the model: seven functional requirements, three technical ones, and five service criteria.

3.2. Web-Delphi for weighting

Making use of the Welphi platform (available at www.welphi.com) [29], the "Collaborative Value Modelling" framework was applied in the activities of constructing partial value functions and weighting criteria with MACBETH. Two Delphi processes with 73 invited participants from different GALP departments, and one final DC with the steering committee, took place. This section focuses on the weighting Delphi stage concerning the three technical criteria: "Integration", "Security" and "Mobility". Qualitative judgements were given to the swings from NEUTRAL to GOOD performance levels of the criteria (see Table 1). Specifically, the following question was asked in the Welphi platform: "(…) Suppose there is a hypothetical proposal with NEUTRAL performances in all criteria. What would be the importance of improving it from NEUTRAL to GOOD on each one of the criteria?" (see Figure 2). Participants provided their answers, individually and anonymously, by choosing on the PC screen one among the MACBETH judgmental categories (Figure 2 is a screenshot of the Welphi platform presented at the beginning of the first weighting round).

Table 1. Reference performance levels for weighting the technical criteria.

Integration
  GOOD: Out-of-the-box integration with all core Galp systems, being capable of developing the remaining ones using TIBCO and Informatica PowerCenter.
  NEUTRAL: Can develop a bidirectional integration with all Galp systems using TIBCO and Informatica PowerCenter.

Security
  GOOD: Provides single sign-on functionality, and database encryption ensures the protection of the data both in transit and at rest.
  NEUTRAL: Security based on groups and profiles at different levels of the platform (fields, functionalities, etc.), providing a detailed activity log.

Mobility
  GOOD: Solution provides access through mobile applications (smartphones, tablets, etc.).
  NEUTRAL: Provides access to the platform remotely using a browser-enabled client.
Fig. 2. Adapted screenshot of the Welphi platform presented at the beginning of the first round of the Delphi process for eliciting MACBETH judgements for weighting the three technical criteria. Participants could opt for one of the MACBETH categories or for not answering, by clicking on the corresponding circle; they could also insert comments. Each "eye-shaped" button provided participants with the description of the NEUTRAL and GOOD performance levels.
It is fundamental to emphasize that the weighting question was not about the "importance" of each criterion (doing this would incur the "most common critical mistake" [19]) but rather about the importance of a specific
swing (improvement of performance) on each criterion. To make this clear, note that the swing-weighting question is methodologically equivalent to asking for the difference in overall attractiveness between a hypothetical proposal with GOOD performance on one criterion, say j, and NEUTRAL performance on the other ones, and the all-NEUTRAL proposal referred to above. At the beginning of the second Delphi round, participants were presented with an anonymous summary of all participants' answers in the first round (together with the comments they might have given), to promote a form of "dialogue" that prompts reflection and leads participants to either confirm or revise their previous judgements - a participative action paramount for the creation of alignment. A similar process took place in a third and final round. The statistics of the answers in each of the three weighting rounds are presented in Table 2. The final majority judgements were: a consensual "very strong" importance for (the NEUTRAL to GOOD swing on) "Security", an 83% majority on "very strong" importance (with 17% on "strong") for "Integration", and a majority of 67% on "moderate" importance (with 33% on "strong") for "Mobility".

Table 2. Percentage of judgements of each MACBETH category given by the participants in the three Delphi weighting rounds for the three technical criteria.
Round       Criterion     No    Very weak  Weak  Moderate  Strong  Very strong  Extreme  Don't know / don't want to answer
1st round   Integration   0%    0%         0%    0%        9%      45%          45%      0%
            Security      0%    0%         0%    9%        27%     45%          18%      0%
            Mobility      0%    0%         9%    55%       36%     0%           0%       0%
2nd round   Integration   0%    0%         0%    0%        14%     86%          0%       0%
            Security      0%    0%         0%    0%        14%     71%          14%      0%
            Mobility      0%    0%         0%    43%       57%     0%           0%       0%
3rd round   Integration   0%    0%         0%    0%        17%     83%          0%       0%
            Security      0%    0%         0%    0%        0%      100%         0%       0%
            Mobility      0%    0%         0%    67%       33%     0%           0%       0%
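The round summaries in Table 2 amount to a simple per-category tally of the individual answers. A minimal sketch (the response list is hypothetical, chosen to reproduce the third-round "Mobility" row):

```python
from collections import Counter

CATEGORIES = ["no", "very weak", "weak", "moderate", "strong",
              "very strong", "extreme", "don't know"]

def round_summary(answers):
    """Percentage of responders choosing each MACBETH category (whole percent)."""
    counts = Counter(answers)
    return {c: round(100 * counts.get(c, 0) / len(answers)) for c in CATEGORIES}

# Hypothetical answers of six responders for "Mobility" in the third round:
mobility_round3 = ["moderate", "moderate", "strong", "moderate", "strong", "moderate"]
summary = round_summary(mobility_round3)
print(summary["moderate"], summary["strong"])  # 67 33
```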
3.3. Multicriteria decision conferencing for weighting

The collective weighting knowledge developed in the Delphi stage was the starting information for the final weighting of the three technical criteria by the steering committee in the DC stage, developed with the support of the M-MACBETH software [30] (available at www.mmacbeth.com). The MACBETH weighting judgements in the matrix at the left of Figure 3 illustrate the final judgements agreed by the steering committee (some of the judgements in the matrix were disguised to respect the confidentiality agreement with GALP). The last column of the matrix is filled in with qualitative swing-weighting judgements, in decreasing order of importance, compatible with the above conclusions of the Delphi stage. For example, the highlighted cell is filled in with "mod-strg" to indicate group hesitation between "moderate" and "strong" importance of improving from NEUTRAL to GOOD on "Mobility". Besides these judgements, the complete MACBETH weighting procedure also asks for paired comparisons between swings. For example, in Figure 3 the judgement "very weak" inserted in the cell of the matrix corresponding to the [ Security ] and [ Integration ] swings reveals the "very weak" importance given to the difference between the two swings. As before, there is an equivalent interpretation in terms of hypothetical proposals: the group could have been asked to judge the difference in overall
attractiveness between a hypothetical proposal [ Security ], with GOOD performance on "Security" and NEUTRAL both on "Integration" and "Mobility", and another one [ Integration ], with GOOD performance on "Integration" and NEUTRAL both on "Security" and "Mobility". Formally, with a common constant score difference (c) arbitrarily assigned to the increase from NEUTRAL to GOOD on each criterion j (for instance, c = 100), it is easy to see that the corresponding increase in overall preference is measured by the additive model as 100 times the weight of j. Therefore, for example, the "very weak" judgement between [ Security ] and [ Integration ] indicates a relative weight for "Security" only slightly higher than for "Integration". The M-MACBETH software automatically verifies the consistency of the matrix of judgements given (and, in case of inconsistency, displays suggestions to overcome it - see details in [31]). For the consistent matrix in Figure 3, MACBETH suggested the numerical scale of weights (displayed in histogram format at the right of Figure 3) compatible with the judgements given (the MACBETH scale is determined by mathematical programming and is unique - see [31, 32]). The suggested weights were discussed by the steering committee and the value modelling process moved forward when an agreement was finally achieved on the final relative weights for the criteria. To facilitate the discussion, the software offers a sensitivity analysis tool that plots the interval within which each weight can vary without violating the relations between the qualitative judgements in the matrix, allowing the user to adjust weights and immediately observe the respective implications in the overall scores.
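The formal point above (with c = 100, each NEUTRAL-to-GOOD swing adds 100 times the criterion's weight to the overall score) can be checked directly against the additive model; the weights below are hypothetical, not the confidential GALP values:

```python
# Check that the NEUTRAL-to-GOOD swing on one criterion is worth 100 * kj
# under the additive model, with v(NEUTRAL) = 0 and v(GOOD) = 100.
# Weights are hypothetical examples.

def overall(scores, weights):
    return sum(weights[c] * scores[c] for c in weights)

weights = {"Integration": 0.45, "Security": 0.40, "Mobility": 0.15}
all_neutral = {c: 0.0 for c in weights}
good_on_security = dict(all_neutral, Security=100.0)  # GOOD only on "Security"

swing_value = overall(good_on_security, weights) - overall(all_neutral, weights)
print(swing_value)  # 100 * 0.40 = 40.0
```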
Fig. 3. Screenshot of the main screen of the M-MACBETH software showing the matrix of qualitative weighting judgements and the respective criteria weights proposed by MACBETH. The name of a criterion presented in parentheses represents the respective swing.
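The sensitivity idea mentioned above can be mimicked numerically. The sketch below is a simplified analogue, not the M-MACBETH algorithm: it scans one criterion's weight (rescaling the others proportionally) and reports the interval in which one option keeps a higher overall score than another; all weights and scores are hypothetical:

```python
def rank_preserving_interval(weights, v_a, v_b, crit, steps=1000):
    """Interval of values for weights[crit] (other weights rescaled proportionally)
    in which option A keeps a strictly higher overall score than option B."""
    rest = sum(w for c, w in weights.items() if c != crit)
    keep = []
    for i in range(steps + 1):
        w = i / steps
        trial = {c: weights[c] * (1.0 - w) / rest for c in weights if c != crit}
        trial[crit] = w
        score_a = sum(trial[c] * v_a[c] for c in trial)
        score_b = sum(trial[c] * v_b[c] for c in trial)
        if score_a > score_b:
            keep.append(w)
    return (min(keep), max(keep)) if keep else None

weights = {"Integration": 0.45, "Security": 0.40, "Mobility": 0.15}  # hypothetical
v_a = {"Integration": 100.0, "Security": 0.0, "Mobility": 0.0}       # hypothetical scores
v_b = {"Integration": 0.0, "Security": 100.0, "Mobility": 0.0}

print(rank_preserving_interval(weights, v_a, v_b, "Integration"))
```

A remains preferred to B as long as the "Integration" weight stays above roughly 0.42 in this example; below that, B overtakes A.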
As an output of the combined Web-Delphi and decision conferencing processes, the bidders' classification that resulted from applying the developed value model (not shown for confidentiality reasons) was discussed, and the winning bid was selected by the steering committee.

4. Conclusions

In this work we aimed to take the "Collaborative Value Modelling" framework a step forward by extending the socio-technical participatory process to the incorporation of "dialogue" in corporate (large-group) settings. As described, it is not common to find in the literature descriptions of socio-technical processes developed for building multicriteria evaluation models, and this is especially true for corporate contexts. The possibility of exploring this topic in a real-world application provided us with first-hand experience regarding its acceptance by those involved in the decision-making process. The overall participation was good, with the company collaborators communicating that they felt positively involved in the entire decision process, which was not
a rule in the company until then. Every step in the decision process was reported to all the collaborators involved and contributed directly to decision-making by the steering committee. The fact that the winning bid derived directly from the decision process transmitted a very positive feeling to all the participants, including the steering committee and the procurement department members, and demonstrated the practical impact of the proposed approach. Lessons from this and other real-life applications of "Collaborative Value Modelling" in public and private contexts are being studied in current research under the ENERPHI project, which aims to inform the development of an integrated Welphi-MACBETH group decision support system that can be applied in several sectors, including energy settings.

Acknowledgements

The authors thank GALP ENERGIA for allowing the use in this paper of the company's EMS selection project, whose description in Section 3 is partially based on internal documents of the project and on a Master's thesis in Energy Engineering Management developed in collaboration between the company and IST [33]; and the ENERPHI project (http://enerphi.tecnico.ulisboa.pt/), funded by EIT InnoEnergy.

References

[1] Thorpe, E.K., Learn about what an Enterprise Management System is and how it can make large organisations more efficient, in ITPro. 2018: https://www.itpro.co.uk/business-operations/31331/what-is-an-enterprise-management-system.
[2] Essien, W., Business Policy and Participative Decision-Making. 2019, Bloomington, Indiana: AuthorHouse.
[3] Bohm, D., On Dialogue, L. Nichol, Editor. 1996, Routledge.
[4] Isaacs, W.N., Taking flight: Dialogue, collective thinking, and organizational learning. Organizational Dynamics, 1993. 22(2): p. 24-39.
[5] Gerard, G., Dialogue: Rediscover the Transforming Power of Conversation. 1998: John Wiley & Sons.
[6] Belton, V. and T.J.
Stewart, Multiple Criteria Decision Analysis. 2002, Boston, MA: Springer US.
[7] Köksalan, M.M., J. Wallenius, and S. Zionts, Multiple Criteria Decision Making: From Early History to the 21st Century. 2011: World Scientific.
[8] Watson, S.R. and D.M. Buede, Decision Synthesis: The Principles and Practice of Decision Analysis. 1987: Cambridge University Press.
[9] Parnell, G.S., et al., Handbook of Decision Analysis. Vol. 6. 2013: John Wiley & Sons.
[10] Vieira, A.C., M.D. Oliveira, and C.A. Bana e Costa, Enhancing knowledge construction processes within multicriteria decision analysis: The Collaborative Value Modelling framework. Omega, 2019.
[11] Linstone, H.A. and M. Turoff, Delphi: A brief look backward and forward. Technological Forecasting and Social Change, 2011. 78(9): p. 1712-1719.
[12] Phillips, L.D., Decision conferencing, in Advances in Decision Analysis: From Foundations to Applications. 2007, Cambridge University Press.
[13] Phillips, L.D. and C.A. Bana e Costa, Transparent prioritisation, budgeting and resource allocation with multi-criteria decision analysis and decision conferencing. Annals of Operations Research, 2007. 154(1): p. 51-68.
[14] Carroll, L., The Complete Works of Lewis Carroll, illustrations by J. Tenniel, introduction by A. Wollcott. 1939, New York: Modern Library.
[15] Raiffa, H., Decision Analysis. 1968, Reading, Mass.: Addison-Wesley.
[16] Keeney, R.L. and H. Raiffa, Decisions with Multiple Objectives: Preferences and Value Trade-Offs. 1993: Cambridge University Press.
[17] Kirkwood, C.W., Strategic Decision Making. 1997: Duxbury Press-Wadsworth.
[18] von Winterfeldt, D. and W. Edwards, Decision Analysis and Behavioral Research. 1986: Cambridge University Press.
[19] Keeney, R.L., Value-Focused Thinking: A Path to Creative Decisionmaking. 1992, Cambridge, Massachusetts: Harvard University Press.
[20] Bana e Costa, C.A. and E. Beinat, Model-structuring in public decision-aiding, in Working Paper LSEOR 05.79. 2005, London School of Economics and Political Science: London.
[21] Kirkwood, C.W., Strategic Decision Making: Multiobjective Decision Analysis with Spreadsheets. 1997, Belmont, California: Duxbury Press.
[22] Dyer, J.S., MAUT - multiattribute utility theory, in Multiple Criteria Decision Analysis, J. Figueira, S. Greco, and M. Ehrgott, Editors. 2005. p. 265-296.
[23] Bana e Costa, C.A., J.-M. De Corte, and J.-C. Vansnick, MACBETH. International Journal of Information Technology & Decision
Making, 2012. 11(02): p. 359-387.
[24] Bana e Costa, C.A., J.M. De Corte, and J.C. Vansnick, MACBETH (Measuring Attractiveness by a Categorical Based Evaluation Technique). Wiley Encyclopedia of Operations Research and Management Science, 2010.
[25] Fasolo, B. and C.A. Bana e Costa, Tailoring value elicitation to decision makers' numeracy and fluency: Expressing value judgments in numbers or words. Omega - The International Journal of Management Science, 2014. 44: p. 83-90.
[26] Woloshin, S., et al., Assessing values for health: Numeracy matters. Medical Decision Making, 2001. 21(5): p. 382-390.
[27] Roberts, F.S., Measurement Theory with Applications to Decisionmaking, Utility, and the Social Sciences. 1984: Cambridge University Press.
[28] GALP, Tender Programme, in MPDP - Market/Production Data Platform, G. Energia, Editor. 2017.
[29] Welphi. 2019; Available from: http://www.welphi.com/.
[30] M-MACBETH. 2017; Available from: http://m-macbeth.com/.
[31] Bana e Costa, C.A., J.M. De Corte, and J.C. Vansnick, On the mathematical foundations of MACBETH, in Multiple Criteria Decision Analysis: State of the Art Surveys, J. Figueira, S. Greco, and M. Ehrgott, Editors. 2005, Springer Science & Business Media. p. 409-442.
[32] Bana e Costa, C.A., J.-M. De Corte, and J.-C. Vansnick, On the mathematical foundation of MACBETH. Multiple Criteria Decision Analysis: State of the Art Surveys, 2005: p. 409-437.
[33] Nóbrega, M., Industrial processes integration, MSc thesis in Energy Engineering and Management. 2018, Instituto Superior Técnico, Universidade de Lisboa.