An analytical framework for capacity development in EIA — The case of Yemen

Louise van Loon a, Peter P.J. Driessen a,⁎, Arend Kolhoff b, Hens A.C. Runhaar a

a Copernicus Institute for Sustainable Development and Innovation, Utrecht University, P.O. Box 80115, 3508 TC Utrecht, The Netherlands
b Netherlands Commission for Environmental Assessment, P.O. Box 2345, 3500 GH Utrecht, The Netherlands
Article history: Received 26 November 2008; Received in revised form 21 May 2009; Accepted 8 June 2009; Available online 1 July 2009.

Keywords: Capacity development; EA systems; Performance; Analytical framework; Yemen
Abstract

Most countries worldwide nowadays apply Environmental Assessment (EA) as an ex ante tool to evaluate environmental impacts of policies, plans, programmes, and projects. However, the application and performance of EA differ significantly. Scientific analysis of how EA performs mainly focuses on two levels: the micro (or project) level and the macro (or system) level. Macro level analysis usually focuses on institutions for EA and the organisation of stakeholder interaction in EA. This article proposes a more comprehensive framework for analysing EA systems that combines other approaches with a capacity approach and an explicit consideration of the context in which EA systems are developed and performed. In order to illustrate the value of our framework, we apply it to the Republic of Yemen, where over the last decades many EA capacity development programmes have been executed; however, EA performance has not substantially improved. The Yemen case study illustrates that the capacity development approach allows an understanding of the historical process, the stakeholders, the knowledge component, and the material and technical aspects of EA, but perhaps more important is a systemic understanding of the outcomes: problems are not isolated, but influence and even maintain each other. In addition, by taking into account the context characteristics, our framework allows for the assessment of the feasibility of capacity development programmes that aim at improving EA system performance.
1. Introduction

Since its introduction in the United States in 1969, Environmental Assessment (EA) has developed into a widely accepted policy tool. Nowadays, most countries worldwide apply EA as a preventive tool to evaluate the environmental impacts of policies, plans, programmes, and projects. Yet, while its importance is widely recognised, the methods of application and the achievements of EA differ. Scientific analysis of how EA performs in practice mainly focuses on two levels: the micro (or project) level and the macro (or system) level (Sadler, 2004; Retief, 2007b: 444). Micro level analysis (such as the contribution of EA to reducing adverse environmental impacts of wastewater treatment plants, oil refineries, and the construction of roads and bridges) is most commonly executed, as it provides detailed and practical insights that are of great relevance for understanding EA application. Micro level studies can reveal the specific features and strengths and weaknesses of EA practices in unique contexts (e.g. Runhaar and Driessen, 2007). Consequently, micro level research also results in concrete recommendations. Macro level analysis, on the other
hand, takes a different perspective and approaches EA as a network of stakeholders, activities, and institutions that together determine the performance of EA as a policy tool. Macro level analysis can reflect on ‘enabling conditions’ and ‘barriers’ to the implementation of EA in terms of the policy, legal, and institutional contexts (Buckley, 2000; Partidario and Clark, 2000; Stinchcombe and Gibson, 2001; in: Retief, 2007b: 444). As a result, macro level analysis leads to conclusions that do not reflect specific cases – as micro level analyses do – but instead provide insight into the system level of EA. Since each country has different EA legislation and interpretations, understanding the performance (application, functioning, effectiveness and/or quality) of EA requires country specific interpretation of the performance concept. A country with severe social problems and a weak political system may define EA legislation less strictly than a fully institutionalised democracy. Nonetheless, international literature has made many attempts to formulate generic EA performance indicators. As derived from literature (Fisher and Gazzola, 2006; Retief, 2007a; Androulidakis and Karakassis, 2006), EA performance in this article is defined as the performance of the formal EA system, i.e. the way in which it is described in laws, policies, and plans, as well as its formal actor configuration and the resources available for EA systems to perform as envisaged. EA performance in this context indicates the provision of information on anticipated or revealed environmental impacts that is timely, valid, comprehensive in terms of environmental impacts of proposed activities (preferably multiple variants), and that includes proposed measures for the mitigation of
possible environmental impacts. Excluded from this article is the practical performance of EA, which becomes evident only after analysing the application of EA in specific cases (which requires micro level analysis). The system approach is more fundamental than project analysis because it entails unravelling the functioning of EA, for example by analysing the different steps taken during EA, which together need to contribute to well-informed decision-making. The system approach may also elaborate on the formal network of stakeholders involved in EA, and thereby analyses these stakeholders' role assignments, division of responsibilities, and power relations, which contribute to the performance of EA. This does not necessarily provide insight into the practical application of the system, but rather into its general or formal features. In the EA literature, system analysis is usually restricted to two types of approaches: process analysis and stakeholder analysis (e.g. Samarakoon and Rowan, 2007; ADB, 1997; Choi and Kwon, 2006). A shortcoming of both approaches is that they only partially analyse the EA system. An EA system is the totality of stakeholders, activities, and institutions within their specific configuration; analysing only processes or stakeholders is not sufficient. Even combined, they fall short of analysing the whole EA system. Other relevant aspects are, for example, the availability of material and financial resources, and the role of knowledge. Not only is the EA system more than the sum of the EA process and the stakeholders involved, but the specific societal context is also crucial for EA performance and its possibilities for interventions (Cherp, 2001). It is particularly this latter aspect that restricts many studies, because looking ahead requires understanding the ways in which the current situation developed. In other words: how was EA performance influenced by the past? This thinking in time (taking into account the past while assessing the present) is crucial when referring to a concept that is not the result of a one-day investment. Analysing EA without considering how the situation was reached restricts the recommendations that can be developed. Thus, analysing EA at the macro level requires insight into stakeholders, activities, institutions and their relations, resources, and their development over time. In this paper we propose a system analysis through analysing 'EA system capacity' and 'capacity development' as a more comprehensive perspective. This implies that recommendations for improvement of EA system performance relate to capacity development. This article aims first at developing a model for analysing the capacity of EA, and second at testing the model by applying it to the Republic of Yemen. The case of Yemen is particularly interesting in this context as it experiences weak EA performance, even though capacity development has been a central development strategy there for the past twenty years. This paper is based upon two main research strategies. First, a model for assessing EA capacity is developed by means of an extensive literature review. Second, the model is applied in a case study on the Republic of Yemen. Aside from literature study in general (scientific literature as well as policy documents), an important method here was an archive study at government agencies and other relevant organisations.
The archives contain financial reports, proposals, evaluations, recommendations, and progress reports, as well as specific documents such as training course reports, procurement reports, and mission reports. For assessing the current capacity in Yemen, two additional methods were applied. Semi-structured interviews were conducted with 22 individuals from ministries, authorities, research institutes, consultants, environmental non-governmental organisations (NGOs), representatives of private actors, and several international donors. To overcome contradictions and to strengthen findings from interviews, a validation workshop was organised during which a group of stakeholders – including many interviewees – were confronted with the findings of the study. This helped to determine whether the findings were correct and complete, and increased their validity. This research strategy means that we primarily assessed EA system performance and explanatory factors from the perceptions of stakeholders, rather than from a predefined set of evaluation criteria.
2. Opening Pandora's Box: introducing capacity development for EA

For two reasons, this article proposes 'capacity development' as an approach for analysing EA on the macro level. First, capacity development is an inclusive concept that explicitly includes development aspects as well, and so provides a more complete and dynamic analytical framework. Second, in many developing countries, capacity development has been the central approach for introducing EA, while at the same time the EA performance in developing countries falls far behind that of developed countries. Explanatory factors for this include weak institutional provisions, poor integration of EA into other decision-making processes, absence of quantitative thresholds and criteria, lack of public participation, and absence of monitoring and enforcement. Possibly, the low EA performance in these developing countries can be explained by means of analysing capacity development. Bringing capacity development to the forefront appears to be opening Pandora's Box: instead of bringing clarity and guidance, it brings diffusion, contradiction, and gaps. The reason is that capacity development is defined and conceptualised differently in the literature (e.g. Kirchhoff, 2006; Strigl, 2003; LaFond et al., 2002; Sagar and VanDeveer, 2005; Potter and Brough, 2004), resulting in an unclear description of the concept. Filmer et al. (2000), for example, define capacity development solely as institutional capacity development, or, as Pielemeier and Salinas-Goytia (1999) refer to it, institution-building. Others regard capacity development as a synonym for training (Bower, 2000, in: Potter and Brough, 2004). Another related subject, though with a different approach, is institutional capacity, developed by the Organisation for Economic Co-operation and Development (OECD). Five coherent levels of institutional capacity are distinguished: the individual level, the organisational level, the inter-organisational level, the public sector level, and the level of social values and practice in society (Driessen and Leroy, 2007: 202–203). The OECD approach partly overlaps with the approach proposed in this article, but takes a different perspective (starting from the institutional angle). Excluded from the OECD approach are, for example, the framework of rules (the non-human source of institutions such as laws and policies) and how the five levels interfere with one another. So, whereas multiple approaches to capacity development exist, the common weaknesses of existing approaches are that they lack a comprehensive operationalisation (i.e. a translation into concrete and measurable variables) and do not consider the relational perspective of capacity. The concept of capacity building emerged from the context of development cooperation. It first focused on 'building' and 'strengthening' of capacity, and in later years the 'development' metaphor was adopted. Capacity building and strengthening did not result in long-term effectiveness and problem ownership in the recipient countries, primarily because the 'building' metaphor suggested a process starting with a plain surface and proposed a step-by-step erection of a new structure based upon a preconceived design (OECD, 2003: 9). Moreover, the support methods – mostly technical – often were not consistent with the broader political, economic, and social setting of the country. The 'development' concept does imply this broader focus and includes empowerment, culture, social capital, and an enabling environment.
It focuses on the possibilities for supporting, catalysing, adapting, maintaining, and creating capacity in the recipient country (OECD, 2003: 10). Ideally, this ‘development’ approach leads to internalisation of capacity development and eventually sustainable development. In spite of its diffuse and vague character, capacity development has taken a prominent position in international environmental development due to explicitly formulated statements in chapter 37 of Agenda 21 (United Nations Conference on Environment and Development, UNCED). Capacity development is the objective of many development programmes and a component of most others. One might wonder how it is possible that such a vague and unclear concept still became a leading
principle. Certainly everybody agrees that it is important, but due to the concept's lack of clarity and guidance, it often does not lead to the concrete and accurate actions that result in desired achievements.

3. Operationalisation of capacity for EA

Stepping away from vague and diffuse definitions, this article proposes a systemic and inclusive way of conceptualising capacity development for EA. For this purpose, two categorisations of capacity development are combined: sub-capacities and hierarchical dimensions. Sub-capacities define what specific type of capacity is being developed. This categorisation divides capacity development as an umbrella concept into separate and measurable entities: institutional capacity, organisational capacity, human capacity, scientific capacity, technical capacity, and resource capacity (Kirchhoff, 2006: 8–9). The main strength of this categorisation is that a clear distinction allows for the identification of specific performance in each sub-capacity separately. A weakness, however, is that focusing on sub-capacities limits understanding of the inter-relational context of capacity development: developing a certain sub-capacity is directly linked to other sub-capacities. For example, a person may have certain skills but also requires adequate equipment to execute them. Similarly, without communication techniques, coordination between organisations is difficult. Therefore, a hierarchical perspective derived from Potter and Brough (2004) has been added to this study to capture the coherence of the sub-capacities. In this pyramid, the different levels depend and build on the effectiveness of other levels (Potter and Brough, 2004: 339). It may seem as if the lower part of the pyramid is a prerequisite for the upper parts, but this is not the case. The levels do not indicate a hierarchy of importance but refer to the complexity and time dimensions of the capacity. The upper levels are technical and 'easier' to implement in the short term, while the lower dimensions are socio-culturally grounded and much 'harder' to implement. For example, purchasing computers is more feasible than establishing an effective and efficient policy organisation. Yet, as stated above, there is not an 'order' or stepwise process from bottom to top or the other way around (Fig. 1). Each sub-capacity is specified by means of assigning sub-components, which are aspects to be taken into consideration in each sub-capacity. In our study, a total of 56 sub-components were identified, as summarised in Table 1. These sub-components were identified through a literature survey (see below; for more details, see Van Loon, 2008).
3.1. Institutional capacity

The first and most fundamental sub-capacity is found in institutions, the formal sources of authority that provide the structures for individuals, households, businesses, organisations, social groups, and any other entity in society. Institutional capacity is often referred to as 'the rules of the game' (Lusthaus et al., 2002: 24). Two types of rules are distinguished: EA-specific rules, and the wider institutional framework in which EA is embedded. EA-specific rules refer to the leading principles of EA, for instance the execution of screening by means of three categorical lists with project descriptions supported by quantitative thresholds. The wider institutional framework includes governance ideals, such as transparency.
3.2. Organisational capacity

Organisations are made up of groups of people working together towards a shared objective, such as making a profit or serving particular collective goals. Organisations are complex because they consist of social structures in which there are codes of behaviour, rituals, power, and different formal and informal rules, which some people might adhere to better than others. Moreover, organisations are increasingly linked into a network of other organisations, for example within the public sector. The EA system is made up of a network of organisations that is continuously changing, re-organising, and developing. It is at this fundamental organisational level that the central courses of action are determined and planned, through missions and strategies. Less concrete aspects of organisations are also defined here, such as their management styles (OECD, 2006: 13). Our two main categories of organisational capacity are mission, vision, values, and strategies on the one hand, and leadership, management, and culture on the other. To illustrate: optimally, the organisation's mission is formulated clearly and realistically, and is meaningful in its context. A vision can be, for example, to contribute to an environmentally sustainable world, within which EA may be a central policy tool. The strategy may prioritise the most severe environmental challenges to be addressed within this mission. This, together with the organisational leadership, management, and culture, contributes to EA performance because organisational capacity provides the clarity, priorities, and guidance that EA needs to be supported by. Organisational capacity also provides the roles and responsibilities for the group of actors that execute EA and thereby contribute to its performance.
Fig. 1. Analytical framework of capacity development for EA.
Table 1
Division of capacity development into sub-capacities and sub-components ('points of thought').

Institutional capacity
• Legal provisions for EIA (definitions, priorities, roles, and responsibilities)
• Screening: systematic and specific categories
• Scoping: systematic, quality and independence
• Preparation of the EIA report
• Reviewing: quality and independence
• Decision-making and follow-up
• Transparency of information
• Accountability of formal decisions
• Regulatory quality
• Coherence

Organisational capacity
• Mission, vision and values: formulation
• Mission, vision and values: level of recentness and dynamics
• Mission, vision and values: monitoring of progress towards goals
• Strategy: formulation of objectives and targets
• Strategy: level of recentness and dynamics of strategy
• Strategy: monitoring of progress towards objectives and targets
• Leadership: level of trust by staff
• Leadership: level of stimulation
• Leadership: power balance between staff and leader(s)
• Management style: high democratic–low democratic
• External structure: the level of hierarchy
• Internal management structure: centralised–regionalised
• Management culture: the level of power distance

Human capacity
• Staff sufficiency: quantity
• Staff sufficiency: skills
• Staff sufficiency: personal effectiveness
• Staff development
• General public: ability and willingness to participate
• Organised civil society: attitude and commitment
• Organised civil society: accountability
• Media: meaningful, reliable and credible
• Knowledge actors: inclusiveness and accountability
• Private sector: level of obedience to EIA requirements
• Private sector: level of commitment to environmental business policy

Scientific capacity
• Availability of science for officials: accessibility
• Participation in science: publications
• Scientific network: sharing and cooperation
• Applied knowledge in policy: usefulness for environmental policy

Technical capacity
• Communication technology
• Digital environmental data system
• EIA procedure: use of methods

Resource capacity
• Financial capacity within state actors
• Financial capacity within private sector
• Financial capacity within knowledge actors
• Financial capacity within NGOs
• Office resources
• EIA execution resources

Source: Van Loon (2008).
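To make the operationalisation in Table 1 more tangible, the sketch below shows one possible way of recording sub-component ratings and aggregating them into a qualitative profile per sub-capacity. It is a minimal illustration only, not part of the framework itself: the sub-capacity names and example sub-components are taken from Table 1, but the three-point rating scale, the hypothetical ratings, and the averaging rule are illustrative assumptions.

```python
# Minimal sketch (not from the article): recording sub-capacity ratings for an EA system.
# Sub-capacity names and example sub-components follow Table 1; the 0-2 rating scale
# ("weak"/"moderate"/"developed") and the averaging rule are illustrative assumptions.

from statistics import mean

SCALE = {0: "weak", 1: "moderate", 2: "developed"}

# A subset of Table 1, rated hypothetically for a fictitious EA system.
ratings = {
    "institutional": {
        "legal provisions for EIA": 1,
        "screening: systematic and specific categories": 0,
        "reviewing: quality and independence": 1,
    },
    "organisational": {
        "mission, vision and values: formulation": 0,
        "leadership: level of trust by staff": 1,
    },
    "human": {
        "staff sufficiency: quantity": 0,
        "staff sufficiency: skills": 1,
    },
    "scientific": {
        "availability of science for officials: accessibility": 0,
    },
    "technical": {
        "communication technology": 2,
    },
    "resource": {
        "financial capacity within state actors": 0,
    },
}

def profile(assessment):
    """Average the sub-component ratings per sub-capacity and label the result."""
    return {
        sub_capacity: SCALE[round(mean(components.values()))]
        for sub_capacity, components in assessment.items()
    }

if __name__ == "__main__":
    for sub_capacity, label in profile(ratings).items():
        print(f"{sub_capacity:15s} {label}")
```

A representation of this kind also keeps the relational argument of the framework visible in practice: a 'developed' rating for, say, technical capacity says little on its own if the institutional and human sub-capacities it is supposed to serve remain weak.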
3.3. Human capacity

Our third sub-capacity is human capacity. Good policies will not implement themselves: the performance of individuals is the basis for the success of any action or policy (OECD, 2003: 12). However, it is crucial to know whether or not these individuals are motivated, whether or not they are competent in their tasks, what their attitudes are, and what their incentives are. Unavoidably, in assessing human capacity a degree of generalisation is applied, i.e. the assessment will refer to the performance of a group and not to single individuals. This may neglect a few 'champions' or 'rotten apples' in a group. We distinguish between three EA stakeholder groups, in which individuals have different aspects determining their capacity. In the first stakeholder group, state actors, the human capacity of those working within the authority responsible for EA is important. Two important aspects are that these individuals ought to be sufficient in numbers and sufficient in qualification. Secondly, human capacity within the civil society sphere is important for EA. More specifically, four groups are distinguished. The general
public, environmental NGOs, the media, and knowledge institutes (such as universities) each have their own role and potential in EA, and can provide a critical and scientific view on the quality and scope of the assessment. Third, the private sector is affected by EA when it initiates or expands a project that potentially has negative effects on the environment, and therefore is required to conduct an EA. Two types of aspects influence the human capacity of the private sector: obedience and commitment to EA.

3.4. Scientific capacity

Fourth, the importance of science for EA is obvious, as EA depends upon scientific methods and data for its assessment of risks, inventory of alternatives, and formulation of predictions. Accessibility, participation, cooperation, and the explicit linkage of science with policy are the four crucial aspects of scientific capacity.

3.5. Technical capacity

Daily activities greatly depend upon processes, mechanisms, and technology. Often, technical aspects are underestimated or taken for granted, although there is evidence that a lack of technical capacity can have significant consequences (e.g. Sagar and VanDeveer, 2005). The availability of Information and Communication Technology (ICT) applications, the maintenance and storage of data and information, and specific EA execution methods are the central aspects to be taken into consideration in technical capacity.

3.6. Resource capacity

Finally, resources are all the material and virtual stocks needed for EA. These can be categorised into two types: monetary and non-monetary resources. Monetary resources differ between stakeholder groups. Non-monetary resources can be measurement equipment, cars to visit sites, and office resources, for example.

All sub-capacities influence EA performance in different ways. For example, the reviewing phase of EA is important because it determines the quality of the input for decision-making. Proper reviewing enhances EA performance because the information supply regarding the future activity is complete and the decision is truly based upon good information. However, with poor review teams (perhaps due to staffing problems or knowledge shortages), the fact that the EA study may have neglected some environmental aspects or feasible alternatives does not become evident. As a result, EA performance is negatively influenced because the decision-makers are not fully informed about potential environmental effects and alternatives. A second example is the ability and willingness of the general public to participate in EA. The general public's commitment and problem ownership regarding the environment make them more likely to participate in the EA process. Public involvement is (at least in Western countries) perceived as an important contributing factor to EA performance as it enhances the interactive and inclusive way in which the assessment is conducted: the more stakeholders involved, the more inclusive and informative the process will be, and the better it contributes to informed decision-making. This is, for example, achieved by means of using creativity in finding alternatives, and putting pressure on decision-making outcomes (i.e. making sure EA outcomes are incorporated in the decision). A final example is the facilitating contribution to EA performance by ICT applications. EA is inherently a multiplayer setting in which at least the proponent of the activity, the review authority, and the decision-making authority need to cooperate, coordinate, and communicate.
However, organisations also internally depend upon colleagues and other departments. ICT is part of the technical capacity and helps the EA process to be a multiplayer setting in which knowledge and opinions are shared. ICT ‘oils the
engine of EA’ and improves performance in terms of process (rather than content). Once again, capacity development refers to the totality of sub-capacities (further divided into sub-components) within the pyramid of hierarchy discussed above. By systemically applying this model in its totality and not only addressing individual components, a better understanding of the EA system can be obtained. Table 1 is the result of an extensive literature study. Different sources appear to elaborate on parts of the capacity development puzzle. Combining these sources has resulted in the current analytical model. Fisher and Gazzola (2006), however, note that the international professional literature on EA effectiveness criteria has been developed based on the practices and experiences of only a selected number of countries. The bulk of the international literature represents the view of Western countries rather than developing countries. This may be explained by the fact that EA was introduced in developed countries much earlier than in developing countries, and the same holds for the capacity development literature. As a result, developing countries have not contributed equally to the international literature on these subjects. Therefore, it should be taken into consideration that the analytical model proposed in this article is in a sense biased due to the availability of mainly Western-oriented literature. In developing countries, different or fewer sub-capacities may be important (or feasible) for EA systems to perform as defined in the introduction to this article.

4. Case study: insights from the Republic of Yemen in Environmental Impact Assessment (EIA)

Until now we have discussed EA in general terms. In practice, two types of assessment can be distinguished: project level EA (or Environmental Impact Assessment — EIA) and EA at the level of policies, plans and programmes (Strategic Environmental Assessment — SEA). The Yemeni EA system is limited to provisions for EIA. Between 1988 and 2008 Yemen was subject to predesigned capacity development projects and programmes as part of foreign development support. The main partners in the field of environment are the Netherlands, Germany, the World Bank, the United Nations Development Programme (UNDP), and recently also the United Kingdom. An archive study revealed that while the different development programmes made an effort to develop EIA capacity, the overall result in 2008 was still weak (Van Loon, 2008). Other EIA assessment studies by the World Bank (2000), OECD (2006), and others (e.g. METAP, 2001; Kolhoff, 2008; Directorate General International Cooperation of the Netherlands (DGIS), 2008) confirm that EIA faces major challenges and shortcomings. Often mentioned causes for low performance are the non-application of good practice scoping, limited consultant capacity, almost non-existent inspection and enforcement of EIA, and non-participation of civil society in EIA.

4.1. Development of capacity for EIA

Since 1988, four phases of capacity development in Yemen can be distinguished, during which different problem definitions and development strategies were adopted, with international consultants as leaders. At the beginning of the programme in 1988 the strategy proposed by the international consultants was to develop the fundamental sub-capacities first. In the first period (1988–1992) the main problem, according to consultants and the Yemeni government, was the non-existence of an environmental institutional framework and organisational capacity.
Objectives included the establishment of environmental laws and policies, as well as development of internal governmental management practice. While environmental problems were considered severe (especially water issues and waste), it was argued that institutional and organisational sub-capacities needed to be prioritised. The results in 1992 were disappointing for mainly two reasons. First, external causes
delayed and even hampered capacity development: the former republics of North and South Yemen were united in 1990, and the Gulf Crisis and a civil war occurred. However, problems were also faced in the execution of capacity development activities. Project activities were ad hoc rather than structurally organised. Scientific studies were restricted due to the limited manpower available. As a result, limited data was available for setting environmental standards (institutional capacity). In the second phase (1993–early 1997), efforts and activities were expanded to include development in all sub-capacities as the new strategy. Efforts for institutional and organisational capacity were continued, and new efforts were undertaken to develop human capacity (awareness-raising, education among the general public, staff expansion and training), expand scientific studies, and procure materials. Some positive developments characterise the second phase, for example the introduction of EIA, the establishment of an environmental law, and the formulation of EIA policy. However, most activities were overstretched and fragmented: not only were too many projects executed at the same time without sufficient manpower, but these projects were also conducted in isolation from each other. This was problematic because different projects did not enhance one another, which would have been possible in an aligned configuration. For example, the UNDP support in writing the National Environmental Action Plan (institutional development) was not aligned to the Dutch organisational support to the Environmental Protection Authority (EPA). Alignment of both support projects could have facilitated an action plan more adapted to the organisational capacity present, and organisational capacity might have been able to better anticipate options and restrictions posed in the action plan. In response to the overstretched and fragmented capacity development activities, the third phase (late 1997–early 2001) attempted to overcome these problems by introducing a new strategy: the process oriented approach. Capacity development would focus only on specific core tasks, selected for their relevance and feasibility. These core tasks covered all sub-capacities, but only partially. For example, as part of scientific capacity the establishment of environmental standards (to serve the institutional capacity) was a core activity. Other core tasks were implementing and enforcing the National Environmental Action Plan (institutional capacity), and training governmental officials (human capacity). Whereas some achievements were made, such as the increased involvement of stakeholders in EIA, the overall result was poor. The causes of the weak development of all sub-capacities are not fully visible, but a growing problem was the donor community's suspicion about the commitment and trustworthiness of Yemeni officials. After thirteen years of capacity development, little actual capacity had been established. The poor outcome, as well as the doubts of the international community regarding the commitment and trustworthiness of the Yemeni officials, had damaged the development cooperation relations. Instead of a fourth 'full' phase, an interim phase with a set of projects was initiated while the future of the support was discussed among stakeholders. The specific projects were executed rather successfully (scientific development was significantly strengthened), but were incapable of improving the whole EIA system.
In 2004 the interim phase was stopped, and the continuation of support was discussed for some time. Eventually, in 2008, an EIA capacity development programme was officially started. Looking back at twenty years of capacity development activities for EIA in Yemen, some trends can be identified. The first major trend relates to the legal position of the environmental authority. For different reasons (disputes and governmental reforms), the environmental authority has had four different positions in twenty years. On the one hand this has enabled the increasingly independent position of the authority. On the other hand, it has not supported a stable and visible authority, and the content quality of the authority has improved only minimally. Content quality refers to the quality of the authority as a professional, well-managed organisation. A positive trend on the
other hand, which is more directly related to capacity development, is the changing strategy applied in the past. A continuous attempt to adapt to the problems and needs of the present can be seen. This iterative capacity development indicates that a learning mechanism has been present, or at the very least that there were some attempts to learn from the past.

4.2. Assessment of current capacity

An assessment of the current EIA system in Yemen was executed by means of a literature study, interviews, and a validation workshop. The sub-capacities and sub-components were analysed, and this allows for an overall judgement of the EIA system. Assessing the EIA system in Yemen – combining historical insights with current ones – reveals that, according to Yemeni interviewees, three main problems are causing the overall weak EIA performance in Yemen. For each problem a short description of the current practice is given, and the current perception of interviewees regarding the contributions and potential interventions for the future is added. It must be emphasised that the latter statements are based solely upon the perceptions of the interviewees and are not definitive, as contextual factors are not completely incorporated in them. They may, however, provide a line of thought for further research.

4.2.1. Weak EIA institutional capacity

Multiple aspects of the institutional capacity are not well developed, and four weaknesses in particular stand out. First, the formal content of the Environmental Impact Statement (EIS) does not include the positioning of the EIA in its wider institutional context, a monitoring plan, or the role/input of public participation. The latter is absent in the law: public participation is formally not included in the law. Although the EIA policy mentions the possibility, no formal timeframes or rules are formulated for public consultation and participation. In Western literature this is perceived as a hindrance to EIA performance because the input of civil society contributes to a comprehensive understanding of impacts, as people are able to express their concerns and opinions about future impacts of activities. Nevertheless, as we will elaborate in the reflection section, a discussion exists regarding the absolute necessity of public participation for EIA performance. The second weakness is that, although screening is based upon three lists of projects, it does not include many quantitative thresholds. The majority of thresholds are defined in terms of project scale: projects are either small scale or large scale. This consequently allows for interpretation of the thresholds. For example, the screening list obliges only 'large scale' wastewater treatment plants to perform an EIA, and lacks a threshold in cubic metres or number of households. The Sana'a wastewater treatment plant processes 200,000 m³/week. In the Netherlands an EIA for wastewater treatment is obligatory when the plant has a capacity of 150,000 population equivalent or more. Sana'a city currently has more than a million inhabitants, yet no EIA was conducted for the plant in Sana'a. Thirdly, scoping is not formulated in the environmental law, and the EIA policy states that screening, if necessary, needs to be supported by scoping guidelines. No clear methods are formulated for scoping, besides the provision of roles (EPA and competent authority are responsible) and some mention of external expert support. Public participation during scoping is not obligatory. Fourth and finally, the EIA procedure is insufficiently aligned to the decision-making processes of line ministries. Several examples exist of cases in which ministries do not prepare EIA reports for their projects or EIA reports are submitted when the ministry has already licensed the activity. Ministries tend to have different attitudes towards EIA: especially the powerful Ministry of Public Works and the Ministry of Oil and Minerals are perceived as being not very cooperative regarding EIA. Overall, the weak institutional capacity is problematic for EIA performance because without a proper institutional basis of EIA, well-informed decision-making is not achieved. The interpretable screening lists, for example, may result in projects being excluded from EIA despite significant environmental impacts. The information about such projects would then be incomplete and subsequent decision-making poorly informed. Also, laws and policies define the actor configurations of EIA, i.e. which roles and responsibilities actors have. If the environmental authority is not obliged to execute scoping, EIA may take wrong priorities and focus points, which again does not lead to successful EIA. Although many institutional challenges exist, some positive developments have occurred. The environmental authority has established a Memorandum of Understanding (MoU) with the General Investment Authority (GIA), in which the submission of all EIAs and timely reviewing is agreed upon. This has most significantly resulted in the inclusion of all industrial (and environmentally high-impact) activities in EIA. Also, the EPA has close cooperation and consultation with the water authority. Finally, some signs of potential institutional change are emerging. Some activities have started that can decrease institutional problems. Environmental directives and a revised version of the environmental law are currently being developed. Fundamental institutional changes are not expected, but specific attention is being paid to enforcement mechanisms and new instruments for the environmental authority to enhance the follow-up of EIA.
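To illustrate the screening weakness discussed in this subsection, the sketch below contrasts a quantitative screening rule with the interpretable, scale-based rule that the Yemeni lists rely on. Only the 150,000 population equivalent (p.e.) threshold is taken from the text (the Dutch criterion); the function names, the assumed capacity figure for a plant serving roughly one million inhabitants, and the 'large scale' flag are hypothetical and serve purely as an illustration.

```python
# Illustrative sketch only (not Yemeni or Dutch law): two styles of screening rule
# for a wastewater treatment plant. The 150,000 population equivalent (p.e.)
# threshold is the Dutch criterion cited in the text; all other values and names
# are hypothetical.

QUANTITATIVE_THRESHOLD_PE = 150_000  # Dutch EIA threshold for wastewater treatment

def eia_required_quantitative(capacity_pe: int) -> bool:
    """EIA is required once the design capacity reaches the numeric threshold."""
    return capacity_pe >= QUANTITATIVE_THRESHOLD_PE

def eia_required_qualitative(labelled_large_scale: bool) -> bool:
    """EIA is required only if the project is labelled 'large scale', the kind of
    interpretable criterion the Yemeni screening lists rely on."""
    return labelled_large_scale

# A plant serving a city of roughly one million inhabitants would clearly exceed a
# 150,000 p.e. threshold (assuming capacity on the order of the population served)...
print(eia_required_quantitative(capacity_pe=1_000_000))      # True: EIA required
# ...yet the same project can escape EIA under the qualitative rule simply by not
# being classified as 'large scale'.
print(eia_required_qualitative(labelled_large_scale=False))  # False: no EIA
```

The point is not the particular numbers but that a numeric threshold removes the room for interpretation that currently allows projects such as the Sana'a plant to be screened out.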
4.2.2. Shortage of internal governmental organisational capacity

The central authority responsible for EIA faces internal organisational problems. Guidance of the staff by means of leadership and clear statements of mission, vision, and strategies has not fully developed. Strategic documents, policies, and plans are outdated (if present) and insufficiently specific. To illustrate, the authority envisions that EIA contributes to sustainable development. However, sustainable development is not defined, nor is it clarified how EIA is linked to sustainable development. Leadership requires improvement because the organisational structure of the EPA is rather horizontal, implying that teamwork is more prominent than hierarchical leadership. On the one hand this is positive because it implies a cooperative and accessible culture, and all staff have respect for their leaders. However, the staff also feel that too little guidance is provided when formulating and executing the tasks of the authority. Other problems directly linked to leadership are that there are few management meetings and little financial incentive to encourage individual performance. Finally, an organisational culture based upon mistrust, both between individuals and between departments, is perceived. This is partly a leadership problem, but it must be added that the broader societal culture influences this. In the past, Yemen was a tribal society in which people were used to handling problems within their tribe without cooperation with others, and permanently mistrusted and were hostile to other people. As a result, the organisational culture in Yemen strives for clear and separated responsibilities, whereas environmental governance is inherently dependent upon cooperation between multiple people, departments, organisations, and sectors. However, leadership may (partly) overcome this by creating a more transparent organisation. Job motivation is not highly encouraged by the organisation. Incentives other than appreciation are absent. This, combined with low salaries, fails to provide financial satisfaction and does not lead to job maturity: people working within governments in general tend to change jobs and start working in the private sector, or have multiple jobs. The consequences of this for EIA performance are critical. The authority is the central organisation responsible for coordinating the EIA procedure; it has to oversee the roles and responsibilities of all involved actors, to assure timely management, to oversee the quality of the EIS, to pass the EIS and review outcomes to the decision-maker, and to provide involved actors with feedback on the results. A possible problem may be that scoping outcomes are not passed on to other actors, which results in the neglect of important environmental aspects during the preparation of the EIS. This consequently also
affects the impact identification and prediction, as well as the study of alternatives. Given these insights, the feasibility of interventions in this area is perceived as low by our informants, especially because organisational culture is an important problem. Culture is difficult to change, and people are not easily willing or able to change their behaviour and habits. However, leadership may provide some practical tools to overcome some challenges.

4.2.3. Weakly developed scientific basis for EIA

Science is a core aspect of EIA: well-informed decision-making requires knowledge. However, scientific research for EIA was not a structural activity in the past and is currently still weakly developed. Gaps in knowledge and uncertainties hamper proper inventories of EIA project sites. As one of the interviewees put it: 'if you can't measure, you can't mention', referring to the difficulty of executing EIAs caused by a lack of data and adequate research methods. Problems related to science are the accessibility of scientific material, as well as the participation of Yemeni scientists in the scientific literature regarding EIA. The connection with the international network of science is a problem, and within the country the scientific infrastructure is poor. Universities, knowledge institutes, governments, and consultants work insufficiently together. Universities have only recently started curricula for EIA. As a consequence, consultants are insufficiently evaluated on their qualities, EIA review is difficult, and only few future environmental scientists are being educated. Formal EIA performance is influenced by science because the ability to establish an institutional framework unavoidably includes standards and indicators that are directly derived from science. Also, the core elements of the impact study require science: the state of the art description needs insights into environmental dynamics, levels of pollution, and water tables. Impact identification and prediction require the use of scientific models and input data. The study of alternatives requires insight into the possibilities of other locations or technical applications of the project. Finally, the review process necessarily uses assessment and analysis methods. Based solely on these findings, reducing the problem of a limited scientific basis of the EIA system is perceived by the Yemeni actors involved as moderately feasible. Options are currently supported by an increased tendency of universities to cooperate on projects with local authorities, NGOs, and the general public to achieve environmental goals, such as sustainable fishing or effective water monitoring. Also, within the organisational capacity category, stricter requirements have been formulated with respect to the consultants involved in EIA.

4.3. Discussion of results

Capacity development activities in Yemen over the last two decades have been iterative because each separate phase attempted to redefine the main problems, learn from the past, and adapt strategies. Our historical analysis showed that when the ambition of phase two appeared to be overwhelming and unachievable, the third phase focused on feasible and short-term capacity development activities. However, the problem is that these strategies do not interpret EIA as a systemic concept in which problems cannot be seen separately from each other. Capacity development in Yemen has been focused too much on separate sub-capacities, while in fact each of the sub-capacities of the EIA system is related to the others.
This is a crucial point: capacity development cannot be understood by examining these problems separately, because they uphold and influence each other. For example, the implementation of the institutional framework for EIA is not effective without scientific knowledge. The weak organisational capacity of the environmental authority is negatively influenced by weak institutional provisions for EIA: clear rules allow for better guidance. However, institutional capacity is itself the result of human and organisational efforts.
There is still some disagreement about one important subject. Public participation is perceived in Western democracies as being a crucial ingredient for EIA performance. As mentioned, civil society (either the general public or NGOs) can bring creativity to the process and influence the use of EIA in decision-making. In Yemen, civil society is weakly involved in EIA. The general public is occupied with survival rather than with environmental issues. The NGO sector is immature and some NGOs have a negative image, since the majority of NGOs work for their private, instead of public, interest. One could argue that the potential for public participation in Yemen exists, and that capacity development, by means of education, financial support, and awareness-raising, can overcome these problems. However, civil society is, more than other sub-capacities, inherently influenced by the context of the country. Even with financial support and education of NGOs, their role in EIA remains negligible when the political context does not allow NGOs to participate in EIA processes. Similarly, if awareness among the general public is improved but the cultural value of the environment remains unchanged, it is unlikely that the general public will start participating in EIA. This point clarifies the importance of including the context when analysing capacity development. It seems that the context defines the parameters between which capacity can be developed.

5. Reflection and conclusions

In this article we have made a first step towards developing an approach for analysing EA at the macro level that is more inclusive than approaches taken thus far. Indeed, findings in Yemen illustrate that the capacity development approach gives an understanding of the process, the stakeholders, the knowledge component, the material and technical aspects of EA, and perhaps most importantly, the systemic way of understanding the outcomes. We have seen that problems are not isolated from each other, but are mutually self-reinforcing. Application of the assessment model showed that three main problems cause the weak EA (in this case EIA) capacity in the country. Moreover, it allows us to interconnect these problems and understand how they reinforce each other. What can we learn from the case of Yemen? The most important lesson is that problems in EA capacity should not be understood separately. Historical developments in Yemen illustrated that concentration on an isolated problem does not solve EIA capacity problems. By interpreting all capacity problems simultaneously and in an inter-relational manner, a contribution to more effective EA capacity on the macro level is more likely to be made. A critical footnote, however, is that the assessment model in this article is not a blueprint. Some difficulties became evident during the course of the study. Some aspects formulated in the model were not considered to be actual problems in Yemen. An example is the existence of ICT applications. Informal social networks allow people to connect and communicate with others, and many interviewees argue that weak ICT capacity is not problematic. Improvement of the analytical model will be facilitated by the lessons drawn from conducting case-study applications of the methodology, as was shown in this article for the case of Yemen.
Applying the analytical model of capacity development has not only resulted in a more complete insight into the EA system in Yemen, but has also indicated that 1) the identified problems are the result of historical events as well as current circumstances, and that 2) these problems are inherently linked to one another. Recommendations ought to be based upon these findings, but assessing the feasibility of interventions is a crucial step. Detected problems may be evident, agreed upon, and acknowledged as being severe, but if few possibilities exist to actually change the situation, recommendations are useless. Analysis of the EA system is valuable for identifying the current state of affairs of the EA system and how it got there, but the step towards recommendations should not be taken for granted. The main influential factor for feasibility is the EA system
context. As mentioned at the beginning of this article, sensitivity to the context of a system is a precondition for proposing any recommendations (Cherp, 2001). A comparative research project on EA systems by Gazzola (2008) showed that the success of EA varies in different types of planning systems. While an analysis of the rules, values, routines, priorities, attitudes, and traditions of a specific societal and cultural framework is the key to achieving effectiveness of the overall process, this is often neglected in analysis as well as in intervention strategies. This article provided some first indications of the importance of the Yemeni context in explaining EA system performance and the feasibility of EA system capacity development. However, we acknowledge that further research is required to formulate more concrete recommendations for capacity development in this specific case.

References

ADB. Environmental impact assessment for developing countries in Asia. Volume 1: overview. Asian Development Bank; 1997.
Androulidakis I, Karakassis I. Evaluation of the EIA system performance in Greece, using quality indicators. Environ Impact Assess Rev 2006;26:242–56.
Bower H. Putting the capacity into capacity building in South Sudan. The Lancet 2000;356:661.
Buckley R. Strategic environmental assessment of policies and plans: legislation and implementation. Impact Assess Proj Apprais 2000;18(3):209–15.
Cherp A. EA legislation and practice in Central and Eastern Europe and the former USSR; a comparative analysis. Environ Impact Assess Rev 2001;21:335–61.
Choi J, Kwon YH. Comparative study on the environmental impact assessment of golf course development between Korea and China. Landsc Ecol Eng 2006;2:21–9.
DGIS. Appraisal document aid water sector, Environmental Protection Authority. PAWSEPA, 2008–2010, Activity 17549, Republic of Yemen; 2008.
Driessen PPJ, Leroy P. Milieubeleid; analyse en perspectief. Bussum: Uitgeverij Coutinho; 2007.
Filmer D, Hammer JS, Pritchett LH. Weak links in the chain: a diagnosis of health policy in poor countries. World Bank Res Obs 2000;15:199–224.
Fisher TB, Gazzola P. SEA effectiveness criteria—equally valid in all countries? The case of Italy. Environ Impact Assess Rev 2006;26:396–409.
Gazzola P. What appears to make SEA effective in different planning systems. J Environ Assess Policy Manag 2008;10(1):1–24.
Kirchhoff D. Capacity building for EIA in Brazil: preliminary considerations and problems to be overcome. J Environ Assess Policy Manag 2006;8(1):1–18.
Kolhoff A. Strengthening EIA in Yemen. Programme proposal, February 2008. Netherlands Commission for Environmental Assessment (NCEA) in collaboration with Embassy of the Kingdom of the Netherlands (EKN) in Yemen, and the Environment Protection Authority (EPA) of Yemen; 2008.
LaFond AK, Brown L, Macintyre K. Mapping capacity building in the health sector: a conceptual framework. Int J Health Plann Manag 2002;17:3–22.
Lusthaus C, Adrien MH, Anderson G, Carden F, Montalván GP. Organizational assessment; a framework for improving performance. Inter-American Development Bank, Washington D.C., and International Development Research Centre, Ottawa; 2002. http://www.idrc.ca/en/ev-23987-201-1-DO_TOPIC.html.
METAP. Evaluation and future development of the EIA system in Yemen. Mediterranean Environmental Technical Assistance Programme, Manchester University EIA Centre and World Bank Water and Environment Department; 2001.
OECD. Institutional capacity and climate change. Paris: Organisation for Economic Co-operation and Development; 2003.
OECD. Working towards good practice: the challenge of capacity development. DAC guidelines and reference series. Paris: Organisation for Economic Co-operation and Development; 2006.
Partidario M, Clark R. Perspectives on strategic environmental assessment. Boca Raton: CRC Press; 2000.
Pielemeier J, Salinas-Goytia AD. United Nations capacity building in Brazil. In: Maconick R, Morgan P, editors. Capacity building supported by the United Nations: some evaluations and some lessons. New York: United Nations, Department of Economic and Social Affairs; 1999.
Potter C, Brough R. Systemic capacity building: a hierarchy of needs. Health Policy Plann 2004;19(5):336–45.
Retief F. A performance evaluation of strategic environmental assessment (SEA) processes within the South African context. Environ Impact Assess Rev 2007a;27:84–100.
Retief F. A quality and effectiveness review protocol for strategic environmental assessment (SEA) in developing countries. J Environ Assess Policy Manag 2007b;9(4):443–71.
Runhaar H, Driessen PPJ. What makes strategic environmental assessment successful environmental assessment? The role of context in the contribution of SEA to decision-making. Impact Assess Proj Apprais 2007;25(1):2–14.
Sadler B. On evaluating the success of EIA and SEA. In: Morrison-Saunders M, Arts J, editors. Assessing impact: handbook of EIA and SEA follow-up. London: Earthscan; 2004. p. 248–85.
Sagar AD, VanDeveer SD. Capacity development for the environment: broadening the scope. Glob Environ Politics 2005;5(3):14–22.
Samarakoon M, Rowan JS. A critical review of environmental impact statements in Sri Lanka with particular reference to ecological impact assessment. Environ Manage 2007;41:441–60.
Stinchcombe K, Gibson R. Strategic environmental assessment as a means of pursuing sustainability: ten advantages and ten challenges. J Environ Assess Policy Manag 2001;3(3):343–72.
Strigl AW. Science, research, knowledge and capacity building. Environ Dev Sustain 2003;5:255–73.
Van Loon L. Capacity development for Environmental Impact Assessment (EIA): application and insights in the Republic of Yemen. MSc thesis, Utrecht University, Utrecht; 2008. Available at: http://www.geo.uu.nl/homegeosciences/research/researchgroups/environmentalstu/staff/drhacrunhaar/teaching/21328main.html.
World Bank. Comprehensive development review: environment in the Republic of Yemen. Rural Development, Water, and Environment Department, Middle East and North African Region; 2000.

Louise van Loon graduated from Utrecht University where she studied Sustainable Development (environmental policy and management). Since 2008 she has been a researcher at the Netherlands Court of Audit.

Peter Driessen is a professor of Environmental Studies. He graduated in urban and regional planning from Nijmegen University. Over the last 15 years he has worked successively as a researcher, as director of education, and as head of the Department of Innovation and Environmental Sciences at Utrecht University. His research focuses on environmental planning and interactive governance. Currently, he is scientific director of the Dutch national research programme on climate change and adaptation strategies 'Knowledge for Climate'.

Arend Kolhoff is senior technical secretary at the international department of the Netherlands Commission for Environmental Assessment. He is currently working on a PhD research project which aims to identify guiding principles for the development of better performing EIA systems in developing countries. He has a degree in human geography from Utrecht University. He has fifteen years of working experience with the Commission in about twenty countries as trainer and advisor on EIA and SEA capacity development activities.

Since 2003, Hens Runhaar has been an assistant professor at Utrecht University, Section of Environmental Studies and Policy. In 1994 he graduated from Twente University where he studied Public Administration.
Between 1994 and 1998 Hens worked as a researcher at the Erasmus University Rotterdam and at AGV Consultancy for transport and traffic studies. In 1998 he started a Ph.D. project on the effects of transport costs on paper logistics. He received his Ph.D. in 2002 from Delft University of Technology. His current research focuses on the use of knowledge and planning tools in decision-making and on risk governance.