Utilities Policy 13 (2005) 279–288 www.elsevier.com/locate/jup
The role of efficiency estimates in regulatory price reviews: Ofgem's approach to benchmarking electricity networks

Michael Pollitt*,1

Judge Institute of Management, University of Cambridge, Trumpington Street, Cambridge CB2 1AG, UK

Received 1 October 2004; accepted 20 January 2005
Abstract

Electricity regulators around the world make use of efficiency analysis (or benchmarking) to produce estimates of the likely amount of cost reduction which regulated electric utilities can achieve. This short paper examines the use of such efficiency estimates by the UK electricity regulator (Ofgem) within electricity distribution and transmission price reviews. It highlights the place of efficiency analysis within the calculation of X factors. We suggest a number of problems with the current approach and make suggestions for the future development of X factor setting.
© 2005 Elsevier Ltd. All rights reserved.

JEL classification: L51; L98

Keywords: Efficiency analysis; Benchmarking; Electricity; RPI-X; Price control
1. The theoretical underpinnings of the use of benchmarking

Incentive regulation suggests that the price charged by a regulated monopolist should be set independently of its own costs, ideally at the average economic cost of a group of comparable firms (Shleifer, 1985). There are two problems with this approach. First, it is difficult to find
* Tel.: +44 1223 339615; fax: +44 1223 339701. E-mail address:
[email protected]
1 The author acknowledges the help of Jonathan Mirrlees-Black and Hannah Nixon at CEPA, Ofgem and the participants at the LBS conference for which this material was originally prepared. Stephen Littlechild and Jon Stern provided extensive comments on an earlier draft. All opinions expressed in this paper are those of the author and do not necessarily reflect the views of any third party. All remaining errors are his responsibility.
0957-1787/$ - see front matter © 2005 Elsevier Ltd. All rights reserved. doi:10.1016/j.jup.2005.01.001
a group of strictly comparable firms. Second, it is risky to pay no attention to actual costs: this can result in bankrupting the firm if the price cap is too tight, or allow the firm to earn politically unacceptably high profits if the price cap is too loose. RPI-X price control (Littlechild, 1983) attempts to combine desirable incentive properties with politically acceptable prices and profits. It does this by setting prices with reference to the starting level of costs but fixing the price path for a number of years. During the period of the price control the firm can keep any profits from outperformance against its regulated prices. This offers a role for efficiency analysis in setting the starting level of costs which an efficient firm might be expected to have. The regulated revenue of a monopolist in a particular region (or country) can be thought of as the revenue that would be bid by the winning bidder in a competitive franchise auction (Williamson, 1976). The nature of the
auction is one where the bidders are required to bid the least amount of revenue they would accept to run monopoly services in the region without subsidy. This implies that the regulated revenue, under RPI-X, should reflect the efficient cost level in that region and a fair market rate of return. For industries where there is only one regulated company, benchmarking requires detailed analysis of cost categories against similar functions in firms in other industries or comparison with equivalent international companies. This is the case with electricity transmission in England and Wales. For industries where there are a number of domestic regulated firms, efficiency analysis of the sample of domestic firms may be possible, as is the case with electricity distribution in Great Britain where there are 14 franchise areas. An additional consideration is that the operating conditions of firms tend to vary across regions, and hence some statistical analysis of cost drivers is required in order to arrive at the efficient level of costs in a particular region.
2. Ofgem's general approach

Electricity supply in the UK was restructured and substantially privatised in 1990. Accompanying RPI-X regulation was introduced at privatisation and an independent regulatory agency, Offer (now Ofgem), was established. This gives the UK a substantial amount of experience with RPI-X regulation in the electricity sector. RPI-X price control reviews have implemented a 55% real reduction in the price of electricity distribution (since 1995) and a 30% real reduction in electricity transmission charges (since 1993). Ofgem and its predecessor Offer have now conducted three price control reviews of electricity distribution, five price control reviews of electricity transmission (two for Scotland and three for England and Wales) and two price control reviews of electricity supply (retail sales). Electricity supply is now fully liberalised and price controls no longer apply. Regulated electricity distribution company revenue is around £3bn p.a. and regulated transmission company revenue is around £1bn p.a. In this short paper we discuss Ofgem's general approach, drawing heavily on the recent work conducted by the author as a participant in the three consulting reports for Ofgem. These reports contributed to the most recently completed distribution price review, which sets prices for the period April 2005–March 2010 (CEPA, 2003a,b, 2004). Our focus will primarily be on electricity distribution, where Ofgem's use of benchmarking is clearly identifiable. We will make brief comments on the use of benchmarking within the transmission price reviews, where the efficiency estimates derive from external rather than in-house analysis and
their direct use in efficiency calculations is much less extensive and influences a smaller percentage of total revenue. The basic characteristics of the Offer/Ofgem approach can be stated as follows. An initial consultation document is issued around 18 months before the end of the current price control period. This document discusses the timetable and issues for consideration in the upcoming control period. This is followed by several subsequent documents. At each stage responses are invited from interested parties and these are publicly available in the Ofgem library unless marked confidential. A 'Final Proposals' document is issued within 6 months of the end of the price control with details of the X factors which Ofgem proposes to apply to each company from the beginning of the next control period. Companies have 1 month to decide to appeal to the competition authority, the Competition Commission (formerly the Monopolies and Mergers Commission), if they are unwilling to accept the proposed price control. An appeal on distribution prices has happened once so far, when Scottish Hydro-Electric did not accept its final distribution and supply price controls proposed by the regulator for 1995–2000.2 By way of illustration of the key documents and the timetable, we note the process for the 2000–2005 distribution price control. This price control ran from April 2000 to the end of March 2005. The first consultation paper appeared in July 1998 (replies by 25 September 1998). The regulated companies (Public Electricity Suppliers, PESs) then submitted business plans with projections of operating and capital expenditure requirements. These were published in December 1998 (replies by 2 March 1999). A second consultation paper appeared in May 1999 (replies by 2 July 1999). Initial proposals for X factors were published in August 1999 (with replies by 17 September). Final proposals for X factors appeared in December 1999.
For the purposes of our discussion the two most important documents are the Initial Proposals and the Final Proposals. These contain actual efficiency estimates which are then translated into X factors. For the 2000–2005 review, the Initial Proposals document mentioned above reported two types of efficiency analysis and their results (Offer, 1999a). One of these used top-down regression analysis (of which more below); the other reported a bottom-up efficiency study supplied by management consultants. Efficiency analysis provided a single efficiency score for the benchmarked portion of costs in the particular year of the analysis. In this case the analysis was
2 See MMC (1995). Ofgem's jurisdiction covers Great Britain only, not Northern Ireland. Electricity and gas in Northern Ireland are regulated by Ofreg. Northern Ireland Electricity appealed against Ofreg's distribution price control for the period 1997–2002 (see MMC, 1997).
available for 1997–1998. Converting this into X factors requires a number of key assumptions. For example, in 1999, the Initial Proposals took a view on:

1. What is the scope for efficiency saving (or productivity growth) in the most efficient firms over the life of the price control?
2. How fast is it reasonable to assume that inefficient firms can close the gap with the most efficient firms?
3. How will mergers be allowed to affect the X factors?
4. Given a net present value of allowed revenue over the price control period, how should it be translated into (a) a combination of initial price reduction (the so-called P0 adjustment) and (b) an annual X factor?
5. How should demand growth be incorporated into the allowed revenue?

The Final Proposals (Ofgem, 1999) took a different view on some of the above. In particular, on question 1, the scope for efficiency savings on the part of frontier firms was determined to be zero. On question 2, the original assumption that the inefficient firms would close all of the efficiency gap immediately was amended to assume that 75% of the gap would be closed by the end of the second year of the price review and remain constant thereafter. In addition, there were some adjustments to the calculated efficiency scores and the allowance of some additional revenue for IT costs and the costs of the separation of supply and distribution businesses. It was also stated that there would be further adjustments to revenue to account for quality of supply performance from 2002/2003. The Final Proposals resulted in some more revenue for all companies relative to the Initial Proposals and reflected discussions between Ofgem and individual companies based on the Initial Proposals. (Note that the structure of these two documents was very similar in the most recent distribution price review completed in November 2004 for the period 2005–2010.) An example of the use of benchmarking within electricity price control is provided in Table 1.
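The catch-up assumption adopted in the Final Proposals can be sketched numerically. The figures below are hypothetical, and the assumption that half of the catch-up occurs within year one is mine, not Ofgem's; the source states only that 75% of the gap is closed by the end of the second year and held constant thereafter.

```python
def allowed_opex_path(actual_opex, efficiency_score, catch_up=0.75, years=5):
    """Allowed operating cost path for an inefficient firm.

    efficiency_score = efficient cost / actual cost (a COLS-style score).
    Mirrors the Final Proposals assumption that 75% of the efficiency
    gap is closed by the end of year two and held constant thereafter;
    the within-year-one profile (half the catch-up) is an assumption.
    """
    gap = actual_opex * (1.0 - efficiency_score)   # absolute efficiency gap
    path = []
    for year in range(1, years + 1):
        closed = catch_up * min(year, 2) / 2.0     # 37.5% in year 1, 75% from year 2
        path.append(round(actual_opex - closed * gap, 2))
    return path

# A hypothetical firm with £100m actual opex and an 80% efficiency score
# would be allowed £92.5m in year 1 and £85.0m in years 2-5.
```

A frontier firm (score 1.0) is allowed its actual costs throughout, reflecting the Final Proposals' view that the scope for frontier efficiency savings was zero.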
This is the panel from Ofgem's financial model for one of the distribution companies, United Utilities, for the most recent distribution price control review, covering the period 2005–2010. Some key points on the table are as follows. The base price control revenue is at line 22; this is made up from summing the lines above. Network capital expenditure (line 2) is arrived at by discussion with Ofgem of the capital requirements of the business, informed by an independent audit by consulting engineers of the companies' business plan proposals. Lines 1–5 document the impact of the capital expenditure and depreciation on the regulatory asset base. Line 5 involves discounting the closing asset value by the appropriate allowed weighted average cost of
capital (the 'vanilla' weighted average cost of capital, 5.545%).3 This indicates that the company needs to be compensated for all its direct expenses plus an amount of £94.8m to compensate for the change in the value of its asset base over the course of the price control. The allowed weighted average cost of capital (WACC) is a 6.9% real rate of return pre-tax; its calculation is detailed in the Ofgem price control documents. The total required revenue is then the direct expenses plus the reduction in the value of the asset base. The direct costs are given in lines 7–16. These are made up of the main cost items (operating costs, capital expenditure, one-off allowances for pensions and an allowance for tax, lines 7–10) and some additional small allowances (lines 11–15). The total allowed expenditure is then discounted and the loss of asset value added to give the total present value of required revenue at line 19. This is then profiled according to Ofgem's preference: the annual reduction in prices (X) is fixed at 0.0% and the P0 initial price adjustment is calculated so as to reach the total net present value. This gives line 22. A revenue index is calculated at line 20 based on the projections of growth in customer numbers and units distributed; this is discounted in line 21. Line 22 is derived by taking the present value at line 19, deducting the present value of the excluded revenue in line 23, dividing the result by the sum of the discounted revenue index in line 21, and multiplying by the revenue index in line 20. This means that line 22 is equivalent to a profiling of the net present value at line 19 less the unregulated revenue in line 23. It can be seen that allowed revenue for 2004/2005 is £205.2m whereas the initial revenue in 2005/2006 is £220.9m (at 2002/2003 prices), an increase of 7.6%: the P0 adjustment.
Adding in a small adjustment for the Innovation Funding Incentive (line 28), this implies prices can be increased in the first year of the new price review by 8% more than inflation. The analysis of the sources of this price rise follows in lines 32–39. This shows that lower operating expenditure is offset by higher depreciation, tax and allowed rate of return (which was increased from 6.5% to 6.9%).
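The profiling arithmetic just described can be reproduced from the figures in Table 1. The function below is a sketch of the mechanics only; Ofgem's published financial model contains further adjustments.

```python
def profile_revenue(pv_required, excluded_per_year, revenue_index,
                    discounted_index, prior_year_revenue):
    """Profile a five-year present value of required revenue into annual
    price control revenue with X fixed at 0.0%: deduct the PV of excluded
    revenue, spread the remainder over the discounted revenue index, then
    scale by the revenue index.  Returns the revenue path and P0."""
    pv_excluded = sum(excluded_per_year * d for d in discounted_index)
    base = (pv_required - pv_excluded) / sum(discounted_index)
    allowed = [base * g for g in revenue_index]       # line 22 equivalent
    p0 = allowed[0] / prior_year_revenue - 1.0        # initial price change
    return allowed, p0

# United Utilities figures from Table 1: PV of required revenue 1006.1,
# excluded services revenue 5.8 p.a., prior-year revenue 205.2.
allowed, p0 = profile_revenue(
    1006.1, 5.8,
    [1.000, 1.011, 1.013, 1.022, 1.024],
    [0.973, 0.932, 0.885, 0.846, 0.803],
    205.2)
```

With these inputs the first-year revenue comes out at about £220.9m and P0 at about 7.6%, matching the table.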
3. Assessment of Ofgem's approach to benchmarking

The first thing that is striking from the above description is that benchmarking is only one part of a much larger framework within which X factors are set. For United Utilities (and all the other companies), benchmarking has only affected the operating cost part
3 The 'vanilla' weighted average cost of capital is calculated without any tax adjustment to the cost of equity or cost of debt.
Table 1
Price control calculations for United Utilities (2002/03 prices, £m)

                                              2004/05 2005/06 2006/07 2007/08 2008/09 2009/10
RAV
1  Opening asset value                                  920.0   964.3  1002.5  1034.7  1060.8
2  Total capex                                          112.7   112.3   111.8   111.4   110.9
3  Depreciation                                          68.5    74.1    79.7    85.3    90.9
4  Closing asset value                                  964.3  1002.5  1034.7  1060.8  1080.9
5  Present value of opening/closing                     920.0                           825.2
6  5 year movement in closing RAV                                                        94.8
Allowed items
7  Operating costs (excluding pensions)                  67.0    64.7    63.1    61.7    60.2
8  Capital expenditure (excluding pensions)             103.5   103.1   102.6   102.2   101.7
9  Pensions allowance                                    16.0    16.0    16.0    16.0    16.0
10 Tax allowance                                         19.4    22.0    23.1    24.5    24.5
11 Capex incentive scheme                                 1.8     1.0     0.6     1.1     0.5
12 Sliding scale additional income                        1.6     1.7     1.8     1.8     1.9
13 Opex incentive/other adjustments                       1.4     1.4     1.4
14 Quality reward                                         1.5
15 DPCR3 costs                                            1.5
16 Total allowed items                                  212.3   209.9   207.5   205.1   203.8
17 Present value of allowed items                       206.6   193.6   181.3   169.8   159.9
18 5 year movement in closing RAV
19 Total present value over 5 years                                                    1006.1
Revenue
20 Revenue index                                        1.000   1.011   1.013   1.022   1.024
21 Discounted revenue index                             0.973   0.932   0.885   0.846   0.803
22 Price control revenue                        205.2   220.9   223.2   223.7   225.8   226.1
23 Excluded services revenue                              5.8     5.8     5.8     5.8     5.8
24 Total revenue                                        226.7   229.0   229.5   231.6   231.9
25 Present value of total revenue                       220.6   211.2   200.6   191.7   181.9
26 Total present value over 5 years                                                    1006.1
P0 based on the above
27 Revenue (line 22)                             7.6%
28 P0 for Innovation Funding Incentive (IFI)     0.4%
29 Total P0 for comparison purposes              8.0%
30 X                                             0.0%
31 Analysis of P0 (%):
32 Include EHV                                   1.5%
33 Exclude metering                              1.3%
34 Change in opex                                7.0%
35 Depreciation                                  7.8%
36 Return                                        2.7%
37 Rates                                         1.0%
38 Tax                                           5.0%
39 Other                                         1.6%
40 Total                                         8.0%
(Source: Ofgem, 2004c, p. 127).
of their allowed revenue (line 7 under RAV in Table 1). In addition it is necessary to make assumptions about what operating costs should be benchmarked and how fast the company should be assumed to be capable of catching up with the most efficient firm. It is also important to point out that small changes in allowed rates of return and in allowed capital expenditure are as significant as large relative changes in the measured efficiency of United Utilities' benchmarked operating costs.

3.1. Ofgem's use of the COLS methodology

The main benchmarking methodology that Ofgem have used is based on regression analysis of operating costs. They have made use of other methods, such as consultant analysis of cost categories, but the reports which detail these calculations have in general not been published and hence the methodologies used in them are difficult to evaluate. We therefore concentrate on reviewing the efficacy of the regression approach as undertaken by Ofgem in the 1999 and 2004 distribution price control reviews. A detailed review of the 1999 method is available in CEPA (2003a), while CEPA (2004) provides a review of the 2004 Initial Proposals. There are three key elements to look at:

(i) the regression methodology;
(ii) the relevant cost drivers within the analysis; and
(iii) the benchmarked measure of output.
Fig. 1. COLS benchmarking of distribution company operating costs. (The figure plots cost, C, against the Composite Scale Variable, CSV, with a point for each distribution company; an OLS regression line is fitted and shifted down to form the COLS frontier line, with points A–F labelling the construction discussed in the text.)

Ofgem have in both the last two distribution reviews used a relatively simple regression methodology where they have obtained an adjusted measure of operating costs for each company and plotted this against a measure of composite output. They have then carried out an OLS regression of operating costs against output. Finally, they have shifted this line downwards, based on the technique of Corrected Ordinary Least Squares (COLS), to obtain a frontier line against which inefficient firms are then compared (see Fig. 1). In 2004 (and 1999) the data available for the regression analysis was for a single year (2002–2003 for the 2004 review and 1997–1998 for the 1999 review) and covered just the 14 companies regulated by Ofgem. In Fig. 1, the efficiency score of firm B is given by the ratio EF/BF. This represents the extent to which actual costs could be reduced while still keeping firm B on the efficient frontier. It would have been possible for Ofgem to have used a quadratic rather than a linear regression, or alternative techniques such as data envelopment analysis (DEA) and stochastic frontier analysis (SFA). CEPA (2003a) investigated each of these alternatives and found that the quadratic cost function did not look more plausible than the linear one and was more sensitive to outliers, that the DEA results were similar to those for COLS, and that the SFA technique could not be implemented with such a small dataset. This leads to the CEPA (2003a) conclusions that (i) the COLS technique implemented is confirmed by DEA and (ii) no obviously superior technique exists for Ofgem given the data available to them in 1999 and 2004.

Ofgem benchmarks just a small proportion of the total revenue. For our United Utilities example in Table 1, base costs used in the regression analysis were £70m in 2002/2003 against a price control revenue of £205m in 2004/2005. In theory it is possible to have benchmarked total costs (operating expenditure plus capital expenditure). Such 'totex' benchmarking would appear to be superior because it would properly account for the possibility of opex vs capex trade-offs. Failure to account for this possibility leads to unfair treatment of companies who make efficient use of capex and also provides incentives for companies either to misrepresent opex as capex or to alter their actual mix of expenditure away from the total cost minimising level to gain greater regulatory revenue. Analysis of the COLS efficiency of opex vs total costs does show that some firms do better on totex efficiency than opex efficiency (CEPA, 2003a). CEPA (2003a)4 suggest that benchmarking techniques need to be assessed against eight criteria:
practical application; robustness of the methodology; transparency and verifiability; capture of industry-specific factors; minimal restrictions on the shape of the frontier; consistency with non-frontier approaches such as financial market perceptions of relative performance; consistency with economic theory; and low regulatory burden. Ofgem's approach clearly scores highly on practical application, transparency and low regulatory burden as it appears to be easy to reproduce and implement, although, in practice, CEPA (2003a) found it impossible to exactly replicate the derivation of the 1999 regression results. However, it is not so clear how robust the Ofgem methodology is or whether it captures all the relevant industry-specific factors. Perhaps the strongest argument for Ofgem's approach is that it is not clear that, given the small sample size, there are any viable alternatives to the sort of crude regression benchmarking that was undertaken.

3.2. Ofgem's input and output measures

Ofgem's approach crucially depends on the base level of operating expenditure used in the regression analysis. This is obtained by taking the reported operating expenditure and adjusting it for any included capital expenditure, one-offs and other non-comparable cost elements (such as higher wages in London). In both the 1999 and 2004 calculations for electricity distribution this involved several person-years of work within the regulatory office, conducted in consultation with the companies. In the 2004 analysis, electricity distribution system fault costs were separated out from base operating costs (to
4 This list is motivated by Bauer et al. (1997).
attempt a separate analysis) but then added back in to give a measure of base opex plus fault costs which is equivalent to Ofgem's 1999 measure of base operating expenditure (opex) costs. The other key variable used (in addition to base operating expenditure) was the composite measure of output (or composite scale variable, CSV). In 1999 this was calculated as:

CSV = (Customer numbers)^0.5 × (Units distributed)^0.25 × (Network length)^0.25

The ability to use multiple output variables within a cost function framework was limited by the availability of data. With just 14 data points the use of separate output measures within a flexible cost function framework (which would give rise to square and interaction terms as well) was not feasible (see Coelli et al., 1998). The composite variable also avoids potential multicollinearity problems with the output variables, which all tend to move in the same direction at once. Before considering the issue of methodology it is useful to look at the other cost drivers not considered in the composite output measure. The central problem is the shortage of data points. However, it may be that topography, climate, customer mix and losses are important other drivers of costs apart from the output variables. The physical environment and the technical standards in which the UK distribution companies operate are quite similar. So too is the size and nature of their operations compared to a sample of, say, US distribution utilities. CEPA (2003a) attempted to test whether there was a significant correlation between the 1999 base operating cost and a range of other cost drivers. They found no statistically significant additional cost drivers, though this may have been a function of the small sample size. The composite scale variable attempts to take into account some of the differences in the nature of the output of electricity distribution.
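For concreteness, the 1999 weighting, and the 2004 Initial Proposals reweighting towards network length discussed below, can be written as simple functions. The firm figures in the test values are made up purely for illustration.

```python
def csv_1999(customers, units, length):
    """1999 composite scale variable:
    CSV = customers^0.5 * units^0.25 * length^0.25."""
    return customers ** 0.5 * units ** 0.25 * length ** 0.25


def csv_2004(customers, units, length):
    """2004 Initial Proposals reweighting (0.25 customer numbers,
    0.25 units distributed, 0.5 network length), increasing the
    effective weight on network length."""
    return customers ** 0.25 * units ** 0.25 * length ** 0.5
```

Because the weights sum to one in both cases, the CSV behaves like a constant-returns aggregate of the three scale measures, and the units in which each is measured only rescale the index.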
Customer numbers and units distributed are obvious outputs as the pricing of electricity distribution varies according to both of these dimensions. Network length is a less clear-cut measure of output as it measures a capital input. Indeed studies of the efficiency of electricity distribution disagree on whether it should be an input or an output in efficiency analysis (see Jamasb and Pollitt, 2001). The rationale for treating it as an output is that it measures the difficulty of the topology holding constant customer numbers and units distributed. CEPA (2003a) investigated the scope for changing the weighting and components of the composite variable. They found that units distributed and customer numbers were almost perfectly correlated. Combining this with the observation that connected customer
numbers are not relevant to distribution utilities whose customers are supply companies implies that customer numbers could be dropped from the composite variable and a fifty-fifty weighting of units distributed and network length could be used instead. In the 2004 Initial Proposals the weighting was indeed adjusted to increase the effective weight on network length (as the weights became 0.25 customer numbers, 0.25 units distributed and 0.5 network length). With such a small number of available data points the number of methodological choices is limited. The approach that Ofgem adopted in 1999 was a variant of COLS where the initial regression line was pivoted around the estimated intercept on the base opex axis such that it went through the second most efficient firm. This is not the standard COLS approach, where the regression line is shifted so that it just envelops all of the data. The reason why the standard approach could not be adopted was that the shifted regression line would have led to an implausibly low intercept (based on an expert view of the level of fixed costs in UK electricity distribution).

3.3. The 2004 Distribution Price Control Review methodology

In 2004, a parallel shift was applied to the OLS line as the resulting intercept was higher and more plausible. However, the parallel shift was to the average of the most efficient upper quartile of firms (i.e., an average of the 3rd and 4th most efficient firms). This was done to allow for the possibility that the lowest average cost firms were enjoying negative cost shocks not available to other firms and hence providing invalid points of comparison. Ofgem (1999) reports just one set of regression-based efficiency scores. Ofgem (2004a) reports three. One involves regression-based efficiency analysis of the operating costs of the 14 firms. The other two are detailed below. Ofgem (2004a) implements totex efficiency analysis on the 14 companies using a parallel analysis to that for opex.
This proved very difficult because it is not clear how to measure capital input effectively. The problem is that while it is relatively easy to measure actual capex it may fluctuate significantly across the investment cycle, thus a given year may not be representative. Ofgem (2004a) handles this by taking the projected annual average over 2000–2010. Using actual capex is not correct as capex is the addition to the capital stock, while capital input is a function of the actual size of the stock in any year. If one attempts to calculate the actual depreciation on the electricity distribution capital stock one soon gets very large numbers where the capital consumption effects swamp the opex. As these figures are obtained by assumption, it is difficult to rely on the benchmarking results to which they give rise.
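The basic COLS construction underlying both the opex and totex scores (Section 3.1 and Fig. 1) can be sketched as follows. The data here are synthetic, and this is the textbook parallel-shift form of COLS; as noted above, Ofgem's 1999 variant pivoted the line and the 2004 review shifted it to the upper-quartile average instead of the frontier firm.

```python
def cols_scores(csv, opex):
    """Corrected OLS: fit opex on the composite scale variable by OLS,
    shift the line down in parallel until it just envelops the data,
    and score each firm as frontier cost / actual cost (EF/BF in Fig. 1)."""
    n = len(csv)
    mean_x, mean_y = sum(csv) / n, sum(opex) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(csv, opex))
             / sum((x - mean_x) ** 2 for x in csv))
    # Frontier intercept: the largest intercept such that the frontier
    # line lies at or below every observed cost.
    intercept = min(y - slope * x for x, y in zip(csv, opex))
    return [(intercept + slope * x) / y for x, y in zip(csv, opex)]

# Four hypothetical firms; the second turns out to define the frontier.
scores = cols_scores([1.0, 2.0, 3.0, 4.0], [10.0, 12.0, 16.0, 18.0])
```

The frontier firm receives a score of 1.0 and every other firm a score below 1.0, which is then interpreted as the proportion of actual costs an efficient firm would incur.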
Benchmarking totex seems to offer superior incentive properties by removing capex-opex trade-offs. However, due to the crudeness of the capital input measurements, it is not clear that regulators are capable of implementing it in such a way as to prevent arbitrary wealth redistributions to and from distribution company shareholders. Benchmarking totex also means that companies have a very strong incentive to economise on capex (as is currently the case with opex), and this means that desirable long term investment may not be undertaken. Mergers are a serious issue for the implementation of the Ofgem methodology for electricity distribution. There are 14 distribution network companies. However, these now belong to only 7 independent groups. This means that the separate benchmarking of the 14 distribution utilities may no longer be valid, because firms may be able to strategically manipulate their overall efficiency level upwards by moving costs around between companies within their group. Jamasb et al. (2004) demonstrate how this can be done with a sample of US distribution utilities. CEPA (2003a) investigated how the position of the frontier might change if the network companies were aggregated into independent groups. They found that the efficiency scores go up for all the companies. However, the number of data points in the UK is now extremely low for such an aggregation to produce statistically robust regression analysis. We note that Ofgem (2004a) used 9 independent groups (existing in 2002–2003) in one of their COLS analyses of opex efficiency. So far we have focussed on the benchmarked part of costs. In moving from the efficiency analysis to the allowed operating expenditure further adjustments are required. These include applying the efficiency score to the benchmarked part of costs, combining the results of different methods and adding in the uncontrollable operating costs (such as local property taxes).
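These final steps can be sketched as follows. The combination rule (average the three 2004 scores, then take the higher of that average and the 14-firm opex score) follows the description of Ofgem's 2004 approach given in this paper; the firm figures are hypothetical.

```python
def allowed_controllable_opex(base_opex, opex_14, opex_groups, totex_14,
                              uncontrollable=0.0):
    """Apply the 2004 rule as described in the text: average the three
    COLS scores, take the higher of that average and the 14-firm opex
    score, multiply by the benchmarked (controllable) operating cost,
    then add back the uncontrollable costs.
    Scores are efficient cost / actual cost."""
    average = (opex_14 + opex_groups + totex_14) / 3.0
    chosen = max(average, opex_14)   # most favourable to the company
    return chosen * base_opex + uncontrollable

# Hypothetical firm: £70m benchmarked opex, scores 0.90 / 0.80 / 0.82,
# plus £10m of uncontrollable costs.
example = allowed_controllable_opex(70.0, 0.90, 0.80, 0.82, 10.0)
```

Because the higher of the two candidate scores is used, a firm can never be made worse off by the averaging step than it would be under the 14-firm opex analysis alone.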
In making use of the efficiency analysis, regulators have to decide exactly how to apply the calculated efficiency score to calculate the revenue requirement associated with the benchmarked part of costs. This can be done directly by taking the efficiency score, multiplying it by actual costs and using this figure in the financial model outlined in Table 1. If there is more than one measure of efficiency (from different efficiency analysis methodologies) then some way to combine them must be decided on. Alternatively it is possible to introduce 'expert judgement' and only use calculated efficiencies to 'inform' the revenue requirement. Ofgem has tended to use more than one efficiency methodology to assess the revenue requirement associated with controllable operating expenditure. It has then made direct use of the efficiency measure most favourable to the company and multiplied this by the actual costs to get the
required revenue associated with controllable operating expenditure. In its 1999 analysis of electricity distribution Ofgem took the higher efficiency score (efficient costs divided by actual costs) of the regression-based analysis and a bottom-up consulting report score and multiplied it by the base controllable operating cost to get the regulated revenue required to cover controllable operating cost. In 2004, Ofgem calculated three efficiency scores for the opex of each firm using 2002–2003 data. The first came from a regression analysis of the opex of the 14 firms, the second from a regression analysis of the opex of the 9 ownership groups and the third from a regression analysis of the totex of the 14 firms (as described above). The individual firm scores from these three measures were then averaged to give a fourth measure of efficiency for each firm. Then the higher of this fourth 'average' efficiency measure and the first efficiency measure (the opex analysis of the 14 firms) was taken and applied to the 2002–2003 benchmarked operating cost. In both 1999 and 2004 the baseline efficient revenue was then adjusted for volume growth and inflation to get to the starting revenue requirement for the first year of the new price control period. The 2004 analysis was subsequently updated in September 2004 (Ofgem, 2004b) and the Final Proposals appeared in November 2004 (Ofgem, 2004c). These updates involved no substantial change to the methodology outlined above but included some increases in allowed costs, including increases in tree-trimming allowances and in projected capital expenditure. Ofgem (2004c, p. 68) reports that the use of normalised costs to establish a benchmark using COLS is 'not a purely mechanistic process' and other issues have had to be considered. These include special costs affecting London Electricity, whose network is much denser than any of the others.

3.4. Ofgem's analysis of transmission company efficiency

There are three transmission companies that have been regulated by Ofgem: National Grid Company (NGC), which operates in England and Wales, and the transmission businesses of Scottish Power and Scottish Hydro-Electric. The Scottish companies have turnovers of around 10% and 4%, respectively, of NGC's. Each has a monopoly within its area of operation. Finding suitable comparators for NGC is difficult. The two Scottish transmission companies are much smaller than NGC, and getting comparable data on its most obvious international comparators, EdF (France) and ENEL (Italy), has proved difficult due to their vertically integrated structure. International analysis of NGC and other transmission companies has been undertaken as part of price reviews. Arthur Andersen (2000) examined NGC's costs
in comparison to other industries in the UK, Scottish Power's transmission business and three US utilities with similar transformer and circuit voltage configurations. This involved analysis of cost trends and average cost per unit of output indicators, not regression analysis. The resulting study formed part of the most recent completed review of the transmission operation business5 of NGC, for the 2001–2006 price review. A low percentage of costs was benchmarked (just 26% of allowed revenue) and these were initially assessed to be 97% efficient (Ofgem, 2000a, p. 9). In contrast, for electricity distribution (in the 2000–2005 price control) benchmarking was more significant, with 33% of allowed revenue being benchmarked, and these costs were assessed to be 78% efficient. This indicates either that NGC is almost fully efficient (which seems unlikely given that UK transmission prices are in the middle of the European price range; ETSO, 2004) or that Ofgem needs to undertake a more robust and discriminating benchmarking exercise in the future. However, Ofgem did not make direct use of the Arthur Andersen results (Ofgem, 2000a, p. 10). It produced two scenarios based on the consultants' report, comments by NGC and 'advice from its senior business advisers': one high cost and one low cost. In the end Ofgem used the high cost scenario as the basis for the final price controls (Ofgem, 2000b). The result was that NGC received a P0 of 0% and an X of 1.5% over the period of the review. The most recent benchmarking of the Scottish companies occurred in 1999 (Offer, 1999b) as part of the Scottish transmission price control for 2000–2001 to 2004–2005. Consultants PKF conducted an efficiency analysis of 'core transmission' operating costs for 1997–1998. The methodology of the analysis is not made clear in the regulatory documents but includes some comparison of the Scottish companies with each other, NGC and the distribution companies.
PKF also provided the closely related bottom-up efficiency analysis for the 1999 distribution price control review. PKF found that Scottish Power's benchmarked operating cost efficiency was 85%, but benchmarked costs were only 16% of 2000–2001 'core transmission' revenue; while for Scottish Hydro-Electric benchmarked operating cost efficiency was 88%, but benchmarked costs were only 12% of 2000–2001 revenue (Offer, 1999b, pp. 14, 51–52). A separate analysis of system operation was also carried out. Benchmarked operating costs in system operation were 100% efficient for Scottish Power and 71% efficient for Scottish Hydro-Electric, but the costs involved were only 2–3% of total transmission business turnover (Offer, 1999b, p. 17).
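Because only a small share of revenue was benchmarked, even sizeable measured inefficiencies translate into small overall revenue adjustments. A minimal sketch of this arithmetic, using the PKF figures above (the function name is ours, not Ofgem's or PKF's):

```python
def revenue_adjustment(efficiency_score: float, benchmarked_share: float) -> float:
    """Fraction of total allowed revenue removed when benchmarked costs
    are scaled down to their efficient level.

    efficiency_score: efficient costs / actual costs (e.g. 0.85)
    benchmarked_share: benchmarked costs as a share of allowed revenue (e.g. 0.16)
    """
    return benchmarked_share * (1.0 - efficiency_score)

# Scottish Power 'core transmission' opex: 85% efficient, 16% of revenue
print(round(revenue_adjustment(0.85, 0.16), 4))  # 0.024, i.e. ~2.4% of revenue
```

With 15% measured inefficiency applied to only 16% of revenue, the overall allowed-revenue reduction is around 2.4%, which illustrates why a low benchmarked share limits the bite of the exercise.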
5 The system operation business of NGC was subject to a separate revenue control exercise.
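The 2004 distribution opex rule described earlier (three regression scores, their average, then the higher of the average and the 14-firm opex score) can be sketched as follows. This is an illustrative reconstruction only; the function names and the sample numbers are ours, not Ofgem's:

```python
def ofgem_2004_opex_score(opex_14: float, opex_9_groups: float, totex_14: float) -> float:
    """Combine the three 2004 regression-based efficiency scores for one firm.

    Ofgem averaged the three scores, then took the higher of that average
    and the 14-firm opex score (scores = efficient costs / actual costs).
    """
    average = (opex_14 + opex_9_groups + totex_14) / 3.0
    return max(average, opex_14)

def efficient_opex(score: float, benchmarked_opex: float) -> float:
    """Apply the chosen score to the 2002-2003 benchmarked operating costs."""
    return score * benchmarked_opex

# Hypothetical firm: three scores and a 100m benchmarked opex base
score = ofgem_2004_opex_score(0.90, 0.84, 0.87)  # opex score exceeds the average
print(score, efficient_opex(score, 100.0))
```

The baseline efficient revenue that results would then be adjusted for volume growth and inflation, as described in the text, to give the starting revenue requirement.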
4. How Ofgem's benchmarking could be improved

The most important factor limiting the use of benchmarking by Ofgem is the lack of panel data to facilitate analysis over time and to provide more data points. Panel data would increase the robustness of the parameter estimates in the regression analysis and facilitate the inclusion of additional variables directly into the cost function. Given that Ofgem has collected data since 1991–1992, it should now have 13 years of data, up to 2003–2004. This would allow robust testing of cost driver and composite variable effects, especially time-varying environmental effects such as weather. It would also facilitate the analysis of productivity growth over time. In particular, it would allow the decomposition of productivity growth for each firm into technical change and efficiency change (or catch-up) using a technique like Malmquist DEA (see Coelli et al., 1998). This would allow the degree of convergence in efficiency scores to be detected and the rate of technical change in the most efficient frontier firms to be measured. More data would also facilitate the successful implementation of SFA and the direct incorporation of stochastic factors into the analysis of efficiency. Currently this is handled rather unsatisfactorily by the shifting of the frontier to the upper quartile of efficient firms in the 2004 Distribution Price Review. Panel data can be supplemented by international data. Although international data pose comparability problems markedly greater than those posed by domestic panel data, they do offer significant opportunities to increase sample size. Jamasb and Pollitt (2003) use 63 European distribution utilities (including 14 from the UK) in a pilot international efficiency study.
This study suggests that there may be a substantial benefit to UK consumers from an international sample, namely that the increased sample size is likely to lead to lower measured efficiency of UK utilities (the most efficient UK utility was only 63% efficient on one measure). International benchmarking also better reflects the emerging reality of international electricity companies that are capable of internal benchmarking between their international divisions (9 of Ofgem's 14 distribution companies are foreign owned). However, there is little doubt that such comparisons require good relations between regulators to make the necessary accounting adjustments to ensure data compatibility.

Quality of supply is not part of the distribution or transmission price control benchmarking exercise; there are separate incentive mechanisms for quality. As quality is a potentially important part of the business of these companies, it is important to have efficient trade-offs between quality, quantity and costs. This can only be done if measures of quality are included in the benchmarking exercise. Giannakis et al. (2005) demonstrate the difference that adding quality to the efficiency analysis of UK distribution companies makes. They suggest ways forward for Ofgem to incorporate quality and the price of quality into efficiency analysis in order to give companies the correct incentives.

The scope of the benchmarking of costs is falling over time. The Ofgem method of benchmarking has been very effective to date in delivering price reductions to consumers. This has been accompanied by rapid cost reductions and maintained levels of profitability among most of the regulated utilities. However, there is evidence that the scope for continuing to use benchmarking as it has been used in the past is increasingly limited. The reduced scope for efficiency improvements between 1999 and 2004 is indicated by the reduced dispersion around the frontier visible in Fig. 1: efficiency scores for operating costs are converging. In 1999 the average inefficiency for electricity distribution on the base opex regression analysis was 22% (25% for the bottom-up study). Benchmarked costs for a typical company constituted around 33% of allowed revenue.6 In the 2004 Initial Proposals the average inefficiency was 11% on the opex regression for 14 firms (reduced to 10% if the other regressions are used as described above). The percentage of revenue benchmarked for the same typical company had dropped to around 28%. As we have noted, the percentage of revenue subject to benchmarking was even lower for electricity transmission. The net effect of the benchmarking exercise in 2004 was to show modest savings on operating costs which were more than cancelled out by rises in allowed capital expenditure. This suggests that it is worth thinking about whether the process of comparing efficiencies and setting differential X factors will really add value by the time of the next price review.
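The Malmquist decomposition mentioned earlier in this section separates a firm's productivity growth into efficiency change (catch-up towards the frontier) and technical change (shift of the frontier itself). A minimal sketch, assuming the four period-t and period-t+1 DEA distance-function values have already been computed (see Coelli et al., 1998, for the full method; the illustrative numbers below are ours):

```python
from math import sqrt

def malmquist_decomposition(d_t_t: float, d_t1_t1: float,
                            d_t_t1: float, d_t1_t: float):
    """Output-oriented Malmquist index from four distance-function values.

    d_a_b = distance function using period-a technology, evaluated at the
    firm's period-b input/output bundle (0 < d <= 1 on or below the frontier).
    Returns (efficiency_change, technical_change, malmquist_index), where
    malmquist_index == efficiency_change * technical_change.
    """
    efficiency_change = d_t1_t1 / d_t_t  # catch-up to the frontier
    technical_change = sqrt((d_t_t1 / d_t1_t1) * (d_t_t / d_t1_t))  # frontier shift
    return efficiency_change, technical_change, efficiency_change * technical_change

# Hypothetical firm: moves closer to a frontier that is itself advancing
ec, tc, m = malmquist_decomposition(0.8, 0.9, 0.99, 0.72)
print(ec, tc, m)  # ec > 1 (catch-up) and tc > 1 (technical progress)
```

With panel data, computing this for each firm and year would let the convergence (catch-up) the text describes be separated from genuine frontier shift.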
The underlying theoretical robustness of Ofgem's benchmarking is weak. This is because of the simple nature of Ofgem's regression analysis which, for data availability reasons, cannot compare well to state-of-the-art cost function estimation (see Coelli et al., 1998). The main problems include:

(i) The rather arbitrary implementation of cost function analysis, without attention to stochastic factors and without differential input prices. Stochastic factors are potentially significant, especially as a significant part of costs arises in response to faults, which are stochastic.

(ii) The lack of input prices means that measures of efficiency are merely technical, excluding the possibility of allocative inefficiency related to differential input price conditions.7

(iii) The averaging of results from different methods is an unsatisfactory way to handle method robustness problems: the average of several individually unsatisfactory measures is no more satisfactory than the measures being averaged.

(iv) The lack of attention to possible trade-offs between capital expenditure and operating expenditure is also potentially a serious distortion, especially as variations in capital costs become much more significant in the setting of final X factors.

The points above suggest that a radical reconsideration of both the purpose and method of benchmarking is required if Ofgem is to avoid a benchmarking exercise which is (a) increasingly peripheral to the overall setting of X; (b) likely to create perverse incentives; and (c) more open to legal challenge. One possibility is that a single X, equal to the average rate of total factor productivity growth in the sector, could be set for a longer price control period of, say, 10 years. This idea has been explored in an Australian context (see Farrier Swier Consulting, 2002).
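The single-X alternative mentioned above could, for example, set X equal to the sector's average historical TFP growth rate and apply it uniformly under an RPI-X cap for the whole control period. A minimal illustration (the growth rates and function names are hypothetical, not taken from any review):

```python
def single_x_from_tfp(tfp_growth_rates: list[float]) -> float:
    """Set a uniform X equal to the sector's average annual TFP growth rate."""
    return sum(tfp_growth_rates) / len(tfp_growth_rates)

def allowed_price_path(p0: float, rpi: float, x: float, years: int) -> list[float]:
    """RPI-X price cap: allowed price index under inflation rate rpi and offset x."""
    prices = [p0]
    for _ in range(years):
        prices.append(prices[-1] * (1.0 + rpi - x))
    return prices

# Hypothetical sector-wide annual TFP growth rates
x = single_x_from_tfp([0.02, 0.03, 0.025, 0.025])  # -> 0.025
path = allowed_price_path(100.0, rpi=0.025, x=x, years=10)
```

When RPI equals X, as in this example, the nominal cap is flat; the firm-specific comparisons that drive differential X factors drop out entirely, which is the simplification the text contemplates.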
5. Conclusions

In closing we offer two sets of conclusions. First, on the use of benchmarking to date by Ofgem:

1. Lack of comparable data has limited the sophistication of the benchmarking undertaken.

2. The need to produce justifiable X factors, calculated on a reasonably consistent basis, for use in negotiations with the companies has forced Ofgem to proceed with benchmarking in spite of practical difficulties.

3. There has been a substantial data collection and standardisation effort which has produced numbers that have formed an agreed basis (for companies and other stakeholders) for benchmarking.

4. The actual benchmarking methods undertaken are open to question. Other methods could have been used, and the comparison of results from several different methods could have been done more systematically.
6 This is calculated by taking the base opex figure used in the benchmarking and dividing it by the price control revenue for the last year of the current price control period for the company Midlands Electricity.
7 Farrell (1957) divides overall productive efficiency into two components: technical efficiency and allocative efficiency. Technical efficiency shows how far a given firm is from the frontier; allocative efficiency further shows how far it is from the optimal point (given relative input prices) on the frontier. Ofgem's efficiency measure ignores the effect of input price differentials on overall productive efficiency.
Second, on the continuing use of benchmarking in the future:

5. If individual firm X factors informed by the benchmarking of costs are to be continued, then more attention needs to be paid to the most serious methodological issues raised by the current approach. These are the lack of attention to input prices and prices for quality, and the modelling of capex–opex trade-offs.

6. The lack of data points is now acute. Mergers have reduced the number of independent groups to just 7 in 2004. Use must be made of either panel data or international data (and preferably both) if any sensible efficiency analysis is to be conducted in the future.

7. Serious consideration ought to be given to abandoning comparative cost benchmarking and switching to a simpler system of X factor setting, possibly based on a single X factor for the whole industry over a longer price control period.

In spite of the above criticisms, during the period 1990–2004 the incentive-based regulation of electricity distribution and transmission has produced impressive price reductions and productivity improvements while maintaining industry profitability. This has been achieved because Ofgem has successfully managed not only the technicalities of benchmarking but also the equally important relations with its company, consumer and government stakeholders. This contrasts sharply with the recent experience in the Netherlands, where the 2001–2003 electricity distribution price review failed (Nillesen and Pollitt, 2004).
References

Arthur Andersen, 2000. Review of NGC's operating cost efficiency for 2002 to 2006. Ofgem, London.
Bauer, P.W., Berger, A.N., Ferrier, G.D., Humphrey, D.B., 1997. Consistency conditions for regulatory analysis of financial institutions: a comparison of frontier efficiency methods. Federal Reserve System Finance and Economics Discussion Series, paper 1997-50.
CEPA, 2003a. Assessing efficiency for the 2005 distribution price control review. Cambridge Economic Policy Associates for Ofgem, London.
CEPA, 2003b. Productivity improvements in distribution network operators. Cambridge Economic Policy Associates for Ofgem, London.
CEPA, 2004. Peer review of the top down benchmarking analysis conducted by Ofgem for distribution price control review 4. Cambridge Economic Policy Associates for Ofgem, London.
Coelli, T., Rao, D.S.P., Battese, G.E., 1998. An Introduction to Efficiency and Productivity Analysis. Kluwer Academic Publishers, Boston.
ETSO, 2004. Benchmarking on transmission pricing in Europe: synthesis 2003. European Transmission System Operators, Brussels.
Farrell, M.J., 1957. The measurement of productive efficiency. Journal of the Royal Statistical Society, Series A 120, 253–281.
Farrier Swier Consulting, 2002. Comparison of building blocks and index-based approaches. Utility Regulators Forum, June 2002.
Giannakis, D., Jamasb, T., Pollitt, M., 2005. Benchmarking and incentive regulation of quality of service: an application to the UK electricity distribution utilities. Energy Policy 33 (17), 2256–2271.
Jamasb, T.J., Pollitt, M.G., 2001. Benchmarking and regulation: international electricity experience. Utilities Policy 9 (3), 107–130.
Jamasb, T.J., Pollitt, M.G., 2003. International benchmarking and yardstick regulation: an application to European electricity utilities. Energy Policy 31 (15), 1609–1622.
Jamasb, T.J., Nillesen, P., Pollitt, M., 2004. Strategic behaviour under regulation benchmarking. Energy Economics 26, 825–843.
Littlechild, S.C., 1983. The Regulation of British Telecom's Profitability. HMSO, London.
MMC, 1995. Scottish Hydro-Electric plc: a report on a reference under section 12 of the Electricity Act 1989. Monopolies and Mergers Commission, London.
MMC, 1997. Northern Ireland Electricity plc: a report on a reference under Article 15 of the Electricity (Northern Ireland) Order 1992. Monopolies and Mergers Commission, London.
Nillesen, P.H.L., Pollitt, M.G., 2004. The consequences for consumer welfare of the 2001–2003 electricity distribution price review in the Netherlands. CMI Electricity Project Working Paper No. 50.
Ofgem, 1999. Reviews of Public Electricity Suppliers 1998–2000: Distribution Price Control Review Final Proposals. Ofgem, London.
Ofgem, 2000a. The transmission price control review of the National Grid Company from 2001: transmission asset owner, Draft Proposals. Ofgem, London.
Ofgem, 2000b. The transmission price control review of the National Grid Company from 2001: transmission asset owner, Final Proposals. Ofgem, London.
Ofgem, June 2004a. Electricity Distribution Price Control Review, Initial Proposals. Ofgem, London.
Ofgem, September 2004b. Electricity Distribution Price Control Review, Update Paper. Ofgem, London.
Ofgem, November 2004c. Electricity Distribution Price Control Review, Final Proposals. Ofgem, London.
Offer, 1999a. Reviews of Public Electricity Suppliers 1998–2000: Distribution Price Control Review Initial Proposals. Offer, London.
Offer, 1999b. Reviews of Public Electricity Suppliers 1998–2000: Scottish transmission price control: Draft Proposals. Offer, London.
Shleifer, A., 1985. A theory of yardstick competition. Rand Journal of Economics 16 (3), 319–327.
Williamson, O.E., 1976. Franchise bidding for natural monopolies – in general and with respect to CATV. Bell Journal of Economics 7, 73–104.