WORLD HEALTH REPORT 2000
needed for the econometric and statistical models used by WHO. WHO should be less ambitious, more modest, and more relevant in the collection and use of data. The writers of the World Health Report 2000 dismiss too quickly what they term “process indicators”—eg, the percentage of people with access to drinkable water, or waiting lists for treatment of specific illnesses—and focus instead on synthetic, compound indicators of health outcomes, such as disability, which is of dubious comparability between nations.

Further difficulties arise when arbitrary and subjective information is used—eg, in the report’s evaluation of health systems’ responsiveness. In my article, I criticised the report for the use of a definition of responsiveness derived mainly from what Murray and Frenk call “key informants”—ie, health experts, who are likely to repeat the prevailing conventional wisdom in international health policy circles. Not surprisingly, systems based on managed competition and markets, which are currently fashionable, fare better in the responsiveness league.

As I expressed in my article, I feel that patients’ own assessments of their health systems might be more accurate than key informants’ judgments of responsiveness. Murray and Frenk answer that there are difficulties in choosing patients’ opinions, problems of which I am aware. But why these difficulties are greater than those created by choosing key informants is left unanswered. Murray and Frenk plan to use data from household surveys to test the reliability of the information provided by key informants. But, again, they do not explain why household surveys are better than patients’ own assessments. Both household and patient surveys could probably be useful for comparison of health system performance within countries; but comparisons among countries are a different matter.
Murray and Frenk seem unaware of the results of many studies that show the complexity in attempts to compare social indicators across countries.

With respect to fairness, Murray and Frenk confuse this issue with effort expended by families to pay for health care. In the World Health Report 2000, fairness is measured by the amount that different households pay for medical care—ie, if the wealthy spend more on health care than do the poor, the system of funding is progressive. In my article, I questioned their definitions of fairness and the indicators used to measure it. Fairness cannot be measured without assessment of health care expenditures and the distributive impact of those expenditures on households. A rich household could have very high health care expenses and a poor household could have very low health care expenses, yet the system of health care funding could still be very regressive. For example, if the first household were enrolled in a private health insurance system it could spend health resources on superfluous consumption, without redistribution of resources to the second household, which could be covered by public medical care. Fairness is a notion that links the ability to pay with need; thus, payers and users need to be in the same health care pool. Murray and Frenk continue to ignore this point.

I certainly agree with Murray and Frenk on the need for accountability and transparency. I also see the need for WHO to provide reliable and comparable data that countries can use to compare themselves with others. What I am against is the use of poor indicators that are then surrounded with a statistical and mathematical discourse to give an appearance of rigour that they in fact lack. Furthermore, I do not trust the single indicator of quality of health system performance put together by WHO from many outcome indicators.

I protest at Murray and Frenk’s misrepresentation of what I wrote in my article. They are
incorrect in saying that “Vicente Navarro argues that other United Nations agencies do not report rankings”. In fact, I wrote that “there is not a single United Nations indicator for ranking countries by economic performance”, which is a very different statement. Economic performance is too complex to be measured by a single indicator. The United Nations ranks countries’ economic performance by measurement of several indicators, such as unemployment, gross national product per head, and productivity rates. Paradoxically, for the equally complex measure, social performance, the United Nations (through the United Nations Development Programme) ranks countries by quality of life, a single indicator derived from many ingredients.

This quality-of-life indicator has caused complications. Spain, for example, moved from 7th to 21st position in the quality-of-life ranking in only 1 year, because the weights given to different components in the formula were changed. The media wrongly accused the government of dramatically undermining Spaniards’ quality of life, but no real change in quality of life had happened.

Governments frequently assume that the components selected by WHO or the United Nations to produce a compound indicator are factors whose development will move the nation upward in the international ranking. Thus, a ranking defined by technocrats becomes a policy guide, which is profoundly wrong because it gives too much power to those defining the basis for the ranking. Contrary to what Murray and Frenk assume, I think that countries should be ranked; but they should be ranked on indicators such as infant mortality, deaths at work, waiting lists for serious operations, or other indicators that are reasonably easy to obtain and are comparable across countries.
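The sensitivity of composite rankings to weighting choices, as in the Spanish example above, can be sketched with a toy calculation. All component scores and weights below are invented for illustration (they are not the UNDP’s actual formula): two countries whose underlying data do not change at all swap places in the ranking when the weights are revised.

```python
# Hypothetical sketch: re-weighting the components of a composite index
# reorders the ranking even though no underlying score has changed.
scores = {
    "Country A": {"health": 0.90, "education": 0.60, "income": 0.50},
    "Country B": {"health": 0.60, "education": 0.85, "income": 0.80},
}

def rank(weights):
    # Weighted sum of components for each country, highest first.
    totals = {country: sum(weights[k] * v for k, v in comps.items())
              for country, comps in scores.items()}
    return sorted(totals, key=totals.get, reverse=True)

old_weights = {"health": 0.6, "education": 0.2, "income": 0.2}
new_weights = {"health": 0.2, "education": 0.4, "income": 0.4}

print(rank(old_weights))  # Country A leads under the old weights
print(rank(new_weights))  # Country B leads under the new weights
```

The ranking reversal is produced entirely by the choice of weights, which is the point at issue: whoever sets the weights effectively sets the league table.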
What I am against is ranking countries by synthetic indicators that are unreliable, non-comparable, and that, as in the World Health Report 2000, reproduce a set of values that are not conducive to better health.

Vicente Navarro
Public Policy Program, Johns Hopkins University, Baltimore, MD, USA, and Pompeu Fabra University, Spain (Prof Vicente Navarro PhD)

Correspondence to: Prof Vicente Navarro, Public Policy Program, Johns Hopkins University, Baltimore, MD 21205-1996, USA (e-mail: [email protected])
1 Murray C, Frenk J. World Health Report 2000: a step towards evidence-based health policy. Lancet 2001; 357: 1698–700.
2 Navarro V. Assessment of the World Health Report 2000. Lancet 2000; 356: 1598–601.
3 Navarro V, Benach J. Health inequalities in Spain. Madrid: Ministry of Health and Consumer Affairs, 1994.
4 Schalick LM, Hadden WC, Pamuk E, Navarro V, Pappas G. The widening gap in death rates among income groups in the United States from 1967 to 1986. Int J Health Serv 2000; 30: 13–26.
When the World Health Report 2000 was published last year, most newspapers in the UK focused on its league tables, which ranked countries’ overall health system “performance”. Eyebrows were raised in surprise that France and Italy led the world, followed by some small states: San Marino (third), Andorra (fourth), Malta (fifth), and Singapore (sixth). But eyeballs rolled in disbelief when the USA scored highest in terms of responsiveness to expectations of the population, and Colombia in fairness of financial contribution. Health policy analysts were frankly bemused. This response was unfortunate, because it deflected attention from some interesting and complex thinking in the main text of the report, and undermined its credibility and value. Institutions, and individuals who are concerned about
THE LANCET • Vol 357 • May 26, 2001
world health, have been hoping for the leadership role of WHO to be clearly asserted, and for WHO to recover its pre-eminence as a source of expertise and authoritative analysis. The World Health Report 2000 was viewed as a key milestone on the road to a strengthened WHO. Analysts, academics, and Ministries of Health and their advisors have scrutinised the evidence on which the rankings are based, and have become increasingly critical of the criteria used to judge health systems, the methods for ranking systems, and the shortcomings arising from the absence of real data. Extensive manipulation of data, adjustments for missing information, and data derived from small-scale pilot studies of responsiveness led one analyst1 to term the exercise “a virtuosic display of skating on thin ice”.

Most health policy analysts would support the search for better measures to provide evidence to inform health policy, but few would see this search as an exercise that can be stripped of values and ideology. Navarro,2 soon after the report appeared, said that “WHO is not a scientific institution but rather a political institution whose positions and reports must be assessed both scientifically and politically”. Murray and Frenk,3 however, do not agree, and believe that good analysis can overcome politics. But this belief is not tenable, because performance criteria represent social and political judgments, most notably those that concern fairness, rather than objective truth. Countries’ histories and values, even if contested, will affect what they seek to achieve through their health systems and domestic health policy, and the extent to which external analysis is judged relevant. Furthermore, there seems to be a gap between Murray and Frenk’s intellectual acceptance of the many factors that affect health, and the scope of the World Health Report 2000, which focuses most of its attention on health care.
Most analysts, when assessing people’s health and what policies would lead to improvements in health, would have difficulty in separating the health system (“all the activities whose primary purpose is to promote, restore, or maintain health”)4 from the many factors outside it (employment, security, and environmental pollution, among others), and from the complex effect on health of social inequalities.5 The focus of the World Health Report 2000 on obtaining better data on health care systems is laudable. Furthermore, the introduction of the notions of responsiveness and fairness is to be warmly welcomed—but more discussion of how to measure these complex ideas is necessary if countries are to sign up to the analysis and act on its conclusions.

What are the reports produced by United Nations organisations for? At a minimum, the United Nations is well placed to collect data from countries. Compilation of tables that describe events from around the world is useful for comparisons between countries and over time. To hold countries to account for failures in their health systems helps nationals and others who wish to change the status quo. But data need to be simple, useful, and believable, and to avoid undue dependence on criteria that do not have universal acceptance. The United Nations Development Programme’s human development index and UNICEF’s child health
reports have produced important comparative evidence, despite imperfect data. For example, UNICEF’s tables from the State of the World’s Children Report 2001 show that global coverage of immunisation against the six major diseases of childhood dropped from 90% in the late 1990s to 75% in 2000. UNICEF identified 19 countries in which diphtheria-tetanus-pertussis vaccine coverage had dropped to below 50%.6 Thus, these data indicate the failure of health systems to avoid vaccine-preventable deaths, and might be better at stimulating international and national action than rankings of disability-adjusted life expectancy, or even of the responsiveness of health systems—at least until we have transparent and simple indicators of health system performance.

United Nations’ reports might initiate discussion between civil society organisations about failings in systems, and mobilise support for change and a reallocation of resources. Again, for these changes to occur, data need to be perceived as legitimate, and presented simply. If reports are written in an academic, overly opaque style, and supported by data that are complex and contested, they are unlikely to be acted on by policy makers. The World Health Report 2000 does not contain clear links between its performance tables and its interesting, but complex, analytical framework (three overall goals and four vital functions), and was little discussed outside WHO headquarters before being published. This omission has been noted by critics, who commented that 30 of 32 references on methods were by the report’s writers,7 and has been acknowledged by the WHO Executive Board,8 which has called for broader discussion and scientific analysis. The creation of a website7 for posting contributions is a step in the right direction. Many health policy analysts were challenged and stimulated by the report’s ideas.
WHO’s goal should now be to lead debate on new ways of thinking about, and measurement of, important notions of fairness and responsiveness, the contribution of health systems to improved health, and how best to improve the functions of health systems.

We thank Barbara McPake for helpful points on a first draft.
Gill Walt, Anne Mills
London School of Hygiene and Tropical Medicine, Keppel Street, London WC1E 7HT, UK (Prof G Walt PhD, Prof A Mills PhD)

Correspondence to: Prof Gill Walt (e-mail: [email protected])
1 Williams A. Science or marketing at WHO? A commentary on “World Health 2000”. Health Economics 2001; 10: 93–100.
2 Navarro V. Assessment of the World Health Report 2000. Lancet 2000; 356: 1598–601.
3 Murray C, Frenk J. World Health Report 2000: a step towards evidence-based health policy. Lancet 2001; 357: 1698–700.
4 WHO. World Health Report 2000. Geneva: WHO, 2000.
5 Leon D, Walt G, eds. Poverty, Inequality and Health. Oxford: Oxford University Press, 2000.
6 Hardon A. Immunisation for all? HAI-Lights 2001; 6: 1.
7 Braveman P, Viacava F, Travassos C, et al. Scientific concerns regarding the World Health Report 2000. 2001. http://www.fiocruz.br/cict/dis/verbra.htm (accessed Jan 6, 2001).
8 WHO. Health systems performance assessment. Geneva: WHO, 2000 (EB107.R8).