Public Health (2008) 122, 914–922
www.elsevierhealth.com/journals/pubh
Original Research
The Data for Decision Making project: assessment of surveillance systems in developing countries to improve access to public health information

K. Wilkins a,*, P. Nsubuga b, J. Mendlein c, D. Mercer d, M. Pappaioanou e

a National Center for Immunization and Respiratory Disease, Centers for Disease Control and Prevention, 1600 Clifton Rd. NE, Mailstop E-05, Atlanta, Georgia 30333, USA
b Division of International Health, Epidemiology Program Office, Centers for Disease Control and Prevention, USA
c Division of Epidemiology and Surveillance Capacity Development, Coordinating Office for Global Health, Centers for Disease Control and Prevention, USA
d Vaccine Preventable Diseases and Immunization, Division of Health Programs, World Health Organization, Regional Office for Europe, Denmark
e Association of American Veterinary Colleges, USA

Received 25 January 2007; received in revised form 13 October 2007; accepted 6 November 2007. Available online 19 May 2008.
KEYWORDS Health information system; Decision making; Surveillance
Summary

Objective: By using timely, high-quality information, ministries of health can identify and address priority health problems in their populations more effectively and efficiently. The Data for Decision Making (DDM) project developed a conceptual model for a data-driven health system. This model included a systematic methodology for assessing access to information, to be used as a basis for improvement in national health surveillance systems.

Study design and methods: The DDM surveillance assessment methodology was applied to six systems in five countries by staff from the US Centers for Disease Control and Prevention (CDC). Ministry of health personnel at national, regional, district and local levels were interviewed using either an informal conversation or an interview guide approach, and their methods for collecting and using data were reviewed. The attributes of timeliness, accuracy, simplicity, flexibility, acceptability and usefulness were examined. Problems and their underlying causes were identified.

Results: The problems preventing decision makers from having access to information are many and complex. The assessments identified no fewer than eight problem areas in each system that impeded decision makers' access to information. The most common deficiencies concerned the design of the system, the ongoing training of personnel and the dissemination of data from the system.
* Corresponding author. Tel.: +1 404 639 5298; fax: +1 404 639 8573. E-mail address: [email protected] (K. Wilkins).
0033-3506/$ - see front matter © 2007 The Royal Institute of Public Health. Published by Elsevier Ltd. All rights reserved. doi:10.1016/j.puhe.2007.11.002
Conclusions: To improve the availability of information to public health decision makers, it is recommended that: (a) surveillance system improvement begin with a thorough evaluation of existing systems using approaches outlined by the CDC and the Health Metrics Network of the World Health Organization; (b) evaluations be designed to identify specific causes of deficiencies; (c) interventions for improving systems be directly linked to the results of the evaluations; and (d) efforts to improve surveillance systems include sustained attention to underlying issues of training and staff support. The assessment tool presented in this report can be used to facilitate this process.
© 2007 The Royal Institute of Public Health. Published by Elsevier Ltd. All rights reserved.
Introduction

In 1990, the US Centers for Disease Control and Prevention (CDC), in collaboration with the US Agency for International Development (USAID), designed the Data for Decision Making (DDM) project with the goal of enhancing the use of data in the decision-making process through capacity building at national, regional and district levels, and increasing decision makers' access to health information. During its design phase in 1990, the DDM project developed a conceptual model for a data-driven health system that outlined the roles of decision makers, technicians and information. This model included three components: (a) decision makers who are knowledgeable about the usefulness of relevant epidemiological information for problem-solving; (b) advisors who are capable of analysing epidemiological data; and (c) information systems that make relevant, valid and timely data available for use at all levels of the health system. Also included were technical assistance for improving access to information, and interdisciplinary training for programme managers at central and decentralized levels of the ministry of health, tailored to the needs of the country.1 Between 1990 and 2003, DDM project personnel assisted 15 countries in identifying and addressing barriers to the use of data. Assessments identified multiple barriers to data use. These barriers and the constellation of interventions put in place to address them, including the interdisciplinary training programme, are described elsewhere.1 In five of the 15 countries, improving access to information was identified as important to increasing their use of health data. This report focuses on one of the three components of the DDM project: information systems. By using timely, high-quality information, ministries of health can identify and address priority health problems in their populations more
effectively and efficiently. Health information systems are necessary for ongoing disease surveillance and evaluation of the implementation of public health policies, strategies and interventions.2 Data from health information systems can be used to detect, investigate and control an outbreak of acute infectious disease; monitor disease morbidity and mortality; provide a basis for predicting the future course of an infectious or non-infectious disease; assess the need for various interventions; support policy formulation and resource allocation; guide research; and aid the clinical care of patients via the appropriate allocation of limited resources.3 Such data can also be used to improve the management of resources for the delivery of public health services and programmes, project future needs and monitor costs.4 Although they are widely perceived as being a necessary component of effective health programmes, information systems in developing countries do not yet generally provide sufficiently useful information for effective public health management. Problems cited in the literature include: lack of trained personnel, diagnostic laboratories and funds to support surveillance5; inadequate data-collection systems; an apparent lack of interest and motivation among personnel; a lack of data analysis among data managers6; overly complex systems; delays in reporting urgent events; incomplete reporting7–9; a lack of dissemination and feedback of information7; a lack of reliability in case reporting10; and a perception that data users did not have input into the collection of information.1 Much attention has been paid to the need to improve the quality of information provided and to
the evaluation of surveillance systems. The CDC and the World Health Organization (WHO) have both published guidelines for the systematic, quantitative evaluation of disease surveillance systems11–14 that can also be applied to health and management information systems. Although several evaluations of surveillance systems have been published,7–9 these tend to focus on a single system and only identify problems in the most general terms. To the authors' knowledge, no report has been published to bring together the lessons to be learned from various evaluations. Efforts to improve health information systems have focused on the development of new forms and computerization rather than taking a system-wide approach to identifying and solving problems. For instance, at the workshop entitled 'Issues and Innovation in Routine Health Information in Developing Countries' held in Potomac, Maryland, 14–16 March 2001, five of 10 reported initiatives centred on the creation of new forms. In a sixth initiative, computerization was the only activity proposed (http://www.cpc.unc.edu/measure/publications/html/rhino2001). These efforts, as documented in project reports, do not appear to be based upon the results of systematic evaluations. DDM personnel developed and tested a systematic methodological approach for the assessment of surveillance systems that facilitates the development of appropriate recommendations and plans for action. This approach takes the decision makers', rather than the system developers', point of view. This paper presents a summary of the results of assessments done by the DDM teams. When viewed together, these assessments demonstrate the complex nature of surveillance systems and the requirement for multifaceted, sustained efforts in order to produce timely, accurate, useful information that can be used in decision making.
Methods

Definitions

Access to information is defined as a situation in which decision makers have information they feel they need, that is of adequate quality and is in a form they can understand, when they need to make a decision. A health information system is defined as 'a combination of vital and health statistical data from multiple sources, used to derive information about the health needs, health resources, costs, use of health services and outcomes of use by the
population of a specified jurisdiction'.15 For the purposes of DDM projects, any one or a combination of the following were included in surveillance systems: surveillance systems for notification of diseases of epidemic potential (typically including those that require immediate or weekly reporting); information systems that require periodic (monthly, quarterly or other interval) reporting of diseases, injuries, behaviours or other conditions used for planning, resource allocation, policy determination and evaluation at national level (management/service/coverage information as well as data on morbidity and mortality); mortality reporting through vital registration systems; reporting as required by prevention and control programmes addressing specific diseases or groups of diseases, termed 'vertical programmes'; sentinel reporting systems; and periodic surveys and studies done by or for the ministry of health, e.g. demographic and health surveys.
Framework

Between 1994 and 1997, a framework was developed to characterize attributes of information systems and to identify factors that facilitate or hinder performance. This framework is based primarily on the CDC's 'Guidelines for Evaluating Surveillance Systems'11,13 as well as the 'Principles and Practice of Public Health Surveillance',17 a literature review, and expert opinion. The information system attributes of timeliness, simplicity, flexibility, acceptability, sensitivity (the proportion of cases of a disease or health condition detected, or the system's ability to detect epidemics), predictive value positive (the proportion of persons identified who have the condition under surveillance) and representativeness from the guidelines13 were used as outcomes. Sensitivity, predictive value positive and representativeness were regrouped under 'accuracy'. Usefulness of information was added as an important attribute that had not been included in the first publication of the guidelines.13 Although this framework is consistent with that of the Health Metrics Network (HMN),14 it was developed and used prior to the publication of the first HMN framework.
For each of the six attributes, two to eight factors that contributed to the positive or negative performance of the information system were identified using a methodology for problem analysis described by Dyal.16 An illustrative problem analysis for one attribute is presented in Fig. 1, and a list of possible factors contributing to negative performance, by attribute, is presented in Table 2. An assessment tool was subsequently developed that included a list of these 26 possible negative or limiting factors, to be used as a prompt to evaluators in identifying actionable underlying causes of system difficulties that would be addressed in DDM workplans.
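To illustrate how an attribute-and-factor checklist of this kind can be operationalized, the sketch below represents the six attributes with a handful of abridged factor names and tallies, for one system, the two totals reported in this study (number of problem attributes and number of contributing factors). This is purely illustrative: the structure mirrors the DDM tool as described, but the factor names are abbreviated and the example markings are invented, not DDM data.

```python
# Illustrative sketch of a DDM-style assessment checklist.
# Factor names are abridged and the example markings below are
# hypothetical; they are not taken from the DDM assessments.
FRAMEWORK = {
    "timeliness": ["slow communications", "unmotivated/untrained staff",
                   "restrictive or missing case definitions"],
    "accuracy": ["unclear report flow", "data not used at collection point",
                 "inadequate training and supervision"],
    "simplicity": ["too many forms", "unclear response roles"],
    "flexibility": ["cannot report unusual events"],
    "acceptability": ["unresponsive to local needs"],
    "usefulness": ["no dissemination", "low confidence in data quality"],
}

def tally(marked: dict) -> tuple:
    """Return (problem attributes, contributing factors) for one system,
    given the factors an assessment team marked under each attribute."""
    attrs = sum(1 for a in FRAMEWORK if marked.get(a))
    factors = sum(len(fs) for fs in marked.values())
    return attrs, factors

# Hypothetical markings for a single assessed system:
system_a = {
    "timeliness": ["slow communications"],
    "usefulness": ["no dissemination", "low confidence in data quality"],
}
print(tally(system_a))  # (2, 3)
```

Applied across six systems, such tallies yield the per-system totals of the kind shown in Table 2.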
Setting

All systems assessed by the DDM project were included in this report. Between 1992 and 2000, ministries of health in five countries in Latin America, Africa, the Middle East and Asia requested the systematic assessment of their health information systems. Country populations ranged from 4 to 89 million, and both middle- and low-income countries were included (see Table 1 for country characteristics and systems assessed). The names of the countries are withheld to preserve some degree of anonymity.
Data collection

Visiting assessment teams included two to six members with skills in epidemiology, management, planning and information systems design. An in-depth assessment of information sources, scope and dissemination was undertaken. Assessment teams began by interviewing high-level ministry officials using either an informal conversation or interview guide approach.18 The officials to be interviewed were identified by the ministry staff who invited DDM to conduct the assessments. Persons interviewed were, for the most part, ministry employees. In one country, the paediatric association, diabetes society and civil office staff were also interviewed as potential data providers and users. In the field, teams interviewed personnel from all levels of the healthcare systems and directly observed data collection and analysis tasks. Teams identified principal problems and system attributes that were limiting access to information. They also questioned key informants at provider, intermediate and decision-maker levels to identify underlying causes of identified problems. The findings of each team were incorporated into a DDM workplan that contained targeted strategies (e.g. training and technical assistance on specific health problems) contributing to the goal of improving the use of data in the decision-making process. The workplan was then submitted to the ministry of health and to USAID for approval.

Analysis

A retrospective review of the reports of the five information system assessments undertaken between 1992 and 1998 was conducted to evaluate
Figure 1 Illustrative problem analysis of factors contributing to poor performance for the attribute 'lateness of available information' (information available late: timeliness), Data for Decision Making assessment of health information systems. Contributing factors shown in the figure: information not used at point and time of collection; lack of understanding or motivation of personnel to report in a timely fashion; no feedback received; personnel not paid; staff not trained; lack of authority (real or perceived) to implement change based on data; system insensitive to informal reports, e.g. through media or rumour; case definitions too restrictive; communications too slow (mail speed or fees; no phones, or cost of calls; messenger service unreliable or occasional); inefficient/insufficient computerization of information.
Table 1 Selected demographic characteristics of countries where Data for Decision Making assessments of access to health information were implemented, 1992–2000.

Country 1: year of assessment 1992; region Africa; population 5,400,000; infant mortality rate (per 1000) 75–100; per capita gross national product (1995) $150; information system(s) assessed: multiple, including disease notification and management.

Country 2: year of assessment 1994; region Latin America; population 5,000,000; infant mortality rate 50; per capita GNP $650; systems assessed: health information system (immediate, weekly, monthly) from outpatient facilities, other systems of vertical programmes.

Country 3: year of assessment 1993; region Latin America; population 89,820,000; infant mortality rate 47; per capita GNP $3800; systems assessed: disease surveillance (immediate, weekly), periodic surveys, health service data, other systems of vertical programmes.

Country 4: year of assessment 1992; region South East Asia; population 7,700,000; infant mortality rate 53; per capita GNP $1010; systems assessed: management information system, sentinel reporting system, other systems of vertical programmes.

Country 5: years of assessment 1998 and 2000; region Middle East; population 4,200,000; infant mortality rate 46; per capita GNP $1510; systems assessed: mortality reporting, disease surveillance (immediate, weekly, monthly).
the applicability of the assessment tool. The tool was revised to include some potential problems that had not been identified previously. In 2000, the revised tool was field-tested with the assessment of a second information system within one of the five original countries (disease surveillance, Country 5).
Results

Each system was judged to have deficiencies in at least four of the six attributes, with two systems having five deficiencies and three systems having six deficiencies (Table 2). Overall, the lowest number of contributing factors identified by assessment teams was eight (System 5), while the highest was 15 of the 26 possible factors identified by DDM staff. The most common problems for the provision of high-quality, timely information among the six systems studied concerned the design of the system (complexity, large number and difficulty of use of forms, responsiveness to the information needs of decision makers), people (not having enough motivated personnel with proper training and supervision) and dissemination. The framework developed on the basis of the five initial assessments was found useful by the team in assessing the sixth system.
Timeliness was considered to be a problem in five of the six information systems examined. Of the five systems in which information was received late, the most common contributing factors identified were a lack of motivated or trained personnel, difficulties in communication (payment for phone calls, telexes or other means), and poor or non-existent case definitions. In two of these five systems, inefficient or insufficient use of computers was noted as a contributing factor; in both cases, computers were available but data entry was described as slow or behind schedule. Data from existing systems were considered to lack accuracy to some degree in all systems. Further examination demonstrated that the most commonly identified contributing factors were problems with data flow: how and to whom cases and conditions were to be reported was not clear; data and information were not used at the point of collection, losing an opportunity to identify and correct errors before sending them to the next level; and personnel responsible for reporting lacked training and supervision. In only three of the six systems considered to provide inaccurate information did users report that forms were difficult to complete, contributing to the rate of errors. Simplicity was reported as a problem in five of the six systems. The most common contributing factors were the use of multiple reporting forms and a lack of clarity in response responsibilities.
Table 2 Deficiencies identified in Data for Decision Making country assessments of information systems (Systems 1–6). Attributes and their possible contributing factors were as follows. (The markings showing which individual items applied to each system are not reproduced; only the per-system totals are retained.)

Lack of timeliness: means of communication (telephone, fax, mail) too slow; lack of understanding or motivation of personnel to report on time; case definitions too restrictive or non-existent; system insensitive to informal reports (media, rumour); inefficient/insufficient computerization.

Lack of accuracy: forms difficult to fill in correctly; flow of reports inappropriate; data not used at point of collection; case definitions not specific or not well applied; appropriate laboratory confirmation not applied; data not verified at point of collection; inadequate training and supervision; system coverage not representative of target population.

Lack of simplicity: flow of reports not clear or confusing; case definitions too complex for available resources; too many forms; roles and responsibilities for response unclear.

Lack of flexibility: system incapable of reporting unusual events; system unresponsive to changing needs of decision makers.

Lack of acceptability: system does not respond to local needs; political or social considerations influence reporting of information.

Lack of usefulness: system not designed with user input; information not disseminated to decision makers; lack of knowledge on how to use information; lack of confidence in data quality; lack of resources to implement change based on data.

Total attributes identified as problem areas: System 1, 6; System 2, 6; System 3, 4; System 4, 6; System 5, 5; System 6, 5.
Total factors contributing to poor performance: System 1, 15; System 2, 15; System 3, 10; System 4, 9; System 5, 8; System 6, 13.
Unclear reporting protocols were cited in two of the five systems. None of the assessments reported that standard case definitions were too complex. When determining complexity, the authors examined whether standard case definitions could be understood and applied by care providers, given their education levels and the local availability of laboratory confirmation. Flexibility was described as a problem in four of the six systems. In each of these four systems, there were no routine mechanisms for review and adaptation of the systems to reflect the changing needs of decision makers. All six systems were viewed by national decision makers as unacceptable. Local decision makers and data providers reported that the systems were unresponsive to local needs. Teams reported that most were designed to meet the information needs of vertical programmes or to satisfy the requirements of external donors. Few allowances were observed for the information needs of health workers at subnational levels, and none of the national systems assessed addressed or used individual patient record keeping. In two cases, there was a perception that data were collected to satisfy political and social considerations rather than the technical needs of the ministry of health. Information from the systems was not considered to be useful by decision makers in any of the six systems, either because information was not disseminated at all or because it did not meet the perceived needs of the intended target audiences. Potential users of information were not found to possess the skills and competencies necessary to use the information for setting priorities or programme management. In four of the six systems, lack of confidence in the quality of the information was a contributing factor in its non-use. In one case, staff stated that data were not used because resources for control or prevention efforts were not available.
The DDM project ended before having the opportunity to examine the outcome of workplan development and implementation.
Discussion

The six systems evaluated in this project represent a range of developing countries in four geographic regions, with both low- and middle-income economies. The framework for the assessments was applicable in all six cases and allowed for a systematic, thorough examination of factors impeding access to health information. In some cases, this facilitated the design of workplans for the improvement of health information systems. While
these assessments were conducted before the development of the HMN framework, the methods used are compatible with that framework. The large number of problems found with each system (eight to 15 of a possible 26) demonstrates the complexity of the issues and the need for multifaceted solutions, often involving development of human as well as material resources. Careful, systematic identification of problems, analysis of underlying factors and matching of feasible interventions to these factors will help to achieve the objective of improved access to information; a necessary step in promoting the optimal use of data. This report is subject to several limitations. Firstly, some of the factors contributing to negative performance, such as insufficient computer and laboratory support, did not appear as often as might be expected given the known infrastructure of the countries assessed. For both computerization and laboratory support, the assessment teams were aware before travelling to the country that the DDM project could do very little if anything in these areas; for this reason, they may not have examined these issues closely. Secondly, the countries and information systems described in this report were a convenience sample based on ministry invitation to conduct the assessment, and do not represent a scientific sample of countries or the information systems within those countries, perhaps limiting the generalizability of the results. The assessments were limited to information produced within and used by ministries of health. Also, while the authors recognize the importance of other information systems, including census, resource records and individual patient records, the systems examined were those identified by the respective ministry of health. However, it is believed that this method is equally applicable to other public health systems.
The results of this study demonstrate that greater attention must be paid to issues related to the human element in information system design and maintenance. The need for local ownership of information and systems (such as information responding to local needs and participatory design) was mentioned for every system in every country. Local ownership of information helps to motivate all participants in the process. Local use not only reinforces ownership, but provides a system for early detection and correction of errors in the system. This emphasis is supported somewhat in the literature,19 but more work needs to be done to ascertain the relative importance of the many factors. Lack of training, supervision and motivation for completion of forms, verification at the point of
collection and use at all levels of the healthcare system were cited almost as often as important contributors to poor information systems. However, the resources necessary to train and provide continual support are rarely made available. Planning and financing of public health programmes, be they vertical or in the context of health system reform, should include ongoing training and supervision of personnel and systems for reinforcing positive behaviours to increase motivation. Dissemination of information produced by the information systems, including feedback, is also highlighted by the results of this analysis. Dissemination is not only important to informing decision making, but also represents a powerful tool to motivate personnel. The lack of dissemination has been further described in the assessment reports as resulting from a lack of trained human resources, lack of infrastructure, lack of paper, lack of distribution network, and the production of reports that are not readily understandable by the target audience. Unless these problems are understood and addressed, even the best of systems will not get information to decision makers, and the same complaints will continue to be heard. In conclusion, the problems that prevent decision makers from having access to the information they need to make informed decisions are many and complex. In order to improve access, systems must be systematically evaluated and interventions designed that follow closely on the results of these evaluations. This approach will also assist in evaluating improvements to information systems by linking actions to improved performance under the various system attributes.
Specifically, it is recommended that: (a) information system improvement begin with a thorough evaluation of existing systems using approaches outlined by the CDC11 and the WHO/HMN12,14; (b) evaluations identify specific determinants and root causes of deficiencies; (c) interventions for improving systems link directly to the results of the evaluations; and (d) efforts to improve information systems include sustained attention to underlying issues of training, staff support and dissemination.
Acknowledgements

The authors wish to acknowledge the contributions of the following in the creation and support of this project: Jim Sheperd, Pamela Johnson and Celeste Carr, USAID/Washington; Drs. Joe Davis, Ronald Waldman and Andrew Vernon, CDC. The authors gratefully acknowledge the contributions of members of the assessment teams who devoted long hours to the evaluation of information systems: Dr. Douglas Klauke, Mr. Robert Fagan, Mr. Bradley Otto, Dr. Steve Yoon, Dr. Daniel Fishbein, Dr. Michael Malison and Dr. Robert Fontaine, CDC; Mr. Gerald Hursh-Cesar, Intercultural Communication Incorporated; Charles Myers and Dr. Julia Walsh, Harvard University; Dr. Najwa Ja'Rour and Dr. Adel Bilbasi, Ministry of Health, Jordan.
Ethical approval

None sought.
Funding

The Data for Decision Making project was funded by the United States Agency for International Development under Project Number 936-5991.
Competing interests

None declared.
References

1. Pappaioanou M, Malison M, Wilkins K, Otto B, Goodman R, Churchill RE, et al. Strengthening capacity to use data for public health decision making: the Data for Decision Making project. Soc Sci Med 2003;57:1925–37.
2. Trebucq A, Ait-Khaled N, Gninafon M, Louis JP. Information management in national tuberculosis control programmes and national health information systems. Int J Tuberc Lung Dis 1998;2:852–6.
3. Morris D, Gray A, Noone A, Wiseman M, Jathanna S. The costs and effectiveness of surveillance of communicable disease: a case study of HIV and AIDS in England and Wales. J Public Health Med 1996;18:415–22.
4. Janes G, Hutwagner L, Cates W, Stroup D, Williamson GD. Descriptive epidemiology: analyzing and interpreting surveillance data. In: Teutsch S, Churchill RE, editors. Principles and practice of public health surveillance. 2nd ed. New York: Oxford University Press; 2000. p. 112–67.
5. Cash RA, Narasimhan V. Impediments to global surveillance of infectious diseases: consequences of open reporting in a global economy. Bull World Health Organ 2000;78:1358–67.
6. Azubuike MC, Ehiri JE. Health information systems in developing countries: benefits, problems and prospects. J R Soc Promot Health 1999;119:180–4.
7. Tshimanga M, Peterson DE, Dlodlo RA. The health information system in the City of Bulawayo, Zimbabwe: how good is it? Cent Afr J Med 1997;43:195–9.
8. Chadee DD. Evaluation of malaria surveillance in Trinidad (1988–1998). Ann Trop Med Parasitol 2000;94:403–6.
9. Ferreira VM, Portela MC, Vasconcellos MT. Variables associated with underreporting of AIDS patients in Rio de Janeiro, Brazil, 1996. Rev Saude Publica 2000;34:170–7.
10. Blackburn N, Schoub B, O'Connell K. Reliability of the clinical surveillance criteria for measles diagnosis. Bull World Health Organ 2000;78:861.
11. Centers for Disease Control and Prevention. Updated guidelines for evaluating public health surveillance systems: recommendations from the Guidelines Working Group. MMWR 2001;50:1–35.
12. World Health Organization. Recommended surveillance standards. 2nd ed. WHO/CDS/CSR/ISR/99.2. Geneva: WHO.
13. Centers for Disease Control and Prevention. Guidelines for evaluating surveillance systems. MMWR 1988;37:1–18.
14. World Health Organization. Health Metrics Network, framework and standards for country health information systems. 2nd ed. Geneva: WHO; 2007. Available from: http://www.who.int/healthmetrics/documents/hmn_framework200709.pdf.
15. Last JM. A dictionary of epidemiology. 2nd ed. New York: Oxford University Press; 1988.
16. Dyal W. Program management: a guide for improving program decisions. 5th ed. Atlanta: Centers for Disease Control and Prevention; 1990.
17. Teutsch SM, Churchill RE, editors. Principles and practice of public health surveillance. 2nd ed. New York: Oxford University Press; 2000.
18. Patton M. How to use qualitative methods in evaluation. Newbury Park: Sage; 1987.
19. White M, McDonnell S. Public health surveillance in low and middle-income countries. In: Teutsch S, Churchill RE, editors. Principles and practice of public health surveillance. 2nd ed. New York: Oxford University Press; 2000. p. 287–315.