A National Standards-Based Assessment on Functionality of Electronic Medical Records Systems Used in Kenyan Public-Sector Health Facilities

Samuel Kang’a, MSc1; Nancy Puttkammer, MPH, PhD2; Steven Wanyee, MSc1; Davies Kimanga, MD4; Jason Madrano, DNP3; Veronica Muthee, MPH1; Patrick Odawo, MD6; Anjali Sharma, ScD2; Tom Oluoch, MSc5; Katherine Robinson, MPH5; James Kwach, MSc5; William B Lober, MS, MD2

1. International Training & Education Center for Health (I-TECH) Kenya
2. International Training & Education Center for Health (I-TECH) Seattle, Department of Global Health, University of Washington, USA
3. Afya Bora Consortium in Global Health Leadership, University of Washington, USA
4. Elizabeth Glaser Pediatric AIDS Foundation (EGPAF), Kenya
5. US Centers for Disease Control and Prevention (CDC), Division of Global HIV/AIDS (DGHA), Nairobi, Kenya
6. Private Consultant, Kenya

Corresponding author: Veronica Muthee, M&E Officer, I-TECH Kenya - [email protected]

Word count: 3912

Key words: EMRs, Standards, Review, Checklist

Abstract

Background: Variations in the functionality, content and form of electronic medical record systems (EMRs) challenge national roll-out of these systems as part of a national strategy to monitor the HIV response. To enforce minimum EMR requirements for delivery of quality HIV services, the Kenya Ministry of Health (MoH) developed EMRs standards and guidelines. The standards guided the recommendation of EMRs that met a preset threshold for national roll-out.

Methods: Using a standards-based checklist, six review teams formed by the MoH EMRs Technical Working Group rated a total of 17 unique EMRs in 28 health facilities selected by the systems' owners for their optimal EMR implementation. EMRs with an aggregate score of >60% against the checklist criteria were identified by the MoH as suitable for upgrading and roll-out to Kenyan public health facilities.

Results: In Kenya, existing EMRs scored highest in health information and reporting (mean score = 71.8%), followed by security, system features, core clinical information, and order entry criteria (mean scores = 58.1%-55.9%), and lowest against clinical decision support (mean score = 17.6%) and interoperability criteria (mean score = 14.3%). Four EMRs met the 60.0% threshold: OpenMRS, IQ-Care, C-PAD and Funsoft. On the basis of the review, the MoH provided EMRs upgrade plans to the owners of all 17 systems reviewed.

Conclusion: The standards-based review in Kenya represents an effort to determine the level of conformance to the EMRs standards and to prioritize EMRs for enhancement and rollout. The results support concentrating resources on development of the four recommended EMRs. Further review should be conducted to determine the effect of the EMR-specific upgrade plans on the other 13 EMRs that participated in the review exercise.


Introduction

The World Health Organization has listed health information systems as a key building block for strong health systems [1]. Electronic medical record systems (EMRs) have increasingly become part of health information systems in recent years, and offer strong potential to inform health care delivery for improved quality of care [2-4]. Ministries of Health have a leading role in oversight of EMRs, to assure that systems protect patient safety [5], meet local information needs, and fit appropriately within the broader national health information system [6, 7].

HIV-focused EMRs were among the earliest adopted systems in Kenya. As part of its oversight of these systems, the Ministry of Health (MoH) initiated several EMR assessment activities at a time when EMR uptake in Kenya was still low but growing rapidly. First, the MoH commissioned two assessments of HIV-related EMRs in 2009 [8, 9]. Each of these reviews represented a snapshot of the systems in use at a particular time, and the standards against which the EMRs were judged were not explicit. Next, the MoH developed a national set of Standards and Guidelines for Electronic Medical Record Systems in Kenya (henceforth called the “Standards”) [10].

The Standards adapted aspects of international data standards (for example, ISO 22220, ISO/TR 20514, ISO 13606, ISO/TR 18307:2001, ISO/TS 18308:2004, ISO 27799, and ISO 17090:2008) that were most relevant for HIV-related EMRs in the Kenyan context, and defined syntax and coding schemes for EMR deployment and usage. This approach created minimum expectations on which to base new EMR software development or system selection. Studies have described this type of local definition of standards and guidelines as essential for successful application of international standards [11]. After publication of the Standards, the MoH led a review process to evaluate how existing EMRs in Kenya performed against the Standards. This paper describes the results of the review process that guided the recommendation of EMRs for implementation at public health facilities.

Methods

Electronic Medical Records Systems Identification
The International Training and Education Center for Health (I-TECH), a collaboration between the University of Washington, Seattle and the University of California, San Francisco, provided technical assistance to the MoH on the EMRs review process. Funding support was provided by the United States President’s Emergency Plan for AIDS Relief (PEPFAR). The MoH initiated the EMRs review process through the EMR Technical Working Group (TWG), charged with overseeing all EMRs activities, which liaised with health facilities to identify the different EMRs in use at various public and private health facilities. The EMRs TWG contacted the owners of all identified EMRs to gather their nominations for the health facilities with the most functional uses of EMRs. Following the nominations, the EMRs TWG divided the physical locations of the EMRs countrywide into six geographical areas: Nairobi, Central, Coast, Nyanza, Western, and the Nairobi-Nakuru highway. The list of EMRs was an exhaustive sample based upon the expert knowledge of the EMR TWG, and the geographic areas represented a logistically convenient breakdown of the locations where these EMRs were deployed. At the time of the assessment, no EMRs were known to be deployed in the North Rift and Eastern regions of Kenya.

Selection of reviewers and preparation for the review
The EMR TWG identified a group of 22 assessors with a mix of knowledge in information systems, HIV care, or both areas, from its own staff and from the ranks of implementing partners engaged in HIV care and treatment. Prior to conducting the reviews of EMRs, the lead technical coordinator led the reviewers through the review checklist and invited questions and discussion to clarify the instructions. Reviewers volunteered for role-playing exercises that emulated the planned review process, which enabled participants to anticipate field realities and receive constructive peer feedback on their performance. This process reinforced a consistent understanding of the review checklist by all reviewers.

Each of the geographic areas was assigned a group of 3-4 reviewers. This methodology assured consideration of multidisciplinary perspectives, buy-in of implementing partners, and consensus-based results for the review process.

The MoH established a calendar of review dates and informed the system owners of the review schedule and of the requirement to provide a representative at the facility to participate in the review. In addition to this representative, the participating health facilities provided a highly skilled system user with knowledge of HIV care, as well as a guide to direct the reviewers through the health facility.

Guided by the Standards, the EMRs TWG developed a checklist and scoring tool for reviewers to use during their systematic review of EMRs functionality. The scoring tool was divided into seven review domains: basic demographic and clinical health information; clinical decision support; order entry and prescribing; health information and reporting; security and confidentiality; exchange of electronic information; and general system details and standards. Each domain comprised between 2 and 21 specific review criteria. Figure 1 provides an example of standards and review criteria in the area of software security and confidentiality.

Review Administration
Teams of reviewers targeted 17 different EMRs implemented in 28 health facilities, meaning there were several cases where multiple instances of the same EMRs were reviewed. Reviewers provided a summary overview of findings to the health facility administration during a debriefing session immediately after each visit. The EMR TWG then completed a detailed upgrade report, and shared this with each system owner.

In determining the overall EMRs score, the EMR TWG presented the number of criteria met within each functional area for each instance of the EMR as a percentage. The average percentage was taken across the 7 review domains to give the final summary score (that is, each domain was equally weighted) for each EMR. In cases where multiple instances of the same EMR were reviewed, the best score for each criterion was used in rating the EMR. For example, for the criterion on support for user authentication, one instance of an EMR may have met the criterion while other instances of the same EMR may not have met the criterion. The overall composite score for this particular functionality/criterion would consider this criterion to be met for the EMR.
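As a rough illustration of the scoring logic just described, the sketch below computes a composite score for one EMR from binary (0/1) criterion ratings across its reviewed instances. It is a minimal sketch assuming hypothetical domain and criterion names; it is not the TWG's actual scoring tool.

```python
# Illustrative sketch of the composite scoring described above (not the TWG's tool).
# ratings[domain][criterion] holds 0/1 ratings, one per reviewed instance of the EMR.
from typing import Dict, List


def composite_score(ratings: Dict[str, Dict[str, List[int]]]) -> float:
    """Equal-weight average of domain percentages, counting a criterion as met
    if the best-scoring instance of the EMR met it."""
    domain_percentages = []
    for criteria in ratings.values():
        # "Best score" rule: a criterion counts as met if any instance met it.
        met = sum(max(instance_ratings) for instance_ratings in criteria.values())
        domain_percentages.append(100.0 * met / len(criteria))
    # Each domain contributes equally to the final summary score.
    return sum(domain_percentages) / len(domain_percentages)


# Hypothetical example: two instances of one EMR, two of the seven domains shown.
example = {
    "order entry": {"generates lab orders": [1, 0], "generates prescriptions": [0, 1]},
    "clinical decision support": {"incorporates guidelines": [0, 0], "generates alerts": [0, 0]},
}
print(composite_score(example))  # 50.0: order entry 100% (best instances), CDS 0%
```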

To clarify reviewers’ questions and allow for adjudication of scores to represent consensus results, the MoH organized a workshop for reviewers before the final results were disseminated. In this workshop, team members provided feedback about their review experiences and sought clarification on unclear scoring issues, such as how to provide the summary score in cases where a system partially met a criterion. This resulted in the appropriate adjudication and adjustment of scores to represent consensus results. The consultant in charge of the quantitative analysis of the review findings presented a report and reviewers verified that the report was a true representation of the findings in the field. During the reviewers’ workshop, the EMRs TWG set a threshold score of 60.0% for an EMR to be recommended for implementation at public health facilities. This threshold score was considered to represent minimum acceptable adherence to the Standards.

Quantitative results of the reviews were analyzed using Stata version 11 and Microsoft Excel 2007, based on the final quantitative scores agreed during the reviewers’ workshop.

Ethical considerations
Ethical approval was obtained from the KEMRI Ethical Review Committee and the CDC Associate Director for Science.

Results

Quantitative results
Twenty-eight instances of the 17 EMRs were reviewed. Of the 52 criteria used to review each of the 28 EMR instances (1,456 data points), there were only 4 cases of missing data (0.3%). Overall scores for the 17 systems ranged from 12.8% to 95.2%. Four EMRs met the MoH threshold of 60.0% (OpenMRS-AMPATH, IQ-Care, C-PAD and Funsoft) [12]. Table 1 shows the summary scores for each EMR in greater detail.

Six of the EMRs were evaluated in more than one facility, and the number of instances per system ranged from 2 to 4. For one EMR with two instances reviewed, one instance had a zero score across all functional domains and was dropped from the analysis. The score for each unique instance of a system was generally notably lower than the composite score for the overall system, because each instance may have had only some of the available EMR features actually enabled. Scores for distinct instances were 5.4 to 54.7 percentage points lower than their respective composite scores. Compared with the composite scores for the 4 recommended EMRs, the average individual instance scores were 39.3 percentage points lower for the four IQ-Care instances, 34.8 percentage points lower for the four C-PAD instances, 28.5 percentage points lower for the three OpenMRS-AMPATH instances, and 8.5 percentage points lower for the two Funsoft instances. Overall, the 17 EMRs scored highest against health information and reporting criteria (mean score = 71.8%). The systems scored lowest against clinical decision support and interoperability criteria (mean scores = 17.6% and 14.3%, respectively). The mean scores in the remaining domains were 58.1% for security, 57.1% for system features, 56.6% for core clinical data, and 55.9% for order entry. Figure 2 presents the mean, minimum, and maximum scores for the 17 systems in each of the 7 domains. Of the four recommended EMRs, OpenMRS and IQ-Care scored strongly in all review domains. C-PAD was strong in all domains besides interoperability. Funsoft was strong in all domains besides clinical decision support.

Table 2 presents the strengths and weaknesses of the 17 EMRs reviewed, by domain and by specific criteria. The strongest criteria, met by at least 14 of the 17 EMRs, were as follows: having an EMR-specific unique patient identification system; generating patient lists; supporting patient search using one or more identifiers; recording of patient demographics; generating patient-level reports; supporting user authentication; having a customizable user interface (forms and fields); and having an encryption matrix for user passwords.

The weakest criteria, met by 4 or fewer of the 17 EMRs, were as follows: having biometric or bar code support for patient record identification; having patient education materials available; supporting patient referral; managing drug interactions; managing adverse drug reactions; exchanging clinical information and patient summaries; electronically transmitting prescriptions; electronically transmitting and receiving laboratory orders and results; supporting HL7 messaging; supporting XML generation and messaging; electronically transmitting aggregate information to the District Health Information System (DHIS); and supporting the SDMX-HD data exchange standard.

Qualitative Results
The EMR TWG recommended four EMRs as preferred systems for its two principal EMR implementing partners, I-TECH and Futures Group, to upgrade and deploy within 630 health facilities countrywide. The EMR TWG recommended that EMR owners (EMR developers and the health facilities implementing EMRs) whose systems have large compliance gaps either: 1) identify their own technical resources to upgrade their systems according to their upgrade plans (EMR developers), or 2) adopt one of the four recommended systems with the support of the assigned implementing partner (health facilities implementing EMRs). A report summarizing the specific findings of the EMRs reviews was publicly shared at a dissemination meeting attended by about 70 EMR stakeholders in Nairobi in March 2012. To date, the EMR TWG has not conducted any follow-up reviews on the non-recommended EMRs to identify progress in complying with the provided upgrade plans, or in transitioning to one of the recommended EMRs.

Discussion

The standards-based review process in Kenya yielded a consensus-based selection of four EMRs with an acceptable level of attainment of the Standards. The process built familiarity of EMRs owners with the content of the Standards, and produced system-specific upgrade plans. The review process also yielded a comprehensive picture of the strengths and weaknesses of EMRs in Kenya around key functional domains.


The overall strength of the 17 EMRs in meeting criteria related to reporting functionality is not surprising. This reflects the fact that reporting to donors and governments has been a primary motivation for EMRs adoption in the context of HIV care and treatment scale-up, given pressures for transparency and accountability linked to the growth in HIV/AIDS funding in the past decade [4, 13].

Similarly, the general weakness of the 17 EMRs in meeting criteria related to interoperability is also not surprising. Interoperability supports continuity of care across clinical information systems within a given health facility as well as across health facilities, while maintaining a patient’s unique identification. Much of recent investment in eHealth innovations has gone toward implementing stand-alone “first generation” electronic systems, rather than building linkages between systems [14, 15]. Achieving interoperability involves addressing the complexities of both technical and semantic integration, attending both to data integration (via syntactic frameworks and semantic ontologies) and functional integration (via middleware and application frameworks) [16]. Foundational elements for successful interoperability include: governance structures to define EMRs architecture and to oversee adoption of international and local data standards; technical expertise to define unique identifier schema and core data sets; financial incentives to adopt standards-based approaches in designing software; structures which support local as well as shared “ownership” of data; policies for security and privacy of data; and trust-building and alignment with medical communities [14, 17-19]. Many of these foundational elements remain weak in many settings [14]. For example, a discontinued health information exchange project in Santa Barbara (USA) spent considerable time mapping data to standards, resulting in delays in software implementation. The project’s failure to find streamlined ways to achieve standardization of data was an important cause of the overall failure of the project [20]. Our Kenya EMRs review findings are consistent with an evaluation of 12 open-source EMR systems used globally, which reported weaknesses in use of standard codes for medical terminology and in application of standard syntaxes for electronic data exchange [21].
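To make the data-integration point concrete, the sketch below assembles a minimal HL7 v2-style laboratory result message as plain text: the fixed segment and field grammar illustrates the syntactic layer, while coding the observation with a shared terminology (a LOINC code is used here) illustrates the semantic layer. This is only an illustrative sketch; the application names, facility, patient identifier, and result values are invented and are not drawn from any of the reviewed systems.

```python
# Illustrative only: a minimal HL7 v2-style ORU^R01 fragment built as plain text.
# Sending/receiving applications, facility, patient, and values are placeholders.
from datetime import datetime


def build_lab_result_message(patient_id: str, loinc_code: str, test_name: str,
                             value: str, units: str) -> str:
    ts = datetime(2012, 3, 1, 8, 30).strftime("%Y%m%d%H%M")
    segments = [
        # MSH: message header; the fixed field order and delimiters are the syntactic layer.
        f"MSH|^~\\&|EMR_A|FACILITY_A|LAB_SYS|FACILITY_A|{ts}||ORU^R01|MSG0001|P|2.5",
        # PID: patient identification, carrying the facility's unique patient identifier.
        f"PID|1||{patient_id}^^^FACILITY_A||DOE^JANE",
        # OBX: the result; a shared code system ("LN" = LOINC) is the semantic layer
        # that lets a receiving system interpret the observation without local mapping.
        f"OBX|1|NM|{loinc_code}^{test_name}^LN||{value}|{units}|||||F",
    ]
    return "\r".join(segments)


print(build_lab_result_message("KE-000123", "718-7", "Hemoglobin", "11.2", "g/dL"))
```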

Attainment of criteria in the system features, core demographic and clinical data, and security and confidentiality domains was highly heterogeneous, with some criteria met by most systems and others met by very few. The criteria within each of these domains ranged widely in terms of the technical and technological sophistication required. In interpreting the results of the review, it is important to note that EMRs with more instances implemented had a greater chance to score highly on the review. While it is likely that EMRs with multiple instances tended to be more robust systems, this methodology may have biased results downward for EMRs with only a single instance reviewed.

The overall average score for clinical decision support (17.6%) and the low number of systems which incorporated clinical decision guidelines and were capable of generating alerts or providing reports to support clinical decisions (3 systems) were well below expectations. The complete absence of this functionality among 14 of the 17 systems (Figure 2, Table 1) was notable and points to the importance of this area for additional software development among EMRs used in Kenya.
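As an illustration of the kind of functionality this domain covers, a minimal rule-based provider alert is sketched below. The rule, the CD4 threshold, and the field names are hypothetical assumptions chosen for the sketch; they are not taken from Kenyan clinical guidelines or from any of the reviewed EMRs.

```python
# Hypothetical sketch of a guideline-style rule that generates provider alerts,
# the kind of behaviour the CDS criteria looked for. Threshold and fields are
# illustrative assumptions only, not Kenyan MoH guidance.
from dataclasses import dataclass
from typing import List, Optional


@dataclass
class Encounter:
    patient_id: str
    cd4_count: Optional[int]  # most recent CD4 result, if any
    on_art: bool              # currently on antiretroviral therapy


def cd4_alerts(encounters: List[Encounter], threshold: int = 350) -> List[str]:
    """Return provider-facing alerts for patients whose latest CD4 count falls
    below an assumed eligibility threshold and who are not yet on ART."""
    alerts = []
    for e in encounters:
        if e.cd4_count is not None and e.cd4_count < threshold and not e.on_art:
            alerts.append(
                f"Patient {e.patient_id}: CD4 {e.cd4_count} below {threshold}; "
                "review ART eligibility against current guidelines."
            )
    return alerts


print(cd4_alerts([Encounter("KE-000123", 210, False), Encounter("KE-000456", 520, False)]))
```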

Strengths and Limitations
Strengths of the review process were the involvement of 3-4 reviewers with complementary expertise in each review team, the review of multiple instances of several EMRs, and the engagement in rigorous debate between the reviewers to arrive at final consensus scores for each EMR.

A key limitation of the review was the limited level of detail for some review criteria. For example, while the scores on clinical decision support (CDS) were modestly strong across systems, the criteria simply reviewed whether any CDS guidelines were incorporated and whether any provider alerts were generated. The criteria did not allow for a nuanced review of how many decision support features existed, which clinical knowledge areas were covered, or whether mechanisms were present for maintaining and updating the CDS features as clinical guidelines change [23].

Another important limitation was that the review process focused principally on software functionality rather than on the organizational or behavioral components of EMR implementations; these components are essential to the performance of routine health information systems [24]. For example, the security-related criteria considered only features internal to the software and did not address training of users in security and privacy [25]. In addition, the review did not assess the usability of the EMRs.

That the reviewers were constrained to using production EMRs during the evaluation, rather than demonstration versions of each EMR, was another limitation. Because the instances involved real-world patient data, it was not possible to test certain features and functions due to the risk of contaminating patient data. For example, one system required evidence of payment before a patient’s vital signs could be captured. Thus, it was not possible to test the process of capturing vital sign data within this system. It was necessary during the review to rely on existing patient data, rather than testing entry of new, fictitious patient data.

Despite these limitations, the Standards laid out a broad scope for functional review of EMRs in Kenya and the review process encompassed much of this scope. The Kenya EMRs standards-based review process is a positive example of a government using its policy-making and oversight authority in the governance of a national health information system.

Implications and Future Directions
A recent qualitative evaluation of a single-system EMR implementation in the United Kingdom, which had been driven by interests in overall system interoperability, identified numerous issues including: poor usability in local contexts; extensive workarounds that undermined the information contained in the system; and complications in communication, collaborative teamwork, managerial oversight, and validity of the data to be exchanged across information systems [26]. The approach by which the Kenya MoH established standards and oversaw the review struck an advantageous middle ground between “anything goes” and imposing a single system. This approach supports innovation and local problem-solving, while keeping a central MoH role in assuring the quality and integrity of the national health information system.

Areas for further research include surveying stakeholders—including the MoH representatives, reviewers, and EMRs system owners—on their level of satisfaction with the review process, and conducting a follow-up review to measure the degree to which system owners have applied the upgrade plans or transitioned to alternative EMRs which more strongly fulfill the Standards.

The Kenya MoH has replicated the HIV EMRs standards development process in creating standards for laboratory information systems, primary health care information systems, and pharmacy information systems. Information on stakeholder satisfaction and system owners’ subsequent decision-making on EMRs upgrades would be useful in planning and replicating review processes related to the new areas of Health Information Systems (HIS) standards. It would also be useful to develop a formal reviewers’ training curriculum to ensure the availability of personnel with appropriate skills to undertake future reviews.

The process led to the insight that the Standards should be revised at regular intervals, to adapt to improvements in international standards and respond to changing local needs. However, the scope of the assessment did not include providing a comprehensive set of recommendations on refinements to the Standards. There is a general evolution from disease-specific EMRs toward clinical information systems which encompass EMRs as well as ancillary systems (handling laboratory, pharmacy, radiology, etc.), medico-administrative systems (handling admission, emergency, consultations, central archiving, nursing health care, surgery), and administrative systems (handling finance, human resources, equipment, etc.), linked within a central repository for shared business intelligence and analysis. There will certainly be a need to evolve the Standards in light of such future evolution towards interoperable systems. On-going work will be needed to regularly update the Standards and carry out repeated EMRs assessments in the future.

Conclusion

Kenya’s initiative represents a desirable balance between identifying conformant EMRs and retaining flexibility to accommodate needs within different local contexts. Insights on the broad areas of strength and weakness across the 17 EMRs reviewed can inform MoH and funder strategies and priorities for strengthening EMRs and other eHealth tools in Kenya. At the time of this study, few African studies had adopted standards-based approaches for assessing EMRs; as such, this study could benefit countries in similar contexts. Kenya’s process sets a positive example for a constructive governmental role in eHealth oversight.

Attribution of support: The work has been supported by PEPFAR through the Health Resources and Services Administration, under award number U91HA0680, to the International Training and Education Center for Health (I-TECH) at the University of Washington.

Disclaimer: The findings and conclusions in this paper are those of the author(s) and do not necessarily represent the official position of the U.S. Centers for Disease Control and Prevention/Government of Kenya.


Figure 1: Sample standards content and scoring criteria for “Security and Confidentiality” domain

Sample of standards:
Confidentiality and Security
- 11.1 System supports secure logon to the EMR system
- 11.2 System controls access to and within the system at multiple levels (e.g. per user, per user role, per area, per section of the chart) through consistent identification and user authentication mechanisms
Audit/Logging
- 19.1 System keeps an audit of all transactions
- 19.2 System dates and time stamps all entries

Sample of scoring criteria (0 = No, 1 = Yes):
- Supports user authentication
- Encryption matrix implemented for user passwords
- Supports role based access control
- Audit trail / log supported
- Supports analysis of audit trail reports
- Automatic logoff implemented


Figure 2: Criteria Attainment by Domain


Table 1: Summary of Scoring Matrix, by System

All values are the percentage of criteria met in each domain. SF = System Features; CCI = Core Clinical Information; OE = Order Entry; CDS = Clinical Decision Support; HR = Health Reporting; SC = Security and Confidentiality; IO = Interoperability; Overall = System score (the equally weighted mean of the seven domain scores). For systems reviewed at more than one facility, the range across instances is shown in parentheses on the line below the composite scores.

EMR System | SF | CCI | OE | CDS | HR | SC | IO | Overall
OpenMRS-AMPATH | 100.0 | 95.2 | 100.0 | 100.0 | 100.0 | 100.0 | 71.4 | 95.2
(instances) | (57.1-100) | (all 90.5) | (all 50.0) | (all 100) | (20.0-100) | (62.5-87.5) | (0.0-57.1) |
IQ-Care | 100.0 | 85.7 | 100.0 | 100.0 | 100.0 | 75.0 | 71.4 | 90.3
(instances) | (42.9-85.7) | (38.1-71.4) | (all 100) | (0.0-100) | (80.0-100) | (50.0-62.5) | (0.0-71.4) |
C-PAD | 71.4 | 80.9 | 100.0 | 100.0 | 100.0 | 87.5 | 0.0 | 77.1
(instances) | (28.6-57.1) | (38.1-66.7) | (0.0-100) | (0.0-100) | (60.0-100) | (37.5-75.0) | (all 0.0) |
Funsoft | 100.0 | 66.7 | 100.0 | 0.0 | 100.0 | 62.5 | 28.6 | 65.4
(instances) | (71.4-85.7) | (all 61.9) | (all 100) | (all 0.0) | (all 100) | (37.5-50) | (0.0-28.6) |
System A | 71.4 | 66.7 | 50.0 | 0.0 | 100.0 | 75.0 | 42.9 | 58.0
System B | 71.4 | 47.6 | 100.0 | 0.0 | 80.0 | 75.0 | 0.0 | 53.4
System C | 71.4 | 61.9 | 100.0 | 0.0 | 100.0 | 37.5 | 0.0 | 53.0
System D | 71.4 | 71.4 | 100.0 | 0.0 | 40.0 | 62.5 | 0.0 | 49.3
System E | 57.1 | 76.2 | 0.0 | 0.0 | 100.0 | 50.0 | 14.3 | 42.5
System F | 42.9 | 42.9 | 0.0 | 0.0 | 100.0 | 100.0 | 0.0 | 40.8
System G | 28.6 | 47.6 | 100.0 | 0.0 | 40.0 | 50.0 | 0.0 | 38.0
(instances) | (28.6) | (42.9-47.6) | (0.0-100) | (all 0.0) | (20.0-40.0) | (all 50.0) | (all 0.0) |
System H | 28.6 | 47.6 | 50.0 | 0.0 | 40.0 | 37.5 | 14.3 | 31.1
System I | 42.9 | 38.1 | 50.0 | 0.0 | 40.0 | 37.5 | 0.0 | 29.8
System J | 28.6 | 61.9 | 0.0 | 0.0 | 60.0 | 37.5 | 0.0 | 26.9
System K | 42.9 | 52.4 | 0.0 | 0.0 | 40.0 | 25.0 | 0.0 | 22.9
System L | 28.6 | 19.0 | 0.0 | 0.0 | 80.0 | 0.0 | 0.0 | 18.2
System M | 14.3 | 0.0 | 0.0 | 0.0 | 0.0 | 75.0 | 0.0 | 12.8
Average across EMRs (n = 17) | 57.1 | 56.6 | 55.9 | 17.6 | 71.8 | 58.1 | 14.3 | 47.3
Average across EMR instances (n = 27) | 54.0 | 56.6 | 40.7 | 22.9 | 69.6 | 54.6 | 11.1 | 42.3

Table 2: Strengths and Weaknesses of 17 EMRs, by Criteria

Each criterion was classed as strongest (met by 12 or more of the 17 systems, >70%), moderate (met by 7-11 systems, 41-65%), or weakest (met by 6 or fewer systems, ≤40%); the strongest and weakest criteria are summarized in the text. The 52 review criteria, grouped by domain, were:

System features:
- Customizable user interface (forms and fields)
- User friendly system prompts and appropriate error messages with clear corrective action
- Uses a standardized naming protocol (e.g., ICD10 or SNOMED)
- Uses a standard drug coding / listing system
- Uses a standard lab test coding / listing system
- Data verification and validation implemented
- Built-in backup and restore feature

Core demographic and clinical information:
- Unique patient identification system
- Captures historical medical history / information
- Biometric, bar code support
- Generates patient lists
- Supports appointment scheduling set up, update and management
- Supports patient referral
- Supports patient search using one or multiple identifiers
- Maintains up-to-date problem lists
- Patient education materials available
- Records patient demographics
- Maintains medication lists
- Manages drug interactions
- Multiple patient identifiers supported
- Maintains allergy lists
- Manages adverse drug reactions
- Generates clinical summaries
- Records vital signs
- Provides a longitudinal view of a patient's medical history
- System captures defined minimum data set
- Implements persistent patient records allowing for rollback in case of deletion of patient records
- Incorporates lab test results

Clinical decision support:
- Incorporates clinical decision guidelines
- Generates alerts to support clinical decisions

Order entry:
- Generates lab orders
- Generates prescription orders

Health reporting:
- Generates patient-level reports
- Customizable reports
- Generates facility-specific reports
- Generates the MoH required reports
- Export/import of external reports

Interoperability / electronic data exchange:
- Exchanges clinical information and patient summaries
- Electronically transmits prescriptions
- Electronically transmits and receives laboratory orders and results
- Supports HL7 messaging
- Supports XML generation and messaging
- Electronically transmits aggregate information to DHIS
- Supports SDMX-HD

Security and confidentiality:
- Supports user authentication
- Encryption matrix implemented for user passwords
- Supports role based access control
- Audit trail / log supported
- Data encryption within the database
- Supports analysis of audit trail reports
- Manual and automated database backup
- Automatic logoff implemented

Author Contributions

Samuel Kang’a, International Training & Education Center for Health (I-TECH) Kenya, led the conception, analysis, drafting, and revision of the manuscript and provided final approval for publication.

Nancy Puttkammer, International Training & Education Center for Health (I-TECH) Seattle, Department of Global Health, University of Washington, USA, contributed to the conception of the work, contributed to the interpretation of data, revised the manuscript for important intellectual content, and provided final approval for publication.

Steven Wanyee, International Training & Education Center for Health (I-TECH) Kenya, contributed to the conception of the evaluation, assisted in interpretation of data, revised the manuscript for important intellectual content, and provided final approval for publication.

Davies Kimanga, Elizabeth Glaser Pediatric AIDS Foundation (EGPAF), Kenya, contributed to the interpretation of data, revised the manuscript for important intellectual content, and provided final approval for publication.

Jason Madrano, Afya Bora Consortium in Global Health Leadership, University of Washington, USA, contributed to the conception of the work, contributed to the interpretation of data, revised the manuscript for important intellectual content, and provided final approval for publication.

Veronica Muthee, International Training & Education Center for Health (I-TECH) Kenya, contributed to the conception of the work, contributed to the interpretation of data, revised the manuscript for important intellectual content, and provided final approval for publication.

Patrick Odawo, Private Consultant, Kenya, contributed to the interpretation of data, revised the manuscript for important intellectual content, and provided final approval for publication.

Anjali Sharma, International Training & Education Center for Health (I-TECH) Seattle, Department of Global Health, University of Washington, USA, contributed to the interpretation of data, revised the manuscript for important intellectual content, and provided final approval for publication.


Tom Oluoch, US Centers for Disease Control and Prevention (CDC), Division of Global HIV/AIDS (DGHA), Nairobi, Kenya, contributed to the interpretation of data, revised the manuscript for important intellectual content, and provided final approval for publication.

Katherine Robinson, US Centers for Disease Control and Prevention (CDC), Division of Global HIV/AIDS (DGHA), Nairobi, Kenya, contributed to the interpretation of data, revised the manuscript for important intellectual content, and provided final approval for publication.

James Kwach, US Centers for Disease Control and Prevention (CDC), Division of Global HIV/AIDS (DGHA), Nairobi, Kenya, contributed to the interpretation of data, revised the manuscript for important intellectual content, and provided final approval for publication.

William B Lober, International Training & Education Center for Health (I-TECH) Seattle, Department of Global Health, University of Washington, USA, contributed to the interpretation of data, revised the manuscript for important intellectual content, and provided final approval for publication.


References

1. WHO, Everybody's business: strengthening health systems to improve health outcomes: WHO's framework for action. 2007, WHO Document Production Services: Geneva.
2. Clifford, G.D., Joaquin A.B., Rachel Hall, et al., Medical information systems: a foundation for healthcare technologies in developing countries. BioMedical Engineering OnLine, 2008. 7: p. 18.
3. Dentzer, S., E-Health's Promise For The Developing World. Health Affairs, 2010. 29(2): p. 229.
4. Nash, D., Elul, B., Rabkin, M., et al., Strategies for More Effective Monitoring and Evaluation Systems in HIV Programmatic Scale-Up in Resource-Limited Settings: Implications for Health Systems Strengthening. J Acquir Immune Defic Syndr, 2009. 52(Supplement 1): p. S58-S62.
5. Magrabi, F., Aarts, J., Nohr, C., et al., A comparative review of patient safety initiatives for national health information technology. International Journal of Medical Informatics, 2013. 82(5): p. e139-e148.
6. AbouZahr, C. and T. Boerma, Health information systems: the foundations of public health. Bulletin of the World Health Organization, 2005. 83: p. 578-583.
7. Lippeveld, T. and R. Sauerborn, A framework for designing health information systems, in Design and Implementation of Health Information Systems. 2000, World Health Organization (WHO): Geneva. p. 15-31.
8. UNES, Review of Softwares in the Health Sector, Kenya, 2009. 2009, University of Nairobi Enterprises and Services Limited (UNES): Nairobi.
9. NASCOP, EMR System Assessments Harmonization Report - A synthesis of assessments of EMR and HMIS software conducted by HMIS, NASCOP and CDC, 2009. 2009, National AIDS Control Program, Kenya Ministry of Health: Nairobi.
10. NASCOP, Standards and Guidelines for Electronic Medical Record Systems in Kenya. 2011, Ministry of Health (MoH): Nairobi. http://www.nascop.or.ke/library/3d/Standards_and_Guidelines_for_EMR_Systems.pdf
11. Mykkänen, J., Korhonen, M., Porrasmaa, J., et al., A National Study of eHealth Standardization in Finland - Goals and Recommendations. Stud Health Technol Inform, 2007. 129(Part 1): p. 469-73.
12. NASCOP, Report on the Review of EMR Systems Towards Standardization. 2012, National AIDS Control Program (NASCOP), Kenya Ministries of Health: Nairobi.
13. Porter, L.E., Bouey, P.D., Curtis, S., et al., Beyond Indicators: Advances in Global HIV Monitoring and Evaluation During the PEPFAR Era. J Acquir Immune Defic Syndr, 2012. 60(Supplement 3): p. S120-S126.
14. Hammond, W.E., Bailey, C., Boucher, P., et al., Connecting Information To Improve Health. Health Affairs, 2010. 29(2): p. 284-288.
15. WHO, Call to Action on Global eHealth Evaluation, in WHO Global eHealth Evaluation Meeting. 2011: Bellagio, Italy.
16. Lenz, R., M. Beyer, and K.A. Kuhn, Semantic integration in healthcare networks. International Journal of Medical Informatics, 2007. 76(2-3): p. 201-207.
17. Fontaine, P., Zink, T., Boyle, R.G., et al., Health Information Exchange: Participation by Minnesota Primary Care Practices. Archives of Internal Medicine, 2010. 170(7): p. 622-29.
18. Overhage, J.M., Communities' Readiness for Health Information Exchange: The National Landscape in 2004. Journal of the American Medical Informatics Association, 2004. 12(2): p. 107-112.
19. Marchibroda, J., Health information exchange policy and evaluation. Journal of Biomedical Informatics, 2007. 40: p. S11-S16.
20. Frohlich, J., Karp, S., Smith, M.D., et al., Retrospective: Lessons Learned From The Santa Barbara Project And Their Implications For Health Information Exchange. Health Affairs, 2007. 26(5): p. w589-w591.
21. Flores Zuniga, A.E., K.T. Win, and W. Susilo, Functionalities of free and open electronic health record systems. International Journal of Technology Assessment in Health Care, 2010. 26(4): p. 382-389.
23. Cho, I., JeongAh, K., Ji, H.K., et al., Design and implementation of a standards-based interoperable clinical decision support architecture in the context of the Korean EHR. International Journal of Medical Informatics, 2010. 79(9): p. 611-622.
24. Aqil, A., T. Lippeveld, and D. Hozumi, PRISM framework: a paradigm shift for designing, strengthening and evaluating routine health information systems. Health Policy and Planning, 2009. 24(3): p. 217-28.
25. Fernández-Alemán, J.L., Señor, I.C., Lozoya, P.Á., et al., Security and privacy in electronic health records: A systematic literature review. Journal of Biomedical Informatics, 2013. 46(3): p. 541-562.
26. Cresswell, K.M., A. Worth, and A. Sheikh, Integration of a nationally procured electronic health record system into user work practices. BMC Medical Informatics and Decision Making, 2012. 12(1): p. 15.

Summary Points

What is already known about the topic:
- Variations in the functionality, content and form of electronic medical record systems (EMRs) challenge national roll-out of these systems.
- Definition of standards and guidelines for EMRs implementation creates a non-constricting environment to spur controlled innovation.
- EMRs offer strong potential to inform health care delivery for improved quality of care.

What this study has added:
- The standards-based review process in Kenya yielded a consensus-based selection of four EMRs with an acceptable level of attainment of the Standards.
- The process built familiarity of EMRs owners with the content of the Standards, and produced system-specific upgrade plans.
- The review process also yielded a comprehensive picture of the strengths and weaknesses of EMRs in Kenya around key functional domains.
