Patient Education and Counseling 93 (2013) 363–366

Existing instruments for assessing physician communication skills: Are they valid in a computerized setting?

Shiri Assis-Hassid a,*, Tsipi Heart a, Iris Reychav b, Joseph S. Pliskin a,c, Shmuel Reis d

a Department of Industrial Engineering and Management, Ben-Gurion University of the Negev, Israel
b Department of Industrial Engineering and Management, Ariel University, Israel
c Department of Health Systems Management, Ben-Gurion University of the Negev, Israel
d Faculty of Medicine in the Galilee, Bar Ilan University, Safed, Israel

Article history: Received 19 November 2012; received in revised form 3 March 2013; accepted 25 March 2013.

Abstract
Objectives: This study aims to highlight the differences in physicians' scores on two communication assessment tools: the SEGUE and an EMR-specific communication skills checklist. The first tool ignores the presence of the EMR in the exam room, while the second, though not formally validated, focuses on it.
Methods: We used the Wilcoxon Signed Ranks Test to compare physicians' scores on the two tools across 16 simulated medical encounters, each rated by two different raters.
Results: Results show a significant difference between physicians' scores on the two tools (z = 3.519, p < 0.05 for the first rater, and z = 3.521, p < 0.05 for the second rater), with scores on the EMR-specific communication skills checklist significantly and consistently lower.
Conclusion: These results imply that current communication assessment tools that do not incorporate items relevant to communication tasks during EMR use may produce inaccurate results.
Practice implications: We therefore suggest that a new instrument, possibly an extension of existing ones, should be developed and empirically validated.
© 2013 Elsevier Ireland Ltd. All rights reserved.

Keywords: Electronic medical records; EMR; primary care; patient–doctor communication; communication skills; communication task

1. Introduction

Physicians in the US conduct approximately 120,000–160,000 interviews throughout their career [1], making the medical interview one of their most commonly performed tasks [2]. The literature on patient–doctor communication (PDC) is unanimous that the relationship established between physician and patient during the medical encounter is the heart of medicine, since it affects the flow of knowledge and understanding necessary for a successful and effective medical encounter [2]. Recognizing the importance of PDC, organizations such as the Institute of Medicine, the Accreditation Council for Graduate Medical Education (ACGME) and the American Board of Internal Medicine have required medical students, as of 2004, to demonstrate communication competencies in order to receive their certifications [1,3]. The role of computers, and more specifically Electronic Medical Records (EMRs), in healthcare has grown considerably in recent years, making them an integral part of the medical encounter [4]. Though EMR integration in primary care entails many potential

* Corresponding author at: 6 Sivtey Israel St., Israel. Tel.: +972 54 4409111. E-mail address: [email protected] (S. Assis-Hassid).
0738-3991/$ – see front matter © 2013 Elsevier Ireland Ltd. All rights reserved. http://dx.doi.org/10.1016/j.pec.2013.03.017

benefits, concerns have been raised regarding the possibility that EMR use may hinder PDC during the medical encounter. Acknowledging the important role of patient–doctor communication in healthcare quality, various physician communication skills assessment tools and methods have been developed and identified in the literature [5,6]. One of the most widely used tools for teaching and assessing physicians' communication skills in North America is the SEGUE, provided by Makoul [7]. The SEGUE is a checklist of communication tasks that should be carried out by physicians throughout the different stages of the medical encounter. However, the SEGUE was developed prior to the large-scale introduction of EMRs into healthcare and therefore does not take into account the effects of physicians' EMR use while interacting with the patient. The question is whether the SEGUE and similar communication assessment tools are relevant in their current format given the new dynamics created by the introduction of EMRs into primary care. Interestingly, as far as we know, none of the existing and widely used assessment tools has been modified to allow assessment of physicians' communication skills in the computerized primary care clinic, in spite of evidence of the effect of healthcare computerization on physicians' ability to communicate with their patients [8]. One of the few references to an EMR communication skills assessment tool in the literature is by Morrow et al. [9], who suggest a checklist of
recommended communication tasks that need to be carried out by physicians in the computerized exam room. Although not yet formally validated, this checklist is the only practical reference for EMR-specific communication skills. Since it is possible to speculate that general communication skills predict those in the computerized setting, rendering assessment tools that specifically address the latter unnecessary, we have embarked on an investigation of this speculation. This paper aims to identify the differences, if they exist, in physicians' scores on two assessment tools: the SEGUE and Morrow et al.'s checklist, where the first tool ignores the presence of the EMR in the exam room and the second focuses on it. Our hypothesis is that similar scores would support the usefulness of existing communication skills instruments, while markedly different scores would attest to these skills representing possibly different constructs, and point to the need for validated tools to gauge EMR-specific skills.

2. Research methods

2.1. Simulations

Physician communication skills assessment in the present study is based on video-taped simulated medical encounters conducted at the Israel Center for Medical Simulation (MSR) at the Chaim Sheba Medical Center during Aug.–Oct. 2011. MSR (see http://www.msr.org.il/) has a state-of-the-art capacity to simulate the physical environment of the medical profession, as well as a reliable installation of the most commonly used local EMR system, Clicks® by Roshtov (see www.roshtov.com). Twenty-one generic scenarios simulating physician–patient primary care encounters were developed for training participants to identify potential pitfalls, learn various strategies and skills for using an EMR during consultations, and employ them flexibly depending on the situation. Patients were impersonated by actors trained to become standardized patients (SPs).
The scenarios sought to include an appropriate mix of challenges: clinical cases, literacy and cultural diversity, patient safety challenges, communicative complexity (delivering bad news, error disclosure, etc.), as well as technically demanding scenarios. Scenarios covered topics such as using the computer for patient education, dealing with a patient who refuses preventive treatment for religious reasons, revealing the actual reason for the patient's visit, chronic disease management, and more. The scenarios were piloted [10], proven reliable and valid, and then used in an experiment of an educational intervention (Reis, in press) with 36 residents in primary care from the Maccabi Health Services (www.maccabi-health.co.il) Health Maintenance Organization (HMO). The experiment included a simulation-based pre-test, intervention and post-test. It is important to note that EMR use is mandatory in Israeli primary care and no data collection, retrieval or entry is conducted manually.

2.1.1. Sample

For the purpose of our study, 16 physicians at different stages of their four-year residency, with between one month and three years of EMR experience, were randomly selected from the pre-test simulated encounters, and the same scenario was selected for all 16 encounters in order to ensure comparability. The scenario at hand included a patient with high blood pressure who is reluctant to follow the recommended medical plan, which consists of daily intake of pills and blood pressure monitoring. This specific scenario was selected to ensure a moderate level of clinical complexity while requiring EMR use throughout the various stages of the encounter. Scoring of the sample was conducted by the first two authors of this paper (SAH and TH). The raters worked independently and geographically remote from each other.

2.2. Assessment tools

2.2.1. The SEGUE assessment tool

The SEGUE framework focuses on communication tasks that should be carried out during the medical encounter in order to establish effective patient–doctor communication [7]. Makoul divides the encounter into five stages: Set the stage, Elicit information, Give information, Understand the patient's perspective and End the encounter. Each stage consists of specific communication tasks, with a total of 25 tasks. For example, Set the stage refers to tasks such as: greet the patient appropriately; establish the reason for the visit. Elicit information includes: explore physical/psychological factors; discuss how health problems affect the patient's life. Give information includes: explain the rationale for diagnostic procedures; encourage the patient to ask questions. Understand the patient's perspective includes: express caring, concern and empathy; maintain a respectful tone. End the encounter includes tasks such as: ask if there is anything else the patient would like to discuss; review next steps with the patient. The tasks included in the SEGUE tool are believed to be indicators of good communication skills. An objective observer rates each task as NO if not conducted during the encounter, YES if conducted at least once, or N/A if not applicable to the scenario at hand. In addition, the SEGUE checklist is accompanied by coding rules and a codebook which must be carefully reviewed before assessment is carried out. The final grade is calculated as the number of YES answers, yielding a 0 to 25 scale: if the answer is YES for all behaviors, the score is 25. The task approach of the SEGUE framework places the task as the objective of behavior while enabling each physician to apply his/her individual strategies and approach for achieving specific communication tasks.
Makoul explains that if a task is defined as 'make a personal connection with the patient', physicians may proceed in a variety of ways, all equally effective and fitting the specific context (e.g., the physician's personal style, the patient and the situation). The SEGUE framework thus entails an inherent flexibility with regard to the strategies required to accomplish each communication task. One of the greatest advantages of this approach is that it reflects the individuality of human communication. In addition, it differs from some other approaches in that it focuses on observable behaviors and can therefore be used (and is used) for both teaching and assessment [7].

2.2.2. The EMR-specific communication assessment tool

In addition to analyzing patient–doctor communication via the SEGUE, our evaluation included a second analysis based on constructs and items provided by Morrow et al. [9]. They identify behaviors that facilitate communication in the EMR environment and categorize these behaviors into three themes based on the work of Ventres et al. [11]: (1) adjust the geography, which includes behaviors such as: the physician adjusted the screen so that the patient could see it easily; (2) the triad: physician–patient–EMR relationship, which includes behaviors such as: the physician introduced him/herself before turning to the computer, and maintained good eye contact with the patient during the encounter; (3) using the computer to teach/enhance quality of care, which includes behaviors such as: the physician asked the patient if she/he would like a copy of their data, and accessed other online educational materials for the patient. Similar to the SEGUE, the items provided by Morrow et al. are rated on a nominal scale (Yes/No/NA). This similarity enables an adequate comparison of physician scores on both tools.
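Both instruments thus score an encounter the same way: each item is rated YES, NO or NA, items rated NA are dropped, and the normalized score (as used later in the analysis) is the fraction of applicable items that were observed. A minimal Python sketch of this scheme (our illustration; the item names are hypothetical paraphrases, not the instruments' exact wording):

```python
def score_checklist(ratings):
    """Score a Yes/No/NA communication checklist.

    `ratings` maps each checklist item to "YES", "NO" or "NA".
    Items rated NA are dropped; the score is the fraction of the
    remaining (applicable) items observed at least once.
    """
    applicable = [r for r in ratings.values() if r != "NA"]
    if not applicable:
        raise ValueError("no applicable items to score")
    return sum(r == "YES" for r in applicable) / len(applicable)

# Hypothetical ratings: 3 of 4 applicable SEGUE-style tasks observed
ratings = {
    "greet the patient appropriately": "YES",
    "establish reason for visit": "YES",
    "acknowledge waiting time": "NA",
    "express caring, concern, empathy": "YES",
    "review next steps with patient": "NO",
}
print(score_checklist(ratings))  # prints 0.75
```

Multiplying the raw YES count instead of normalizing reproduces the 0–25 SEGUE grade described above; normalizing by the applicable-item count is what makes the 25-item and 13-item tools comparable.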

The SEGUE tool consists of 25 items and the EMR-specific communication skills checklist consists of 13 items. However, since both raters assigned the answer N/A to item #2 on the EMR-specific communication skills checklist and item #21 on the SEGUE ("physician adjusted the chair to be at eye level with the patient", "physician acknowledged waiting time") on all simulations (there was no waiting time and the chair was not adjustable), these items were dropped from all calculations.

2.3. Data analysis

For the purpose of this study, each of the raters (SAH and TH, who are members of the research team) analyzed 16 different physicians using both tools. Sample size was computed according to the Wilcoxon Signed Ranks Test assumptions, by which results of the test are considered accurate if the sample consists of at least 16 pairs. The analysis of the ratings comprised two stages: (1) inter-rater reliability between the scores provided by both raters on each of the assessment tools, and (2) application of the Wilcoxon Signed Ranks Test for each pair of ratings. It was decided that in case of moderate or low inter-rater reliability, the Wilcoxon Signed Ranks Test would be computed on each set of ratings separately in order to point to differences or similarities between the end results.

3. Results

3.1. Inter-rater reliability

Inter-rater reliability (IRR) analysis was run using Brennan and Prediger's Kappa [12]. The same inter-rater agreement index was also used by Makoul for the SEGUE validation. The mean IRR for the SEGUE assessment tool was 0.515, which is considered moderate inter-rater reliability. The mean IRR for the EMR-specific communication checklist was 0.781, which is considered substantial [13].

3.2. Wilcoxon signed ranks test

Given the moderate results of the inter-rater reliability test (especially for the SEGUE), the Wilcoxon Signed Ranks Test was computed on each set of ratings separately.
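The Brennan–Prediger index used for the IRR analysis above corrects the raw proportion of agreement for chance, assuming a uniform chance rate of 1/q over the q rating categories. A minimal sketch with hypothetical item-level ratings (our illustration; the study's actual computation may differ in detail):

```python
def brennan_prediger_kappa(ratings_a, ratings_b, n_categories):
    """Chance-corrected agreement between two raters (Brennan & Prediger, 1981).

    kappa = (p_o - 1/q) / (1 - 1/q), where p_o is the observed proportion
    of item-level agreement and q is the number of rating categories.
    """
    if len(ratings_a) != len(ratings_b) or not ratings_a:
        raise ValueError("ratings must be paired and non-empty")
    p_o = sum(a == b for a, b in zip(ratings_a, ratings_b)) / len(ratings_a)
    p_chance = 1.0 / n_categories
    return (p_o - p_chance) / (1.0 - p_chance)

# Hypothetical item-level ratings from two raters on one encounter:
# agreement on 5 of 6 items with q = 2 categories (YES/NO)
a = ["YES", "YES", "NO", "YES", "NO", "YES"]
b = ["YES", "NO", "NO", "YES", "NO", "YES"]
print(brennan_prediger_kappa(a, b, n_categories=2))  # 2/3, i.e. ~0.667
```

Unlike Cohen's kappa, this index does not depend on the raters' marginal distributions, which makes it stable when one category dominates the ratings.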
All scores, which reflect the number of "yes" ratings for each physician, were divided by the number of observed behaviors (24 for the SEGUE and 12 for the EMR-specific communication tool) in order to allow comparison between the two tools. Descriptive statistics are presented in Table 1, and show that mean scores for both raters on each of the tools are similar. Moreover, there is a consistent difference between the mean score on the SEGUE (0.6927/0.6876) and the EMR-specific communication skills checklist (0.2761/0.2241), with both raters' scores on the latter tool lower. The Wilcoxon Signed Ranks Test was applied to evaluate whether physicians scored differently on each assessment tool. The results indicate a significant difference for both raters (z = 3.519, p < 0.05; z = 3.521, p < 0.05, respectively).
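Under the normal approximation, the signed-ranks z statistic for the paired, normalized scores can be sketched in pure Python (our illustration, not the study's statistical software; the variance correction for tied ranks is deliberately simplified):

```python
import math

def wilcoxon_signed_rank_z(xs, ys):
    """Normal-approximation z for the Wilcoxon signed-ranks test.

    xs, ys are paired scores (e.g. each physician's normalized SEGUE
    score vs. EMR-checklist score). Zero differences are dropped and
    tied |differences| share their average rank.
    """
    diffs = [x - y for x, y in zip(xs, ys) if x != y]
    n = len(diffs)
    if n == 0:
        raise ValueError("all pairs are equal; z is undefined")
    # Rank the absolute differences, averaging ranks over ties.
    ordered = sorted(range(n), key=lambda i: abs(diffs[i]))
    ranks = [0.0] * n
    i = 0
    while i < n:
        j = i
        while j + 1 < n and abs(diffs[ordered[j + 1]]) == abs(diffs[ordered[i]]):
            j += 1
        avg = (i + j) / 2 + 1  # average of positions i+1 .. j+1
        for k in range(i, j + 1):
            ranks[ordered[k]] = avg
        i = j + 1
    w_plus = sum(r for d, r in zip(diffs, ranks) if d > 0)
    mean = n * (n + 1) / 4
    sd = math.sqrt(n * (n + 1) * (2 * n + 1) / 24)
    return (w_plus - mean) / sd
```

As a sanity check on the reported statistics: with 16 pairs in which every SEGUE score exceeds its EMR-checklist counterpart, W+ = 16·17/2 = 136, the mean is 68 and the standard deviation is √374 ≈ 19.34, giving z ≈ 3.52, consistent with the reported values (small deviations can arise from software tie corrections).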

4. Discussion and conclusions

4.1. Discussion

To date, as far as we know, there are no validated tools for assessing physicians' communication skills in a computerized environment, and there is a paucity of practical teaching programs addressing this topic. This study aimed to focus on the differences in physicians' scores on two assessment tools: the SEGUE and Morrow's checklist. The first tool ignores the presence of the EMR in the exam room while the second focuses on it. The ubiquity of the computer in clinical settings mandates an inquiry into how it can enhance health professionals' communication while using the EMR, in order to positively impact patient health outcomes. As methods for training clinicians in this competency become available, the need to deploy valid and reliable measurements to assess these interventions is obvious. In this pilot study we compared scores of a field-tested PDC assessment tool with those of a new and immature one intended to assess patient–doctor–computer communication (PDCC). Though interpretation and generalizability of the results are compromised by the preliminary nature of the investigation, they are striking enough to warrant reporting. The Wilcoxon Signed Ranks Test shows a significant difference between physicians' scores on the two tools for each of the raters, with consistently lower scores on the EMR-specific communication skills checklist. Physicians' scores on the SEGUE may be interpreted as reflecting effective communication during the medical encounter; however, when rated on the EMR-specific communication skills checklist, the same physicians received significantly lower scores, possibly pointing to a skill deficiency in this domain. In practice, most physicians greeted the SP and introduced themselves, then established the reason for the visit by attentively listening to the patient.
However, once they turned to the computer, eye contact was lost, and quite often poor typing skills entailed long periods of silence and disengagement of the physician, which may negatively affect the rapport between patient and physician. This correlates with the patient–doctor communication literature, which suggests that the demands of the computer while interacting with the patient may cause cognitive overload that negatively affects attentiveness to the patient [4,14,15]. Nevertheless, these results should be interpreted with caution due to the following limitations: (1) moderate inter-rater reliability on the longer instrument compromises the validity of the ratings; (2) sample size: 16 physicians were evaluated on both tools, which may compromise the generalizability of the results; and (3) the validity of the lower scores on the EMR-specific communication skills instrument needs further elucidation as to both its reliability and clinical significance. In spite of the significant limitations, the results are striking enough to suggest that general patient–doctor communication skills and EMR-related ones may be fundamentally different constructs. This is a counter-intuitive result, as common sense may

Table 1
Descriptive statistics.

                  Rater 1                       Rater 2
                  SEGUE    EMR-spec. skills     SEGUE    EMR-spec. skills
N                 16       16                   16       16
Mean              0.6927   0.2761               0.6876   0.2241
Std. deviation    0.1369   0.2012               0.0937   0.2013
Minimum           0.46     0.00                 0.54     0.00
Maximum           0.88     0.67                 0.88     0.67

EMR-spec. skills = EMR-specific communication skills checklist.

suggest that EMR-related skills are add-on behaviors that build to a large extent on PDC ones.

4.2. Conclusion

Our study shows that in spite of the ubiquity of EMR use, physicians still integrate EMRs poorly into the encounter. The results of this study imply salient differences between physicians' traditional and EMR communication skills during the primary care medical encounter. In spite of the study's limitations, it suggests that general communication skills do not transfer into EMR-related ones. Gaps between physicians' scores on the SEGUE tool and the EMR-specific communication skills checklist may be attributed to the fact that physicians who carry out general communication tasks effectively do not perform as well when required to integrate the EMR into the encounter as they do in a non-computerized environment. The lack of specific instruction addressing EMR-related communication skills is well documented, and this study should alert the medical education community to its urgent need. We therefore suggest that current communication assessment tools must be modified to fit the new medical environment and include EMR-specific communication skills in order to provide accurate results; we propose this task for future research. Since EMR use is continually growing and has, without a doubt, changed the communication dynamics between physician and patient, it seems necessary to implement communication skills instruction that will produce effective use of EMRs and maximize their benefits to healthcare. Further research is needed to critically evaluate the results of this study, as well as to develop tools and methods to evaluate and implement enhanced patient–doctor–computer communication.

4.3. Practice implications

It appears that general communication skills do not transfer into EMR-related ones.
As such, current communication assessment tools must be modified to fit the new medical environment and include EMR-specific communication skills. It seems necessary to implement communication skills instruction and assessment tools that will produce effective use of EMRs and maximize their benefits to healthcare.

Role of funding

There was no funding source for this paper and research.

Conflict of interest

This is to confirm that we, the authors of this manuscript, have no actual or potential conflict of interest, including any financial, personal or other relationships with other people or organizations within three years of beginning the submitted work that could inappropriately influence, or be perceived to influence, our work.

Acknowledgment

The authors worked on this paper and research with no external help.

References

[1] Stein T, Frankel RM, Krupat E. Enhancing clinician communication skills in a large healthcare organization: a longitudinal case study. Patient Educ Couns 2005;58:4–12.
[2] Epstein RM, Campbell TL, Cohen-Cole SA, McWhinney IR, Smilkstein G. Perspectives on patient–doctor communication. J Fam Pract 1993;37:377–88.
[3] Frankel R, Altschuler A, George S, Kinsman J, Jimison H, Robertson NR, Hsu J. Effects of exam-room computing on clinician–patient communication. J Gen Intern Med 2005;20:677–82.
[4] Margalit RS, Roter D, Dunevant MA, Larson S, Reis S. Electronic medical record use and physician–patient communication: an observational study of Israeli primary care encounters. Patient Educ Couns 2006;61:134–41.
[5] Cox M, Irby DM, Epstein RM. Assessment in medical education. N Engl J Med 2007;356:387–96.
[6] Schirmer JM, Mauksch L, Lang F, Marvel MK, Zoppi K, Epstein RM, et al. Assessing communication competence: a review of current tools. Fam Med 2005;37:184–92.
[7] Makoul G. The SEGUE framework for teaching and assessing communication skills. Patient Educ Couns 2001;45:23–34.
[8] Pearce C, Dwan K, Arnold M, Phillips C, Trumble S. Doctor, patient and computer – a framework for the new consultation. Int J Med Inf 2009;78:32–8.
[9] Morrow JB, Dobbie AE, Jenkins C, Long R, Mihalic A, Wagner J. First-year medical students can demonstrate EHR-specific communication skills: a control-group study. Fam Med 2009;41:28–33.
[10] Reis S, Cohen-Tamir H, Eger-Dreyfuss L, Eisenburg O, Shachak A, Hasson-Gilad D, et al. The Israeli patient–doctor–computer communication study: an educational intervention pilot report and its implications for person-centered medicine. Int J Person Centered Med 2011;1:776–81.
[11] Ventres W, Kooienga S, Vuckovic N, Marlin R, Nygren P, Stewart V. Physicians, patients, and the electronic health record: an ethnographic analysis. Ann Fam Med 2006;4:124–31.
[12] Brennan RL, Prediger DJ. Coefficient kappa: some uses, misuses, and alternatives. Educ Psychol Meas 1981;41:687–99.
[13] Landis JR, Koch GG. The measurement of observer agreement for categorical data. Biometrics 1977;33:159–74.
[14] Makoul G, Curry RH, Tang PC. The use of electronic medical records. J Am Med Inform Assoc 2001;8:610–5.
[15] Booth N, Robinson P, Kohannejad J. Identification of high-quality consultation practice in primary care: the effects of computer use on doctor–patient rapport. Inform Prim Care 2004;12:75–83.