Original Article
Usability and Acceptability of the QDACT-PC, an Electronic Point-of-Care System for Standardized Quality Monitoring in Palliative Care

Arif H. Kamal, MD, MHS, Dio Kavalieratos, PhD, Janet Bull, MD, Charles S. Stinson, MD, Jonathan Nicolla, MBA, and Amy P. Abernethy, MD, PhD

Division of Medical Oncology and Duke Cancer Institute (A.H.K., A.P.A.), Duke University Medical Center, Durham, North Carolina; Center for Learning Health Care (A.H.K., D.K., J.N., A.P.A.), Duke Clinical Research Institute, Durham, North Carolina; Four Seasons (J.B.), Flat Rock, North Carolina; Division of General Internal Medicine (D.K.), Department of Internal Medicine, University of Pittsburgh School of Medicine, Pittsburgh, Pennsylvania; and Forsyth Medical Center Palliative Care Services (C.S.S.), Winston-Salem, North Carolina, USA
Abstract

Context. Few resources exist to support collaborative quality monitoring in palliative care. These tools, if proven efficient through technology-enabled methods, may begin to routinize data collection on quality during usual palliative care delivery. Usability testing is a common approach to assess how easily and effectively users can interact with a newly developed tool.

Objectives. We performed usability testing of the Quality Data Collection Tool for Palliative Care (QDACT-PC), a novel point-of-care quality monitoring tool for palliative care.

Methods. We used a mixed methods approach to assess community palliative care clinicians' evaluations of five domains of usability. These approaches included clinician surveys after recording mock patient data to assess satisfaction; review of entered data for accuracy and time to completion; and thematic review of "think aloud" protocols to determine issues, barriers, and advantages of the electronic system.

Results. We enrolled 14 palliative care clinicians in the study. Testing the electronic system vs. paper-based methods demonstrated similar error rates and time to completion. Overall, 68% of the participants believed that the electronic interface would not pose a moderate or major burden during usual clinical activities, and 65% thought it would improve the care they provided. Thematic analysis revealed significant issues with paper-based methods alongside training needs for future participants on using the novel technologies that support the QDACT-PC.

Conclusion. The QDACT-PC is a usable electronic system for quality monitoring in palliative care. Testing reveals equivalence with paper for data collection time, but with less burden overall for electronic methods across other domains of usability. J Pain Symptom Manage 2015;50:615-621. © 2015 American Academy of Hospice and Palliative Medicine. Published by Elsevier Inc. All rights reserved.

Key Words
Quality monitoring, palliative care, health services delivery
Address correspondence to: Arif H. Kamal, MD, MHS, Box 3436, Duke University Medical Center, Durham, NC 27710, USA. E-mail: [email protected]

Accepted for publication: May 19, 2015.

© 2015 American Academy of Hospice and Palliative Medicine. Published by Elsevier Inc. All rights reserved. 0885-3924/$ - see front matter. http://dx.doi.org/10.1016/j.jpainsymman.2015.05.013

Introduction

Health care information technology (HIT) is a burgeoning area of development and study for deploying standardized distress assessments to patients with serious illnesses. Meeting expectations for performing and documenting comprehensive assessments is challenging in strained palliative care environments. Solutions that are usable and feasible are needed to increase the frequency of standardized assessments, thus assisting clinicians in performing their clinical evaluations of patient needs.
Furthermore, these data can reflect on the quality of care provided, so that clinicians and leaders can understand what processes of care were delivered and what potential improvements can be made to increase the quality of care.

Within a novel community/academic collaboration of four community-based clinical palliative care organizations and an academic research program at Duke University (the Carolinas Palliative Care Consortium), we have been developing an HIT-based platform for conducting point-of-care assessments that simultaneously monitor patient distress and reflect on the quality of care delivered,1 called the Quality Data Collection Tool for Palliative Care (QDACT-PC). This Web-based, provider-entered electronic tool queries patient-reported outcomes, information available through other clinical documentation methods (e.g., medical charts), and knowledge of patient outcomes (e.g., admission date, date of death) to identify and prioritize unmet needs for clinical intervention, while evaluating the quality of care delivered to inform quality improvement initiatives. For example, QDACT-PC includes a standardized pain assessment from the Edmonton Symptom Assessment System.2 If a clinician enters that a patient's pain score is "9 out of 10," then information to drive clinical management is learned (e.g., the need for an opioid for severe pain). Furthermore, information on the quality of care delivered is also learned: in this situation, the clinician met two of the quality measures from the National Quality Forum (NQF)-endorsed set for palliative care (NQF #1634 Pain Screening and NQF #1637 Pain Assessment). Data on how these quality measures were met, as recorded in QDACT-PC, can then be aggregated to show adherence to measures over time (e.g., pain assessment frequency over 100 patients) or across clinician populations (e.g., pain assessment frequency across 10 clinicians in one organization).

Using the eight domains for quality palliative care defined by the NQF, we arrived at 92 questions that inform more than 80% of all quality measures for palliative care found in a recent systematic review.3,4 This questionnaire was then placed within the QDACT-PC software. We also developed processes for data transmission between the software and a central database.5 Although novel in design, a robust method to test its usability and acceptability among palliative care clinicians was a necessary first step toward use in live clinical environments.

Usability testing is routinely incorporated6,7 as the initial procedure in assessing HIT software acceptability in its designed environment. Comprising five components (memorability, efficiency, error rate, ease of use, and satisfaction),8 usability testing is a critical check of how the software and its associated hardware platform perform (apart from issues of validity and reliability of the content itself).
In general terms, usability can be thought of as a comprehensive assessment of a product's features, functionality, visual appeal, and usefulness.9 Usability testing inherently serves as the first step of a systematic evaluation of the ability to implement HIT. Its main purpose is to identify significant problems early and to use the lessons learned from each round of testing to inform updates and changes before pilot testing in the clinical environment can occur. Other evaluations of data robustness (e.g., data validity and reliability) and of questionnaire utility to inform care (e.g., piloting) require information learned from usability testing, making this step a critical initial testing procedure.

We conducted this protocol to test the usability of the QDACT-PC software vs. the current standard for performing assessments: traditional paper questionnaires. Using rigorous testing methods and relying on feedback obtained from both community and academic palliative medicine clinicians, we aimed to demonstrate how data on usability could inform refinements of HIT systems for palliative medicine. In conducting this protocol, we arrived at several important conclusions about how HIT compares with paper data collection methods and about the testing needed before large-scale implementation.
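As an illustrative aside (the notation and numbers below are ours and are not part of the published NQF measure specifications or the QDACT-PC software), the aggregation described above amounts to computing a conformance proportion for a measure m and a clinician or program c:

$$
\mathrm{Adherence}_{m,c} \;=\; \frac{\#\{\text{eligible encounters of } c \text{ that satisfy measure } m\}}{\#\{\text{eligible encounters of } c\}} \times 100\%
$$

For example, with hypothetical numbers, if 8 of 10 eligible encounters included a documented pain assessment, adherence to NQF #1637 for that clinician would be 80%; the same proportion can be computed over time windows or pooled across the clinicians of an organization.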
Methods

We conducted this study within the Carolinas Palliative Care Consortium. Members include four community-based palliative care organizations (Four Seasons, Flat Rock, NC; Hospice of Wake County, Raleigh, NC; Novant Health Forsyth Medical Center Palliative Care, Winston-Salem, NC; and Hospice and Palliative Care Charlotte Region) and one academic center (Duke Center for Learning Health Care, Duke University). Four rounds of usability testing were conducted between May and July 2011. All four organizations provided participants for each round of testing. Palliative care providers of all training backgrounds, experience, and comfort with use of HIT were recruited. A total of 20 participants were planned.

Testing was performed with the QDACT-PC questionnaires presented across four different platforms: a Windows® laptop (Lenovo USA, Morrisville, NC), an Apple® laptop (Apple, Inc., Cupertino, CA), an Apple iPad® (Apple, Inc., Cupertino, CA), and the traditional paper form. Participants were randomized to one of these four platforms. The paper form of the QDACT-PC was 17 single-sided pages with the same content as found in the electronic versions.
Demographic information, including familiarity with technology, was collected. Institutional Review Board approval was obtained from Duke University Medical Center.

Each round of usability testing followed the same protocol. After obtaining written informed consent, the study procedure involved a verbal, 10-minute tutorial on using the QDACT-PC, followed by observation of each participant completing two mock cases representative of typical palliative care encounters. Each mock case was presented in a written format that contained the necessary information to complete all steps of the QDACT-PC, from patient registration through completion of the questionnaire. One case was performed using a randomly selected technology platform (iPad or laptop computer); the other case was completed using the paper form. Participants were asked to follow a "think aloud" protocol10 that involved speaking out loud all the thoughts and steps involved in taking information from the mock cases and entering those data into the QDACT-PC platforms. These observations were recorded using both the research personnel's notes and an audio recorder. One week later, surveys addressing clinical utility and overall satisfaction were mailed to all participants; anonymous responses were collected and analyzed. A period of seven days was provided between using the system and assessing satisfaction to allow subjects to reflect on how the system could be integrated into their usual workflow as experienced over the following week, and to reduce social desirability bias between the research personnel and participants.

After each round of testing, the research notes were cataloged and summarized. All issues, criticisms, and suggestions were grouped. Any area of improvement shared by at least 30%11 of respondents was then forwarded to the research team to be addressed by software updates. On completion of each round and its associated software update, the subsequent round was conducted. Repeated rounds of usability testing were planned until all issues were less than 30% prevalent.

Each round tested the five domains of usability through quantitative and qualitative methods. For ease of learning, investigators assessed whether the 10-minute oral tutorial was adequate for participants to use the QDACT-PC effectively. For efficiency, the benchmark was completion of the QDACT-PC within 30 minutes, determined as <50% of the total average consultation time across the Consortium. To determine memorability, we observed whether, during the second case, participants could open the software and start entering patient data within five minutes without any instruction. Error frequency was considered insignificant if, in post hoc analysis, <10% of queries were entered in error.
Subjective satisfaction would be achieved if >70% of respondents agreed that the QDACT-PC V2.0 was acceptable in the domains of clinical utility, quality measurement utility, provider burden, and patient burden, as well as the utility of the tutorial. Feedback not related to software usability, such as comments concerning hardware or issues of data collection at the point of care, was recorded but not included in analyses. P-values were considered significant if P < 0.05.

After the completion of Round 1, three multidisciplinary investigators (A. H. K., a palliative care clinician; W. D., a social worker; and D. K., a health services researcher with qualitative expertise) jointly reviewed field notes to refine the cognitive interviewing strategy for future rounds of testing. Data for the final qualitative analysis consisted of field notes, audio recordings, and open-ended survey items. Template analysis was used to identify themes and to summarize our findings.12 After the completion of each successive round of testing, one investigator (D. K.) independently coded that round's qualitative data using the schematic developed in Round 1. The three investigators met weekly to debrief and review coding. We retained an extensive audit trail during analysis for the sake of transparency and qualitative trustworthiness.13 Three study participants (J. B., C. S. S., and Debra Blue) performed "member checking" to confirm the credibility and reliability of our qualitative findings.14 Member checking is used to test interpretations and conclusions with members of the groups from whom the data were originally obtained. We used QSR NVivo 9 (QSR International, Doncaster, Australia) for qualitative data management and coding.
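For concreteness, the two quantitative benchmarks can be written as simple inequalities; the worked numbers below are hypothetical illustrations rather than study data:

$$
t_{\mathrm{completion}} < 30\ \text{minutes} \approx 0.5 \times \bar{t}_{\mathrm{consultation}},
\qquad
\text{error rate} = \frac{n_{\text{entries in error}}}{n_{\text{entries attempted}}} \times 100\% < 10\%
$$

For example, a participant who completed all 92 questionnaire items with 7 entries disagreeing with the mock case source document would have an error rate of 7/92, or approximately 7.6%, falling within the prespecified threshold.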
Results

Quantitative Findings

Twenty providers were approached to participate in usability testing. We enrolled 14 providers: nine physicians (MDs), two nurse practitioners, and three physician assistants (Table 1). The median age of our participants was 44 years, and eight were female. Participants reported a median of 12 years of overall clinical practice; three years was the median amount of time spent as a palliative care provider. Regarding technology, our sample appeared fairly computer literate: 79% (n = 11) stated that they used computers "almost hourly," and 71% (n = 10) claimed that they were "absolutely comfortable" using computers for common business-related tasks (e.g., e-mail, word processing, Internet browsing).

Table 1. Provider Characteristics

Characteristic                                              N = 14 (%)
Age in years, median (range)                                45 (31-66)
Gender
  Male                                                      6 (43)
  Female                                                    8 (57)
Provider type
  Physician                                                 9 (64)
  Nurse practitioner                                        2 (14)
  Physician assistant                                       3 (21)
Total years as a clinician, median (range)                  12 (1-40)
Total years as a palliative care provider, median (range)   3 (1-15)
Frequency of overall computer use
  Almost hourly                                             11 (79)
  Twice daily                                               2 (14)
  Daily                                                     1 (7)
Frequency of EMR use
  Always                                                    7 (50)
  Usually                                                   2 (14)
  Rarely                                                    3 (21)
  Never                                                     2 (14)
Operating system used
  Windows (Microsoft)                                       12 (86)
  MacOS (Apple)                                             1 (7)
  Both Windows and MacOS                                    1 (7)
Internet browser most frequently used
  Internet Explorer (Microsoft)                             9 (64)
  Chrome (Google)                                           1 (7)
  Firefox (Mozilla)                                         2 (14)
  Both Internet Explorer and Firefox                        2 (14)
Frequency of tablet computer use
  Daily                                                     2 (14)
  A few times per week                                      3 (21)
  Rarely                                                    3 (21)
  Never                                                     6 (43)
Comfort level in using computer for general tasks (a)
  Absolutely comfortable                                    10 (71)
  Mostly comfortable                                        4 (29)

EMR = electronic medical record.
(a) Computer use comfort level was assessed by asking "How comfortable are you using your usual computer and Internet browser for word processing, surfing the internet, checking email, etc.?"
Per the randomization strategy, nine (32%) of the total 28 attempts (14 participants with two mock patient scenarios each) were executed on a Windows laptop, four (14%) on an Apple laptop, eight (29%) on an iPad, and seven (25%) on the paper form. At the end of testing, most (n = 8, 57%) of the participants noted that their preferred platform was the iPad, whereas five (36%) preferred a Windows laptop, and one (7%), an Apple laptop. Exit survey findings also indicated that most participants (68%) believed that the electronic interface would not pose a moderate or major burden during usual clinical activities for clinicians or patients, while also improving the care they provide (65%).

Participants spent approximately one minute less time during the completion of their second mock case (Table 2). In general, the electronic interfaces of QDACT-PC required longer sessions (median electronic interface time = 15 minutes 8 seconds) than did the paper version (median = 14 minutes 25 seconds). The platform requiring the most active time was the iPad (median = 15 minutes 34 seconds).

Table 2. Median Time to Completion of Each Mock Case

Comparison                       Category                            Time to Completion (Minute:Second)
Mock case                        First case                          15:54
                                 Second case                         14:05
Interface                        Paper                               14:25
                                 Electronic                          15:08
Preference                       Preferred electronic interface      15:34 (a)
                                 Nonpreferred electronic interface   14:32 (a)
iPad vs. all other interfaces    iPad                                15:34
                                 All other                           14:48
Tablet vs. laptop                iPad                                15:34
                                 Laptop                              14:53
Overall median                                                       14:52

(a) n = 12, as two participants indicated a preference for a platform that they did not use during testing.
Note: All P-values are nonsignificant.
Regarding errors, we observed an overall rate of only 8.33% (Table 3). Participants made slightly more errors during completion of the second mock case (8.93% vs. 7.14%). Although the paper interface showed the lowest rate of errors (5.95%), the iPad and Apple laptop both had error rates of only 8.33%.

Table 3. Median Error Rate (%)

Comparison          Category          Error Rate (%)
Mock case           First case        7.14
                    Second case       8.93
Testing platform    Paper             5.95
                    Windows laptop    7.14
                    Apple laptop      8.33
                    Apple iPad        8.33
Overall error rate                    8.33

Note: P > 0.05 for testing of differences between platforms.

Qualitative Findings

We identified the following three overarching themes from transcripts of participants who used the paper version of QDACT-PC: 1) burden; 2) reduction of patient-centeredness; and 3) interference with provider-patient rapport. Several participants commented on the bulk of the paper form. At 17 pages, the form was perceived to be burdensome, especially for a provider conducting outpatient visits: "It's a huge amount of paper. It seems daunting" (an MD). Second, participants were concerned that the paper form would pose a "physical barrier" between provider and patient and that the act of flipping pages might be "intrusive." As one provider noted, the presence of the form might damage patients' perceptions of provider engagement, leading patients to believe that the visit "was about the stack of paper and not about them" (MD). In the same vein, participants expressed unease that the paper form could interfere with the patient-provider connection, a connection so critical given the nature of palliative care. While discussing the paper form as the previously mentioned "physical barrier," one participant stressed the importance of forging a strong empathic bond with patients and remaining minimally distracted during a consultation: "Well ... we [palliative care providers] elicit stories. Sometimes, I get a patient who's not overly talkative, so I have to draw it out of them" (a physician assistant).

Participants praised the advantages of the electronic versions of QDACT-PC (i.e., laptops and iPad), such as the color-coded item status indicator and the conditional logic system that adaptively prompts or collapses items based on previous responses. Many stated that such features made the system efficient, intuitive, and easy to navigate. Several participants noted, however, that additional training was necessary to ensure that users would appreciate and use such enhancements. Although generally enthusiastic about the iPad-based version of QDACT-PC, some participants voiced frustrations regarding the precision of finger gestures. Specifically, several participants had difficulty with tapping and were interested in using a stylus. Nevertheless, overall support for the iPad-based version was very strong among participants, who were confident that once the iPad-specific learning curve was overcome, its use during a clinical encounter would "flow quickly" (nurse practitioner). Participants were confident of the iPad's relative superiority to a laptop-based version, citing the advantages of its physical size, lighter weight, and portability.

Participants were mixed regarding their intentions to enter data at the point of care. Whereas some participants felt that patients would appreciate being actively involved in the data collection process, others were concerned that patients might feel that "data entry is more important than patient care" (MD). Furthermore, as with traditional clinical documentation, participants expressed varied preferences about charting at the bedside vs. waiting until the end of the day.

Overall, support and enthusiasm were articulated for the use of a standardized data collection system in palliative care (Fig. 1). Believing that such systems could have "great implications" for the future practice of palliative care, participants used terms such as "phenomenal" (MD) and "smart" (MD) to describe the digital QDACT-PC platforms. Despite such enthusiasm, trepidation was conveyed regarding the comprehensiveness of the questionnaire. Several participants expressed concerns about the potential for inferior data quality given the added commitment commanded by systems such as the QDACT-PC. Participants feared the potential for increased data missingness in busy settings such as hospital units.
Fig. 1. Exit survey results. Bar chart of the number (%) of participants selecting each response option (Not at all, Somewhat, Moderately, Very much/Extremely) for the following exit survey items: perceived interference of QDACT with consultation; does QDACT present a major burden during usual activities?*; does QDACT present a major burden to patients?; will QDACT help you provide clinical care?; perceived ease of using QDACT in clinical encounters; perceived memorability of QDACT; perceived colleagues' ease of use in clinical encounters; will QDACT help in self-reflection regarding care, including quality?; would QDACT improve clinical care?; and perceived ability to incorporate QDACT into clinical flow. *Values for this question will not sum to 100%, due to missing responses.

Discussion

By conducting an industry-standard usability assessment of the QDACT-PC software, we have learned several lessons related to the integration of an electronic quality monitoring system for palliative care.
As compared with paper forms, we found comparable accuracy and time for completion and high satisfaction with the HIT-based QDACT-PC software and its associated hardware. We also noted moderate usability in its current form; feedback from users provided robust insight into how future iterations should evolve to meet the workflow demands of busy clinicians.

The results support several important conclusions. We found that completion of a 92-question assessment tool consistently required about 15 minutes. As most providers indicated that they would conduct the assessment integrated with their usual questioning, this would add minimal burden to how they usually perform an initial consultation. Furthermore, as there is no expectation that all questions will be queried within the span of one consult, the actual time required of providers to interact with the QDACT-PC is expected to be much less. Generally, error rates of less than 15% using HIT are considered excellent during usability testing15; the consistent rate of about 8% was surprising and reassuring regarding the accuracy of these methods. Also, most participants believed that the QDACT-PC tutorial was memorable, was not burdensome, did not interfere with usual workflow, and was relatively easy to use in clinical environments.

Others have also demonstrated the development and formal usability testing of HIT-enabled palliative care assessments. Kallen et al.,16 from a research group at the University of Texas M. D. Anderson Cancer Center, recently reported on the patient and caregiver usability of an electronic medical record-compatible software prototype. Their prototype was created to allow typical clinical data and patient-reported outcomes to be entered and stored to improve provider practices in performing standardized assessments. Using mixed quantitative and qualitative approaches, they demonstrated the usability of their system to address key areas of improvement identified previously, such as workflow inefficiency, data inaccuracy, difficulty in data interpretation, and challenges to patient-provider and provider-provider communication. Another group, led by Dy et al.17 at Johns Hopkins University in partnership with Medical Decisions Logic, Inc., recently described the development of a software-based product to collect routine symptoms and needs for care and present this information back to providers and patients. Citing that reporting mechanisms are key to driving improved care, this group has diligently created and refined an electronic feedback system to highlight unmet needs and bring them to providers' attention.

Using HIT-based assessment tools in palliative care inherently has its challenges. First, many palliative care providers are not equipped to integrate electronic documentation into clinical practice.
A large variety of clinical documentation methods are used by palliative care programs (e.g., handwritten chart notes, dictation transcribed by third-party vendors, electronic self-entry into software-based medical records), resulting in inconsistent familiarity with using HIT to document assessments. Second, devoting resources to standardized assessment methods requires cultural buy-in from clinicians to adopt core evaluation techniques and tools; versioning to meet individual provider requests becomes costly and ultimately fragments the data being collected. Third, although HIT can increase the efficiency with which data are entered while also making it easier to pull previous data elements forward (e.g., "pulling" the medication list from the previous visit), this may come at the expense of data validity. Finally, the financial burdens of implementing electronic solutions may be substantial. Whether most palliative care organizations can implement such programs remains an important area of study.

Several limitations exist. The number of research participants was relatively small; generally, numbers above 10 are adequate for robust usability testing by historical industry standards. We also performed analyses in aggregate across all four rounds of usability testing; thus, we could not confirm whether iterative changes between rounds affected time for completion, satisfaction, or error rates. As sample sizes for each round were quite small, statistical analysis of this type would have been minimally informative. Last, as iPad-focused training was not provided (i.e., how to scroll, how to use touch screens), it remains difficult to separate inefficiencies related to unfamiliarity with this type of tablet computing from limitations of the QDACT-PC software itself.
Conclusions

The QDACT-PC software is a usable platform for performing quality-based assessments in palliative care. Built through a novel collaboration between academic and community palliative care providers, the QDACT-PC is designed for incorporation into the usual workflow to perform comprehensive assessments using validated tools across quality-of-care domains. Future reliability, validity, and feasibility testing will establish a robust evidence base for regular use of the QDACT-PC in all palliative care settings.
Disclosures and Acknowledgments

Funding for this work was provided by the Duke Endowment. Dr. Abernethy receives research funding from the Agency for Healthcare Research and Quality, National Cancer Institute, National Institute of Nursing Research (National Institutes of Health [NIH]), National Institute on Aging (NIH), and the Robert Wood Johnson Foundation. She receives industry funding for clinical research from Pfizer, Lilly, Bristol-Myers Squibb, Helsinn, Amgen, Kanglaite, and Abbott Laboratories. She is a consultant (<$10,000/year) for Helsinn, Proventys, and GlaxoSmithKline. Dr. Bull is on the Scientific Advisory Board of Archimedes and Meda Pharmaceuticals, and the Speakers Bureau of Pfizer and Meda. All other authors have no disclosures.

The authors acknowledge William Downey and Debra Blue for their assistance in reviewing field notes and data coding, and Donald T. Kirkendall, ELS, a Duke-employed editor, for his assistance in article preparation.
References

1. Bull J, Zafar SY, Wheeler JL, et al. Establishing a regional, multisite database for quality improvement and service planning in community-based palliative care and hospice. J Palliat Med 2010;13:1013-1020.
2. Bruera E, Kuehn N, Miller MJ, Selmser P, Macmillan K. The Edmonton Symptom Assessment System (ESAS): a simple method for the assessment of palliative care patients. J Palliat Care 1991;7:6-9.
3. Kamal AH, Bull J, Stinson C, et al. Collecting data on quality is feasible in community-based palliative care. J Pain Symptom Manage 2011;42:663-667.
4. Kamal AH, Gradison M, Maguire JM, Taylor D, Abernethy AP. Quality measures for palliative care in patients with cancer: a systematic review. J Oncol Pract 2014;10:281-287.
5. Abernethy AP, Wheeler JL, Bull J. Development of a health information technology-based data system in community-based hospice and palliative care. Am J Prev Med 2011;40:S217-S224.
6. Coons SJ, Gwaltney CJ, Hays RD, et al.; ISPOR ePRO Task Force. Recommendations on evidence needed to support measurement equivalence between electronic and paper-based patient-reported outcome (PRO) measures: ISPOR ePRO Good Research Practices Task Force report. Value Health 2009;12:417-429.
7. Stinson JN, Petroz GC, Tait G, et al. e-Ouch: usability testing of an electronic chronic pain diary for adolescents with arthritis. Clin J Pain 2006;22:295-305.
8. Department of Health and Human Services. Usability.gov: improving the user experience. 2014. Available at: http://www.usability.gov/. Accessed January 15, 2015.
9. Corrao NJ, Robinson AG, Swiernik MA, Naeim A. Importance of testing for usability when selecting and implementing an electronic health or medical record system. J Oncol Pract 2010;6:120-124.
10. Jaspers MW, Steen T, van den Bos C, Geenen M. The think aloud method: a guide to user interface design. Int J Med Inform 2004;73:781-795.
11. Hvannberg ET, Law EL-C, Larusdottir MK. Heuristic evaluation: comparing ways of finding and reporting usability problems. Interacting Comput 2007;19:225-240.
12. King N. Template analysis. In: Symon G, Cassell C, eds. Qualitative methods and analysis in organizational research. London: Sage Publications, 1998.
13. Patton MQ. Qualitative research and evaluation methods. Thousand Oaks, CA: Sage Publications, 2001.
14. Lincoln YS, Guba EG. Naturalistic inquiry. Beverly Hills, CA: Sage Publications, 1985.
15. Kushniruk A, Borycki E. Exploring the relationship between usability and technology-induced error: unraveling a complex interaction. Stud Health Technol Inform 2011;166:48-56.
16. Kallen MA, Yang D, Haas N. A technical solution to improving palliative and hospice care. Support Care Cancer 2012;20:167-174.
17. Dy SM, Roy J, Ott GE, et al. Tell Us: a Web-based tool for improving communication among patients, families, and providers in hospice and palliative care through systematic data specification, collection, and use. J Pain Symptom Manage 2011;42:526-534.