Review Paper
A scoping review of studies evaluating the education of health professional students about public health*

C. Evashwick a, D. Tao b, M. Perkiö c, M. Grivna d, R. Harrison e,*

a Milken Institute of Public Health, George Washington University, Washington, DC, United States
b Saint Louis University, St. Louis, United States
c University of Tampere, Tampere, Finland
d UAE University, Al Ain, United Arab Emirates
e The University of Manchester, Manchester, United Kingdom
article info

Article history:
Received 1 February 2019
Received in revised form 19 August 2019
Accepted 26 August 2019

Keywords:
Public health
Health profession education
Pedagogy
Evaluation
Evidence-based practice
Public health workforce capacity building

abstract

Background: The purpose of this article is to identify and describe key components of research into the teaching methods of public health to postgraduate students.
Study design: This is a systematic review of the published literature.
Methods: A detailed search of PubMed, Scopus, and ERIC was made using keywords, Boolean operators, and free-text terms for articles published in the English language between January 2000 and December 2017. Pairs of independent reviewers agreed on the studies eligible for the review and performed data extraction.
Results: Of the 2,442 potential studies on the education of public health professionals, 86 met all the inclusion criteria. Specific study designs, data collection, and techniques for data analysis varied widely across the individual studies, and there was a lack of consistency on the whole. The number of students in each study ranged from ten to 1,300. Forty-seven studies used quantitative methods to assess the effectiveness of teaching. Curriculum evaluation was the most common focus (n = 33), followed by course evaluation (n = 22). Few studies considered inequalities in terms of the types of students registered on the different courses/programs, with just three evaluating strategies to increase students from minority ethnic groups. Most studies evaluated short-term or medium-term outcomes rather than the long-term impacts of education on students' careers or the role of education in meeting future public health workforce demands.
Conclusions: This comprehensive systematic review identified a dearth of literature on evaluations of approaches for teaching public health to health professions students. The studies that had been published varied to such an extent in their aims, methods, analysis, and results that it was impossible to make any consistent comparisons of the observations they reported. We conclude that evidence-based approaches for teaching public health to health professions students are either not sought by faculty and programs or, if conducted, not shared. As such, there are likely to be missed opportunities for ensuring that future graduates of health professions programs are as well prepared as possible to contribute to the health of the public.
© 2019 The Royal Society for Public Health. Published by Elsevier Ltd. All rights reserved.

* This article reports a scoping review of published studies that evaluate the teaching of health professional students about public health. It is of value to those providing and in receipt of education, along with those responsible for ensuring that there is a well-trained workforce to safeguard the health of the public around the world.
* Corresponding author. E-mail address: [email protected] (R. Harrison).
https://doi.org/10.1016/j.puhe.2019.08.019
0033-3506/© 2019 The Royal Society for Public Health. Published by Elsevier Ltd. All rights reserved.
The purpose of this article is to identify and describe the key characteristics of research pertaining to the evaluation of teaching the subject of public health to health professional students (i.e., at the postgraduate level) around the world. Relative to other types of public health research, evaluation has been given little attention by educators and program leaders in health professionals' higher education. This is ironic, given public health professionals' mantra of implementing evidence-based practice in all areas of their work.
Background

In a 2010 report of a committee sponsored by the China Medical Board, Frenk et al.1 examine the education of physicians, nurses, and public health professionals for the 21st century. They argue that a new type of education, based on a systems perspective, is imperative for the healthcare workforce of the coming century. As part of their assessment of the current status of health professionals' education, they contrast the extensive research conducted by the discipline of medicine about its pedagogy with the relative lack of research conducted about their own pedagogy by those educating students for public health. The research question driving the study reported here is as follows: 'Do those who are educating health professional students about the subject of public health evaluate the curriculum and pedagogy/methods for those whom they are teaching?'

Public health professionals work in an environment within which 'complexity …is at least as great as that required for other professional groups.'2 In light of this, the quality of education is recognized as a key element in preparing health professionals to be part of the public health workforce of the future,1 as reflected in a series of publications by the United Nations Educational, Scientific and Cultural Organization (UNESCO).3 The process and results of evaluation have the potential to be used to enhance education in a number of different ways, including recognizing excellence, ensuring student and academic satisfaction, and meeting the expectations of internal review and external accreditation.4-7

Disseminating the results of evaluation across the academic teaching and learning community is imperative. Although many accrediting bodies require programs and institutions to assess their education against set standards, the majority of evaluations conducted for such purposes probably remain proprietary or at least are not widely shared. Consequently, the discipline is not advanced by identifying successes or finding failures. This applies to public health and other health professional disciplines. For example, the Council on Education for Public Health8 accredits more
than 200 schools and programs of public health in the USA, as well as a few in other countries. Although the Council on Education for Public Health (CEPH) holds a rich body of information and many detailed examples of answers to the questions posed by UNESCO, this information is not disseminated and not readily available.8 This is a missed opportunity.

The present study was part of a long-term goal to enhance the effectiveness of public health workforce education, both in our own practice in different countries and for educators across the board who are teaching public health. We started by identifying and characterizing the work that has already been carried out on educational evaluation. The intent was to establish a baseline of current practices that would form a foundation for promoting the future adoption of evaluation as an integral component of the education of student health professionals about public health.
Methods

This is a descriptive study of published evidence from peer-reviewed journals. A comprehensive search strategy was developed and applied to PubMed, Scopus, and ERIC, making full use of controlled vocabularies, keywords, and Boolean operators. The first stage of identifying eligible articles was based on screening the title and abstract of each study identified from the search. The second stage involved reading a full copy of each article that met the first-stage screening criteria, as well as those for which a more detailed reading was necessary to ascertain eligibility.

The data sought from each study included the following: year of publication, journal name, authors, country of origin (if not reported, operationalized as the country of the institutional affiliation of the first author), and details of the methodology used for the educational evaluation. The subject of the evaluation was characterized according to its focus (curriculum, course, learning/teaching technique, or program), its purpose (informational, to shape change, or to comply with accreditation), the discipline (the search included all health professional disciplines), and the number of institutions involved. Information regarding the evaluation itself included the methodology (coded as quantitative, qualitative, mixed methods, or commentary/opinion/perspective) and the number and background of study participants. To capture the heterogeneity of the public health workforce and related education and training, the review also collected information on any subdisciplines included in each selected study. Additional information in support of the review was collected, such as reference to
accreditation or external standards to frame the evaluation methodology or purpose, and information referring to the study's funding source/provider. A combination of narrative and quantitative summaries was used to describe the relevant data and to help synthesize the results. The level of heterogeneity was such that formal statistical testing was untenable.
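For illustration only, the sketch below shows one way the data-extraction items described above might be represented programmatically. The field names and category values mirror the items listed in the Methods, but the structure itself is our own assumption rather than the authors' actual extraction instrument, and the example values are invented.

```python
# Hypothetical data-extraction record mirroring the items listed in the Methods.
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class ExtractionRecord:
    year: int                       # year of publication
    journal: str
    authors: list[str]
    country: str                    # country of origin, else first author's affiliation
    focus: str                      # curriculum | course | learning/teaching technique | program
    purpose: str                    # informational | to shape change | accreditation compliance
    discipline: str                 # e.g., public health, medicine, nursing
    n_institutions: int
    methodology: str                # quantitative | qualitative | mixed methods | commentary
    n_participants: Optional[int] = None
    subdisciplines: list[str] = field(default_factory=list)
    uses_external_standards: bool = False
    funding_source: Optional[str] = None

# Example entry (all values are illustrative only)
example = ExtractionRecord(
    year=2014, journal="BMC Medical Education", authors=["A. Author"],
    country="United Kingdom", focus="course", purpose="informational",
    discipline="public health", n_institutions=1, methodology="mixed methods",
    n_participants=120,
)
print(example.focus, example.methodology)
```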
Results

The search from January 2000 to December 2017 identified 2,442 articles for initial screening. Of these, 1,855 were excluded on the basis of the title/abstract; 587 proceeded to full-text review, of which a further 501 were rejected. Thus, 86 studies met all the inclusion criteria (see the online supplementary material for a list of all articles meeting the full eligibility criteria).
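As a simple check of the screening arithmetic reported above (ours, not part of the original analysis), the counts can be reproduced as follows.

```python
# Sanity check of the reported screening flow: identified -> title/abstract
# exclusions -> full-text review -> full-text exclusions -> included.
identified = 2442                 # records retrieved from PubMed, Scopus, and ERIC
excluded_title_abstract = 1855    # excluded on title/abstract screening
full_text_reviewed = identified - excluded_title_abstract
excluded_full_text = 501          # excluded after full-text review
included = full_text_reviewed - excluded_full_text

assert full_text_reviewed == 587
assert included == 86
print(f"{full_text_reviewed} full-text reviews, {included} studies included")
```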
General characteristics

The 86 included studies that evaluated the education of health professional students about public health exhibited a wide range on almost every characteristic examined. The articles emanated from authors in a wide range of geographic areas (Table 1), spanning all continents except South America. The 86 articles had a total of 339 authors; the number of authors per article ranged from 1 to 20, with only four articles written by a single author (see Fig. 1). As shown in Fig. 2, articles were published in every year covered by the study, from 2000 to 2017. There was a positive trend, with the number of publications increasing from three in 2000 and 2001 to a peak of 13 in 2014. Thereafter, the number of publications dropped, which could reflect the time needed for original articles to be published and indexed in the databases used for the search. Disciplines represented included public health (n = 49), medicine (n = 23), nursing (n = 2), dentistry (n = 3), pharmacy (n = 2), law (n = 1), and dietetics (n = 1). Five articles incorporated two or more disciplines or represented evaluations of interdisciplinary or multidisciplinary courses. Social work was included in two of the dual-discipline evaluations. Within public health, the subdisciplines represented were health administration (n = 5) and health education (n = 4), with one publication each for biostatistics, environmental health, and nutrition.
Table 1 – Geographic distribution of lead authors, by region.

Country/region                           Number of publications
North America                            49
Europe                                   20
Asia                                     7
Mid-East                                 4
Australia                                2
Africa                                   2
Other (multicountry, no information)     2
Total                                    86
The individual studies were dispersed across 61 different journals. Seventeen journals published more than one article. Two journals published five articles each: American Journal of Health Education and BMC Medical Education.
Evaluation methodology

A total of 47 studies reported the use of quantitative methods, 15 were qualitative, a further 17 used mixed methods combining quantitative and qualitative methodologies, and seven were some form of commentary or descriptive study. The data collection methods varied widely. Quantitative studies ranged from online surveys of students asking about knowledge or attitudes, to analyses of examination scores obtained from university records, to paper surveys sent to alumni. Qualitative data were obtained through focus groups, interviews, reflections, and thematic analysis of surveys using open-ended questions. Mixed-methods studies combined data from different samples, for example, quantitative data on student performance with opinions obtained from a smaller sample of students, alumni, or faculty.

Research designs were similarly varied. A few studies used comparison groups: pre-post tests of self-assessed competencies or objective knowledge questions were the most common methodology; several studies compared student examination performance across universities; and in two instances, student performance was compared with a national examination standard. Comparisons of cohorts at different times were absent, although several studies combined students from a number of years into a single sample.

Techniques for data analysis reflected the broad range of data collection methods and sample sizes. A few studies reported sample sizes and survey instruments of sufficient magnitude to perform regression analyses. Chi-squared tests and t-tests were used to assess statistical significance between groups, primarily for pre-post comparisons. Qualitative studies converted their findings into quantitative values through thematic analysis, conducted by experts or with software, to record frequencies. Many studies reported only descriptive statistics.
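To make the typical analyses concrete, the following sketch shows how a pre-post paired t-test and a chi-squared comparison of the kind described above might be run. The data, group sizes, and variable names are invented for illustration and are not drawn from any of the included studies.

```python
# Illustrative pre-post analyses: a paired t-test on self-assessment scores and
# a chi-squared test on a 2x2 contingency table. All data below are invented.
import numpy as np
from scipy import stats

# Hypothetical self-assessed competency scores (1-5) before and after a course
pre = np.array([2.1, 3.0, 2.5, 3.2, 2.8, 2.4, 3.1, 2.6])
post = np.array([3.4, 3.8, 3.1, 4.0, 3.5, 3.0, 3.9, 3.2])

paired = stats.ttest_rel(pre, post)  # dependent-samples (pre-post) t-test
print(f"paired t-test: t = {paired.statistic:.2f}, p = {paired.pvalue:.3f}")

# Hypothetical pass/fail counts for two groups (e.g., pre vs. post cohorts)
table = np.array([[12, 18],    # group 1: 12 pass, 18 fail
                  [24, 6]])    # group 2: 24 pass,  6 fail
chi2, p, dof, expected = stats.chi2_contingency(table)
print(f"chi-squared: chi2 = {chi2:.2f}, dof = {dof}, p = {p:.3f}")
```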
Target audience

The majority of the evaluation studies focused on university students (i.e., students at higher education institutions). A few studies contacted alumni to gather objective information about their career path/employment or subjective assessments of the relevance of their education to their job. Three studies took the university itself as the subject, evaluating curricula for courses on particular subjects, with basic information obtained from a search of material posted on the Internet, followed by phone calls to administrative staff or program directors. Various combinations of students, alumni, and faculty were brought together in mixed-methods studies that attempted to triangulate information from multiple sources to assess the validity of a curriculum with respect to student outcomes. One study used an online survey to reach out to 1,300 alumni. Another used interviews and student reflections to solicit the opinions of a total class of 10 individuals. Five of the 86 studies included participants from multiple disciplines; ninety-four percent of the evaluations were specific to a single health professional discipline, primarily medicine or public health.
Fig. 1 – Flow chart of the screening process.
Fig. 2 – Number of articles eligible for the review, by year of publication (2000–2017, n = 86).
Subject of the evaluation

Table 2 shows the subject of the evaluations. Curriculum evaluation was the most common subject (n = 33), followed by course evaluation (n = 22). 'Competencies' and 'competency frameworks' tended to be embedded in curriculum
evaluations. Nineteen studies examined the results of specific teaching techniques, including service learning, community-based projects, and internships. Online and distance education were also evaluated and compared with traditional educational formats. The long-term perspective of matching educational content and capacity with workforce demand was rarely addressed and, where it was, consisted primarily of descriptive commentary. Evaluation studies of curricula did include those that asked alumni to evaluate the curriculum based on their subsequent job experience.
Table 2 – Subject of the evaluation studies.

Subject                                                  Number of publications
Evaluation of the curriculum                             33
Evaluation of the individual course                      21
Evaluation of specific teaching methods                  19
Evaluation of administration                             7
Descriptive study or commentary including evaluation     6
Total                                                    86
Three studies evaluated the results of a national program intended to increase the number of underrepresented ethnic groups in public health research.
Discussion

The preparation of the future public health workforce is a major concern.9 The present study was a response to criticisms about the extent to which public health professionals pay attention to the relevance and outcomes of their training.1,10 The scoping review sought to identify and characterize studies in the peer-reviewed literature designed to evaluate the education of health professional students being trained to participate in the public health workforce. As far as we can ascertain, this is the first time that a systematic approach has been used to describe evaluations of pedagogy pertaining to the public health workforce. The long-term purpose of the study was to identify themes and examples of exemplary evaluations that could help promote evaluation studies as an integral part of the education of health professionals about public health. We discuss the specifics of the findings first and then the larger considerations.
A number of articles evaluated the use of a specific, defined teaching method. Nine studies assessed online and web-based training. These studies consistently found online methods to be as effective as classroom-based teaching, an observation that is consistent with meta-analyses of online learning. The positive findings also offer reassurance that the public health workforce, which is highly dispersed geographically and ranges widely in its educational foundation and in-service needs, can be taught new material effectively through an online format.

As public health is a practical discipline, the outcome of education might be considered to be a workforce capable of meeting the community's need for effective and efficient public health functions. Although few in number, two types of studies evaluated curricula in the context of employer needs. Several studies followed up graduates to determine the impact of their education on their careers; these tended to seek input on satisfaction from the students' perspective rather than the employers'. Several studies surveyed Higher Education Institutions (HEIs) themselves to inquire about the offerings within their curricula and then compared the available content with public health system competency needs. Only one study surveyed employers, in the context of ascertaining the competencies required as part of their recruitment strategy.

The methods used to evaluate education clearly varied to such an extent that it was impossible to make anything other than broad classifications in terms of the study methods and/or evaluation frameworks used. No consistent methodologies were used to gather data or to analyze them. The methodology reflected the constraints of evaluating education: the samples are people who choose to participate in the education. Small numbers, bias, inability to track over time, and the unavailability of exact control groups all hinder the rigor of individual evaluations.
Broader considerations affecting evaluation studies

Critique of evaluation studies

The number of studies meeting the eligibility criteria was small. Over the 18 years covered by this review, on average only five articles a year appeared in the search results. Some small encouragement can be gleaned from the slight increase over time in the number of studies published annually, suggesting that the trend toward evaluation might be improving. However, given that the number of training programs has grown rapidly over the past decade and the number of students pursuing public health has increased as well, the total number of evaluation studies pales in comparison with the educational offerings. The countries represented by the lead authors were limited in their geographical spread, with far more articles from North America and Europe than elsewhere, suggesting a bias toward more economically advantaged countries.

Evaluation of a course or a curriculum was the most frequent subject of the evaluation studies. Courses and curricula are matters over which faculty have direct control; faculty thus seem to engage in studies whose results they can apply. Other subjects, such as workforce analyses, might be beyond the influence of educators themselves. This might explain the very few studies pairing workforce demands with student outcomes.
The limited literature pertaining to educational evaluation reflects the challenges in the field of public health itself. Although public health is arguably a distinct discipline,11 effective public health practice relies on interdisciplinary working across many disciplines. Conversely, many health disciplines incorporate teaching about public health into their education and competency models. Thus, identifying what is uniquely 'public health' in a curriculum, and reporting the evaluation of that education in a way that makes it identifiable in a search such as the one conducted for this study, is a challenge.

A related practical problem is that, until the launch of Frontiers in Public Health Education and Promotion, no peer-reviewed journal actively solicited articles pertaining to public health workforce education, except perhaps on a special-issue basis. The peer-reviewed literature is thus spread across many journals, making it more difficult to find and compile. Similarly, the authorship of the studies revealed no individual or group of individuals consistently pursuing evaluation of the pedagogy pertaining to the public health workforce.

Evaluation of public health pedagogy faces some barriers that are common across HEIs12 as well as others that are more
specific to public health and other health professions.13 Motivating faculty to offer evaluations of their work, and students to participate in course evaluations, can be difficult. Similarly, it is understandable that in some cases faculty may be suspicious of evaluations, as lower-than-expected results could have implications for their performance review. Resources for the evaluation of education and teaching programs are scarce. Very few studies reported having external funding; most seemed to be conducted by faculty of their own volition. This is in contrast to most biomedical or applied public health research, which often relies on external agencies to fund work on a specific topic. The result is that faculty of health professions education, including public health, are commonly restricted to low-resource evaluations.

National standards are routinely set by a profession's national governing council, such as the General Medical Council in the UK. For public health, a national examination has recently been instituted in the United States as an optional professional credential.14 Other professional examinations exist as a requirement for membership of professional bodies such as the Faculty of Public Health (FPH) in the UK. Achieving national professional agreement on the syllabus to be examined, the methods of examination, and the standard to be achieved is an extensive and complex task, which needs to be grounded in high-quality empirical investigations. External standards and accreditation were used as the basis of evaluation in only three articles. A number of articles mentioned competencies but used a self-defined set rather than a constellation of competencies defined by an external body. The eclectic nature of the studies included in this review, and the variation in their methods and methodological quality, suggest that attaining national or international standards is a long way from coming to fruition.

An additional barrier to evaluating public health education against external standards is that the multiple disciplines involved in public health already have a variety of discipline-specific accreditations. For example, health education programs adhere to the National Commission for Health Education Credentialing standards. Occupational medicine, seen as a subdiscipline of medicine, has an extensive set of accreditation requirements for programs and licensing requirements for individual graduates. To add to this, different accrediting bodies serve different geographic areas: criteria established by the Public Health Institute of India differ from those of the Association of Schools of Public Health in Europe. Thus, constructing an evaluation that measures against an objective set of standards first requires the principal investigators to determine which set of standards applies.15 However, this is not an acceptable argument for avoiding evaluation altogether.

A related question raised by the relevance of accreditation is the availability of the data compiled for accreditation purposes. Accrediting bodies, as well as internal HEI assessment processes, require programs to provide information that could be used to evaluate the quality, effectiveness, or relevance of a given training program. Accreditation reports ask for information about students' performance on competencies and alumni's reactions to workforce preparation. Surely, if shared, this mass of data
could be used to guide the field about what works and what does not work in training the future public health workforce.
Limitations

As with any systematic review of the literature, relevant publications might have been missed. To help offset this, we constructed a detailed search strategy and used back-tracing of the publications found to further refine the search. We could have hand searched journals, but given the wide scope of public health education, it was difficult to identify which journals were most relevant. We did, however, hand search the one journal dedicated to the education of public health professionals. Restricting the review to articles in the English language will almost undoubtedly have missed relevant publications; unfortunately, we did not have the resources to use additional reviewers fluent in specific languages, most notably Spanish. Finally, we did not search the unpublished literature, including policy reports, faculty reports, and other internal publications (the 'grey literature'). The task of trying to obtain these from every institution that taught some aspect of public health was simply too immense.
Conclusions

The current scoping review of the published literature on the evaluation of the education of health professional students was modest in terms of the number of studies found and their quality and generalizability. Overall, the review did not identify exemplary evaluations or consistent themes that could be declared a standard or a template for evaluating education about public health, either for a single discipline or across disciplines. We remain concerned about the lack of appetite for the evaluation of teaching practice, which might affect the quality, relevance, and outcomes of the education these students receive. We strongly endorse the need for more rigorous investigations, in particular those examining the relationship between what is taught, how it is taught, and how it then affects the effectiveness of practice once students are employed in a public health-related role. Conducting rigorous evaluations and sharing the results transparently might lead to a workforce better able to respond positively to improve population health around the globe.
Author statements

Ethical approval
This article does not contain any studies with human participants or animals performed by any of the authors. It is a secondary analysis and did not require ethical approval.
Funding
No funding was received for this study.
Competing interests
None of the authors has a conflict of interest.
Availability of data and materials
All data generated or analyzed during this study are included in this published article (and its Supplementary Information files).
References
1. Frenk J, Chen L, Bhutta Z, Cohen J, Crisp N, Evans T, et al. Health professionals for a new century: transforming education to strengthen health systems in an interdependent world. Lancet 2010;376:1923–58.
2. Birt CA, Foldspang A, Otok R. Meeting the population health challenge: what should you know, and what should you be able to do? Eur J Public Health 2018;28:789–90.
3. UNESCO. Safety, resilience, and social cohesion: a guide for curriculum developers. Assessment, and monitoring and evaluation: how will we know what students have learned? Paris: International Institute for Educational Planning; 2015. ISBN 978-92-803-1393-2. http://unesdoc.unesco.org/images/0023/002348/234819e.pdf. [Accessed 20 September 2018].
4. Hanson L. Global citizenship, global health, and the internationalization of curriculum: a study of transformative potential. J Stud Int Educ 2010;14:70–88.
5. Hansen LW. Rethinking the student course evaluation. Lib Educ 2014;100:6–13.
6. Hammonds F, Mariano GJ, Ammons G, Chambers S. Student evaluations of teaching: improving teaching quality in higher education. Perspect Policy Pract Higher Ed 2017;21:26–33.
7. Looney J. Developing high-quality teachers: teacher evaluation for improvement. Eur J Educ 2011;46:440–55.
8. Council on Education for Public Health. https://ceph.org/constituents/schools/report-search-start/; 2019. [Accessed 21 May 2019].
9. World Health Organization. The World Health Report 2006: working together for health. Chapter 3. www.who.int/whr/2006/en/. ISBN 92-4-156317-6.
10. Tao D, Evashwick C, Grivna M, Harrison R. Educating the public health workforce: a scoping review. Front Public Health 2018. https://doi.org/10.3389/fpubh.2018.00027/full.
11. Evashwick C, Finnegan J, Begun J. Public health as a discipline: has it arrived? JPHMP Sept/Oct 2013;15(5):412–9.
12. Schuh JH, Associates. Assessment methods for student affairs. John Wiley & Sons; 2009.
13. Keating SB, DeBoor SS, editors. Curriculum development and evaluation in nursing education. Springer Publishing Company; 2017.
14. National Board of Public Health Examiners. Certified in public health. https://www.nbphe.org/. [Accessed 30 September 2018].
15. Harrison RA, Gemmell I, Reed K. The effect of using different competence frameworks to audit the content of a masters program in public health. Front Public Health 2015;3:143–7. https://doi.org/10.3389/fpubh.2015.00143.
Appendix A. Supplementary data

Supplementary data to this article can be found online at https://doi.org/10.1016/j.puhe.2019.08.019.